Poster A98 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink
Does Replay Suffice for Online Continual Learning in Spiking Networks?
Raghav Patel¹, Nicholas Soures¹, Dhireesha Kudithipudi¹; ¹The University of Texas at San Antonio
Continual learning models must learn sequentially from a changing data distribution while accumulating previously learned knowledge without forgetting. Replay and parameter regularization are two prominent mechanisms that have shown promise in deep learning models but have been explored for spiking neural networks in only a few works. In this work, we study the application of replay to domain-incremental learning in spiking neural networks and investigate whether metaplasticity and synaptic consolidation can enable efficient continual learning. We demonstrate that simple replay schemes can achieve state-of-the-art performance and that incorporating metaplasticity improves performance at small buffer sizes. This approach can serve as a baseline for efficient continual learning, reducing the memory and training overhead of replay.
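For readers unfamiliar with replay, the sketch below illustrates the general idea of a "simple replay scheme" in an online setting: a fixed-capacity buffer filled by reservoir sampling, with each incoming sample trained alongside a mini-batch drawn from the buffer. This is a generic illustration, not the poster's implementation; `model`, `stream`, and `train_step` are hypothetical placeholders, and the spiking dynamics and metaplasticity rule studied in the poster are not shown.

```python
import random

class ReservoirReplayBuffer:
    """Fixed-capacity buffer filled by reservoir sampling, so stored
    items remain a uniform sample of the stream seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []   # stored (input, label) pairs
        self.seen = 0     # total stream samples observed

    def add(self, x, y):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y))
        else:
            # Keep each new sample with probability capacity / seen
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (x, y)

    def sample(self, k):
        """Draw up to k stored pairs to interleave with incoming data."""
        return random.sample(self.items, min(k, len(self.items)))


def continual_training_loop(model, stream, train_step, buffer, replay_k=32):
    """Online loop: each new sample is trained jointly with replayed ones.
    `model`, `stream`, and `train_step` stand in for the learner, the
    non-stationary data source, and one optimization step (all assumed)."""
    for x, y in stream:
        batch = [(x, y)] + buffer.sample(replay_k)
        train_step(model, batch)
        buffer.add(x, y)
```

A small `capacity` corresponds to the low-buffer regime where, per the abstract, adding metaplasticity yields the largest gains.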
Keywords: Spiking Neural Network, Regularization, Replay, Continual Learning