
Poster C92 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Continual learning in artificial neural networks as a computational framework for understanding representational drift in biological systems

Daniel Anthes1, Sushrut Thorat1, Peter König1, Tim C Kietzmann1; 1University of Osnabrück

Studies monitoring neural responses over time have shown that neural representations “drift” while behaviour remains constant, a phenomenon suggested to be linked to learning. Here we demonstrate that continual learning in deep neural networks may serve as a modelling framework for making progress in this domain, (a) for understanding the underlying computations and (b) for testing the analysis tools used. We train networks that implement two different neuroscientific theories of how stable behaviour can be maintained while new tasks are learned. The first strategy allows the models' readouts to 'track' the changing representations. The second confines learning to the nullspaces of previously learned readouts. Both simulations replicate hallmarks of drift observed in neuroscience: changing single-unit tuning, reduced cross-decoding performance over time, and changes in the overall population response. At the same time, existing analysis techniques cannot reliably differentiate the two implemented mechanisms. Continual learning may therefore offer a language for expressing computational hypotheses about drift, as well as a testbed for developing new analysis techniques.
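To make the second strategy concrete, here is a minimal numpy sketch (not the authors' implementation; all dimensions and names are illustrative) of how representational change can be confined to the nullspace of a previously learned linear readout, so that drift leaves the readout's output, and hence behaviour, unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 3  # hypothetical hidden dimension and number of readout units

R = rng.normal(size=(k, d))  # previously learned linear readout weights
# Projector onto the nullspace of R: P = I - R^T (R R^T)^{-1} R
P = np.eye(d) - R.T @ np.linalg.solve(R @ R.T, R)

h = rng.normal(size=d)       # current population activity
delta = rng.normal(size=d)   # a candidate change from further learning
h_new = h + P @ delta        # drift confined to null(R)

# The readout is blind to the change: R @ h_new equals R @ h
assert np.allclose(R @ h_new, R @ h)
```

Under this scheme, arbitrarily large representational changes can accumulate in the nullspace directions while the behavioural output stays fixed, matching the observation of drifting single-unit tuning alongside constant behaviour.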

Keywords: Representational Drift, Continual Learning, Artificial Neural Networks, Normative Models
