Keynote & Tutorial

Wednesday, August 7, 3:20 - 4:00 pm, Kresge Hall (Keynote)
Wednesday, August 7, 4:30 - 6:15 pm, Little Kresge (Tutorial)

Continual Learning and Catastrophic Forgetting

Gido van de Ven (KU Leuven) & Dhireesha Kudithipudi (The University of Texas at San Antonio)

Continual learning is a key aspect of intelligence. The human brain is able to incrementally learn new skills without compromising those learned before, and to integrate and contrast new information with earlier acquired knowledge. Intriguingly, deep neural networks, although rivaling human intelligence in other ways, almost completely lack this ability to learn continually. Most strikingly, when these networks learn something new, they tend to “catastrophically” forget what was learned before. In recent years, continual learning has become a hot topic in deep learning, and it is considered one of the main open challenges in the field. In this keynote and tutorial, we will provide an overview of the deep learning research on continual learning of the past years, with a focus on the insights and intuitions that this line of research has generated with regard to the computational principles of continual learning. We hope that these insights will inform and inspire cognitive science research on the mechanisms in the brain that underlie the cognitive skill of continual learning.

The tutorial will complement the keynote’s broad overview of the deep learning literature on continual learning with illustrative examples and hands-on coding exercises. As in the keynote, our focus will not be on “state-of-the-art” deep learning methods or complex applications; rather, we will use toy problems and representative example methods to illustrate the key concepts. A Colab Notebook with both demos and coding exercises will be provided, which participants with a coding background should be able to complete and run during the tutorial. The programming language will be Python. The tutorial will not assume prior knowledge of the continual learning literature.

The learning goals of the tutorial are for the audience to become familiar with both the problem of continual learning and the different types of approaches that have been proposed in the deep learning literature to address it. The first part of the tutorial focuses on the continual learning problem: we provide a definition of continual learning, we discuss different types of continual learning and their distinct challenges, and we implement a continual learning data stream in the Colab Notebook ourselves (a minimal sketch of such a stream is given below). The second part of the tutorial covers approaches for addressing the continual learning problem: we discuss six computational strategies for continual learning (replay, parameter regularization, functional regularization, optimization-based approaches, context-dependent processing, and template-based classification), and for each strategy we highlight one representative method that we will implement in the Colab Notebook ourselves (see the replay sketch below).
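
As a taste of the first coding exercise, the sketch below shows one way such a data stream could look. It is a minimal, hypothetical example (not taken from the tutorial notebook) of a task-incremental stream in the style of Split MNIST, in which the classes of a dataset are divided over a sequence of tasks; the function name, the random toy data, and the two-classes-per-task split are our own illustrative choices.

    import numpy as np

    def make_task_stream(inputs, labels, classes_per_task=2, n_tasks=5):
        """Split a labelled dataset into a sequence of tasks, each holding
        only the examples of a disjoint subset of classes."""
        stream = []
        for task_id in range(n_tasks):
            task_classes = list(range(task_id * classes_per_task,
                                      (task_id + 1) * classes_per_task))
            mask = np.isin(labels, task_classes)
            stream.append((inputs[mask], labels[mask]))
        return stream

    # Toy stand-in for a dataset such as MNIST: 1000 samples, 10 classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 28 * 28))
    y = rng.integers(0, 10, size=1000)

    # A network trained naively on these tasks one after another tends to
    # catastrophically forget the earlier tasks.
    for t, (X_t, y_t) in enumerate(make_task_stream(X, y)):
        print(f"task {t}: {len(y_t)} samples, classes {sorted(set(y_t.tolist()))}")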
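
And as a taste of the second part, here is a minimal sketch of the first strategy listed above, replay: a small buffer of stored examples from past tasks is mixed into each new task's training batches, so that gradients keep reflecting old tasks. This is our own simplified illustration (a reservoir-sampling buffer, with the training step left as hypothetical comments), not necessarily the representative method implemented in the notebook.

    import random

    class ReplayBuffer:
        """Fixed-size memory filled by reservoir sampling, so every example
        seen so far has an equal chance of being retained."""
        def __init__(self, capacity=200):
            self.capacity = capacity
            self.memory = []
            self.n_seen = 0

        def add(self, example):
            self.n_seen += 1
            if len(self.memory) < self.capacity:
                self.memory.append(example)
            else:
                j = random.randrange(self.n_seen)
                if j < self.capacity:
                    self.memory[j] = example

        def sample(self, k):
            return random.sample(self.memory, min(k, len(self.memory)))

    # Sketch of the training loop: each batch from the current task is
    # mixed with replayed examples before the update.
    #   for batch in current_task_batches:          # hypothetical loop
    #       combined = batch + buffer.sample(len(batch))
    #       loss = model.training_step(combined)    # hypothetical model API
    #       for example in batch:
    #           buffer.add(example)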

Colab Notebook: Tutorial_ContinualLearning_CCN2024.ipynb