Poster C118 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Composition of simple computational tasks captures the inductive biases of animals in network models

David Hocker1, Christine Constantinople1, Cristina Savin1,2; 1Center for Neural Science, New York University, 2Center for Data Science, New York University

Although recurrent neural networks (RNNs) are now ubiquitous in neuroscience as models of neural dynamics and behavior, they are not a priori guaranteed to mimic animals' behavioral strategies. One reason is a fundamental model mismatch: unlike RNNs, animals are not cognitive blank slates at the start of a task; they arrive with inductive biases shaped by extensive prior experience. We address this issue by pretraining RNNs on tasks that mimic animals' prior inductive biases, in particular simple cognitive "kindergarten" tasks that can be combined to perform more complex tasks. Using a rich decision-making task with latent states, previously used to train rats, we demonstrate that only RNNs that incorporate kindergarten tasks into their training reflect rat-like strategies. Mechanistically, we find that the dynamics of pretrained networks are richer than those obtained with other training strategies, and that these dynamics develop during kindergarten pretraining. Overall, our approach offers a simple strategy for improving RNNs as models of animal cognition, opens up interesting questions about how prior experience shapes the computational strategies animals adopt, and provides testable predictions for neural recordings.
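
Below is a minimal sketch (in PyTorch, not the authors' code) of the kindergarten-pretraining idea described in the abstract: an RNN is first trained on a simple task and then fine-tuned, with the same weights and optimizer setup, on a more complex task. The task definitions, network sizes, and training settings are illustrative assumptions, not details from the paper.

```python
# Sketch of "kindergarten" curriculum pretraining for an RNN.
# All task choices and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class TaskRNN(nn.Module):
    def __init__(self, n_in=3, n_hidden=64, n_out=2):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)       # h: (batch, time, n_hidden)
        return self.readout(h)   # per-timestep outputs

def train(model, make_batch, n_steps, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(n_steps):
        x, y = make_batch()
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

def kindergarten_batch(batch=32, T=20, n_in=3, n_out=2):
    # Hypothetical simple task: remember the first input and
    # report it at every timestep (a working-memory primitive).
    x = torch.randn(batch, T, n_in)
    y = x[:, :1, :n_out].expand(-1, T, -1)
    return x, y

def complex_task_batch(batch=32, T=20, n_in=3, n_out=2):
    # Stand-in for the full decision-making task with latent
    # states; here, a target that requires integrating evidence.
    x = torch.randn(batch, T, n_in)
    y = torch.cumsum(x[..., :n_out], dim=1) / T
    return x, y

model = TaskRNN()
train(model, kindergarten_batch, n_steps=500)    # pretraining stage
train(model, complex_task_batch, n_steps=2000)   # full-task stage
```

The essential design choice is only the ordering: because the same network is reused across stages, dynamics learned on the simple task can seed the solution found for the complex one.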

Keywords: curriculum learning, decision making, dynamics, deep learning