Contributed Talks V

Talk Session: Friday, August 9, 2024, 4:00 – 5:00 pm, Kresge Hall

4:00 pm

The inevitability and superfluousness of cell types in spatial cognition

Xiaoliang Luo1, Robert M. Mok2, Bradley C. Love3; 1University College London, 2Royal Holloway, University of London, 3University College London, The Alan Turing Institute

Discoveries of functional cell types, exemplified by the cataloging of spatial cells in the hippocampal formation, are heralded as scientific breakthroughs. We question whether the identification of cell types based on human intuitions has scientific merit and suggest that "spatial cells" may arise in non-spatial computations of sufficient complexity. We show that deep neural networks (DNNs) for object recognition, which lack spatial grounding, contain numerous units resembling place, border, and head-direction cells. Strikingly, even untrained DNNs with randomized weights contained such units and supported decoding of spatial information. Moreover, when these "spatial" units are excluded, spatial information can still be decoded from the remaining DNN units, which highlights the superfluousness of cell types to spatial cognition. Now that large-scale simulations are feasible, the complexity of the brain should be respected, and intuitive notions of cell type, which can be misleading and arise in any complex network, should be relegated to history.

4:12 pm

Mice in the Manhattan Maze: Rapid Learning, Flexible Routing and Generalization, With and Without Cortex

Jieyu Zheng1, Rogério Guimarães1, Jennifer Hu1, Pietro Perona1, Markus Meister1; 1California Institute of Technology

Mice are flexible foragers in the wild and quickly adapt to environmental changes. Here we designed a novel navigation task, the “Manhattan Maze,” to study cognitive flexibility in mice. The Manhattan Maze is easily reconfigurable and allows systematic task designs through search algorithms in a vast space of 2^{121} possible maps. Within two days, completely naïve wildtype mice learned three complex maps, each requiring a sequence of nine turn decisions to solve. On Day 1, they rapidly learned the first map after ~10 round trips. On Day 2, they retained the ability to solve the repeated map and became faster at learning new maps. We then tested the maze on acortical mice, structural mutants born without the hippocampus and most of the neocortex. Although their initial solutions took ~3x longer than those of wildtype mice, acortical mice successfully learned multiple maps and approached optimal performance. Surprisingly, they also learned new maps faster and were able to solve the same maze configuration when it was repeated after two months. Our results suggest that mice can learn rapidly and that the cortex is not strictly required for navigating the Manhattan Maze.

4:24 pm

Planning in the Hippocampus: Linking Actions and Outcomes to Guide Behavior

Sarah Jo C Venditto1, Kevin J Miller2,3, Nathaniel D Daw1, Carlos D Brody1,4; 1Princeton University, 2Google DeepMind, 3University College London, 4Howard Hughes Medical Institute

Planning requires an internal model of the world that can be flexibly utilized to link actions and their subsequent consequences across time and space. The hippocampus, often referred to as a “cognitive map,” is known for encoding the location of an animal within complex environments by representing salient states, both spatial and non-spatial. These representations can extend to non-local states, making them well-suited to support this internal action-outcome model. While the hippocampus has been causally linked to planning in both humans and rodents, how hippocampal representations carry out this function is poorly understood. To address this, we record from the dorsal hippocampus while rats perform a multi-step reward-guided task with probabilistic transitions between actions and outcomes, the rat two-step task, which has been shown to reliably elicit planning. We find that hippocampal activity encodes the task space and exhibits “splitter cells” that differentiate similar positions based on the preceding choice, providing distinct representations for each combination of choice and outcome. Between trials, we find oscillating representations that encode the visited outcome paired with both possible choices; however, overall choice encoding is biased toward upcoming choices, with a model-based dependence on reward and probabilistic transition.

4:36 pm

Planning With Others in Mind

Nastaran Arfaei1, Wei Ji Ma1; 1NYU

Planning is rarely done in isolation; it typically happens in the presence of other agents who affect the environment and the future action space. This shared nature of environment, cost, and reward is even more crucial to consider when planning collaboratively, where reward maximization depends on the actions of all collaborating agents. How do people incorporate the future actions of others into their planning process? To answer this question, we developed a collaborative dyadic turn-taking task with decision sequences up to length 16. We found that people can plan effectively in this context, and found evidence that they incorporate their partner's potential future moves into their planning process. We constructed and tested computational models of the behavior, among which a collaborative heuristic search algorithm that simulated and evaluated the partner's future actions fit the data best. We also identified specific shortcomings of the competing models.

4:48 pm

Learning along the manifold of human brain activity via real-time neurofeedback

Erica L. Busch1, E. Chandra Fincke1, Guillaume Lajoie2,3, Smita Krishnaswamy1,4, Nicholas B. Turk-Browne1,4; 1Yale University, 2Mila - Quebec AI Institute, 3University of Montreal, 4Wu Tsai Institute

Learning to perform a new behavior is constrained by the geometry, or intrinsic manifold, of the neural population activity supporting that behavior. Recent work highlights the importance of manifolds capturing low-dimensional neural dynamics for learning to control brain-computer interfaces (BCIs). In non-human primate studies, BCI learning has been expedited and stabilized by mapping neural recordings from motor cortex through a low-dimensional manifold and then to a feedback display; in macaque motor cortex, the manifold uncovers more concise and plastic neural signals. Here, we investigate manifold constraints on human learning in brain regions associated with higher-order cognitive processes using a non-invasive BCI. Using a custom neural manifold learning framework for real-time fMRI neurofeedback and a virtual reality stimulus, we trained participants in a multi-session study to perform a navigation task using their brain activity. Task performance was significantly improved by feedback based on the brain's intrinsic manifold activity relative to lower-ranked ("off-manifold") activity. Neural activity was modulated along the manifold over the course of neurofeedback training, such that as performance improved, neural activity became better aligned with the components of the manifold determining the feedback.