
Poster C122 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Rapid mapping of abstract domains through extraction and projection of generalized velocity signals via a cognitive foundation model with grid cells

Sarthak Chandra1, Abhiram Iyer1, Sugandha Sharma1, Ila Fiete1; 1Massachusetts Institute of Technology

Grid cells in the medial entorhinal cortex create remarkable spatial maps during navigation, but recent studies show that their role extends to mapping and organizing abstract cognitive spaces. Examples of abstract environments include images with deformable features, such as a cartoon bird with stretching legs and neck, or auditory inputs varying in frequency and amplitude. While it is understood how grid cells map physical spaces using velocity estimates, how they map abstract cognitive spaces remains unknown. We hypothesize that the brain maps abstract spaces by extracting low-dimensional velocity signals using the path integration capability of grid cells, which are then error-corrected by the same circuit. We propose the first neural circuit model that explains how grid cells can represent any abstract space. The model processes abstract, time-varying inputs across modalities and identifies minimal velocity representations that capture state-transition dynamics. It enforces a self-supervised geometric consistency constraint, requiring that movements along closed loops produce velocity estimates summing to zero, a computation itself performed by the grid cell circuit. Our model suggests a way for grid cells to use velocity signals to map high-dimensional abstract environments, explaining how animals perceive velocities in diverse non-spatial contexts and encode cognitive spaces.
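The closed-loop consistency constraint described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the linear map `W`, the `velocity` extractor, and the loop construction are all hypothetical stand-ins for the learned components of the model. The point it demonstrates is the self-supervised signal itself: along a trajectory that returns to its starting state, per-step velocity estimates should sum to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "velocity extractor": maps the change between
# consecutive abstract observations to a low-dimensional velocity.
# In the model this would be a learned encoder trained jointly with
# the grid cell circuit; a fixed random matrix suffices here.
W = rng.normal(size=(2, 4))

def velocity(obs_prev, obs_next):
    """Estimate a 2-D velocity from consecutive 4-D observations."""
    return W @ (obs_next - obs_prev)

def loop_consistency_loss(observations):
    """Self-supervised geometric consistency loss: for a closed loop
    (first observation == last observation), the per-step velocity
    estimates should sum to zero. Returns the squared norm of the sum."""
    vs = [velocity(observations[i], observations[i + 1])
          for i in range(len(observations) - 1)]
    return float(np.sum(np.sum(vs, axis=0) ** 2))

# Toy closed loop in a 4-D abstract observation space: random waypoints,
# then return to the start.
loop = [rng.normal(size=4) for _ in range(5)]
loop.append(loop[0])

# For this linear extractor the velocities telescope to W @ (last - first),
# which is exactly zero on a closed loop, so the loss vanishes.
print(loop_consistency_loss(loop))
```

For a nonlinear learned extractor the loss would not vanish automatically; minimizing it over many loops is what pushes the extracted velocities toward geometric consistency, which the abstract attributes to the grid cell circuit's path-integration machinery.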

Keywords: self-supervised learning; deep learning; grid cells; path integration
