Poster C115 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Embodied memory through gaze control

Ruiyi Zhang1, Xaq Pitkow2, Dora E Angelaki1; 1New York University, 2Carnegie Mellon University

To tackle complex natural tasks, one must maintain an accurate internal model of the environment to support actions. However, neural representations of the environment are noisy and rarely accurate. Fortunately, animals with a fovea and acute vision can quickly scan the environment and foveate locations relevant to the task, enabling them to update and maintain an accurate internal model. We hypothesize that eye movements can serve as embodied memory for locating the evolving latent goal, and that this mechanism benefits both biological and artificial intelligence. To investigate this, we developed a deep reinforcement learning (RL) agent with free eye movements and trained both the agent and macaques on a navigation task. We found that, without explicit instruction, both the agent and the macaques naturally developed the use of eye movements as embodied memory for the latent goal to support navigation, resulting in better performance. The agent's artificial neurons also explained posterior parietal cortex (PPC) data recorded from the macaques.
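To make the idea of "eye movements as embodied memory" concrete, the sketch below shows one way such an agent could be structured: a recurrent actor-critic whose action space includes both locomotion and a gaze action, where the gaze action determines the next (foveated) observation. This is a minimal conceptual illustration, not the authors' implementation; the module names, dimensions, and environment interface are all assumptions.

```python
# Conceptual sketch (assumed, not the authors' code): a recurrent actor-critic
# whose actions include both locomotion and gaze. Because the next observation
# is rendered around the chosen gaze target, looking back at a relevant location
# can stand in for holding it in recurrent memory -- i.e., embodied memory.

import torch
import torch.nn as nn

class GazeNavAgent(nn.Module):
    def __init__(self, obs_dim=32, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden_dim)    # internal model / working memory
        self.nav_head = nn.Linear(hidden_dim, 2)      # e.g., linear & angular velocity
        self.gaze_head = nn.Linear(hidden_dim, 2)     # e.g., gaze target (x, y)
        self.value_head = nn.Linear(hidden_dim, 1)    # critic

    def forward(self, obs, h):
        h = self.rnn(obs, h)
        nav = self.nav_head(h)
        gaze = torch.tanh(self.gaze_head(h))          # gaze target in normalized coordinates
        value = self.value_head(h)
        return nav, gaze, value, h

# One hypothetical interaction step: the environment would render a foveated view
# centered on the previous gaze target (placeholder zeros used here).
agent = GazeNavAgent()
h = torch.zeros(1, 128)
obs = torch.zeros(1, 32)
nav_action, gaze_action, value, h = agent(obs, h)
```

In a setup like this, the gaze loop through the environment is what allows eye movements to function as memory: information about the latent goal can be stored "in the world" (by where the agent looks) rather than solely in the recurrent state.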

Keywords: reinforcement learning; neuro-AI; eye movements; memory
