
Poster B48 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Place fields organize along goal trajectory with reinforcement learning

M Ganesh Kumar1, Cengiz Pehlevan1; 1Harvard University

When rodents learn goal-directed navigation, a high density of place fields forms at reward locations, and the fields increase in width and skew against the movement direction. However, a normative framework to characterize the field distribution during task learning remains elusive. We hypothesize that the observed place field dynamics are a feature of state representation learning that helps policy learning maximize the reinforcement learning objective. We develop an agent that uses Gaussian basis functions to model place fields, which synapse directly onto a policy network. Each field's center, width, and amplitude, along with the policy parameters, are updated trial by trial to maximize the cumulative discounted reward. When the agent learns to navigate to a goal in a one-dimensional track or a two-dimensional environment with obstacles, a larger number of Gaussian fields organizes near the goal while the remaining fields increase in width to tile the goal trajectory. We show that the correlation between the frequency of visiting a location and the field density at that location increases with training, as postulated by the efficient coding hypothesis. Additionally, Gaussian fields elongate along the goal trajectory, aggregating future positions with similar actions and resembling a successor representation-like map. We further show that this learned map facilitates faster policy convergence when the number of basis functions is low. In conclusion, we develop a normative model that recapitulates several hippocampal place field learning dynamics and unifies alternative proposals, offering testable predictions for future experiments.
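
To make the described architecture concrete, the sketch below illustrates one way an agent of this kind could be set up: Gaussian basis functions whose centers, widths, and amplitudes are learned jointly with a linear softmax policy by a REINFORCE-style policy gradient on a one-dimensional track. This is a minimal illustrative sketch only; the environment details, the choice of REINFORCE rather than another trial-by-trial update, the learning rates, and all variable names are assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): Gaussian place
# fields parameterize the state representation; their centers, widths, and
# amplitudes are updated together with the policy weights to maximize the
# cumulative discounted reward on a 1D track.

rng = np.random.default_rng(0)

N_FIELDS, N_ACTIONS = 16, 2            # fields tiling the track; actions: left / right
GOAL, TRACK_LEN, GAMMA = 0.9, 1.0, 0.95

centers = rng.uniform(0, TRACK_LEN, N_FIELDS)   # field centers (learnable)
widths = np.full(N_FIELDS, 0.1)                 # field widths (learnable)
amps = np.ones(N_FIELDS)                        # field amplitudes (learnable)
W = np.zeros((N_ACTIONS, N_FIELDS))             # policy weights (learnable)

def features(x):
    """Gaussian place-field activations at position x."""
    return amps * np.exp(-0.5 * ((x - centers) / widths) ** 2)

def policy(phi):
    """Softmax policy over actions given place-field activations."""
    logits = W @ phi
    p = np.exp(logits - logits.max())
    return p / p.sum()

def run_trial(max_steps=200, step_size=0.05):
    """Run one trial from a random start; reward is given near the goal."""
    x, traj = rng.uniform(0, TRACK_LEN), []
    for _ in range(max_steps):
        phi = features(x)
        p = policy(phi)
        a = rng.choice(N_ACTIONS, p=p)
        x = np.clip(x + (step_size if a == 1 else -step_size), 0.0, TRACK_LEN)
        r = 1.0 if abs(x - GOAL) < step_size else 0.0
        traj.append((x, phi, a, p, r))
        if r > 0:
            break
    return traj

def update(traj, lr=0.05):
    """REINFORCE-style trial-by-trial update of policy and field parameters."""
    global centers, widths, amps, W
    G = 0.0
    for x, phi, a, p, r in reversed(traj):
        G = r + GAMMA * G                      # discounted return from this step
        dlogits = -p                           # d log pi(a|x) / d logits
        dlogits[a] += 1.0
        W += lr * G * np.outer(dlogits, phi)
        dphi = W.T @ dlogits                   # d log pi / d phi (chain rule)
        z = (x - centers) / widths
        g = amps * np.exp(-0.5 * z ** 2)       # current field activations
        centers += lr * G * dphi * g * z / widths
        widths += lr * G * dphi * g * z ** 2 / widths
        amps += lr * G * dphi * g / np.maximum(amps, 1e-6)
        widths = np.clip(widths, 0.02, None)   # keep fields from collapsing

for trial in range(500):
    update(run_trial())
```

Under this kind of update, fields that contribute to rewarded trajectories are pulled toward and widened along the path to the goal, which is the qualitative behavior the abstract reports; the specific hyperparameters above are placeholders.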

Keywords: Place field dynamics; State representation learning; Normative navigation model; Reinforcement learning
