
Poster C117 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Learning abstract features with deep RL agents in an evidence accumulation task

James Mochizuki-Freeman1, Md Rysul Kabir1, Zoran Tiganj1; 1Indiana University Bloomington

Recent neuroscience studies suggest that the hippocampus encodes a low-dimensional, ordered representation of evidence through sequential neural activity. Cognitive modelers have proposed a mechanism by which such sequential activity could emerge from neurons with exponentially decaying firing profiles whose decay rates are modulated by incoming evidence. Through a linear transformation, this representation gives rise to neurons tuned to specific magnitudes of evidence, resembling neurons recorded in the hippocampus. Here we integrated this cognitive model into reinforcement learning agents and trained the agents on an evidence accumulation task designed to mimic a task used in animal experiments. We found that the agents learned the task and exhibited sequential neural activity as a function of the amount of accumulated evidence, similar to the activity reported in the hippocampus.
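The mechanism described above can be sketched in a few lines of NumPy: a bank of units decays exponentially as a function of accumulated evidence (rather than time), and an approximate inverse Laplace transform over the decay-rate axis (here a Post-style approximation using repeated numerical derivatives) yields units tuned to specific evidence magnitudes. This is a minimal illustrative sketch under stated assumptions; the function name, parameter values, and the particular inversion scheme are choices made for this example, not the authors' implementation.

```python
import math
import numpy as np

def run_evidence_laplace(evidence_increments, s, k=4):
    """Evidence-modulated exponential decay plus approximate inversion.

    Each unit i decays with rate s[i] per unit of evidence, so after
    accumulating total evidence E its activity is exp(-s[i] * E) --
    the Laplace transform of a delta function at E. A Post-style
    approximation to the inverse Laplace transform,
        f(x) ~ (-1)^k / k! * s^(k+1) * d^k F / ds^k,
    applied across the s axis yields units whose activity peaks at a
    specific magnitude of accumulated evidence.
    """
    F = np.ones_like(s)              # F = 1 before any evidence arrives
    Fs = []
    for a in evidence_increments:
        F = F * np.exp(-s * a)       # decay driven by evidence, not time
        Fs.append(F.copy())
    Fs = np.array(Fs)                # shape (n_steps, n_units)

    # k-th derivative with respect to s, taken numerically
    dF = Fs
    for _ in range(k):
        dF = np.gradient(dF, s, axis=1)
    f = ((-1) ** k / math.factorial(k)) * s ** (k + 1) * dF
    return Fs, f

# Constant unit increments: accumulated evidence E grows 1, 2, ..., 10
s = np.linspace(0.1, 5.0, 200)       # decay rates of the unit bank
Fs, f = run_evidence_laplace(np.ones(10), s)
```

With this construction the tuned units fire sequentially: as evidence accumulates, the peak of `f` moves toward units with smaller decay rates (the analytic peak sits near s = (k+1)/E), which is the sequential, magnitude-ordered activity pattern the abstract refers to.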

Keywords: Evidence accumulation; Cognitive model; Deep RL; Neural sequences
