
Poster C124 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Modeling the emergence of instrumental learning in an odor-based 2AFC task

Juliana Chase1, Jing-Jing Li1, Anne G. E. Collins1, Linda Wilbrecht1; 1University of California, Berkeley

Non-human animals and humans can learn policies rapidly, with only a few exposures. However, the earliest moments of learning are difficult to capture and model, and are thus understudied. This is due in part to high variability in learning trajectories across individuals. Here we train adolescent mice in an odor-based two-alternative forced choice (2AFC) task and extend a recently developed latent-state cognitive modeling framework to fit our behavioral data. This framework dynamically estimates decision policies on a trial-by-trial basis, capturing the probability that an animal occupies either of two latent decision states: a reinforcement learning (RL) state and a state biased toward a particular action. We found that this hybrid model fit the data better than the RL policy alone, and that it successfully explained individual learning trajectories in a way the RL model could not. Altogether, our task and model provide novel insight into the earliest moments of learning.
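The hybrid model described above can be illustrated with a minimal sketch: on each trial, choice probability is a mixture of an RL softmax policy and a fixed action-bias policy, with the latent state evolving under a two-state Markov chain and inferred by Bayesian filtering. All parameter names, values, and modeling details below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def hybrid_loglik(choices, rewards, alpha=0.3, beta=3.0,
                  bias_p=0.9, stay=0.95):
    """Trial-by-trial log-likelihood of a hypothetical 2-state mixture:
    state 0 = RL (softmax over Q-values), state 1 = action bias.
    alpha/beta/bias_p/stay are illustrative, not fitted values."""
    q = np.zeros(2)                      # Q-values for the two actions
    post = np.array([0.5, 0.5])          # belief over latent state before trial 1
    T = np.array([[stay, 1 - stay],      # latent-state transition matrix
                  [1 - stay, stay]])
    ll = 0.0
    for c, r in zip(choices, rewards):
        prior = T.T @ post               # predict latent state for this trial
        p_rl = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax policy
        p_bias = np.array([bias_p, 1 - bias_p])            # bias toward action 0
        like = np.array([p_rl[c], p_bias[c]])              # P(choice | state)
        ll += np.log(prior @ like)       # marginal choice likelihood
        post = prior * like / (prior @ like)   # Bayes update of state belief
        q[c] += alpha * (r - q[c])       # delta-rule RL value update
    return ll
```

Fitting such a model to each animal (e.g., by maximizing this log-likelihood) would yield per-trial state posteriors, which is how a latent-state framework can characterize when an individual transitions from biased to RL-driven responding.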

Keywords: decision making; hidden Markov model; learning; mice; behavior
