Poster B103 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink
Discovering cognitive models in a competitive mixed-strategy game
Peiyu Liu1, Kevin J. Miller2, Hyojung Seo1; 1Yale University, 2Google DeepMind and University College London
Sophisticated behavioral tasks are key tools in cognitive neuroscience, but they pose challenges because the cognitive processes that give rise to behavior are often incompletely understood. Matching pennies (MP) is one such task: a strategic zero-sum game that has been widely used for theoretical and empirical analysis of dynamic social interactions across species. Disentangled recurrent neural networks (disRNN) are a recently introduced deep learning method that allows cognitive hypotheses to be discovered directly from behavioral datasets. Here, we apply disRNN to a widely studied dataset of non-human primates playing an iterated MP game. We find that the discovered models provide a better qualitative and quantitative match to behavior than classic behavioral models, and that they reveal readily interpretable cognitive hypotheses. Specifically, they show that the animals’ behavior can be described as a mixture of long-term heuristics, such as choice perseveration and reward-following, and short-term strategies that help counter the opponent’s strategy.
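To make the long-term heuristics concrete, the following is a minimal illustrative sketch (not the authors' disRNN model, and not fit to their data) of an agent playing iterated matching pennies whose choices mix reward-following (a value update toward rewarded choices) with choice perseveration (a bonus for repeating the previous choice). All parameter names and values here are hypothetical.

```python
import math
import random

def play_matching_pennies(n_trials=1000, alpha=0.3, beta=1.0,
                          persev_bonus=0.5, seed=0):
    """Hypothetical agent for iterated matching pennies.

    alpha: learning rate for the reward-following value update.
    beta: inverse temperature of the softmax choice rule.
    persev_bonus: extra logit weight on repeating the last choice.
    The opponent here plays uniformly at random, so no strategy
    can win more than ~50% of trials in expectation.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]          # learned value of choice 0 and choice 1
    last_choice = None
    total_reward = 0.0
    for _ in range(n_trials):
        # Softmax over values, plus a perseveration bonus for repeating.
        logits = [beta * q[c] + (persev_bonus if c == last_choice else 0.0)
                  for c in (0, 1)]
        p0 = 1.0 / (1.0 + math.exp(logits[1] - logits[0]))
        choice = 0 if rng.random() < p0 else 1
        opponent = rng.randint(0, 1)
        reward = 1.0 if choice == opponent else 0.0  # matcher wins on a match
        q[choice] += alpha * (reward - q[choice])    # reward-following update
        last_choice = choice
        total_reward += reward
    return total_reward / n_trials
```

Against a random opponent the win rate hovers near 0.5; the interest of models like this lies in the trial-by-trial choice dependencies (perseveration, reward-following) they induce, which is the kind of structure the disRNN approach aims to recover directly from behavior.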
Keywords: matching pennies; reinforcement learning; behavioral models; disentangled RNN