
Poster C161 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Inverse reinforcement learning captures value representations in the reward circuit in a real-time driving task: a preliminary study

Sang Ho Lee1, Min-hwan Oh1, Woo-Young Ahn1; 1Seoul National University

A challenge in using naturalistic tasks is describing complex data beyond simple summaries of behavior. Lee et al. (2024) showed that an inverse reinforcement learning (IRL) algorithm combined with deep neural networks is a practical framework for modeling real-time behavior in a naturalistic task. However, it remains unknown whether the reward function inferred by IRL reflects value representations in the reward circuit. In this preliminary study (N=10), we investigate the neural correlates of the reward inferred by IRL. Human participants were scanned with fMRI while performing a real-time driving task (i.e., the highway task). We show that the trajectory of IRL reward during the task strongly correlates with the trajectory of BOLD signals in the reward circuit, including the prefrontal cortex, the striatum, and the insula. These results demonstrate the validity of IRL as a modeling framework that explains both behavior and brain activity in a real-time task.
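The core analysis described above relates a model-derived reward time series to BOLD signals. A minimal sketch of that idea, assuming synthetic data and a canonical double-gamma hemodynamic response function (the specific IRL model, HRF parameters, and noise level here are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tr = 2.0           # repetition time in seconds (assumed)
n = 300            # number of volumes (assumed)

# Hypothetical reward trajectory inferred by an IRL model (synthetic here)
reward = rng.standard_normal(n)

# Canonical double-gamma HRF (SPM-style shape parameters 6 and 16)
ht = np.arange(0, 32, tr)
hrf = stats.gamma.pdf(ht, 6) - (1 / 6) * stats.gamma.pdf(ht, 16)
hrf /= hrf.sum()

# Convolve the reward trajectory with the HRF to form a predicted BOLD regressor
predicted = np.convolve(reward, hrf)[:n]

# Synthetic BOLD signal from a reward-circuit ROI: predicted response plus noise
bold = predicted + 0.2 * predicted.std() * rng.standard_normal(n)

# Correlate the model-based regressor with the (synthetic) BOLD trajectory
r, p = stats.pearsonr(predicted, bold)
print(f"r = {r:.2f}, p = {p:.3g}")
```

In a real analysis the correlation (or a GLM regression) would be computed voxel-wise or per region of interest; this sketch only illustrates the time-series logic.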

Keywords: naturalistic task, inverse reinforcement learning, fMRI, deep neural networks
