
Poster B57 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Latent variable sequence identification for cognitive models with recurrent neural networks

Ti-Fen Pan1, Bill Thompson1, Anne Collins1; 1University of California, Berkeley, United States

Extracting time-varying latent variables from computational cognitive models is a key step in model-based neural analysis, which aims to understand the neural correlates of cognitive processes. To derive latent variables, researchers typically fit computational models with likelihood-dependent techniques such as maximum likelihood estimation. However, many relevant cognitive models have intractable likelihoods, limiting our ability to use these models for analyses. Here, we present an approach that learns a direct mapping between time-series experimental data and the targeted latent variable space using recurrent neural networks trained to recover latent variable sequences in synthetic data. The results show that our approach achieves high accuracy in inferring latent variables in both tractable and intractable models. Furthermore, the approach generalizes across different computational models and can identify both continuous and discrete latent spaces. Overall, our work suggests that using neural networks trained on synthetic data to analyze experimental data is a promising way to access a broader class of cognitive models in model-based neural analyses.
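To make the recipe concrete, below is a minimal sketch of the idea described in the abstract, not the authors' implementation. It simulates a simple tractable cognitive model (Q-learning on a two-armed bandit) to generate synthetic behavior alongside its latent Q-value sequences, then trains a GRU to map the observed action/reward sequence directly to those latents. The model choice, network architecture, and all hyperparameters here are illustrative assumptions.

```python
# Sketch: train an RNN on synthetic data to recover latent variable sequences.
# Assumptions (not from the paper): Q-learning simulator, GRU architecture,
# MSE objective, and all hyperparameter values.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def simulate_q_learning(n_trials=100, p_reward=(0.8, 0.2)):
    """Simulate one agent with random parameters; return actions, rewards, Q-values."""
    alpha = rng.uniform(0.1, 0.9)   # learning rate
    beta = rng.uniform(1.0, 10.0)   # softmax inverse temperature
    q = np.zeros(2)
    actions, rewards, qs = [], [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        a = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[a])
        qs.append(q.copy())                            # latent variables to recover
        q[a] += alpha * (r - q[a])                     # delta-rule update
        actions.append(a)
        rewards.append(r)
    return np.array(actions), np.array(rewards), np.array(qs)

def make_dataset(n_agents=500, n_trials=100):
    X = np.zeros((n_agents, n_trials, 2), dtype=np.float32)  # inputs: (action, reward)
    Y = np.zeros((n_agents, n_trials, 2), dtype=np.float32)  # targets: Q-values
    for i in range(n_agents):
        a, r, q = simulate_q_learning(n_trials)
        X[i, :, 0], X[i, :, 1], Y[i] = a, r, q
    return torch.from_numpy(X), torch.from_numpy(Y)

class LatentRNN(nn.Module):
    """GRU mapping behavior sequences to latent-variable sequences."""
    def __init__(self, in_dim=2, hidden=64, out_dim=2):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h, _ = self.rnn(x)       # hidden states at every trial
        return self.head(h)      # per-trial latent estimates

X, Y = make_dataset()
net = LatentRNN()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), Y)  # supervised recovery of latents
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```

The key point of the abstract carries over directly: because training is fully supervised on simulated data, the same recipe applies unchanged when the cognitive model's likelihood is intractable, as long as the model can be simulated forward.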

Keywords: latent variable identification; intractable likelihood; artificial neural network; model-based neural analysis
