
Poster B111 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Component encoding: Interpretable and predictive models of neural computation

David Skrill¹, Sam V. Norman-Haignere¹; ¹University of Rochester Medical Center

A central goal of sensory neuroscience is to build parsimonious computational models that can both predict neural responses to natural stimuli and reveal interpretable functional organization in the brain. Statistical “component” models can learn interpretable, low-dimensional structure across different brain regions and subjects, but lack an explicit “encoding model” that links these components to the stimuli that drive them, and thus cannot generate predictions for new stimuli or generalize across different experiments. The predictive power of standard encoding models has improved substantially with advances in deep neural network (DNN) modeling, but producing simple and generalizable insights from these models has been challenging. To overcome these limitations, we develop “component-encoding models” (CEMs), which approximate neural responses as a weighted sum of a small number of component response dimensions, each approximated by an encoding model. Using simulations and fMRI data, we show that our CEM framework can infer a small number of interpretable response dimensions across different experiments with non-overlapping stimuli and subjects (unlike standard component models), while maintaining and even improving the prediction accuracy of standard encoding models.
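To make the model structure concrete, below is a minimal sketch of the core idea: neural responses are approximated as a weighted sum of a few component dimensions, each tied to the stimulus through an encoding model. This is implemented here as a simple reduced-rank ridge regression; all names, shapes, and the fitting procedure (`fit_cem`, `predict_cem`, the ridge penalty) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Y : (n_stimuli, n_voxels)   measured neural responses (e.g., fMRI)
# X : (n_stimuli, n_features) stimulus features (e.g., DNN activations)
# K : number of latent component response dimensions

def fit_cem(X, Y, K, ridge=1.0):
    """Fit Y ~ (X @ B) @ W via reduced-rank ridge regression (a sketch,
    not the authors' method).

    B (n_features, K) is the shared encoding model for the components;
    W (K, n_voxels) are the per-voxel weights on the components.
    """
    n_features = X.shape[1]
    # Full-rank ridge solution, then truncate to rank K via an SVD of the
    # fitted values (the standard reduced-rank regression construction).
    B_full = np.linalg.solve(X.T @ X + ridge * np.eye(n_features), X.T @ Y)
    Y_hat = X @ B_full
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    B = B_full @ Vt[:K].T  # encoding weights for each component dimension
    W = Vt[:K]             # voxel weights on each component
    return B, W

def predict_cem(X_new, B, W):
    """Predict responses to held-out stimuli: the components generalize
    because each is linked to the stimulus via its encoding model."""
    components = X_new @ B  # (n_stimuli_new, K) component responses
    return components @ W   # (n_stimuli_new, n_voxels) predicted responses

# Toy usage with random data, purely to show the shapes involved:
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 512))
Y = rng.standard_normal((200, 1000))
B, W = fit_cem(X, Y, K=5)
Y_pred = predict_cem(X, B, W)
```

In this sketch, the low-rank bottleneck K is what yields a small number of interpretable dimensions, while the feature-to-component map B is what lets those dimensions be evaluated on new stimuli, unlike a purely statistical component decomposition of Y.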

Keywords: encoding model; natural stimuli; latent variable model; deep neural networks
