
Poster C101 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Accounting for the reliability of deep neural networks in representational modeling

Zirui Chen1, Michael Bonner1; 1Johns Hopkins University

In neuroscience, a critical goal is to develop computational models that explain cortical responses to sensory stimuli. It is widely recognized that, when evaluating the similarity between brain and model representations, one must estimate the noise ceiling of the cortical activity measurements. However, one important source of noise has been neglected: the reliability of the models themselves. For deep neural networks, a natural criterion is the consistency of representations learned across different random initializations. Here we demonstrate how to account for the reliability of both brains and models when assessing their similarity, using a metric we call integrated reliability. Using simulated data, we validated integrated reliability as a more accurate measure of the limitations of representational modeling than conventional noise ceiling estimates based on brain reliability alone. Furthermore, through analyses of actual neural networks and brain representations, we show that model reliability is a key constraint on representational modeling results in neuroscience. Our findings underscore the need to identify and mitigate model variability to improve computational models of cortical representation.
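The abstract does not give the formula for integrated reliability, but the idea of combining brain and model reliability can be sketched as follows. This is a hypothetical illustration, not the authors' definition: it estimates each reliability as a Spearman-Brown-corrected split-half correlation (two measurement repeats for the brain, two random seeds for the model) and, as an assumed combination rule, takes their geometric mean as the joint ceiling.

```python
import numpy as np

def split_half_reliability(rep_a, rep_b):
    """Spearman-Brown-corrected correlation between two independent
    measurements of the same representation (e.g., two scan repeats,
    or two random initializations of a network), flattened to vectors."""
    r = np.corrcoef(rep_a.ravel(), rep_b.ravel())[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown correction for split halves

def integrated_reliability(r_brain, r_model):
    """ASSUMED combination rule (geometric mean); the paper defines
    the actual metric. Bounds the attainable brain-model similarity
    by the reliabilities of both measurement and model."""
    return np.sqrt(r_brain * r_model)

# Simulated demo: a shared signal corrupted by independent noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=(100, 50))          # 100 stimuli x 50 features
brain1 = signal + rng.normal(scale=0.5, size=signal.shape)
brain2 = signal + rng.normal(scale=0.5, size=signal.shape)
model1 = signal + rng.normal(scale=0.3, size=signal.shape)  # seed 1
model2 = signal + rng.normal(scale=0.3, size=signal.shape)  # seed 2

r_brain = split_half_reliability(brain1, brain2)
r_model = split_half_reliability(model1, model2)
ceiling = integrated_reliability(r_brain, r_model)
```

Because the geometric mean lies between its two arguments, the integrated ceiling is always tighter than the more reliable of the two sources alone, which is the abstract's central point: an unreliable model caps attainable similarity even with noiseless brain data.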

Keywords: deep neural network; representational modeling; vision; noise ceiling
