
Poster A67 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Let’s disagree to agree: Model identifiability through disagreeability

Brian Cheung1, Erin Grant2, Helen Yang1, Boris Katz1, Tomaso Poggio1; 1Massachusetts Institute of Technology, 2University College London

Recent advancements in artificial intelligence (AI) have led to the development of vision systems that closely resemble biological visual systems in terms of behavior and neural recordings. However, there is increasing empirical evidence that the representations learned by such systems at scale are convergent: AI systems trained on large datasets tend to learn similar representations despite differences in architecture and training procedure. This lack of identifiability via representation and behavior presents a challenge to comparison pipelines commonly used to validate AI systems as models of biological vision, as it limits the ability to reason about the unique computational properties of an individual model. We call for a renewed focus on the stimuli that serve as the input to these pipelines and demonstrate that, for standard naturalistic image datasets used to pre-train and validate vision systems, a minority of stimuli cause maximal disagreement among AI systems even when these systems achieve a high degree of agreement with the target function. We address the identifiability challenge by systematically exploring the narrowed space of these contrastive stimuli to provide the signal necessary to adjudicate between large-scale AI systems as models of biological vision.
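
The selection of contrastive stimuli described above can be illustrated with a minimal sketch (not the authors' code): given each model's predicted labels on a shared stimulus set, score every stimulus by the fraction of model pairs that disagree on it and keep the top-scoring items. The function names, the pairwise-disagreement measure, and the random example data are illustrative assumptions.

```python
# Minimal sketch of disagreement-based stimulus selection (illustrative, not the authors' pipeline).
import numpy as np

def pairwise_disagreement(predictions: np.ndarray) -> np.ndarray:
    """predictions: (n_models, n_stimuli) array of predicted class labels.
    Returns, per stimulus, the fraction of model pairs whose labels differ."""
    n_models, n_stimuli = predictions.shape
    disagree_counts = np.zeros(n_stimuli)
    for i in range(n_models):
        for j in range(i + 1, n_models):
            disagree_counts += predictions[i] != predictions[j]
    n_pairs = n_models * (n_models - 1) / 2
    return disagree_counts / n_pairs

def most_contrastive(predictions: np.ndarray, k: int = 100) -> np.ndarray:
    """Indices of the k stimuli on which the models disagree the most."""
    scores = pairwise_disagreement(predictions)
    return np.argsort(scores)[::-1][:k]

# Hypothetical example: 5 models, 1000 stimuli, 10 classes.
rng = np.random.default_rng(0)
preds = rng.integers(0, 10, size=(5, 1000))
print("Most disagreed-upon stimuli:", most_contrastive(preds, k=10))
```

On real models that largely agree with the target labels, most stimuli would score near zero and the ranking would concentrate on the small contrastive subset the abstract describes.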

Keywords: deep learning; representational similarity; system identification; neural network architecture
