
Poster A115 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Is visual cortex really “language-aligned”? Perspectives from Model-to-Brain Comparisons in Humans and Monkeys on the Natural Scenes Dataset

Colin Conwell1, Emalie MacMahon1, Kasper Vinken2, Saloni Sharma2, Akshay Jagadeesh2, Jacob Prince3, George Alvarez3, Talia Konkle1, Leyla Isik1, Margaret Livingstone2; 1Johns Hopkins University, 2Harvard Medical School, 3Harvard University

Recent progress in multimodal AI and “language-aligned” visual representation learning has re-ignited debates about the role of language in shaping the human visual system. In particular, the emergent ability of “language-aligned” vision models (e.g. CLIP) -- and even pure language models (e.g. BERT) -- to predict image-evoked brain activity has led some to suggest that human visual cortex itself may be “language-aligned” in comparable ways. But what would we make of this claim if the same procedures worked when modeling visual activity in a species that has no language? Here, we deploy controlled comparisons of pure-vision, pure-language, and multimodal vision-language models in predicting human (N=4) and rhesus macaque (N=6; 5 IT, 1 V1) ventral visual activity evoked by the same set of 1000 captioned natural images (the “NSD1000”). Preliminary results reveal markedly similar patterns of aggregate model predictivity for early and late ventral visual cortex in both species. Given that macaques have no language, these results suggest that the ability of language models to predict the human visual system is not necessarily due to “language-alignment” per se, but rather to the statistical structure of the visual world as reflected in language.
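
For readers unfamiliar with model-to-brain comparison, the sketch below illustrates one common way such predictivity scores are computed: a cross-validated ridge regression mapping from model features (e.g. a CLIP image embedding) to image-evoked responses, scored per unit by the correlation between held-out predictions and measured activity. This is a generic illustration under assumed choices (ridge mapping, 5-fold cross-validation, Pearson scoring, synthetic placeholder data); the abstract does not specify the authors' exact procedure.

# Hypothetical sketch of a model-to-brain predictivity analysis of the kind
# described in the abstract. The mapping procedure (cross-validated ridge
# regression) and all data here are assumptions, not the authors' pipeline.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Stand-ins for the NSD1000: 1000 images, candidate-model features (vision,
# language, or multimodal embeddings) and recorded responses (e.g. fMRI voxels
# or IT sites). Both arrays are synthetic placeholders.
n_images, n_features, n_units = 1000, 512, 200
model_features = rng.standard_normal((n_images, n_features))
brain_responses = rng.standard_normal((n_images, n_units))

def cross_validated_predictivity(X, Y, n_splits=5, alphas=np.logspace(-1, 5, 7)):
    """Per-unit Pearson correlation between held-out predictions and responses."""
    preds = np.zeros_like(Y)
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True,
                                     random_state=0).split(X):
        model = RidgeCV(alphas=alphas).fit(X[train_idx], Y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    # Column-wise Pearson r: mean product of z-scored predictions and responses
    zp = (preds - preds.mean(0)) / preds.std(0)
    zy = (Y - Y.mean(0)) / Y.std(0)
    return (zp * zy).mean(0)

scores = cross_validated_predictivity(model_features, brain_responses)
print(f"median predictivity across units: {np.median(scores):.3f}")

Running this same procedure with features from different model families (pure-vision, pure-language, multimodal) on the same image set is what allows the kind of controlled, aggregate comparison of predictivity across species described above.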

Keywords: vision models, language models, monkeys, deep neural networks
