
Poster A94 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

The language network occupies a privileged position among all brain voxels predicted by a language-based encoding model

Eyas Ayesh1, Shailee Jain2, Josleen St. Luce3, Alexander Huth4, Anna Ivanova1; 1Georgia Institute of Technology, 2Massachusetts Institute of Technology, 3University of California San Francisco, 4University of Texas at Austin

We report preliminary results from a project that systematically analyzes the relationship between language voxels and voxels that are well-predicted by a GPT-2-based encoding model (EM). Language voxels are defined as those that respond significantly more to sentences than to non-words, as identified via an auditory language localizer task. We find that ∼half of the language voxels are well-predicted by the EM, although >90% of well-encoded voxels are not language voxels. Language voxels, on average, have significantly better EM performance than non-language voxels, both among all cortical voxels and among well-predicted voxels. Finally, we project the EM voxelwise weights into a 3-PC subspace and find that the language voxels tend to have a positive bias along each PC. Consequently, we identify a plane in the 3-PC space that separates language from non-language voxels among those well-predicted by the EM with >75% accuracy, both within and across subjects, suggesting that language voxels have a clearly identifiable EM signature.
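As a rough illustration of the final analysis step, the sketch below shows how one might project EM voxelwise weights onto their first three principal components and fit a linear separating plane between language and non-language voxels. It is a minimal sketch under stated assumptions, not the authors' pipeline: the variable names, data shapes, synthetic inputs, and choice of classifier are all illustrative.

```python
# Minimal sketch (not the authors' code): project encoding-model (EM) voxelwise
# weight vectors onto their first 3 principal components, then fit a linear
# separating plane between language and non-language voxels in that 3-PC space.
# All names, shapes, and inputs below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical inputs:
#   em_weights  - (n_voxels, n_features) EM regression weights per voxel
#   is_language - (n_voxels,) boolean localizer mask (sentences > non-words),
#                 restricted here to well-predicted voxels
n_voxels, n_features = 5000, 768
em_weights = rng.standard_normal((n_voxels, n_features))
is_language = rng.random(n_voxels) < 0.1

# Project voxelwise weights into a 3-PC subspace.
pca = PCA(n_components=3)
weights_3pc = pca.fit_transform(em_weights)  # shape: (n_voxels, 3)

# Fit a separating plane (linear classifier) in the 3-PC space and estimate
# within-subject classification accuracy with cross-validation.
clf = LinearSVC(class_weight="balanced", max_iter=10000)
acc = cross_val_score(clf, weights_3pc, is_language, cv=5).mean()
print(f"Cross-validated accuracy: {acc:.2f}")
```

Across-subject generalization could be assessed analogously by training the classifier on one subject's PC-projected weights and testing on another's.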

Keywords: Language Network, Encoding Models, fMRI, Naturalistic Paradigm
