
Poster B131 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Ubiquitous visual representations during neural processing of a naturalistic movie

Hannah Small1, Haemy Lee Masson2, Ericka Wodka3,4, Stewart Mostofsky3,4, Leyla Isik1; 1Johns Hopkins University, 2Durham University, 3Kennedy Krieger Institute, 4Johns Hopkins School of Medicine

Social cognition depends on integrating information from both vision and language. However, prior work has mostly studied vision and language separately, without accounting for the rich social visual and verbal semantic signals that co-occur in natural settings. To understand how this information is integrated during natural movie viewing, we fit a voxel-wise encoding model that included low- and mid-level visual and auditory features as well as higher-level social and language features, including the presence of a social interaction and language-model embeddings of the movie's spoken dialogue. We find distinct sets of voxels supporting visual social processing and language processing. Surprisingly, however, we also find that both social and language voxels across cortex are best predicted by visual features extracted from a convolutional neural network (CNN), suggesting that when vision and language are combined in naturalistic settings, visual features dominate neural processing.
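To make the general approach concrete, the sketch below illustrates a generic voxel-wise encoding analysis of the kind described above: cross-validated ridge regression maps each feature space to voxel responses, and each voxel is then labeled by the feature space that predicts it best. This is a minimal illustration under assumed conditions, not the authors' actual pipeline; the toy data, feature dimensions, feature-space names, the helper `encoding_performance`, and the use of scikit-learn's `RidgeCV` are all assumptions for the sake of the example.

```python
# Minimal sketch of a voxel-wise encoding analysis (illustrative only).
# All data here are random stand-ins; real analyses would use movie
# features and fMRI responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_trs, n_voxels = 600, 1000                  # movie timepoints x voxels (toy sizes)
Y = rng.standard_normal((n_trs, n_voxels))   # stand-in for fMRI responses

# One design matrix per feature space, e.g. CNN visual features,
# language-model embeddings, and a social-interaction indicator.
feature_spaces = {
    "cnn_visual": rng.standard_normal((n_trs, 100)),
    "language_embeddings": rng.standard_normal((n_trs, 50)),
    "social_interaction": rng.standard_normal((n_trs, 1)),
}

def encoding_performance(X, Y, n_splits=5):
    """Cross-validated prediction correlation per voxel."""
    scores = np.zeros(Y.shape[1])
    # Unshuffled KFold keeps contiguous time blocks together,
    # limiting temporal leakage between train and test sets.
    for train, test in KFold(n_splits).split(X):
        model = RidgeCV(alphas=np.logspace(-2, 4, 10)).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Pearson r between predicted and observed response, per voxel
        for v in range(Y.shape[1]):
            scores[v] += np.corrcoef(pred[:, v], Y[test, v])[0, 1]
    return scores / n_splits

perf = {name: encoding_performance(X, Y) for name, X in feature_spaces.items()}

# Label each voxel by its best-predicting feature space
names = list(perf)
best = np.argmax(np.stack([perf[n] for n in names]), axis=0)
for i, name in enumerate(names):
    print(f"{name}: best predicts {np.sum(best == i)} voxels")
```

In practice, encoding studies of this kind often use banded ridge regression with per-feature-space regularization and model hemodynamic delays by including time-lagged copies of the features; the single shared ridge penalty here is a simplification.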

Keywords: social processing; naturalistic stimuli; fMRI; encoding; multi-modal processing
