
Poster A155 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Dissecting visual population codes with brain-guided feature accentuation

Jacob S. Prince1, Jeongho Park1, Christopher Hamblin1, George A. Alvarez1, Talia Konkle1; 1Harvard University

A typical view of the world contains diverse objects and people, evoking distributed patterns of activity across visual cortex. How do different functional subregions work together in parallel to process a complex natural scene? Here we introduce brain-guided feature accentuation, which can be applied to encoding models to highlight the specific image content responsible for driving different groups of fMRI voxels. As a proof of concept, we first show that we can attribute the activation of face-selective voxels to human faces, and of scene-selective voxels to the surrounding scene context, all within the same image. Next, we show that these accentuated stimuli can raise (and lower) model-predicted activation levels in category-selective regions of a held-out test subject. Finally, we show that feature accentuation may provide a means to decompose how different scene-selective regions (PPA, RSC) contribute to the representation of individual images. These approaches may eventually help interpret subsets of visual cortex with less-well-understood tuning, and could provide a new method to non-invasively exert control over population activity in human fMRI.
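To make the general idea concrete, below is a minimal, illustrative sketch of brain-guided feature accentuation under stated assumptions; it is not the authors' implementation. It assumes a differentiable image-computable encoding model (here a torchvision ResNet-50 backbone stands in for the feature space) and a hypothetical linear readout `voxel_weights` mapping features to a target voxel group; the image is then optimized by gradient ascent to accentuate the content predicted to drive those voxels, with a simple pixel-space penalty keeping it near the seed image.

```python
# Illustrative sketch only: backbone, readout, and hyperparameters are
# assumptions, not details taken from the poster abstract.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# A pretrained backbone stands in for the encoding model's feature space.
backbone = resnet50(weights=ResNet50_Weights.DEFAULT).to(device).eval()
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])  # global-pooled features

# Hypothetical linear readout for one voxel group (e.g., face-selective voxels),
# which in practice would be fit by regressing features onto measured fMRI responses.
n_features, n_voxels = 2048, 100
voxel_weights = torch.randn(n_features, n_voxels, device=device) / n_features**0.5

def predicted_response(img):
    """Mean predicted activation of the target voxel group for an image batch."""
    feats = feature_extractor(img).flatten(1)   # (B, n_features)
    return (feats @ voxel_weights).mean()       # scalar objective

def accentuate(seed_img, steps=200, lr=0.05, reg=1e-3):
    """Gradient-ascend the image to accentuate content that drives the voxel group,
    while an MSE penalty keeps it close to the original seed image (a stand-in for
    the stronger regularizers a real feature-accentuation pipeline would use)."""
    seed_img = seed_img.to(device)
    img = seed_img.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -predicted_response(img) + reg * F.mse_loss(img, seed_img)
        loss.backward()
        opt.step()
        img.data.clamp_(0, 1)  # keep pixels in a valid range
    return img.detach()

# Usage: start from a natural photo (random noise here as a placeholder).
seed = torch.rand(1, 3, 224, 224)
accentuated = accentuate(seed)
```

In this framing, swapping in the readout for a different voxel group (e.g., scene-selective instead of face-selective) would accentuate different content within the same seed image, which is the comparison the abstract describes.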

Keywords: population coding; feature tuning; interpretability; deep neural networks
