
Poster B61 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Towards the Use of Relative Representations for Lower-Dimensional, Interpretable Model-to-Brain Mappings

T. Anderson Keller1, Talia Konkle1, Colin Conwell2; 1Harvard University, 2Johns Hopkins University

Current model-to-brain mappings are computed over thousands of features. These high-dimensional mappings are computationally expensive and often difficult to interpret, due in large part to the uncertainty surrounding the relationship between the inherent structures of brain and model feature spaces. Relative representations are a recent innovation from the machine learning literature that allow one to translate a feature space into a new coordinate frame whose dimensions are defined by a few select 'anchor points' chosen directly from the original input embeddings themselves. In this work, we show that computing model-to-brain mappings over these new coordinate spaces yields brain-predictivity scores comparable to those of mappings computed over the full feature spaces, but with far fewer dimensions. Furthermore, since these dimensions are effectively the similarities of known inputs to other known inputs, we can now better interpret the structure of our mappings with respect to those known inputs. Ultimately, we provide a proof of concept that demonstrates the flexibility and performance of relative representations on a now-standard benchmark of high-level vision and firmly establishes them as a candidate model-to-brain mapping method worthy of further exploration.
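The core transformation is simple enough to sketch in a few lines. The snippet below is a minimal illustration, not the poster's actual pipeline: it assumes cosine similarity as the anchor comparison (the common choice in the relative-representations literature), randomly sampled anchors, ridge regression as the model-to-brain mapping, and illustrative array shapes.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def relative_representation(embeddings, anchor_embeddings):
    """Re-express each embedding as its cosine similarity to a small
    set of anchor embeddings drawn from the same space."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    A = anchor_embeddings / np.linalg.norm(anchor_embeddings, axis=1, keepdims=True)
    return X @ A.T  # shape: (n_stimuli, n_anchors)

# Hypothetical data: 4096-dim model features for 1000 stimuli, 100 brain voxels.
rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 4096))
brain = rng.standard_normal((1000, 100))

# Randomly sample 50 stimuli as anchors (one of many possible selection strategies).
anchors = features[rng.choice(1000, size=50, replace=False)]

# The mapping is now fit over 50 interpretable dimensions instead of 4096.
rel = relative_representation(features, anchors)
mapping = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(rel, brain)
```

Each regression weight in the fitted mapping now attaches to a named anchor stimulus rather than an anonymous model feature, which is what makes the lower-dimensional mapping interpretable.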

Keywords: Relative Embedding, RDM, Model-Brain Mapping
