
Poster C57 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

A Simple Untrained Recurrent Attention Architecture Aligns to the Human Language Network

Badr AlKhamissi1, Antoine Bosselut1, Martin Schrimpf1; 1EPFL

Certain Large Language Models (LLMs) are effective models of the Human Language Network, predicting most of the explainable variance in brain activity on current datasets. Even with architectural priors alone and no training, model representations remain highly aligned to brain data. In this work, we investigate the key architectural components driving this surprising alignment of untrained models. To estimate LLM-to-brain similarity, we first select language-selective units within an LLM, analogous to how neuroscientists identify the language network in the human brain. We then benchmark the brain alignment of these LLM units across three neural datasets and three metrics. Building a model architecture from the ground up, we identify token aggregation as a key component driving the similarity of untrained models to brain data. Increased aggregation via multi-headed attention significantly increases brain alignment, and, for longer contexts in particular, adaptive aggregation via recurrence further boosts model similarity to neural activity. We summarize our findings in a simple untrained recurrent transformer model that achieves near-perfect brain alignment.
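
The two ingredients described in the abstract, an untrained recurrent attention block and a functional localizer for language-selective units, can be illustrated with a minimal sketch. The class and function names, the hyperparameters (d_model, n_heads, n_steps, k), and the use of a t-like sentences-vs-control contrast are illustrative assumptions inspired by standard fMRI localizer practice, not the configuration reported on the poster.

import torch
import torch.nn as nn

class UntrainedRecurrentAttention(nn.Module):
    """Sketch of an untrained recurrent attention model: one randomly
    initialized multi-head attention block, reused across steps so that
    token aggregation is applied recurrently. All weights remain at
    their random initialization (no training)."""

    def __init__(self, vocab_size: int = 50257, d_model: int = 512,
                 n_heads: int = 8, n_steps: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # random, untrained
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.n_steps = n_steps  # number of recurrent aggregation steps

    @torch.no_grad()
    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)                    # (batch, seq, d_model)
        for _ in range(self.n_steps):                # recurrence: same block reused
            agg, _ = self.attn(x, x, x, need_weights=False)  # multi-head token aggregation
            x = self.norm(x + agg)                   # residual + normalization
        return x                                     # per-token representations


def localize_language_units(acts_sentences: torch.Tensor,
                            acts_control: torch.Tensor,
                            k: int = 1024) -> torch.Tensor:
    """Hypothetical localizer: rank units by a t-like contrast of
    responses to sentences vs. control strings (e.g., non-word lists)
    and keep the top-k, mirroring how the language network is
    identified in human fMRI. Inputs: (n_stimuli, n_units) matrices."""
    diff = acts_sentences.mean(0) - acts_control.mean(0)
    se = torch.sqrt(acts_sentences.var(0) / acts_sentences.shape[0]
                    + acts_control.var(0) / acts_control.shape[0])
    t_values = diff / (se + 1e-8)
    return torch.topk(t_values, k).indices          # indices of selected units

Under this reading, the selected units' responses would then be scored against the neural datasets with the benchmark's alignment metrics, and the recurrence depth (n_steps) is the knob that adapts aggregation to longer contexts.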

Keywords: Brain Alignment, Large Language Models, Human Language Network, Functional Localization
