
Poster A81 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Invariant representations of words are rapidly constructed across the auditory cortical hierarchy

Dana Boebinger1, Guoyang Liao1, Kirill Nourski2, Matthew Howard2, Christopher Garcia2, Thomas Wychowski1, Webster Pilcher1, Sam Norman-Haignere1; 1University of Rochester, 2The University of Iowa

The fundamental computational challenge of auditory word recognition is that instances of the same word vary enormously in their acoustics. The auditory system is thought to construct representations of sound that are invariant to such acoustic diversity by adaptively selecting and removing spectrotemporal acoustic variation. Yet despite the importance of word recognition, little is known about how invariant representations of words are organized in the human auditory cortex, in part due to the coarse spatiotemporal precision of human neuroimaging methods. Here, we developed a novel paradigm that leverages the spatiotemporal precision of human intracranial recordings to measure the strength and timing of invariant and non-invariant representations across many different words and types of acoustic variation. We show that invariant representations of words emerge rapidly after word onset (within 200 ms), increase substantially in strength across the cortical hierarchy for many different types of acoustic variation, and are delayed by ~30 ms compared with non-invariant representations. We show that these effects cannot be explained by standard spectrotemporal filtering models, nor do they require an extended adaptation period. These results indicate that invariant representations of words are computed by fast, hierarchically organized, nonlinear computations that do not depend critically on adaptive spectrotemporal filtering.

Keywords: speech, invariant coding, auditory cortex, intracranial EEG
