Poster B127 in Poster Session B – Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink
Time-yoked integration throughout human auditory cortex
Samuel Norman-Haignere1, Menoua Keshishian2, Orrin Devinsky3, Werner Doyle3, Guy McKhann4, Catherine Schevon4, Adeen Flinker3, Nima Mesgarani2; 1University of Rochester Medical Center, 2Columbia University, 3NYU Langone Medical Center, 4Columbia University Medical Center
The sound structures that convey meaning in speech, such as phonemes and words, vary widely in duration. As a consequence, integrating across absolute time (e.g., 100 ms) and integrating across sound structure (e.g., phonemes) reflect fundamentally distinct neural computations. Auditory and cognitive models have often cast neural integration in terms of time and structure, respectively, but whether neural computations in the auditory cortex are yoked to absolute time or to sound structure remains unknown. To answer this question, we rescaled the duration of all speech structures via time stretching/compression and measured integration windows using a new paradigm that is effective in nonlinear systems. Our approach revealed a clear transition from time-yoked to structure-yoked computation across the layers of a popular deep neural network model trained to recognize structure from natural speech. When applied to spatiotemporally precise intracranial recordings from the human auditory cortex, we observed significantly longer integration windows for stretched vs. compressed speech, but this lengthening was very small (~5%) relative to the change in structure durations, even in non-primary regions strongly implicated in speech-specific processing. These findings demonstrate that time-yoked computations dominate throughout the human auditory cortex, placing strong constraints on neurocomputational models of structure processing.
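As a rough illustration of the stimulus manipulation (a minimal sketch, not the authors' pipeline), the snippet below uniformly rescales the duration of all speech structures using librosa's phase-vocoder time stretching; the input filename and stretch factors are assumptions chosen for illustration.

```python
# Minimal sketch of the stimulus manipulation: uniformly rescaling the duration
# of all speech structures via time stretching/compression. Illustrative only;
# librosa's phase vocoder rescales duration while preserving pitch.
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None)  # hypothetical input recording

# rate > 1 shortens the signal (compression); rate < 1 lengthens it (stretching).
compressed = librosa.effects.time_stretch(y, rate=2.0)  # structures at half duration
stretched = librosa.effects.time_stretch(y, rate=0.5)   # structures at double duration

sf.write("speech_compressed.wav", compressed, sr)
sf.write("speech_stretched.wav", stretched, sr)
```

The ~5% figure can also be read against the two limiting hypotheses with a toy calculation; the baseline window and yoking parameter below are hypothetical values for illustration, not results from the study.

```python
# Toy comparison of the two limiting hypotheses. Under a stretch factor s, a
# purely structure-yoked integration window scales by s, while a purely
# time-yoked window is unchanged. Numbers are illustrative, not study data.
def predicted_window_ms(base_ms: float, stretch: float, yoking: float) -> float:
    """yoking = 0 -> fully time-yoked; yoking = 1 -> fully structure-yoked."""
    return base_ms * (1.0 + yoking * (stretch - 1.0))

base, s = 200.0, 2.0  # hypothetical 200 ms window, 2x stretch
print(predicted_window_ms(base, s, yoking=1.0))   # 400.0 ms: fully structure-yoked
print(predicted_window_ms(base, s, yoking=0.0))   # 200.0 ms: fully time-yoked
print(predicted_window_ms(base, s, yoking=0.05))  # 210.0 ms: ~5% of the structural change
```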
Keywords: temporal integration, speech, electrocorticography, deep neural network