Poster B128 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink
Deep neural networks reveal context-sensitive speech encoding in single neurons of human cortex
Shailee Jain1, Rujul Gandhi1, Matthew K. Leonard1, Edward F. Chang1; 1University of California San Francisco
Speech perception relies on continuously tracking information at different temporal scales and integrating it with past context. Prior studies have established that the human superior temporal gyrus (STG) encodes many different speech features, from acoustic-phonetic content to pitch changes and word surprisal, but the neural mechanisms of contextual integration remain poorly understood. Here we used deep neural networks to investigate context-sensitive speech representations in hundreds of single neurons in STG, recorded using Neuropixels probes. We found that STG neurons show a broad diversity of context-sensitivity, independent of the speech features they are tuned to. We then used population-level decoding to investigate the role of this property in tracking spectrotemporal information, and found that neurons sensitive to long contexts faithfully represented speech over timescales consistent with higher-order word- and phrase-level information (~1 s). Our results suggest that heterogeneity in both context-sensitivity and speech feature tuning enables the human STG to track multiple, hierarchical levels of spoken language representation.
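One common way to operationalize a neuron's context-sensitivity, in the spirit of the approach described above, is to compare linear encoding models built from stimulus features with short versus long temporal context windows: the performance gain from the longer window indexes how much past context the neuron integrates. The sketch below is a minimal, self-contained illustration of that idea with simulated data; all names, the lag lengths, and the ridge setup are assumptions for illustration, not the authors' actual pipeline or DNN features.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 2000, 8  # timepoints, stimulus feature dimensions (illustrative sizes)
feats = rng.standard_normal((T, D))

def lagged(X, n_lags):
    """Stack n_lags past frames of X into one design matrix (zero-padded)."""
    cols = [np.vstack([np.zeros((k, X.shape[1])), X[:X.shape[0] - k]])
            for k in range(n_lags)]
    return np.hstack(cols)

# Simulate a neuron whose response integrates ~10 frames of past context.
w_true = rng.standard_normal(10 * D)
y = lagged(feats, 10) @ w_true + rng.standard_normal(T)

def ridge_test_corr(X, y, alpha=1.0):
    """Fit ridge regression on the first half; return correlation on the second."""
    n = len(y) // 2
    Xtr, Xte, ytr, yte = X[:n], X[n:], y[:n], y[n:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, yte)[0, 1]

r_short = ridge_test_corr(lagged(feats, 2), y)   # short-context model
r_long = ridge_test_corr(lagged(feats, 10), y)   # long-context model
context_sensitivity = r_long - r_short           # gain from added context
print(f"short={r_short:.2f}  long={r_long:.2f}  gain={context_sensitivity:.2f}")
```

For this simulated neuron the long-context model predicts held-out responses better than the short-context one, so the gain is positive; a context-insensitive neuron would show little or no gain.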
Keywords: speech perception, single neurons, deep neural networks