Poster B124 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Sub-lexical processing of audiovisual speech constrains lexical competition.

Aaron Nidiffer¹, Edmund Lalor¹; ¹University of Rochester

As we listen to natural connected speech, we effortlessly transform speech acoustics into linguistic units. As that transformation begins, so does a lexical inference process that updates as each phoneme is uttered. In noisy environments, this process can be disrupted by poor inference at the phoneme level, leading to increased lexical competition and reduced word comprehension. Seeing a speaker’s face can restore comprehension, in part by constraining the competition to words consistent with both auditory and visual speech. There is evidence that vision can constrain inference at the lexical level, but it is unknown whether those effects arise from sub-lexical interactions or whether constraint happens only after auditory and visual lexical processes are complete. In this study, we fit and evaluate EEG encoding models of lexical competition that vary with acoustic uncertainty, visual uncertainty, and the constraint imposed by their set intersection. Using linear modeling of the EEG, we find evidence that audiovisual lexical processing is affected by visual constraint as a word unfolds.
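The linear EEG encoding approach described above is commonly implemented as time-lagged (TRF-style) ridge regression. The sketch below is a minimal, assumption-laden illustration of that general technique, not the authors' actual pipeline; the sampling rate, lag window, regularization strength, and regressor names (an acoustic envelope plus a hypothetical lexical-competition measure such as cohort entropy) are all placeholders.

```python
# Minimal TRF-style linear encoding model: ridge regression from
# time-lagged stimulus features to EEG. Illustrative sketch only.
import numpy as np
from sklearn.linear_model import Ridge

fs = 128                               # EEG sampling rate (Hz), assumed
lags = np.arange(0, int(0.6 * fs))     # model lags spanning ~0-600 ms

def lag_matrix(stim, lags):
    """Design matrix of time-lagged copies of the stimulus features."""
    n_samples, n_feat = stim.shape
    X = np.zeros((n_samples, n_feat * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * n_feat:(i + 1) * n_feat] = stim[:n_samples - lag]
    return X

# stim: (time, features) regressors, e.g. acoustic envelope plus a
# lexical-competition regressor; eeg: (time, channels) responses.
stim = np.random.randn(10 * fs, 2)     # placeholder stimulus features
eeg = np.random.randn(10 * fs, 64)     # placeholder 64-channel EEG

X = lag_matrix(stim, lags)
model = Ridge(alpha=1.0).fit(X, eeg)   # fits one TRF per EEG channel
pred = model.predict(X)

# Per-channel encoding accuracy as predicted-vs-recorded correlation.
r = [np.corrcoef(pred[:, ch], eeg[:, ch])[0, 1]
     for ch in range(eeg.shape[1])]
```

In practice, competing model variants (e.g. with and without a visually constrained competition regressor) would be compared on held-out prediction accuracy; which variants the authors compared is described only at the level of the abstract above.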

Keywords: audiovisual speech; EEG; natural language processing; lexical selection