Speakers
Description
Humans possess a remarkable ability to comprehend spoken language during face-to-face interactions. This is made possible by the brain's capacity to exploit linguistic and extra-linguistic cues, continuously integrating incoming information and predicting upcoming elements at multiple levels [1]. Previous research has shown dissociable neural signatures of meaning and sound predictions for upcoming words [2]. Moreover, an improvement has been reported in the processing of multisensory signals, in which visual mouth cues are combined with speech, relative to auditory input alone [3]. However, the extent to which observable mouth movements influence predictions during naturalistic speech remains largely unexplored. This study investigates the impact of audiovisual integration on predictive mechanisms, focusing on whether mouth movements contribute to predictions. EEG data will be recorded from 25 Italian participants watching continuous Italian narrative videos with the mouth either visible or covered by a grey rectangle. We will examine the linear mapping between brain signals and visemic (visemes are the visual counterparts of phonemes) and semantic surprisal. Surprisal quantifies how unexpected an element is in its context, formally the negative log-probability of that element given the preceding material. This will allow us to shed light on whether 1) words are predicted differently depending on the availability of mouth cues, 2) articulatory information conveyed through visemes influences prediction, and 3) these phenomena persist even when the mouth is not visible.
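For concreteness, the sketch below illustrates the two planned computations: estimating word-level semantic surprisal from a pretrained causal language model, and fitting a linear encoding model mapping a lagged surprisal regressor to EEG. It is a minimal illustration of the approach, not the registered pipeline: the model name (GPT-2 stands in for an Italian language model), the ridge-regression instantiation of the linear mapping, and the data shapes are all placeholder assumptions.

```python
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from transformers import AutoModelForCausalLM, AutoTokenizer

# Semantic surprisal from a causal language model.
# GPT-2 is a placeholder; an Italian LM would be used in practice.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def word_surprisal(words):
    """Surprisal of each word given its left context: -log2 p(word | context)."""
    surprisals = []
    for i, word in enumerate(words):
        context = " ".join(words[:i]) or tokenizer.bos_token
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        word_ids = tokenizer(" " + word, return_tensors="pt").input_ids
        ids = torch.cat([ctx_ids, word_ids], dim=1)
        with torch.no_grad():
            logits = model(ids).logits
        # Log-probability assigned to each token of the word by the
        # position immediately preceding it.
        logp = torch.log_softmax(logits[0, :-1], dim=-1)
        tok_logp = logp[torch.arange(ctx_ids.shape[1] - 1, ids.shape[1] - 1),
                        ids[0, ctx_ids.shape[1]:]]
        surprisals.append(-tok_logp.sum().item() / np.log(2))  # bits
    return np.array(surprisals)

# Linear mapping (a TRF-style encoding model) with hypothetical shapes:
# X is a time-lagged surprisal regressor (n_samples, n_lags),
# Y is the EEG recording (n_samples, n_channels).
def fit_encoding_model(X, Y, alpha=1.0):
    ridge = Ridge(alpha=alpha)
    # Cross-validated R^2 per channel as predictive accuracy.
    scores = [cross_val_score(ridge, X, Y[:, ch], cv=5).mean()
              for ch in range(Y.shape[1])]
    return np.array(scores)
```

Comparing these per-channel scores between mouth-visible and mouth-covered conditions would then index how much the availability of mouth cues changes the neural tracking of surprisal.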
[1] Bar (2009). Philos Trans R Soc Lond B Biol Sci. https://doi.org/10.1098/rstb.2008.0310
[2] Heilbron et al. (2022). PNAS. https://doi.org/10.1073/pnas.2201968119
[3] Crosse et al. (2015). J Neurosci. https://doi.org/10.1523/JNEUROSCI.1829-15.2015
If you're submitting a poster, would you be interested in giving a blitz talk? Yes