Speaker
Description
Visual and auditory signals from a common source are typically integrated into a unified percept despite slight asynchronies, a phenomenon central to audiovisual (AV) speech perception. While this integration facilitates speech comprehension, it remains unclear how emotional context affects crossmodal binding. In two experiments, participants performed either a simultaneity judgment (SJ; Experiment 1, N = 45) or a temporal order judgment (TOJ; Experiment 2, N = 60) task using AV speech stimuli. Actors pronounced the word “no” with facial expressions and intonations that conveyed positive, negative, or neutral emotions. The auditory and visual speech components were misaligned across a range of stimulus-onset asynchronies (SOAs). Emotional (positive and negative) stimuli yielded larger just noticeable differences (JNDs) than neutral ones, indicating reduced temporal sensitivity and a wider temporal binding window. Emotional stimuli also shifted the point of subjective simultaneity (PSS) closer to zero, suggesting a reduction of the typical visual-leading bias and improved synchrony perception. Notably, the SJ task showed larger JNDs and greater PSS variability across emotions than the TOJ task, implying that SJ responses rely more on subjective judgment, whereas the TOJ task engages more precise temporal processing. These findings reveal that emotion influences AV speech integration by widening the temporal window for binding and mitigating temporal biases, especially when synchrony is inferred indirectly.
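
For readers unfamiliar with how these measures are derived, the sketch below shows a conventional way to estimate the PSS and JND from TOJ responses by fitting a cumulative Gaussian psychometric function across SOAs. The SOA values and response proportions are made up for illustration, and the abstract does not specify the fitting procedure actually used in the experiments.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: stimulus-onset asynchronies (ms; negative = audio leads)
# and the proportion of "visual first" responses observed at each SOA.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_visual_first = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.93, 0.97])

def cumulative_gaussian(soa, pss, sigma):
    """Psychometric function: probability of a 'visual first' response at a given SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Fit the psychometric function to the response proportions.
(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_visual_first, p0=[0.0, 100.0])

# PSS: the SOA at which both orders are reported equally often (the 50% point).
# JND: half the SOA difference between the 25% and 75% points of the fitted curve,
# a common operationalization of temporal sensitivity.
jnd = sigma * norm.ppf(0.75)

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

For the SJ task, the analogous analysis fits a Gaussian-shaped function to the proportion of "simultaneous" responses across SOAs, with the PSS at the peak and the JND derived from the curve's width.
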
| If you're submitting a symposium, or a talk that is part of a symposium, is this a junior symposium? | No |
|---|---|