Description
The interaction of sensory-motor features in the emergence of categorical audio-visual speech perception remains underexplored. Recent intracranial electrocorticography recordings suggest that discrete portions of the dorsal premotor and motor cortices might represent acoustic phonetic features of speech. However, it is unclear whether purely acoustic or visual features are represented in these regions, since phonetic features are typically described in articulatory terms.
This pilot study aimed to: (1) characterize the interaction between sensory-motor phonological features in the psychological organization of speech by comparing perceptual and categorical similarity patterns in native Italian speakers, and (2) test the preliminary hypothesis that audio-visual speech multimodally recruits auditory, visual, and pre-/motor regions. Twenty-four participants identified and categorized audio-visual consonant-vowel signals based on phonetic or visemic features before undergoing an fMRI oddball task with the same stimuli.
Behaviourally, both tasks showed reliance on visual (lip-shape) information, suggesting that visual information plays a key role in speech perception and categorization regardless of the sensory modality in which speech is conveyed. Neurally, both auditory and visual speech information recruited auditory and visual regions in the superior temporal gyrus and at the middle occipito-temporal junction, as well as a portion of the pre-/motor cortex overlapping with area 55b, recently described as an area that participates in the coordination of complex behavior regardless of the specific body parts involved. These observations represent a first attempt to better characterize the nature of phonological representations by taking into account the interactions between the sensory-motor features of speech and by extending the research scope beyond unisensory auditory regions.