Jul 19 – 22, 2022
SISSA - International School for Advanced Studies
Europe/Rome timezone

The role of non-acoustic sublexical probabilistic phonotactic cues during speech perception

Jul 21, 2022, 3:10 PM
20m
Aula Magna (SISSA - International School for Advanced Studies)

Via Beirut 2–4, I–34151 Grignano, Trieste (TS), Italy
Talk | Predictive Processes and Statistical Learning | Speech Processing

Speaker

Ms Valeriya Tolkacheva (Queensland University of Technology, School of Psychology and Counselling, Queensland, Australia)

Description

All current models of speech perception assume that language processing is grounded in the acoustic-phonetic properties of the speech signal (e.g., Mattys et al., 2012). However, considerable empirical evidence shows that knowledge of non-acoustic, probabilistic sublexical cues can influence speech perception (e.g., Auer & Luce, 2005). The present study investigated the contribution of probabilistic phonotactic cues to perceptual learning of noise-vocoded speech. In Experiment 1, listeners' report accuracy improved from 6% to 12% over a series of 140 vocoded sentences. In the next three experiments, using a probe-prime-probe design with congruent, incongruent, and neutral conditions, participants were presented with three types of noise-vocoded probe sentences: real English sentences, nonsense sentences (containing real English words but semantically empty), and pseudo-sentences (containing nonwords). In the nonsense and pseudo-sentence experiments, the words and nonwords were matched to the reference words in phonotactic probability but mismatched acoustically. For real sentences, we observed accuracy rates of 95% in the congruent condition. Crucially, despite the absence of matching lexical content, accuracy rates for vocoded nonsense and pseudo-sentences were also high (70.4% and 74%, respectively), indicating that participants were able to assemble lexical information from the context of the prime sentence based solely upon the matched probabilistic phonotactic cues. We argue that these novel findings demonstrate that perceptual learning of noise-vocoded speech is largely achieved by a statistical learning mechanism operating at the level of non-acoustic, sublexical probabilistic phonotactic information. We discuss how models of speech perception may be enhanced by including this alternative mechanism for accessing meaning.
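
Noise-vocoding degrades speech by discarding its spectral fine structure while preserving the slow amplitude envelope in each frequency band, which is why intelligibility is initially low and improves with perceptual learning. The sketch below illustrates a standard channel vocoder of this kind; the function name, channel count, filter design, and frequency range are illustrative assumptions, not the stimulus parameters used in this study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, f_lo=70.0, f_hi=5000.0):
    """Channel-vocode a waveform: keep each band's amplitude envelope,
    but replace its fine structure with band-limited noise.
    (Illustrative parameters only; not the study's stimulus settings.)"""
    # Logarithmically spaced band edges between f_lo and f_hi (f_hi < fs/2)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.randn(len(speech))
    vocoded = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)       # speech restricted to this band
        envelope = np.abs(hilbert(band))      # slow amplitude envelope
        carrier = sosfiltfilt(sos, noise)     # noise restricted to the same band
        vocoded += envelope * carrier         # envelope-modulated noise band
    return vocoded / np.max(np.abs(vocoded))  # normalise peak amplitude

# Example: vocode a 1-second synthetic tone sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 220 * t)          # stand-in for a recorded sentence
vocoded = noise_vocode(signal, fs, n_channels=4)
```

With few channels (e.g., four, as above), the output retains the rhythm and amplitude contour of the original utterance but little spectral detail, so listeners must rely on learned sublexical and lexical knowledge to recover the words.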

Primary authors

Ms Valeriya Tolkacheva (Queensland University of Technology, School of Psychology and Counselling, Queensland, Australia)
Dr Sonia L.E. Brownsett (The University of Queensland, School of Health and Rehabilitation Sciences, Queensland, Australia; Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, Victoria, Australia)
Prof. Katie L. McMahon (Herston Imaging Research Facility, Royal Brisbane & Women's Hospital, Queensland, Australia; Queensland University of Technology, School of Clinical Sciences and Centre for Biomedical Technologies, Queensland, Australia)
Prof. Greig I. de Zubicaray (Queensland University of Technology, School of Psychology and Counselling, Queensland, Australia)
