Description
All current models of speech perception assume that language processing is grounded in the acoustic-phonetic properties of the speech signal (e.g., Mattys et al., 2012). However, considerable empirical evidence shows that knowledge of non-acoustic, probabilistic sublexical cues can influence speech perception (e.g., Auer & Luce, 2005). The present study investigated the contribution of probabilistic phonotactic cues to perceptual learning of noise-vocoded speech. In Experiment 1, listeners’ reported accuracy improved from 6% to 12% over a series of 140 vocoded sentences. In the next three experiments, we used a probe-prime-probe design with congruent, incongruent, and neutral conditions, presenting participants with three types of noise-vocoded probe sentences: real English sentences, nonsense sentences (containing real English words but semantically empty), and pseudo-sentences (containing nonwords). In the nonsense- and pseudo-sentence experiments, the words and nonwords were matched to the reference words in phonotactic probability but mismatched acoustically. For real sentences, we observed accuracy rates of 95% in the congruent condition. Crucially, despite the absence of matching lexical content, accuracy rates for vocoded nonsense and pseudo-sentences were also high (70.4% and 74%, respectively), indicating that participants assembled lexical information from the context of the prime sentence based solely on the matched probabilistic phonotactic cues. We argue that these novel findings demonstrate that perceptual learning of noise-vocoded speech is largely achieved by a statistical learning mechanism operating at the level of non-acoustic, sublexical probabilistic phonotactic information. We discuss how models of speech perception may be enhanced by including this alternative mechanism for accessing meaning.