Language acquisition relies on environmental stimulation during sensitive periods. Native phoneme categorisation is acquired through sensory experience during the first year of life. In the case of congenital deafness, however, infants can rely only on visual input and therefore cannot distinguish consonant pairs, such as /b/ and /p/, that differ only in their acoustic properties. By testing children with cochlear implants (CIs), we assessed whether early auditory experience is needed to encode specific phonemic features. Using EEG, we measured the neural response to continuous speech in 37 hearing children (HC) and 32 CI children, half with congenital deafness (CD) and half with acquired deafness (AD). Only the CD participants were deprived of auditory input during the first year of life.
We employed multivariate lagged regression to estimate single-participant encoding models and predict the EEG from a selection of stimulus features: the sound envelope, phoneme onsets, and, selectively for consonants sharing the same visual features (manner and place), voicing. Preliminary results suggest that while all groups benefited similarly from phoneme-onset information, voicing processing was affected by the lack of auditory input in the first year of life. Adding voicing improved the prediction of neural activity in both the HC and AD groups but not in the CD group, whose voicing gain was lower than that of HC.
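For illustration, below is a minimal sketch of such a lagged-regression (temporal response function) encoding model in Python with ridge regularisation on synthetic data. The helper names (lagged_design, fit_trf, predict_score), the 0-400 ms lag range, the penalty strength, and all data shapes are assumptions made for the example, not the study's actual pipeline.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a time-lagged design matrix.

    stim : (n_times, n_features) stimulus representation
           (e.g., envelope, phoneme onsets, voicing).
    lags : iterable of non-negative integer sample lags
           (stimulus at time t - lag predicts EEG at time t).
    Returns an (n_times, n_features * n_lags) matrix.
    """
    n_times, n_feat = stim.shape
    X = np.zeros((n_times, n_feat * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        shifted[:lag] = 0.0  # zero out samples with no valid stimulus history
        X[:, i * n_feat:(i + 1) * n_feat] = shifted
    return X

def fit_trf(stim_train, eeg_train, lags, alpha=1.0):
    """Ridge-regularised TRF weights: w = (X'X + alpha*I)^(-1) X'y."""
    X = lagged_design(stim_train, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg_train)

def predict_score(stim_test, eeg_test, w, lags):
    """Pearson r between predicted and measured EEG, per channel."""
    pred = lagged_design(stim_test, lags) @ w
    r = [np.corrcoef(pred[:, c], eeg_test[:, c])[0, 1]
         for c in range(eeg_test.shape[1])]
    return np.asarray(r)

# --- toy usage with random placeholder data (hypothetical sizes) ---
fs = 64                                   # Hz, downsampled EEG
lags = range(0, int(0.4 * fs))            # 0-400 ms stimulus-to-brain lags
rng = np.random.default_rng(0)
stim = rng.standard_normal((fs * 60, 3))  # envelope, phoneme onset, voicing
eeg = rng.standard_normal((fs * 60, 32))  # 32 EEG channels, 60 s

half = stim.shape[0] // 2                 # simple train/test split
w_full = fit_trf(stim[:half], eeg[:half], lags)        # full model
w_red = fit_trf(stim[:half, :2], eeg[:half], lags)     # model without voicing
gain = (predict_score(stim[half:], eeg[half:], w_full, lags)
        - predict_score(stim[half:, :2], eeg[half:], w_red, lags))
```

Comparing the prediction accuracy of the full model against a reduced model that omits voicing yields a per-channel prediction gain, analogous to the "voicing gain" compared across groups above.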
The data showed that low-level information associated with all types of phoneme onsets is encoded regardless of the group's auditory experience. Conversely, the encoding of higher-level acoustic information depends on a sensitive period in the first year of life during which functional hearing must be available.