Speaker
Mr
Yair Lakretz
(Tel-Aviv University)
Description
Theories of phoneme representation have been based on the notion of 'subphonemic features', i.e. variables such as place of articulation, voicing and nasalization, some binary and some multi-valued, that can be taken to characterize the production, and with some modifications also the perception, of different phonemes. However, perceptual confusion rates between phonemes are not simply explained by the number of subphonemic features on which they differ. Moreover, assuming a discrete nature for these variables is incongruent both with the continuous, analog neural processes that underlie the production and perception of phonemes, and with the remarkable cross-linguistic differences observed, which make the notion of a universal phonemic space rather implausible. As a first step towards a plausible neuronal theory of how phoneme representations may self-organize in each individual during language learning, we describe methods to derive, from behavioral or neural data, distinct 'weights' for different features. Such weights provide a data-driven metric on the perceptual or motor phoneme manifold. We find that these weights differ by more than an order of magnitude, and that they differ across languages, pointing to the need to go beyond the classical digital description of phonemes.
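The abstract does not specify the authors' estimation procedure, but the idea of deriving per-feature weights from confusion data can be sketched as follows: regress a confusion-derived dissimilarity between phoneme pairs onto per-feature mismatches, so that the fitted weights define a data-driven (rather than uniform) feature metric. The feature table, dissimilarity values, and least-squares fit below are all illustrative assumptions, not the authors' actual data or method.

```python
import numpy as np

# Toy binary feature table: rows = phonemes, columns = subphonemic features
# (voicing, nasality, labial place) -- values chosen only for this sketch.
phonemes = ["p", "b", "m", "t"]
features = np.array([
    [0, 0, 1],  # p: voiceless, oral,  labial
    [1, 0, 1],  # b: voiced,    oral,  labial
    [1, 1, 1],  # m: voiced,    nasal, labial
    [0, 0, 0],  # t: voiceless, oral,  coronal
], dtype=float)

# Synthetic dissimilarities for each unordered phoneme pair, standing in for
# something like -log(confusion rate); the numbers are made up for the example.
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
dissim = np.array([1.0, 3.5, 0.6, 2.4, 1.7, 4.0])

# Design matrix: one row per pair, one column per feature,
# with entry |f_i - f_j| marking whether the pair mismatches on that feature.
X = np.array([np.abs(features[i] - features[j]) for i, j in pairs])

# Least-squares fit of weights w such that X @ w approximates dissim.
# Unequal fitted weights indicate that features contribute unequally
# to perceptual distance, unlike a plain feature-mismatch count.
w, *_ = np.linalg.lstsq(X, dissim, rcond=None)
print("fitted feature weights:", w)
```

If all features mattered equally, the fitted weights would come out roughly uniform; large spread among them is the signature, in this toy setting, of the order-of-magnitude weight differences the abstract reports.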
Primary author
Mr
Yair Lakretz
(Tel-Aviv University)
Co-authors
Prof.
Alessandro Treves
(SISSA)
Dr
Evan-Gary Cohen
(Tel-Aviv University, Israel)
Prof.
Gal Chechik
(Bar-Ilan University, Israel)
Prof.
Naama Friedmann
(Tel-Aviv University, Israel)