Speaker
Description
From the first hours of life, human newborns are exposed to a multitude of stimuli and must quickly learn to integrate visual and auditory input. Cross-modal integration operates not only at the perceptual level, where low-level features of stimuli such as shape, texture, and temporal occurrence are recognized, but also at a more abstract level. For instance, research has shown that human newborns can discern congruent from incongruent conditions in numerosity when presented simultaneously with visual and auditory stimuli. Indeed, newborns as young as 50 hours old exhibit prolonged attention towards the screen when the number of dots in a visual set corresponds to the number of syllables in a concurrent auditory stream, compared to when such correspondence is absent. It has been hypothesized that this ability reflects an inherent abstract numerical representation present from birth. Taking advantage of a novel EEG frequency-tagging paradigm, we adapted the original behavioural study to investigate the neural basis of this phenomenon, aiming to understand how the brain processes congruent and incongruent audio-visual numerical information. Preliminary results will be presented to discuss not only the neural underpinnings of the so-called number sense, but also to show how critical auditory stimulation is for supporting complex abstract representations in the brains of newborns.
If you're submitting a symposium talk, what's the symposium title? | The ontogenetic necessity to extract information from the auditory environment |
|---|---|
If you're submitting a symposium, or a talk that is part of a symposium, is this a junior symposium? | Yes |