Speaker
Description
Our brain deals with visual impressions that change continually; nonetheless, we experience the world as fairly stable. The neural basis of size constancy, i.e., the ability to achieve a stable experience of size even though the images projected onto the retina vary with viewing distance, remains largely unknown. In this study, we explored the temporal dynamics of the neural networks responsible for the size constancy of 3D objects when the object's size is judged compared to when the object is grasped by selecting an appropriate hand aperture. To this aim, we recorded electroencephalography from 64 channels while a motion capture system tracked arm movements. Furthermore, we investigated the role of multisensory integration in size constancy under conditions where vision was restricted, so that it did not prevail over other sensory modalities. Specifically, by systematically removing visual depth cues, we assessed whether the contribution of proprioceptive distance cues changed as a function of the visuomotor task. Preliminary results highlighted larger early components for big target objects than for small target objects, regardless of the task. We also found task-related differences in a later time window, with a P2 component larger for size judgment than for grasping. These findings provide new evidence that size constancy for real 3D objects at actual distances takes place at early processing stages and that early visual processing remains unaffected by task demands.
Question | Answer
---|---
If you're submitting a poster, would you be interested in giving a blitz talk? | Yes
If you're submitting a symposium, or a talk that is part of a symposium, is this a junior symposium? | No