Description
Recent studies investigating word processing have highlighted the role of both linguistic (Anceresi et al., 2024) and sensory information, particularly visual information (Petilli et al., 2021), in shaping semantic representations. The present study aims to elucidate the relative contributions of linguistic and visual information to the neural representations of words by re-analyzing a series of existing EEG/ERP datasets covering different tasks. Using Representational Similarity Analysis (RSA), we test whether the similarity patterns of brain activation elicited by words can be predicted by their linguistic similarity and by the visual similarity of the corresponding concepts, both extracted from computational models (Petilli et al., 2021). Key questions include: i) does visual information contribute to semantic memory over and above linguistic experience? ii) do these effects have similar timing and topographies? iii) is visual information accessed even in the absence of a semantic task? iv) do we obtain similar results when using an approach based on sensorimotor norms (e.g., Binder et al., 2016)? Findings across all datasets, tasks, and approaches will be reported transparently.
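
To make the logic of the comparison concrete, below is a minimal RSA sketch, not the authors' pipeline: it uses synthetic stand-ins for the per-word ERP patterns and the linguistic and visual embeddings, and all variable names (e.g. erp_patterns, ling_vectors, vis_vectors) are hypothetical. It illustrates the zero-order model-to-brain RDM correlations and a partial correlation that assesses the visual contribution over and above the linguistic one (question i).

```python
# Minimal RSA sketch (illustrative only; synthetic data stands in for
# real ERP recordings and model-derived embeddings).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr, pearsonr, rankdata

rng = np.random.default_rng(0)
n_words, n_channels = 40, 64

# Hypothetical per-word data: ERP topographies at one time point,
# linguistic embeddings (e.g., distributional vectors), and visual
# embeddings (e.g., CNN features of the corresponding concepts).
erp_patterns = rng.standard_normal((n_words, n_channels))
ling_vectors = rng.standard_normal((n_words, 300))
vis_vectors = rng.standard_normal((n_words, 512))

# Representational dissimilarity matrices (condensed upper triangles):
# 1 - Pearson correlation between each pair of word-level patterns.
rdm_brain = pdist(erp_patterns, metric="correlation")
rdm_ling = pdist(ling_vectors, metric="correlation")
rdm_vis = pdist(vis_vectors, metric="correlation")

# Zero-order RSA: rank-correlate each model RDM with the brain RDM.
rho_ling, _ = spearmanr(rdm_brain, rdm_ling)
rho_vis, _ = spearmanr(rdm_brain, rdm_vis)

def residualize(y, x):
    """Residuals of y after least-squares regression on x (+ intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial Spearman RSA: Pearson correlation between the rank-transformed
# brain and visual RDMs after regressing out the rank-transformed
# linguistic RDM from both.
rb, rv, rl = (rankdata(r) for r in (rdm_brain, rdm_vis, rdm_ling))
partial_vis, _ = pearsonr(residualize(rb, rl), residualize(rv, rl))

print(f"linguistic RSA: {rho_ling:.3f}  visual RSA: {rho_vis:.3f}  "
      f"visual | linguistic: {partial_vis:.3f}")
```

In a time-resolved variant of this scheme, the brain RDM would be recomputed at each time point, yielding a model-correlation time course from which the timing questions (ii, iii) could in principle be addressed.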