TEX2022: Bringing together Predictive Processes and Statistical Learning

Europe/Rome
Aula Magna (SISSA - International School for Advanced Studies)

Via Beirut 2–4, I-34151 Grignano, Trieste (TS), Italy
Yamil Vidal (SISSA), Davide Crepaldi (SISSA), Stefano Liberati (SISSA), Vania Vellucci (SISSA)
Description

Although the research communities studying Predictive Processes and Statistical Learning grew out of different traditions, they share a core interest in how we, and our brains, pick up on patterns in the environment.

In this event, we will bring both communities together, with the goal of exchanging ideas in a rich and relaxed scientific environment. We will host lectures and round tables by our invited speakers. Attendees will have a chance to present either talks or posters, and there will be plenty of time for social activities and informal discussions.

Invited Speakers:

  • Floris de Lange - Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen
  • Nicola Molinaro - BCBL, Basque Center on Cognition, Brain and Language
  • Caspar Schwiedrzik - Neural Circuits and Cognition Lab, European Neuroscience Institute Goettingen. Perception and Plasticity Group, German Primate Center - Leibniz Institute for Primate Research
  • Lucia Melloni - Neural circuits, consciousness and cognition research group, Max Planck Institute for Empirical Aesthetics. Department of neurology, NYU Grossman School of Medicine
  • Maria Chait - Ear Institute, University College London
  • Lori Holt - Department of Psychology, Carnegie Mellon University
  • Ram Frost - Department of Psychology, The Hebrew University of Jerusalem
  • Clare Press - Department of Psychological Sciences, Birkbeck, University of London. Wellcome Centre for Human Neuroimaging, University College London

Registration

In-person attendance is limited to 100 participants. Please follow the link on the left side or at the bottom of the page to register. All presentations will be given in person.

People not planning to attend in person can instead participate over Zoom. To participate over Zoom, please register using the following link

(if you don't plan to attend in person, please do not complete the standard registration)

Abstract Submission:

Abstracts submitted to this event will be evaluated by an independent Scientific Committee. Each abstract will be anonymized and scored by two members of the committee. All presentations will be done in person.

Scientific Committee:

  • Alejandro Blenkmann - Front Neurolab - RITMO, Department of Psychology, University of Oslo
  • Louisa Bogaerts - Department of Experimental Psychology, Ghent University
  • Fabienne Chetail - Lab Cognition Langage et Développement (LCLD), CRCN, Université Libre de Bruxelles
  • Abhishek Banerjee - Adaptive Decisions Lab, Biosciences Institute, Newcastle University (https://www.adaptive-decisions.com/)
  • Emma Holmes - Department of Speech Hearing and Phonetic Sciences, UCL. Wellcome Centre for Human Neuroimaging, UCL
  • Dezső Németh - Lyon Neuroscience Research Center (CRNL), Université Claude Bernard Lyon 1 (https://www.memoteam.org/)

Important Dates

  • Abstract submission deadline: 10th of June
  • Notification of abstract acceptance: 27th of June
  • Registration deadline: 10th of July

Shuttle Service

We will offer a shuttle service connecting the city (two different stops) to the venue.

Pick-up times Tuesday 19th:

  • Riva del Mandracchio, 4 (in front of the hotel Savoia Excelsior): 8:30
  • Piazza Guglielmo Oberdan: 8:40

Pick-up times rest of the days:

  • Riva del Mandracchio, 4 (in front of the hotel Savoia Excelsior): 8:45
  • Piazza Guglielmo Oberdan: 8:55

Return time every day:

  • Via Beirut, 2–4 (Venue): 17:30

Alternatively, you can use public transport (bus number 6), but we strongly recommend using the shuttle service, at least on the first day, so that you can find the venue easily.

Venue

The event will take place at the Aula Magna of SISSA, located at Via Beirut, 2–4. Please note that this is NOT at the main facilities of SISSA, which are located in Via Bonomea.

Map

Here you can find a map showing the stops of the shuttle and the venue: Map

Social dinner on Wednesday

We will have dinner at Osmiza Verginella Dean (Contovello, 460, 34151 Trieste TS). The place is reachable with public buses 42 and 44. We will meet at Piazza Oberdan at 19:15 to take the bus together.

You can find the itinerary here: Map

Got lost? No worries. Call Yamil: +39 331 893 9111

NOTE: Bus tickets cannot be bought on the bus. They can be bought at shops called "Tabaccheria", or on your phone with the app "Tpl Fvg" (link). Once on the bus, you should validate your ticket either by stamping it or by scanning a QR code with your phone.

Social trip to Piran (Slovenia)

After four days of discussions, on Saturday 23rd, we will have a well-deserved little social trip. We will spend the day in the charming town of Piran, on the coast of Slovenia (map). We will travel by bus.

Pick-up times to go to Piran:

  • Piazza Guglielmo Oberdan: 9:00
  • Riva del Mandracchio, 4 (in front of the hotel Savoia Excelsior): 9:10

Return to Trieste:

  • 17:00, arriving around 18:00

Covid Measures

Body temperature will be measured at the venue. If your temperature is above 37.5 °C, it won't be possible to enter the building. If you experience symptoms that could indicate an infection, test positive for Covid, or have been in close contact with a person who tested positive for Covid, please let us know and do not attend the event in person. Keep in mind that the event can still be followed online over Zoom.

Accommodation in Trieste

We will provide a shuttle service connecting the city and the venue. We would like to recommend the following affordable accommodations:

  • Ostello Hotello
  • Hotel Portacavana
  • Ostello ControVento
  • Albergo Alla Posta
  • Residence del Mare (discounted prices if you mention that you are attending TEX2022, hosted by SISSA)
    • 1
      Welcome and opening words
      Speakers: Davide Crepaldi (SISSA), Yamil Vidal (SISSA)
    • 2
      Is the auditory system a “smart” statistical learner?
      Speaker: Prof. Maria Chait (Ear Institute, University College London)
    • Poster Session 1 (coffee break)
      • 3
        The predictive power of intrinsic brain activity: theoretical perspectives, current investigations, and future goals

        One of the most prominent theories in cognitive neuroscience states that brains are foretelling machines. In other words, in contrast to the classic view of the brain as a passive stimulus-response machine, it constantly generates predictions about future events. This is achieved through a generative process of inference based on the statistical regularities of the environment. By continuously sampling common information patterns throughout development, the brain’s generative models can build and update prior knowledge (i.e., priors) and deal with noisy and ambiguous sensory input. Here, we present a new theoretical hypothesis on how these models are implemented in the intrinsic activity of the brain through spatio-temporal dynamics and connectivity patterns. In this view, the regime of intrinsic activity has a two-fold objective: first, to optimize priors during rest, and second, to support the retrieval of relevant priors during perceptual performance. Although this theory has received preliminary validation from both human and animal studies, it remains largely untested. We will describe experimental paradigms to study the formation of intrinsic priors by combining behavioural and electrophysiological techniques, and we will discuss future goals and open questions.

        Speaker: Anastasia Dimakou (Padova Neuroscience Center)
      • 4
        Asymmetric learning of dynamic spatial regularities in visual search: facilitation of anticipated target locations, no suppression of predictable distractor locations

        Static statistical regularities in the placement of targets and salient distractors within the search display can be learned and used to optimize attentional guidance. Whether statistical learning also extends to dynamic regularities governing the placement of targets and distractors on successive trials has been less investigated. Here, we applied the same dynamic cross-trial regularity (one-step shift of the critical item in clock-/counterclockwise direction) either to the target or a distractor, and additionally varied whether the distractor was defined in a different (color) or the same dimension (shape) as the target. We found robust learning of the predicted target location: processing of the target at this (vs. a random) location was facilitated. But we found no evidence of proactive suppression of the predictable distractor location. Facilitation of the anticipated target location was associated with explicit awareness of the dynamic regularity, whereas participants showed no awareness of the distractor regularity. We propose that this asymmetry arises because, owing to the target’s central role in the task set, its location is explicitly encoded in working memory, enabling the learning of dynamic regularities. In contrast, the distractor is not explicitly encoded; so, statistical learning of distractor locations is limited to static regularities.

        Speaker: Hao Yu (Dept. Psychology, Allgemeine und Experimentelle Psychologie, LMU)
      • 5
        Categorically perceiving vs. Categorizing while perceiving: The role of segments' recognition and lexical access while categorizing the pragmatic function of pitch movements in speech.

        Speech perception studies have highlighted: i) auditory-articulatory mapping processes; ii) Categorical Perception (CP) (Liberman et al., 1967); iii) bottom-up formation of phonological categories through statistical learning; iv) top-down mechanisms shaping the perceptual space (Kuhl et al., 1992). Among several open questions, we focus on: i) the relation between speech perception features and other aspects of cognition involving categorization (Holt & Lotto, 2010); ii) the cognitive mechanisms responsible for pitch categorization and discrimination in linguistic and non-linguistic contexts.
        Pitch in speech is organized in phonological categories (Pitch Accents, Boundary Tones (BTs)) aligned to the text and conveys syntactic, semantic, and pragmatic information (Ladd, 1996). Perception of BTs has been found to be quasi-categorical (Schneider, 2012).
        We investigated the presence of CP of BTs (Rising vs. Descending final contours) discriminating between questions and statements. In Italian, intonation alone can distinguish the two. We adopted a modified version of the CP paradigm and tested 34 participants in 2 groups, varying the linguistic segmental information. Group 1 saw: 1) existing words; 2) pseudowords; 3) pseudowords containing foreign phonemes; 4) masked segmental information (humming). Group 2 saw the reverse order.
        Our results show that the pragmatic interpretation of the pitch contour is activated top-down and accessed on degraded linguistic material when stimuli are presented in the word-to-humming order, and created bottom-up through a categorization process in the humming-to-word order. The results also show that, in the absence of recognizable segmental information (humming), pitch is categorized according to its acoustic properties rather than by its function in speech.

        Speaker: Alessandra Zappoli (University of Nova Gorica)
      • 6
        Statistical learning modulation through the variation of stimulus rhythmic structure.

        Statistical learning has been widely proposed in the literature as a fundamental mechanism underlying language acquisition. In this direction, statistical word form learning protocols have been used across different developmental stages, showing that even 11-month-old infants perform above chance. Specifically, this behavioral task consists of two phases: a learning phase, in which participants passively listen to a continuous stream of trisyllabic pseudowords, concatenated randomly and without silences between them, followed by a testing phase that evaluates whether participants are able to recognize the presented pseudowords.
        Classically, this test presents isochronous syllables, that is, all syllables have the same duration. However, natural speech is characterized by temporal variability in syllabic production; across different languages, syllabic frequencies between 2 and 8 Hz have been observed. Here, we evaluate the ecological validity of a statistical word form learning paradigm by exploring how (and whether) learning is modulated by the temporal variability of the syllables’ duration. Furthermore, since individual differences in auditory-motor synchronization abilities have been shown to interact with statistical learning (i.e., individuals with a high degree of auditory-motor integration display higher performance in statistical learning), we included this participant feature as a control variable in our study. The obtained results show that statistical word form learning is still possible for asynchronous stimuli, and that synchrony confers a benefit only for those participants with a high degree of auditory-motor integration.

        Speaker: Ireri Melissa Gómez Varela (Universidad Nacional Autónoma de México)
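The exposure phase described in this abstract can be sketched in a few lines of code. The pseudoword inventory, syllable durations, and jitter value below are invented for illustration; they are not the study's actual materials.

```python
import random

# Hypothetical inventory of trisyllabic pseudowords (illustrative only,
# not the study's actual materials).
PSEUDOWORDS = [
    ("tu", "pi", "ro"),
    ("go", "la", "bu"),
    ("bi", "da", "ku"),
    ("pa", "do", "ti"),
]

def build_stream(n_words, jitter=0.0, base_dur=0.25, rng=None):
    """Concatenate randomly chosen pseudowords into a continuous
    syllable stream, avoiding immediate word repetition. `jitter`
    adds uniform variability (in seconds) to each syllable's
    duration, mimicking non-isochronous presentation."""
    rng = rng or random.Random(0)
    stream, last = [], None
    for _ in range(n_words):
        word = rng.choice([w for w in PSEUDOWORDS if w is not last])
        last = word
        for syllable in word:
            duration = base_dur + rng.uniform(-jitter, jitter)
            stream.append((syllable, round(duration, 3)))
    return stream

print(build_stream(n_words=4, jitter=0.05))
```

In such a stream, transitional probabilities between syllables are high within a word and lower at word boundaries, which is precisely the statistical cue that learners are thought to exploit.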
      • 7
        The global and transitional probability of task-irrelevant dimensions impact behavior

        Organisms pick up on stimulus statistics along multiple perceptual dimensions. These statistics can be accumulated quickly, sensitively, and passively, and can even influence behavior in unrelated tasks. In the auditory domain, there is a great deal of evidence that listeners build up statistically-driven expectations of what they will hear. Despite such strong empirical demonstrations of ‘statistical listening’, there is no consensus on how these statistics influence perception, attention, and behavior. With most studies’ focus on passive accumulation of sound statistics, the influence of task is a crucial aspect of this puzzle that has been less considered. Here, we use two paradigms that tap into quite different perceptual processing of the same stimuli: 1) suprathreshold duration judgments of tones; 2) detection of near-threshold tones in noise. In each task, the statistic of interest -- the global probability of a tone’s frequency -- is task-irrelevant. This proffers an opportunity to examine how statistical learning proceeds in an active task when dimensions of regularity are task-irrelevant, and would apparently direct behavior non-optimally. Focusing on global probabilities of tone frequencies and transitional probabilities between tone frequencies, we find that listeners weigh expectations based on the global probability of a tone frequency in performing the tasks. Duration judgments are faster for more-probable-frequency tones than for less-probable-frequency tones. Moreover, detection of tones nearer to a more-probable frequency is superior. We find that the influence of expectation builds quickly, and switches rapidly with changes in probability. Even a seemingly ‘neutral’ equiprobable distribution can influence behavior in the context of a switch to statistics that bias probability prior to or after experiencing the ‘neutral’ statistics.
        Listeners also are influenced by transitional probabilities whereby one tone frequency tends to predict another frequency, even when frequency is task-irrelevant. In these cases, a sensory match in tone frequencies provides no additional benefit above and beyond statistical learning, and the joint influence of global probability and transitional probability is not additive. Overall, our results demonstrate that statistical learning of global and transitional probabilities proceeds across dimensions of sound even when listeners are engaged in a task for which the dimension offers no information relevant to the task. Examination of statistical learning of task-irrelevant dimensions offers a productive approach to determining how statistical learning and prediction evolve across active behavior, and provides a strong foundation for dissociating competing theoretical accounts of expectation.

        Speaker: Lori Holt (Carnegie Mellon University)
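The two statistics at the center of this abstract, the global probability of a tone frequency and the transitional probability between successive frequencies, can be estimated from an observed sequence as sketched below. The estimator and the frequency values are illustrative assumptions, not the study's actual analysis.

```python
from collections import Counter, defaultdict

def sequence_statistics(tones):
    """Estimate each tone's global probability and the transitional
    probabilities between successive tones from an observed sequence.
    Illustrative estimator, not the analysis used in the study."""
    n = len(tones)
    # Global probability: relative frequency of each tone value.
    global_p = {t: c / n for t, c in Counter(tones).items()}
    # Transitional probability: P(next tone | current tone),
    # estimated from counts of successive pairs.
    pair_counts = defaultdict(Counter)
    for a, b in zip(tones, tones[1:]):
        pair_counts[a][b] += 1
    trans_p = {a: {b: c / sum(counts.values()) for b, c in counts.items()}
               for a, counts in pair_counts.items()}
    return global_p, trans_p

# Hypothetical tone-frequency sequence (Hz); 440 is the more probable tone.
g, t = sequence_statistics([440, 440, 880, 440, 880, 880, 440])
print(g, t)
```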
      • 8
        The role of neural oscillations during trans-saccadic integration

        The question of how we perceive the world as continuous has been largely debated in neuroscience. Indeed, in everyday life, we constantly interact with the external environment by scanning it through eye movements. Saccadic eye movements are the major example of an abrupt change in visual perception caused by self-movements. While it is clear that self-movements challenge perceptual stability by introducing spatiotemporal distortions, the underlying neural correlates of how the brain integrates these changes have not yet been thoroughly investigated. Specifically, how the brain integrates pre- and post-saccadic signals is still unknown. In the present project, we aim to investigate neural oscillations at the time of saccades whilst administering a location and orientation judgment task where a brief 17-ms Gabor patch of six possible orientations (±35º, ±45º, ±55º) is presented at the center of the screen at random delays from saccadic target appearance. EEG, eye-tracking and behavioural data have been recorded. Preliminary results showed a pre-saccadic neural oscillation at around 10 Hz that is locked to the onset of the saccade. Moreover, single-trial analysis revealed that serial dependence may play a role in integration across saccades. Indeed, alpha power locked to saccadic onset in parieto-occipital electrodes significantly differentiated trials in which the reproduced orientation in the current trial was attracted towards the previous trial’s stimulus orientation (versus repulsive trials). These results show that (1) visual perception is coupled to the cortical alpha rhythm and (2) this oscillatory activity carries visual information about the recent past during saccadic eye movements.

        Speaker: Chiara Terzo (University of Florence, Italy)
    • 9
      Statistical learning and prediction of task-irrelevant dimensions
      Speaker: Prof. Lori Holt (Department of Psychology, Carnegie Mellon University)
    • 1:00 PM
      Lunch Break
    • Perceptual Decision Making
      • 10
        First impression bias in the development of perceptual priors

        Bayesian and predictive coding theories of cognition view perception as the combination of sensory inputs with prior knowledge of the environment. On average, this process results in more accurate and faster perceptual judgements, as long as this knowledge is accurate. To be optimal, priors should be updated in the face of new information. Past studies have shown that this is indeed the case for simple cue-association priors, but research on more complex priors is lacking.

        We here used a moving dots task to investigate the statistical learning and updating of a continuous prior distribution. Participants were asked to estimate the direction of motion of low-contrast dots, which followed a particular distribution. In previous studies, participants developed a prior similar to the bimodal stimulus distribution within 200 trials, as evidenced by their biased estimates, lower reaction times, and false alarms. In the current implementation of the task, stimuli were initially drawn from a trimodal distribution, which was switched to a complementary bimodal one after half of the trials, without notifying the participants.

        Our results show that, as expected, participants learned the initial distribution within the first 200 trials. However, they failed to update their priors, even after 300 trials of the complementary distribution. Instead, they continued exhibiting biases in accordance with a trimodal prior. Bayesian modelling of the participant estimations verified our findings. These results provide evidence for a ‘first impressions’ bias in prior acquisition, where models of the environment are resistant to change in the face of contradicting information.

        Speaker: Mr Nikitas Angeletos Chrysaitis (Institute for Adaptive and Neural Computation, University of Edinburgh)
      • 11
        Unified Framework for Perceptual Decision Making

        Perceptual memories are the storage of our experiences; they are the basis for understanding the external world and guiding our decisions. Despite fast-paced research, behavioural and cognitive constructs tend to be custom-built around a preferred task, and general principles across tasks seem to be missing.
        To address these issues, we aim to build a computational model composed of interconnected functional units, each performing a specific task-independent operation; the interaction between units and the readout of the system is controlled in a top-down manner according to behavioural requirements. We aim to relate this model to neural activity in brain regions involved in perceptual decision making.
        We trained each rat to perform two different tasks requiring the elaboration of whisker stimuli: (I) a categorization task, where a single stimulus must be judged (“strong” or “weak”) according to an implicit boundary, and (II) a delayed comparison task, where a base stimulus must be stored in short-term memory for comparison to a successive stimulus.
        We find that several aspects of history, such as recent stimuli and recent choice outcomes, factor into the choice on the current trial. We use this information to produce a single model accounting for both tasks. We correlate the model with neural recordings during the execution of both tasks and find that cortical activity can fill in the model’s terms to produce behavioural output. Additionally, we perturb neural activity by optogenetic stimulation, revealing a causal link between activity in frontal and parietal regions and our model’s cognitive units.

        Speaker: Davide Giana (SISSA)
      • 12
        Tracking temporal regularities

        Our perception does not depend exclusively on the immediate sensory input. We exploit the statistical regularities in the environment, leading, e.g., to attractive perceptual choice history biases in a stable world, yet the conditions and mechanisms facilitating this flexible use of prior information to predict the future are unclear. Here we use a standard perceptual decision-making task and manipulate transitional probabilities between successive stimulus orientations to address two questions in parallel. First, at the individual differences level, we investigate the relationship between non-clinical autistic traits and history bias adjustment to the temporal statistics of the environment. It has been suggested that individuals with autism spectrum disorders (ASD) are impaired in the integration of immediate sensory evidence and long-term statistics; reduced reliance on prior choices in ASD may thus result from a failure to learn and exploit statistical regularities. Indeed, we find reduced adjustment of choice history biases in individuals with particularly high autistic traits. Second, at the general population level, we investigate the mechanism underlying these history biases by capitalizing on noise-driven fluctuations in the orientation statistics of the stimuli. Using a reverse correlation analysis, we evaluate stimulus-independent bias and stimulus-dependent sensitivity to predicted orientations. We find that both mechanisms coexist, whereby there is increased bias to respond in line with the predicted orientation and suppressed sensitivity to information inconsistent with the prediction. Together, the current study sheds new light on the mechanisms underlying history biases in perceptual decision-making and their reduced expression in individuals with particularly high autistic traits.

        Speaker: Magda del Rio
    • Poster Session 1 (coffee break)
    • Processing of Sequences
      • 13
        Learning and Memorization of a Multi-modality and Multi-cue Sequence

        Statistical learning allows us to detect and acquire different types of regularities from the environment, but how multiple regularities are integrated and learnt across time remains uncertain. This study set out to examine the multidimensional capacity and learning phases of statistical learning. We exposed 40 healthy adults to an audio-visual sequence with both conditional and distributional cues in a serial reaction time (SRT) task. The SRT task consisted of multiple exposure phases (initial, middle, and final), each followed by a random block. Our results demonstrated that the participants could implicitly learn the multi-regularity sequence in each of the exposure phases. However, the amount of learning in the initial and middle phases was smaller than in the final phase. We further showed that the participants could simultaneously acquire and maintain some, but not all, statistical information. These results therefore suggest that statistical learning can operate implicitly across modalities to learn multiple regularities, but with some constraints. In particular, the differences in the amount of learning between phases and between regularities might be accounted for by a multi-component neuro-cognitive mechanism underlying statistical learning.

        Speaker: Ms Nicole Sin Hang Law (The University of Oxford)
      • 14
        Humans parsimoniously represent auditory sequences by pruning and completing the underlying network structure

        Successive auditory inputs are rarely independent, their relationships ranging from local transitions between elements to hierarchical and nested representations. In many situations, humans retrieve these dependencies even from limited datasets. However, this learning at multiple scale levels is poorly understood. Here we used the formalism proposed by network science to study the behavioral and MEG representations of local and higher-order structures in auditory sequences, and their interaction, in the brain. We show that human adults exhibited biases in their perception of local transitions between elements, which made them sensitive to high-order structures such as network communities. This behavior is consistent with the creation of a parsimonious simplified model from the evidence they receive, achieved by pruning and completing relationships between network elements. This observation suggests that the brain does not rely on exact memories but on compressed representations of the world. Moreover, this bias can be analytically modeled by a memory/efficiency trade-off. This model correctly accounts for previous findings, including local transition probabilities as well as high-order network structures, unifying statistical learning across scales. We finally propose putative brain implementations of such a bias.

        Speaker: Lucas Benjamin
      • 15
        Direct brain recordings reveal continuous encoding of structure in random stimuli

        How does the brain process randomness? Mounting evidence suggests it tries to make sense of any given sequence, generating sophisticated internal models that continuously draw on statistical structures in the unfolding sensory input to maintain a detailed representation of its environment. However, it is unknown how specifically this modelling applies to random sensory signals. Here, we investigate conditional statistics, through transitional probabilities, as an implicit structure encoding a random auditory stream. We evaluate this through a trial-by-trial analysis by applying information-theoretical principles to intracranial electroencephalography recordings. Based on high-frequency activity (75–145 Hz), we demonstrate how the brain continuously encodes conditional relations between random stimuli in a network outside of the auditory system following a hierarchical organization including temporal, frontal, and hippocampal regions. We further hypothesize that in lower frequency bands (alpha/beta), there might be a hierarchically inverse cascade of involved regions which originates in higher cortical areas and possibly encodes an event already prior to its onset. Linking the frameworks of statistical learning and predictive coding, our results illuminate an implicit process that might be crucial for the swift detection of patterns and unexpected events in the environment.

        Speaker: Julian Fuhrer (University of Oslo)
    • 16
      Optimising perception for inference and learning via prediction
      Speaker: Prof. Clare Press (Department of Psychological Sciences, Birkbeck, University of London. Wellcome Centre for Human Neuroimaging, University College London)
    • Poster Session 2 (coffee break)
      • 17
        Dissociating Contributions of Prediction and Trial-Level Adaptation Using High-Field fMRI and Population Receptive Field Modeling.

        To make sense of the intricate, noisy, and often incomplete soundscape of our dynamic world, human listeners continuously use contextual information to form predictions about future states while also adapting to past sensations. While extensive research supports the relevance of both prediction and trial-level adaptation to effective neural processing of sounds, differentiating between the two remains challenging. Prediction and adaptation are often correlated, making surprising events the same events that, because of their changes in low-level properties, cause a release from adaptation. The interplay between prediction and adaptation has been previously investigated using (human non-invasive) electrophysiological measures, but the cortical mesoscopic circuitry underlying these mechanisms is still unclear. Here we test the relative contributions of prediction and trial-level adaptation by presenting pure tones probabilistically sampled from two Gaussian distributions. We measure high-resolution (sub-millimetre) functional magnetic resonance imaging at ultra-high field (7T fMRI) to observe layer-dependent effects of prediction and adaptation within the auditory cortex. We model single voxels using: 1) a low-level tuning model (using population receptive field modelling); 2) a prediction model (adopting a multilevel hierarchical Gaussian filter); and 3) a trial-level adaptation model (with long-term effects). This investigation will allow us to gain insight into where (and in which layer) prediction and adaptation are integrated within the auditory cortex. In the future, we plan to use the same paradigm with magnetoencephalography (MEG) to complement the high spatial resolution of fMRI and to understand the temporal dynamics of this interplay.

        Speaker: Mr Jorie van Haren (Maastricht University)
      • 18
        Electrophysiological Markers of the probability cueing suppression: statistical learning of distractor locations and inter-trial modulation

        Visual search performance is facilitated when a singleton distractor occurs at a high-probability location where distractors occurred frequently in the past, compared to locations where they rarely occurred. Additionally, some studies found that search becomes slower when the target appears at the location of the preceding distractor (coincident condition). However, the underlying neural mechanisms have not been closely examined in the context of statistical learning. Here, we used lateralized event-related electroencephalogram (EEG) potentials and lateralized alpha power (8-12 Hz) to shed further light on the temporal dynamics of distractor suppression as modulated by inter-trial effects and statistical learning of distractor locations. Adopting an additional singleton paradigm (N = 20), we observed stronger suppression (shorter RTs) when the color-defined distractor appeared at a specific frequent location rather than at the other, rare locations in the search displays. We found slower RTs in the coincident versus the non-coincident condition, mirrored by a larger amplitude of the SPCN component, suggesting that the distractor-target inter-trial mechanism requires further access to visual working memory in a contextual scene. However, lateralized alpha power reflected no anticipatory suppression of spatial attention based on probability cueing of distractor locations. Our findings thus provide new neurophysiological evidence for the modulation of individuals' attention by statistical learning of distractor locations and inter-trial effects.

        Speaker: Nan Qiu
      • 19
        Learning the statistical properties of temporal patterns

        Series of discrete, highly regular sensorimotor events are often experienced as temporal patterns. Studies on rhythm perception, sensorimotor learning and predictive coding in the auditory domain have shown that humans learn, are highly sensitive to, and form expectations about the temporal regularities of the environment. Nevertheless, a complete understanding of the basic mechanisms supporting these abilities is still lacking.
        Here we investigated the process through which humans learn the statistical properties of temporal sequences of auditory stimuli (empty intervals marked by brief tones). Specifically, we generated sequences using three first-order Markov processes, whose possible states were represented by three durations. According to the underlying generative process (i.e. experimental condition) these durations could alternate based either on random or predictable transition probabilities, leading to different mean duration and entropy rate of each process’ stationary distribution.
        The generated sequences were presented to human participants in a delayed temporal reproduction task. Behavioral performances were analyzed combining Bayesian modelling and data-driven approaches, with the aim of retrieving the statistical rules underlying participants’ learning and comparing them with stimuli generative processes.
        Results showed that participants’ performance was bound to the set of statistical rules defined by our experimental conditions, but also reflected other types of low- and high-level statistics, namely central tendency and block history, along with a shrinkage in the duration-state space.
        These results suggest that the general human ability of detecting statistical regularities in the environment can serve as a basic mechanism for the learning of a broad range of temporal patterns, either partially or fully predictable, like musical rhythms.
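        As a rough illustration of the kind of generative process described above, the sketch below (a hypothetical Python example; the durations and transition matrix are invented, not the study's actual parameters) samples a duration sequence from a three-state first-order Markov chain and computes the mean duration and entropy rate of its stationary distribution:

```python
import numpy as np

# Illustrative sketch only: a three-state first-order Markov process over
# durations, as in the paradigm described above. The durations and the
# transition matrix are invented for the example, not the study's values.
rng = np.random.default_rng(0)

durations = np.array([0.4, 0.6, 0.9])   # candidate interval durations (s)
P = np.array([[0.1, 0.8, 0.1],          # row i: transition probabilities
              [0.1, 0.1, 0.8],          # from state i to states 0..2
              [0.8, 0.1, 0.1]])

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

pi = stationary(P)
mean_duration = pi @ durations
# Entropy rate of a first-order Markov chain (all entries of P are > 0):
# H = -sum_i pi_i sum_j P_ij log2 P_ij
entropy_rate = -np.sum(pi[:, None] * P * np.log2(P))

def sample_sequence(n_intervals, P, rng):
    """Sample a sequence of durations by walking the Markov chain."""
    states = [rng.integers(len(P))]
    for _ in range(n_intervals - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return durations[np.array(states)]

seq = sample_sequence(20, P, rng)
```

        Different transition matrices yield processes with different stationary mean durations and entropy rates, the quantities the abstract refers to when distinguishing its experimental conditions.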

        Speaker: Dunia Giomo (SISSA)
      • 20
        Perceptual adaptation to speech input statistics is driven by predictions from category representations

        Prior research demonstrates that the ‘perceptual weight’ of acoustic input in signaling speech categories shifts rapidly when statistical distributions of speech input deviate from expectations, as when you encounter a foreign accent. What drives this perceptual adaptation is debated. One possibility is that accented or otherwise distorted speech carries enough information to partially activate an internal speech representation, like a speech category for /b/ or /p/. This activation, in turn, may generate predictions about the statistical regularities of speech input typically associated with the representation (e.g., higher fundamental frequencies tend to pair with /p/, not /b/). When the actual speech input statistics mismatch these predictions (as for accents), an error signal may drive rapid adjustments to the effectiveness, or perceptual weight, of an acoustic dimension in signaling speech, with mismatched dimensions down-weighted. This internal-error-driven learning account makes predictions that we test across five experiments. First, the magnitude of perceptual adaptation should be predicted by the strength of phonetic category activation (which generates a prediction), as estimated by categorization responses reliant on the dominant acoustic dimension (Exp 1). Further, signal manipulations that flip which of two acoustic dimensions best conveys category identity are expected, correspondingly, to shift which dimension effectively activates a speech representation – and therefore which dimension’s perceptual weights are adjusted as well (Exp 2). Experiments 3-5 introduce a new paradigm that conveys short-term distributional speech regularities across brief sequences and examines their impact on perceptual adaptation, to ascertain whether the category activation hypothesized to drive perceptual adaptation must be supported by trial-by-trial overt category decisions to be effective.
The results align with the predictions of the error-driven learning account. Both the direction and magnitude of perceptual adaptation are predicted by graded measures of category activation. Moreover, accumulation of speech input regularities across passive listening elicits perceptual adaptation that is just as robust as when there are overt category decisions. The data are consistent with an error-driven model whereby perceptual adaptation arises from speech category activation, corresponding predictions about the statistical distributional patterns of acoustic input that align with the category, and rapid adjustments in subsequent speech perception when input mismatches expectations. At the broadest level, this series of experiments demonstrates that ‘statistical’ learning – even across passive exposure – can be guided by explicit error signals derived from internal phonetic category activation to adjust perception and behavior.
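The down-weighting mechanism described here can be caricatured in a few lines. The sketch below is a toy delta-rule illustration (the function name, statistics, and learning rate are invented for the example; it is not the authors' model): a dimension's perceptual weight is reduced in proportion to the mismatch between the category's predicted input statistics and the observed ones.

```python
# Toy delta-rule sketch of error-driven down-weighting. All names and
# parameters are invented for illustration; this is not the model tested
# in the experiments described above.

def update_weight(weight, predicted_stat, observed_stat, lr=0.1):
    """Reduce a dimension's perceptual weight in proportion to the
    mismatch between predicted and observed input statistics."""
    error = abs(observed_stat - predicted_stat)
    return max(0.0, weight - lr * error)

# Canonical speech: observed F0 statistics match the category's
# prediction, so the dimension's weight is unchanged.
w_canonical = update_weight(1.0, predicted_stat=0.8, observed_stat=0.8)

# 'Accented' speech: mismatched statistics, so the dimension is
# down-weighted.
w_accented = update_weight(1.0, predicted_stat=0.8, observed_stat=0.2)
```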

        Speakers: Alana Hodson (Carnegie Mellon University), Dr Charles Yunan Wu (Carnegie Mellon University), Prof. Barbara Shinn-Cunningham (Carnegie Mellon University), Lori Holt (Carnegie Mellon University)
      • 21
        Sequential Temporal Anticipation Characterized by Neural Power Modulation and in Recurrent Neural Networks

        Complex human behaviors involve perceiving continuous stimuli and planning actions at sequential time points, such as in perceiving/producing speech and music. To guide adaptive behavior, the brain needs to internally anticipate a sequence of prospective moments. How does the brain achieve this sequential temporal anticipation without relying on any external timing cues? To answer this question, we designed a ‘premembering’ task: we tagged three temporal locations in white noise by asking human listeners to detect a tone presented at one of the temporal locations. We selectively probed the anticipating processes guided by memory in trials with only flat noise using novel modulation analyses. A multi-scale anticipating scheme was revealed: the neural power modulation in the delta band encodes noise duration on a supra-second scale; the modulations in the alpha-beta band range mark the tagged temporal locations on a sub-second scale and correlate with tone detection performance. To unveil the functional role of those neural observations, we turned to recurrent neural networks (RNNs) optimized for the behavioral task. The RNN hidden dynamics resembled the neural modulations; further analyses and perturbations on RNNs suggest that the neural power modulations in the alpha-beta band emerged as a result of selectively suppressing irrelevant noise periods and increasing sensitivity to the anticipated temporal locations. Our neural, behavioral, and modelling findings convergingly demonstrate that the sequential temporal anticipation involves a process of dynamic gain control – to anticipate a few meaningful moments is also to actively ignore irrelevant events that happen most of the time.

        Speaker: Xiangbin Teng
      • 22
        The trans-saccadic preview effect: Adaptation or active vision?

        Our eyes move about three times per second, dividing apparently continuous vision into rather discrete snapshots. These spatiotemporal dynamics, inherent in active vision, bring about statistical regularities that impact perceptual processing. One example is the preview effect, which demonstrates that extrafoveal pre-saccadic information contributes to post-saccadic foveal processing. Preview effects can be found in task performance, eye-movement behavior, and fixation-related neural responses. However, at least three theoretical accounts can explain, in particular, the early post-saccadic preview effect in fixation-related neural responses that has been the focus of previous research. First, early post-saccadic preview effects could simply result from adaptation of neurons with very large receptive fields, which would mean that the preview effect is largely independent of active vision. Second, they could result from processes that are specific to eye movements, which would indicate that the preview effect relies on active vision. Third, they could result from spatiotopic adaptation, which would be related to eye movements but makes opposite predictions compared with the active-vision account under certain experimental conditions. We critically compare these theoretical accounts with interpretations of trans-saccadic perception in terms of predictive processing and present first results from an MEG and eye-tracking co-registration study designed to differentiate between these possible explanations.

        Speaker: Christoph Huber-Huber (CCNS, University of Salzburg, Austria)
    • 23
      Predictive neural representations in vision, language and music
      Speaker: Prof. Floris de Lange (Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen)
    • 1:00 PM
      Lunch Break
    • Reading
      • 24
        Humans and baboons, but not pigeons, use letter-sequence information during orthographic processing

        Animal studies investigating orthographic processing have shown that not only humans but also baboons and pigeons can successfully perform word/non-word decisions, despite their lack of phonological or semantic representations. Here we take a comparative modeling approach and investigate the cognitive basis of this lexical decision behavior across the three species to clarify whether phylogenetic relatedness entails similar underlying cognitive operations. We specifically aim to determine which types of information inherent in the letter-string input are utilized to inform lexical decisions by humans, baboons, and pigeons. To this end, we introduce four versions of the Speechless Reader Model, a predictive coding model based on prediction error accumulation. The model was motivated by the neuro-cognitive processes involved in human word recognition but, in its current form, lacks phonological or semantic representations. Our central analysis investigates which model variant most fittingly simulates the lexical decision behavior of humans, baboons, and pigeons, respectively. From our simulations, one model emerges as most adequate for humans and baboons: the variant that integrates image-based and letter-based representations sensitive to transitional probabilities. In contrast, pigeons’ reading behavior is explained best by the model representing image-based and positional letter frequencies but not transitional probabilities. This difference could be related to the ability of primates to flexibly switch between local and global visual processing strategies, while pigeons show substantial local precedence. Thus, the explanatory value of visual-orthographic codes highlighted here speaks for the ancient origins of some cognitive abilities involved in orthographic processing.

        Speaker: Benjamin Gagl (University of Cologne)
      • 25
        Sleep-dependent memory consolidation - is it time for a revision?

        Memory consolidation is key to stabilizing the memory traces that arise from learning processes such as statistical learning. It is also crucial in building representations and models. Sleep is widely believed to be essential to learning and memory consolidation. The theory of sleep-dependent consolidation suggests that after an offline period including sleep, performance improves more than after a period without sleep. Accordingly, several studies showed the critical role of sleep in the consolidation of skill and procedural learning. However, recent work suggests that the data on which this theory relies may be driven by several factors that are unrelated to sleep. In this talk, I will show empirical results and methodological pitfalls that invite a reconsideration of sleep’s role in the consolidation of memories, and I will discuss the consequences for statistical learning and predictive processes.

        Speaker: Dezso Nemeth (Centre de Recherche en Neurosciences de Lyon)
      • 26
        Hierarchy as Linear Ordering in a Multidimensional Space

        Human languages are externalized as linear sequences of atomic units.
        Generativist approaches assume that categorized chunks in language stem from primitive, language-specific properties of the language faculty. Usage-based approaches to language adopt a processing perspective in which chunk formation and word recognition are strictly tied to statistical computations on the string.
        We adopt a processing view and directly address the cognitive foundations of the human capacity to build structure from a linear ordering, exploring the relationship between precedence and containment (Vender et al., 2019, 2020).
        We report the results of a series of AGL studies employing an SRT task, in which the sequence of stimuli, presented either visually or haptically, is determined by the rules of the Fibonacci grammar (Fib) or the foil grammar (Skip).
        Fib and Skip share the same two deterministic transitions, but they crucially differ in their structure. Only Fib is characterized by the presence of so-called k-points, which provide a bridge to hierarchical reconstruction while not giving rise to a predictable deterministic transition. Linearly predicting k-points involves progressively larger chunks, with a non-linear relation between chunk size and predictive power.
        We examine children’s and adults’ implicit learning skills, assessing linear learning, while also crucially investigating the ability to predict k-points.
        Results provide evidence not only for sequential learning, but also for hierarchical learning in Fib. We propose that the relations of precedence and containment are not antagonistic ways of processing a temporally ordered sequence of units, but rather strictly interdependent implementations of an abstract mathematical relation of linear ordering.

        Speaker: Arianna Compostella (University of Verona)
    • Poster Session 2 (coffee break)
    • Round Table
      • 27
        Round Table: Clare Press, Floris de Lange, Lori Holt and Maria Chait
        Speakers: Clare Press (Department of Psychological Sciences, Birkbeck, University of London. Wellcome Centre for Human Neuroimaging, University College London), Floris de Lange (Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen), Lori Holt (Department of Psychology, Carnegie Mellon University), Maria Chait (Ear Institute, University College London)
    • 28
      TBA
      Speaker: Prof. Lucia Melloni (Neural circuits, consciousness and cognition research group, Max Planck Institute for Empirical Aesthetics. Department of neurology, NYU Grossman School of Medicine)
    • Poster Session 3 (coffee break)
      • 29
        Perception of ambiguous visual stimuli is driven by cross-modal associative learning under uncertainty

        Perception can be understood as inference combining sensory information with prior expectations. Here, we manipulate prior expectations by associative learning and investigate the effect of cue modality. In our experiment, participants (N=29) indicated the perceived direction of illusory motion of dot pairs (640 trials). A visuo-acoustic cue preceded the target stimulus and probabilistically predicted the direction of the motion. In 30% of the trials, motion direction was ambiguous, and in half of these trials, the auditory and the visual dimension of the cue predicted opposing directions. The impact of associative learning on perceptual decisions was evidenced by slower responses to less predictable, relative to more predictable, non-ambiguous stimuli and by the increased rate of cue-congruent decisions on ambiguous trials. When the visual and the auditory dimensions of the cue predicted conflicting directions of motion on ambiguous trials, decisions were mostly congruent with the prediction of the acoustic dimension. In addition to the aggregated measures, we fitted versions of the LATER model of varying complexity to the reaction time data, in which beliefs (e.g. cue-target associations) are represented as probability distributions. Overall, priors based on auditory information seem to carry a stronger weight during the perception of illusory visual motion.
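        For readers unfamiliar with it, the LATER model mentioned above treats the reciprocal of reaction time as Gaussian: a decision signal rises from a starting level to a threshold at a normally distributed rate. The sketch below (illustrative parameters only, not the authors' fitting code) shows how a prior, modeled here as a raised starting level, shortens simulated reaction times:

```python
import numpy as np

# Toy illustration of the LATER model (linear approach to threshold with
# ergodic rate). Parameters are invented and are not the fitted values.
rng = np.random.default_rng(1)

def later_rts(n, mu_rate, sigma_rate, threshold=1.0, start=0.0):
    """Simulate n reaction times: RT = (threshold - start) / rate,
    with the rate drawn from a Gaussian (non-rising trials discarded)."""
    rates = rng.normal(mu_rate, sigma_rate, size=n)
    rates = rates[rates > 0]
    return (threshold - start) / rates

rt_neutral = later_rts(10_000, mu_rate=5.0, sigma_rate=1.0)
# A stronger prior (e.g. a learned cue-target association) can be modeled
# as a raised starting level, leaving less distance to the threshold.
rt_primed = later_rts(10_000, mu_rate=5.0, sigma_rate=1.0, start=0.3)
```

        Comparing the simulated distributions shows the primed condition producing shorter reaction times on average, the qualitative signature the behavioral analysis relies on.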

        Speaker: Bertalan Polner
      • 30
        Does a surprising observation facilitate perception of all subsequent sensory input?

        Bayesian theories posit that perception is the result of combining expectations with the sensory input, and that expectations are updated when that sensory input is surprising – i.e., deviates from the expectation. To be adaptive, expectations should only be updated when the surprising observation reveals that the world is actually different from our models, but it is not yet clear how this is estimated nor which processes enable it. A recent theory suggests that surprising observations trigger a reactive gain-increase on all sensory input via noradrenaline release. This process would allow observers to reappraise the entire sensory environment if their model no longer explains their observations.
        We present stimuli in semi-predictable positions in a circular array around a central fixation point. Participants are instructed to monitor for contrast changes in the peripheral stimuli as well as changes around the fixation point. Trials are characterised by the surprise generated by the position of the peripheral stimulus and whether it signals environmental change.
        We analyse the relationship between surprise and sensitivity (hit rate) to contrast changes at both locations, and characterise the time-course of sensitivity changes. We consider how our findings cast light on perception’s role in model updating and the automaticity of this process.

        Speaker: Emma Ward (Birkbeck, University of London)
      • 31
        Statistical learning of likely distractor locations in visual search is driven by the local distractor frequency

        Salient but task-irrelevant distractors interfere less with visual search when they appear in a display region where distractors have appeared more frequently in the past. In this study we tested two different theories of such statistical distractor-location learning. It could reflect the (re-)distribution of a global, limited attentional ‘inhibition resource’. Accordingly, changing the frequency of distractor appearance in one display region should also affect the magnitude of interference generated by distractors in a different region. Alternatively, distractor-location learning may reflect a local response to distractors occurring at a particular location. In this case, the local distractor frequency in one display region should not affect distractor interference in a different region. To decide between these alternatives, we conducted three experiments in which participants searched for an orientation-defined target while ignoring a more salient orientation distractor that occurred more often in one vs. another display region. Experiment 1 varied the ratio of distractors appearing in the frequent vs. rare regions, with a fixed global distractor frequency. The results revealed the probability cueing effect to increase with increasing probability ratio. In Experiments 2 and 3, one (‘test’) region was assigned the same local distractor frequency as in one of the conditions of Experiment 1, but a different frequency in the other region – dissociating local from global distractor frequency. Together, the three experiments showed that distractor interference in the test region was not significantly influenced by the frequency in the other region, consistent with purely local learning.

        Speaker: Dr Fredrik Allenmark (Ludwig Maximilian University of Munich)
      • 32
        The babe with the predictive power: work in progress examining the role of prediction in early word encoding

        Error-based theories of language acquisition posit that predictions are a key part of language processing throughout the lifespan. They suggest that adults and children are constantly anticipating upcoming input and use discrepancies to update their linguistic knowledge from the very earliest stages of development. However, linguistic predictions are challenging to target experimentally, and existing studies typically focus on linguistic prediction in older age groups. As a result, there is limited evidence that prediction is a viable learning mechanism in infancy.

        This study targets the role of prediction in early word encoding to assess the feasibility of this learning mechanism in infancy. We have adapted an adult EEG study focusing on syllabic prediction (Vidal et al., 2019) for an infant population. In the learning phase, 39 nine-month-old infants hear two trisyllabic pseudowords. These words are then used as standard stimuli in an oddball phase with four new words. Two of these deviant words share only their first syllable with a familiar word, while the other two share their first two syllables. We will measure whether infants’ mismatch response (MMR) differs between standard and deviant words, to assess whether 9-month-olds make phonemic-level predictions.

        We will also assess the MMR-difference between the two kinds of deviants. An MMR difference after one versus two shared syllables would suggest that cumulative congruent input reinforces prediction. To account for the variability of infants’ MMR responses, we will also carry out a tone-change-detection Optimum-1 task, to determine the location, latency and polarity of the MMR for each infant separately.

        Reference:
        Vidal, Y., Brusini, P., Bonfieni, M., Mehler, J., & Bekinschtein, T. A. (2019). Neural signal to violations of abstract rules using speech-like stimuli. Eneuro, 6(5).

        Speaker: Dr Judit Fazekas (School of Psychological Sciences, University of Manchester)
    • 33
      Together, apart: how invariances and selectivities contribute to predictive processing
      Speaker: Dr Caspar Schwiedrzik (Neural Circuits and Cognition Lab, European Neuroscience Institute Goettingen. Perception and Plasticity Group, German Primate Center - Leibniz Institute for Primate Research)
    • 1:00 PM
      Lunch Break
    • Speech Processing
      • 34
        Categorical Perception of a vowel contrast in native speakers and second language learners.

        The perceptual space of a speaker is shaped in infancy according to the phonological inventory of the L1. Phonological categories correlate with Categorical Perception (CP) and Perceptual Magnet (PM) effects, lowering discrimination between sounds of the same category and increasing it at category boundaries (Liberman et al., 1967; Kuhl et al., 1992).

        Second Language (L2) learning in adulthood requires creating new categories, some overlapping with existing ones. When L2 and L1 categories overlap, the PM and CP effects might block the creation of the target L2 sounds, a phenomenon linked to foreign-accented speech.

        In this study, I use the CP paradigm to investigate the categorization and discrimination of two German words, [ʃɔːn] (‘already’) vs. [ʃøːn] (‘beautiful’), distinguished by a vowel contrast that exists in German but not in Italian. I tested four groups: i) 20 L1 speakers of German (L1); ii) 14 L2 learners of German, L1 speakers of Italian, exposed to native speech (Tandem); iii) 18 L2 learners of German, L1 speakers of Italian, not exposed to native speech; iv) 20 L1 speakers of Italian (Naïve). The oddball discrimination task presented the stimuli in 6 orders: AAB, ABA, ABB, BAA, BAB, BBA. The 34 L2 learners also performed the LEXTALE in German (Lemhöfer & Broersma, 2012).

        Results show that categorization and discrimination performance increase linearly with language proficiency. Categorization correlates only with LEXTALE scores. Exposure to native speech is relevant. The presence of CP, as classically reported in the literature, is affected by the order of presentation of the stimuli in the oddball paradigm, emerging with the BAB, ABA, and BBA orders.

        Speaker: Alessandra Zappoli (University of Nova Gorica (Slovenia))
      • 35
        Impaired speech-motor control in stuttering affects EEG correlates of predictive speech comprehension

        In this study we tested the hypothesis that speech production processes support prediction during speech comprehension, by investigating prediction in a population with impaired speech-motor control, i.e., adults who stutter (AWS). We reasoned that, if production and prediction are supported by common processes, people with impaired production should also show anomalous prediction. Participants listened to high- vs low-constraining (HC vs LC) sentence frames that made the final target word either predictable or not. This paradigm allowed us to tap into (a) prediction processes in a pre-target silent interval, comparing EEG alpha/beta power modulations in the HC vs LC conditions; and (b) the consequences of prediction, comparing the post-target N400 ERP in the LC vs HC conditions. In addition, participants performed a production task in which, after listening to the same HC and LC sentences, they were asked to name the picture corresponding to the target word. Compared to a control group of fluent speakers, AWS showed a different pattern of alpha/beta power modulations in the production task, compatible with their impairment. More interestingly, a difference was also observed in the comprehension task, and it was paired with a less accentuated post-target N400. Overall, this pattern supports the hypothesis that some aspects of speech production play a role in prediction during speech comprehension.

        Speaker: Dr Simone Gastaldon (Department of Developmental and Social Psychology, University of Padova)
      • 36
        The role of non-acoustic sublexical probabilistic phonotactic cues during speech perception

        All current models of speech perception assume language processing is grounded in acoustic-phonetic properties of the speech signal (e.g., Mattys et al., 2012). However, considerable empirical evidence shows knowledge about non-acoustic probabilistic sublexical cues can influence speech perception (e.g., Auer & Luce, 2005). The present study aimed to investigate the contribution of probabilistic phonotactic cues to perceptual learning of noise-vocoded speech. In Experiment 1, listeners’ reported accuracy improved from 6% to 12% over a series of 140 vocoded sentences. Using a probe-prime-probe design with congruent, incongruent and neutral conditions, in the next three experiments, participants were presented with three different types of noise-vocoded probe sentences: real English, nonsense (containing real English words but semantically empty), and pseudo sentences (containing nonwords). In the nonsense and pseudo-sentence experiments, words and nonwords were matched in terms of phonotactic probabilities to the reference words but mismatched acoustically. For real sentences, we observed accuracy rates of 95% in the congruent condition. Crucially, despite the absence of matching lexical content, accuracy rates for vocoded nonsense and pseudo-sentences were also high (70.4% and 74%, respectively), indicating that participants were able to assemble lexical information from the context of the prime sentence based solely upon the matched probabilistic phonotactic cues. We argue these novel findings demonstrate that perceptual learning of noise-vocoded speech is largely achieved by a statistical learning mechanism operating at the level of non-acoustic, sublexical probabilistic phonotactic information. We discuss how models of speech perception may be enhanced by including this alternative mechanism for accessing meaning.

        Speaker: Ms Valeriya Tolkacheva (Queensland University of Technology, School of Psychology and Counselling, Queensland, Australia)
    • Poster Session 3 (coffee break)
    • Modeling
      • 37
        Joint modeling confirms pupil dilation as neurophysiological marker of Bayesian spatial inference in dynamic auditory environments

        Bayesian inference has been used successfully to explain how listeners integrate prior information with auditory signals to stabilize perception in dynamic and noisy environments. Recently, it was suggested that the arousal system plays a notable role by modulating the relevance and reliability of priors (Krishnamurthy, Nassar, et al., 2017, Nat Hum Behav). This suggestion was based on observed correlations between pupil dilation measures and latent variables of an ideal observer model in an auditory localization task with audiovisual priors. However, it is unclear how the sequential fitting to behavioral and then physiological data may have compromised the results. Here, we propose a refined Bayesian observer model that simultaneously predicts behavioral responses and pupil size measures by explicitly defining an interpretable linking function between model variables and physiological outcomes. In a re-analysis of the original data, we jointly fitted various versions of the model to each individual’s data. We thus tested a variety of hypothesized ‘linking functions’ and selected the most parsimonious model. Our results not only indicated improved behavioral fits but, more importantly, the joint modeling approach was able to confirm and quantify the relationship between Bayesian perceptual processes and the arousal system as reflected by pupillometry. In general, our findings aim to demonstrate how integrating behavioral data and neurophysiological measurements in a single-model approach can aid our understanding of auditory perception.

        Speaker: David Meijer (Acoustics Research Institute, Austrian Academy of Sciences)
      • 38
        Predictive neural representations of dynamic sensory input revealed by a novel dynamic extension to RSA

        To successfully navigate our dynamic environment, our brain needs to continuously update its representation of external information. This poses a fundamental problem: how does the brain cope with a stream of dynamic input? It takes time to transmit and process information along the hierarchy of the visual system. Our capacity to interact with dynamic stimuli in a timely manner (e.g., catch a ball) suggests that our brain generates predictions of unfolding dynamics. While contemporary theories assume an internal representation of future external states, current paradigms typically capture a mere snapshot or indirect consequence of prediction, often utilizing simple static stimuli of which the predictability is directly manipulated. The rich dynamics of predictive representations remain largely unexplored. One approach for investigating neural representations is representational similarity analysis (RSA), which typically uses models of static stimulus features at different hierarchical levels of complexity (e.g., color, shape, category, concept) to investigate how these features are represented in the brain. Here we present a novel dynamic extension to RSA that uses temporally variable models to capture neural representations of dynamic stimuli, and demonstrate predictive neural representations of ballet dancing videos presented to subjects in an MEG scanner. This promising new approach can be used with any dynamic stimulus and any dynamic (neural) signal of interest, and it opens the door for addressing important outstanding questions on how and when our brain represents and predicts the dynamics of the world.
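        The logic of a time-lagged RSA can be sketched on simulated data. In the toy example below (all shapes, data, and the lag convention are assumptions of this illustration, not the authors' implementation), neural RDM time courses that lead the stimulus-feature RDMs produce a peak model-neural correlation at a negative lag, the signature of a predictive representation:

```python
import numpy as np

# Simulated illustration of time-lagged RSA. All shapes, data, and the
# lag convention are assumptions of this toy example.
rng = np.random.default_rng(2)

n_cond, n_time = 8, 50
n_pairs = n_cond * (n_cond - 1) // 2   # lower-triangle entries of an RDM

# Model (stimulus-feature) RDM vectors over time, and neural RDM vectors
# that lead the model by 3 samples (plus noise): a 'predictive' signal.
model_rdm = rng.standard_normal((n_time, n_pairs))
lead = 3
neural_rdm = (np.roll(model_rdm, -lead, axis=0)
              + 0.1 * rng.standard_normal((n_time, n_pairs)))

def lagged_rsa(neural, model, lags):
    """Mean neural-model RDM correlation at each model time shift."""
    profile = []
    for lag in lags:
        shifted = np.roll(model, lag, axis=0)  # shifted[t] = model[t - lag]
        r = np.mean([np.corrcoef(neural[t], shifted[t])[0, 1]
                     for t in range(len(neural))])
        profile.append(r)
    return np.array(profile)

lags = list(range(-6, 7))
profile = lagged_rsa(neural_rdm, model_rdm, lags)
best_lag = lags[int(np.argmax(profile))]  # negative: neural leads model
```

        In this simulation the correlation profile peaks at the (negative) lag matching the built-in lead, which is how a predictive neural representation would reveal itself in such an analysis.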

        Speaker: Ingmar de Vries (1. Donders Institute. 2. Centre for Mind/Brain Sciences (CIMeC))
      • 39
        Phase transitions in when feedback is useful

        Sensory observations about the world are invariably ambiguous. Inference about the world's latent variables is thus an important computation for the brain. However, computational constraints limit the performance of these computations. These constraints include energetic costs for neural activity and noise on every channel. Efficient coding is one prominent theory that describes how such limited resources can best be used. In one incarnation, this leads to a theory of predictive coding, where predictions are subtracted from signals, reducing the cost of sending something that is already known. However, this theory does not account for the costs or noise associated with those predictions. Here we offer a theory that accounts for both feedforward and feedback costs, and noise in all computations. We formulate this inference problem as message-passing on a graph whereby feedback serves as an internal control signal aiming to maximize how well an inference tracks a target state while minimizing computation costs. We apply this novel formulation of inference as control to the canonical problem of inferring the hidden scalar state of a linear dynamical system with Gaussian variability. The best solution depends on architectural constraints, which can create asymmetric costs for feedforward and feedback channels. Under such conditions, our theory predicts the gain of optimal predictive feedback and how it is incorporated into the inference computation. We show a non-monotonic dependence of optimal feedback gain as a function of both the computational parameters and the world dynamics, leading to phase transitions in whether feedback provides any utility.
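        The canonical problem referenced above, inferring the hidden scalar state of a linear dynamical system with Gaussian variability, is classically solved by a Kalman filter. The sketch below shows that resource-unconstrained baseline with illustrative parameters; the asymmetric feedforward/feedback channel costs and computation noise that are this work's actual contribution are deliberately not modeled:

```python
import numpy as np

# Scalar Kalman filter for a linear-Gaussian dynamical system: the
# resource-unconstrained baseline for the inference problem above.
# Parameters are illustrative.
rng = np.random.default_rng(3)

a, q, r = 0.95, 0.1, 0.5   # dynamics, process noise var., obs. noise var.
n = 500

# Simulate the latent state x and noisy observations y.
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), size=n)

# Filter: alternate a prediction step with a gain-weighted update.
xhat, p = 0.0, 1.0                         # posterior mean and variance
est = np.empty(n)
for t in range(n):
    xhat, p = a * xhat, a * a * p + q      # predict
    k = p / (p + r)                        # Kalman gain
    xhat = xhat + k * (y[t] - xhat)        # update with prediction error
    p = (1.0 - k) * p
    est[t] = xhat

mse_filter = np.mean((est - x) ** 2)
mse_obs = np.mean((y - x) ** 2)   # filtering beats the raw observations
```

        In the abstract's framing, the question is how this kind of computation should be rebalanced when the gain-weighted feedback itself carries energetic costs and noise.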

        Speaker: Lokesh Boominathan (Rice University)
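        For readers unfamiliar with the canonical problem mentioned above: inferring the hidden scalar state of a linear dynamical system with Gaussian variability is, in the cost-free case, solved exactly by the classical Kalman filter. The sketch below shows only that cost-free baseline, not the authors' cost- and noise-aware extension; all parameter values (a, q, r) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar linear-Gaussian system: x_t = a*x_{t-1} + w_t, y_t = x_t + v_t.
a, q, r = 0.9, 0.5, 1.0   # dynamics, process noise variance, obs noise variance
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))

# Kalman filter: the optimal inference when feedback and activity are free.
xhat = np.zeros(T)
p = 1.0  # posterior variance
for t in range(1, T):
    x_pred = a * xhat[t - 1]              # predict from the dynamics
    p_pred = a * a * p + q
    k = p_pred / (p_pred + r)             # Kalman gain
    xhat[t] = x_pred + k * (y[t] - x_pred)  # correct with the innovation
    p = (1 - k) * p_pred

raw_mse = np.mean((y - x) ** 2)   # error of trusting observations alone
kf_mse = np.mean((xhat - x) ** 2)
print(kf_mse < raw_mse)  # prints True: filtering beats raw observations
```

        The abstract's contribution can be read against this baseline: once the prediction `a * xhat[t-1]` itself costs energy and is corrupted by noise, the optimal gain is no longer the classical `k`, and for some regimes sending feedback at all stops being worthwhile.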
    • 40
      From decoding to prediction: A 35-year perspective on reading research
      Speaker: Prof. Ram Frost (Department of Psychology, The Hebrew University of Jerusalem)
    • Poster Session 4 (coffee break)
      • 41
        Information transfer in the auditory thalamocortical system

        Extensive changes in encoding occur between the thalamus and the auditory cortex over only one or two synapses. Yet, beyond the anatomical connectivity, which has been well mapped with anterograde and retrograde tracers, little is truly understood about these information transfers. In addition, studies addressing the functional connectivity have primarily been conducted under anaesthesia or using very basic acoustic stimulation.
        Hence, in our study we developed a technique to record simultaneously from the auditory thalamus (medial geniculate body, MGB) and the primary auditory cortex (A1) in awake, head-fixed mice, and compared these recordings to those obtained under anaesthesia.
        As a proof of concept, spectrotemporal response fields and responses to click trains were recorded as both local field potentials (LFP) and multiunit activity. We found that both multiunit activity and the LFP in the MGB could phase-lock to rates of up to 480 Hz, while A1 could only follow much lower rates. A directed coherence analysis (Saito et al., 1981) was subsequently applied to determine directionally correlated spontaneous activity between the two regions. Here we were able to distinguish between feedforward and feedback low-frequency rhythms that were transferred between the MGB and A1 and that also differed across A1 cortical layers.
        This technique opens up opportunities to explore the functional connectivity, determine the overlap in spectrotemporal features, and ultimately better understand the nature of information relay between these two areas of the auditory pathway in the awake state. Understanding this functional connectivity will greatly improve our ability to predict feedforward and back-propagating responses to complex sounds.

        Speaker: Dr Alexa N. Buck (Institut de l’Audition, Institut Pasteur, INSERM, Univ. Paris Cité, Paris, France)
      • 42
        The left hemisphere will tell if you got it right: evidence from visual statistical learning

        Extracting statistical regularities from sensory input is vital to perception, memory and language processing. This ability, known as statistical learning (SL), has been demonstrated by a range of studies across modalities and stimulus types. To deepen the understanding of the mechanism underlying SL, this study investigated possible psychophysiological signatures sensitive to the extraction of temporal regularities, and hemispheric asymmetry in visual SL. Event-related potentials (ERPs) were recorded from a group of young adults (n=28) during a visual SL task. After being exposed to a continuous stream of abstract shapes, participants performed a judgement task containing adjacent and nonadjacent dependencies, with a visual half-field manipulation of the final shape of the triplet. Behavioral results showed higher response accuracy for target triplets than for foil triplets. Grand-averaged ERPs showed that, with right visual field (RVF) presentation, final shapes that were responded to correctly elicited a larger N100 (110-170 ms) and a larger N400 (300-500 ms) than those responded to incorrectly. These results indicate a left-hemisphere advantage in visual SL; the early frontal brain activity reflects selective attention to learned items and predicts individual learning outcomes; mid-latency brain waves over central-to-parietal regions are possibly related to the processing of matched or mismatched information. In addition, the N400 effect suggested that the right hemisphere might be responsible for processing items with low statistical regularity. Our findings provide new insights into the neurocognitive mechanisms associated with extracting patterns of regularity in visual SL.

        Speaker: Ms Hoi Yan Mak (City University of Hong Kong)
      • 43
        The role of gamma activity in affinity to statistical learning

        Statistical learning (SL) is described as a general, implicit mechanism for segmenting continuous information. Although it is claimed to be essential for our perception, the behavioral results of SL studies vary greatly. In the present study, we examined the EEG correlates of implicit visual SL and analyzed the results according to performance.

        Twenty-nine subjects (16 female, mean age: 26.38 y) were shown an image sequence in which, unbeknownst to them, certain pictures formed stimulus pairs that always followed each other. The second images of the pairs thus became predictable, in contrast to the preceding ones and the unpaired control pictures. We acquired 64-channel EEG data during the task, and participants were then divided into two groups based on their performance on a familiarity test (above chance, n=14; at chance, n=15). We examined the time-frequency data between the groups and the conditions (predictable vs. unpredictable), using permutation statistics with cluster-based correction.

        We found a marked difference between above-chance performers and chance performers in the gamma band (45-70 Hz), 500-800 ms after stimulus onset. The analysis showed a significant cluster over the left frontoparietal region. We pursued this difference within the above-chance performers, and the same gamma difference was traceable between the preceding and the control conditions: gamma power was higher in the preceding condition, with the same scalp distribution.

        In the literature, gamma activity of the frontoparietal network has been linked to visual attention. Our findings might reflect a top-down modulation of visual attention before the predictable stimuli, which could contribute to interpersonal differences in affinity to SL.

        Speaker: Szabolcs Sáringer (Department of Physiology, Albert Szent-Györgyi Medical School, University of Szeged)
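        The cluster-based permutation approach used in this study can be sketched for the simplest one-dimensional (time) case. This is a generic illustration on synthetic data, not the authors' pipeline; the cluster-forming threshold, the summed-|t| cluster statistic, and all names are illustrative choices, and real EEG analyses cluster over channels and frequencies as well as time.

```python
import numpy as np

rng = np.random.default_rng(2)

def clusters(mask):
    """Contiguous runs of True in a 1D boolean mask, as (start, stop) pairs."""
    edges = np.flatnonzero(np.diff(np.r_[False, mask, False].astype(int)))
    return list(zip(edges[::2], edges[1::2]))

def cluster_permutation_test(diff, n_perm=1000, thresh=2.0):
    """Sign-flip cluster-based permutation test on paired differences.
    diff: (subjects, time). Returns observed cluster spans and p-values,
    comparing each cluster's summed |t| against the max-cluster null."""
    n_sub = diff.shape[0]
    tvals = lambda d: d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_sub))
    obs = tvals(diff)
    obs_spans = clusters(np.abs(obs) > thresh)
    obs_mass = [np.abs(obs[a:b]).sum() for a, b in obs_spans]
    null = np.zeros(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1, 1], size=(n_sub, 1))  # flip each subject's sign
        t = tvals(diff * flips)
        spans = clusters(np.abs(t) > thresh)
        null[i] = max([np.abs(t[a:b]).sum() for a, b in spans], default=0.0)
    pvals = [(null >= m).mean() for m in obs_mass]
    return obs_spans, pvals

# Toy data: 15 subjects, a condition difference confined to samples 40-60.
diff = rng.normal(size=(15, 100))
diff[:, 40:60] += 1.0
spans, ps = cluster_permutation_test(diff)
```

        Because only the maximum cluster mass per permutation enters the null distribution, the test controls the family-wise error rate across time points without a per-sample Bonferroni correction.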
    • 44
      Predictive what and when across sensory modalities
      Speaker: Prof. Nicola Molinaro (BCBL, Basque Center on Cognition, Brain and Language)
    • 1:00 PM
      Lunch Break
    • Final Session
      • 45
        Naturalistic viewing involves prediction: extrafoveal preview effect in visual perception

        The vast majority of studies of visual perception have measured behavioral and neural responses to unpredictable, suddenly onsetting stimuli. In natural vision, however, saccades are typically used to bring relevant information, glimpsed with extrafoveal vision, into the fovea for further processing. This raises the question of whether the extrafoveal preview influences visual object recognition in more natural viewing conditions. Here, I will focus on research from my lab showing strong effects of prediction on the post-saccadic response. In the case of face perception, a predictable preview leads to faster and better face recognition judgments and reduced face-related evoked potentials. For more simple stimuli, there is also a preview effect and, for both faces and gratings, information about the preview is present in the EEG/MEG signal prior to saccade onset and integrated with the post-saccadic information. Overall, these studies suggest that visual perception during natural viewing is influenced by the extrafoveal preview, and prediction more generally. In line with theories of active, sensorimotor perception, we would argue that studying behavioral and EEG/MEG responses under more natural viewing conditions may be necessary to understand how visual processing typically works.

        Speaker: Prof. David Melcher (New York University Abu Dhabi)
      • 46
        Cardio-audio regularity encoding during human wakefulness, sleep and coma

        The human brain can encode temporal regularities based on synchronizing sound onsets to the ongoing heartbeat. Here we investigated whether cardio-audio regularity processing can occur in the absence of perceptual awareness by administering auditory sequences while recording continuous electrocardiography and electroencephalography in a cohort of comatose patients (N=65), i.e. in a deep unconscious state, and in a group of healthy individuals during sleep (N=26). We investigated the neural and cardiac correlates of violated auditory prediction by administering a series of sounds that were unexpectedly interrupted by random omissions. Sounds could occur in synchrony with the ongoing heartbeat (synchronous), at a fixed pace (isochronous), or at variable interstimulus intervals, out of synchrony with the ongoing heartbeat (asynchronous). In coma survivors, unexpected omissions elicited a neural surprise response only in the synchronous condition, at -99 to 114 ms and 225 to 391 ms following omission onset. Patients with poor outcome did not exhibit evidence of preserved omission responses. In healthy individuals during N2 sleep, we observed a modulation of the neural response to unexpected omissions within the synchronous auditory sequences at -99 to 117 ms and 322 to 500 ms, and within the isochronous sequences at 83 to 226 ms after omission onset. In healthy individuals, cardio-audio regularity encoding was further demonstrated by a heartbeat deceleration upon omissions in the synchronous condition only, across all vigilance states. Cardio-audio regularity encoding can thus occur in the absence of consciousness and is largely preserved across vigilance states. The degree of preservation of cardiac and auditory integration represents a potential biomarker for coma outcome prognostication.

        Speaker: Marzia De Lucia (Lausanne University Switzerland)
    • Round Table
      • 47
        Round Table: Caspar Schwiedrzik, Lucia Melloni, Nicola Molinaro and Ram Frost
        Speakers: Caspar Schwiedrzik (Neural Circuits and Cognition Lab, European Neuroscience Institute Goettingen. Perception and Plasticity Group, German Primate Center - Leibniz Institute for Primate Research), Lucia Melloni (Neural circuits, consciousness and cognition research group, Max Planck Institute for Empirical Aesthetics. Department of neurology, NYU Grossman School of Medicine), Nicola Molinaro (BCBL, Basque Center on Cognition, Brain and Language), Ram Frost (Department of Psychology, The Hebrew University of Jerusalem)