Human languages are externalized as linear sequences of atomic units.
Generativist approaches assume that categorized chunks in language stem from primitive, language-specific properties of the language faculty. Usage-based approaches to language adopt a processing perspective in which chunk formation and word recognition are strictly tied to statistical computations on the string.
We adopt a processing view and directly address the cognitive foundations of the human capacity to build structure from a linear ordering, exploring the relationship between precedence and containment (Vender et al., 2019, 2020).
We report the results of a series of artificial grammar learning (AGL) studies employing a serial reaction time (SRT) task, in which the sequence of stimuli, presented either visually or haptically, is determined by the rules of the Fibonacci grammar Fib or the foil grammar Skip.
Fib and Skip share the same two deterministic transitions, but they crucially differ in their structure: only Fib is characterized by the presence of so-called k-points, which provide a bridge to hierarchical reconstruction while not themselves giving rise to a predictable deterministic transition. Predicting k-points linearly requires progressively larger chunks, with a non-linear relation between chunk size and predictive power.
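As a minimal illustration of how Fib strings are generated, the sketch below assumes the common formulation of the Fibonacci grammar as a rewriting system with rules 0 → 1 and 1 → 01 (an assumption for illustration; the exact rule set and notation used in the studies may differ):

```python
# Sketch: the Fibonacci grammar as an iterated rewriting system.
# Assumed rules (hypothetical formulation): 0 -> 1, 1 -> 01.

def fib_grammar(n_steps, axiom="0"):
    """Apply the rewrite rules n_steps times, starting from the axiom."""
    rules = {"0": "1", "1": "01"}
    s = axiom
    for _ in range(n_steps):
        s = "".join(rules[symbol] for symbol in s)
    return s

# The generated strings grow in length as Fibonacci numbers,
# which is what gives the grammar its name.
print([len(fib_grammar(k)) for k in range(7)])  # -> [1, 1, 2, 3, 5, 8, 13]
```

Under this formulation, each generation contains the previous two as substructure, which is what makes progressively larger chunks relevant when predicting points in the string.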
We examine children’s and adults’ implicit learning skills, assessing linear learning while also, crucially, investigating their ability to predict k-points.
Results provide evidence not only for sequential learning, but also for hierarchical learning in Fib. We propose that the relations of precedence and containment are not antagonistic ways of processing a temporally ordered sequence of units, but rather strictly interdependent implementations of an abstract mathematical relation of linear ordering.