Recent evidence has shown that convergence of print and speech processing across a network of primarily left-hemisphere regions of the brain is a predictor of future reading skills in children, and a marker of fluent reading ability in adults. The present study extends these findings into the domain of second-language (L2) literacy, through brain imaging data of English and Hebrew L2 learners. Participants received an fMRI brain scan while performing a semantic judgement task on spoken and written words and pseudowords in both their L1 and L2, alongside a battery of L1 and L2 behavioural measures. Imaging results show, overall, a similar network of activation for reading across the two languages, alongside significant convergence of print and speech processing across a network of left-hemisphere regions in both L1 and L2 and in both cohorts. Importantly, convergence is greater for L1 in occipito-temporal regions tied to automatic skilled reading processes, including the visual word-form area, but greater for L2 in frontal regions of the reading network tied to more effortful, active processing. The main groupwise brain effects tell a similar story, with greater L2 than L1 activation across frontal, temporal and parietal regions, but greater L1 than L2 activation in parieto-occipital regions tied to automatic mapping processes in skilled reading. These results provide evidence for the shifting of the reading network towards more automatic processing as reading proficiency rises and the mappings and statistics of the new orthography are learned and incorporated into the reading system.
Statistical Learning (SL) is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying statistical regularities in the input. Recent findings, however, show clear differences in processing regularities across modalities and stimuli as well as low correlations between performance on visual and auditory tasks. Why does a presumably domain-general mechanism show distinct patterns of modality and stimulus specificity? Here we claim that the key to this puzzle lies in the prior knowledge learners bring to the learning task. Specifically, we argue that learners’ already entrenched expectations about speech co-occurrences in their native language impact what they learn from novel auditory verbal input. In contrast, learners are free of such entrenchment when processing sequences of visual material such as abstract shapes. We present evidence from three experiments supporting this hypothesis by showing that auditory-verbal tasks display distinct item-specific effects resulting in low correlations between test items. In contrast, non-verbal tasks – visual and auditory – show high correlations between items. Importantly, we also show that individual performance in visual and auditory SL tasks that do not implicate prior knowledge regarding co-occurrence of elements is highly correlated. In a fourth experiment, we present further support for the entrenchment hypothesis by showing that the variance in performance between different stimuli in auditory-verbal statistical learning tasks can be traced back to their resemblance to participants' native language. We discuss the methodological and theoretical implications of these findings, focusing on models of domain generality/specificity of SL.
The Hebb repetition task, an operationalization of long-term sequence learning through repetition, is the focus of renewed interest, as it is taken to provide a laboratory analogue for naturalistic vocabulary acquisition. Indeed, recent studies have consistently related performance in the Hebb repetition task with a range of linguistic (dis)abilities. However, in spite of the growing interest in the Hebb repetition effect as a theoretical construct, no previous research has ever tested whether the task used to assess Hebb learning offers a stable and reliable measure of individual performance in sequence learning. Since reliability is a necessary condition for predictive validity, in the present work we tested whether individual ability in visual verbal Hebb repetition learning displays basic test-retest reliability. In a first experiment, Hebrew-English bilinguals performed two verbal Hebb tasks, one with English and one with Hebrew consonant letters. They were retested on the same Hebb tasks after a period of about six months. Overall serial recall performance proved to be a stable and reliable capacity of an individual. By contrast, the test-retest reliability of individual learning performance in our Hebb task was close to zero. A second experiment with French speakers replicated these results and demonstrated that the concurrent learning of two repeated Hebb sequences within the same task minimally improves the reliability scores. Taken together, our results raise concerns regarding the usefulness of at least some current Hebb learning tasks for predicting linguistic (dis)abilities. The theoretical implications are discussed.
In recent years, statistical learning (SL) research has seen a growing interest in tracking individual performance in SL tasks, mainly as a predictor of linguistic abilities. We review studies from this line of research and outline three presuppositions underlying the experimental approach they employ: (i) that SL is a unified theoretical construct; (ii) that current SL tasks are interchangeable, and equally valid for assessing SL ability; and (iii) that performance in the standard forced-choice test in the task is a good proxy of SL ability. We argue that these three critical presuppositions are subject to a number of theoretical and empirical issues. First, SL shows patterns of modality- and informational-specificity, suggesting that SL cannot be treated as a unified construct. Second, different SL tasks may tap into separate sub-components of SL that are not necessarily interchangeable. Third, the commonly used forced-choice tests in most SL tasks are subject to inherent limitations and confounds. As a first step, we offer a methodological approach that explicitly spells out a potential set of different SL dimensions, allowing for better transparency in choosing a specific SL task as a predictor of a given linguistic outcome. We then offer possible methodological solutions for better tracking and measuring SL ability. Taken together, these discussions provide a novel theoretical and methodological approach for assessing individual differences in SL, with clear testable predictions.
Almost all types of learning involve, to some degree, the ability to encode regularities across time and space. Although statistical learning (SL) research initially focused on offering a viable alternative to rule-based grammars and specialized mechanisms for word learning (e.g. [1,2]), the processing of regularities embedded in sensory input extends well beyond language. SL, therefore, was taken to offer a comprehensive theory of information processing, holding the promise of advancing knowledge across various domains of cognition including visual and auditory perception, multimodal integration, motor learning, segmentation, categorization and generalization, to name a few.
From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible “statistical” properties that are the object of learning. Much less attention has been given to defining what “learning” is in the context of “statistical learning.” One major difficulty is that SL research has been monitoring participants’ performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures into our theory of SL.
Perceiving linguistic input is vital for human functioning, but the process is complicated by the fact that the incoming signal is often degraded. However, humans can compensate for unimodal noise by relying on simultaneous sensory input from another modality. Here, we investigated noise-compensation for spoken and printed words in two experiments. In the first, behavioral experiment, we observed that accuracy was modulated by reaction time (RT), bias and sensitivity, but noise compensation could nevertheless be explained via accuracy differences when controlling for these factors. In the second experiment, we also measured Event Related Potentials (ERPs) and observed robust electrophysiological correlates of noise compensation starting at around 350 ms after stimulus onset, indicating that noise compensation is most prominent at lexical/semantic processing levels.
Most research in statistical learning (SL) has focused on the mean success rates of participants in detecting statistical contingencies at a group level. In recent years, however, researchers have shown increased interest in individual abilities in SL, either to predict other cognitive capacities or as a tool for understanding the mechanism underlying SL. Most if not all of this research enterprise has employed SL tasks that were originally designed for group-level studies. We argue that from an individual-differences perspective, such tasks are psychometrically weak, and sometimes even flawed. In particular, the existing SL tasks have three major shortcomings: (1) the number of trials in the test phase is often too small (or there is extensive repetition of the same targets throughout the test); (2) a large proportion of the sample performs at chance level, so that most of the data points reflect noise; and (3) the test items following familiarization are all of the same type and an identical level of difficulty. These factors lead to high measurement error, inevitably resulting in low reliability, and thereby doubtful validity. Here we present a novel method specifically designed for the measurement of individual differences in visual SL. The novel task we offer displays substantially superior psychometric properties. We report data regarding the reliability of the task and discuss the importance of the implementation of such tasks in future research.
What determines individuals’ efficacy in detecting regularities in visual statistical learning? Our theoretical starting point assumes that the variance in performance of statistical learning (SL) can be split into the variance related to efficiency in encoding representations within a modality and the variance related to the relative computational efficiency of detecting the distributional properties of the encoded representations. Using a novel methodology, we dissociated encoding from higher-order learning factors, by independently manipulating exposure duration and transitional probabilities in a stream of visual shapes. Our results show that the encoding of shapes and the retrieving of their transitional probabilities are not independent and additive processes, but interact to jointly determine SL performance. The theoretical implications of these findings for a mechanistic explanation of SL are discussed.
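The transitional-probability manipulation described above can be illustrated with a minimal sketch (the shape labels, function names, and triplet inventory below are our own illustrative assumptions, not the study's actual materials): a familiarization stream is built from fixed "triplets" of shapes, so that transitions within a triplet are deterministic while transitions between triplets are not, and empirical transitional probabilities are then recovered from the stream.

```python
import random
from collections import Counter, defaultdict

def make_stream(triplets, n_reps, seed=0):
    """Concatenate randomly ordered triplets into one stream,
    disallowing immediate repetition of the same triplet."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_reps):
        t = rng.choice([t for t in triplets if t != prev])
        stream.extend(t)
        prev = t
    return stream

def transitional_probabilities(stream):
    """Empirical TP(B|A) = count(A followed by B) / count(A),
    computed over all adjacent pairs in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    tps = defaultdict(dict)
    for (a, b), c in pair_counts.items():
        tps[a][b] = c / first_counts[a]
    return tps

# Hypothetical inventory: four triplets of abstract shape labels.
triplets = [("A", "B", "C"), ("D", "E", "F"),
            ("G", "H", "I"), ("J", "K", "L")]
stream = make_stream(triplets, n_reps=100)
tps = transitional_probabilities(stream)
print(tps["A"]["B"])  # within-triplet transition: TP = 1.0
```

Within-triplet transitions (e.g. A→B) have TP = 1.0 because triplets are never broken up, whereas a triplet-final shape (e.g. C) is followed by any of the other triplet-initial shapes, so its outgoing TPs are spread across several successors; lowering within-triplet TPs (as in the manipulation above) would amount to occasionally breaking triplets in `make_stream`.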
We propose and test a theoretical perspective in which a universal hallmark of successful literacy acquisition is the convergence of the speech and orthographic processing systems onto a common network of neural structures, regardless of how spoken words are represented orthographically in a writing system. During functional MRI, skilled adult readers of four distinct and highly contrasting languages, Spanish, English, Hebrew, and Chinese, performed an identical semantic categorization task to spoken and written words. Results from three complementary analytic approaches demonstrate limited language variation, with speech-print convergence emerging as a common brain signature of reading proficiency across the wide spectrum of selected languages, whether their writing system is alphabetic or logographic, whether it is opaque or transparent, and regardless of the phonological and morphological structure it represents.
The processing of letter order has profound implications for understanding how visually presented words are processed and how they are recognized, given the lexical architecture that characterizes a given language. Research conducted in different writing systems suggests that letter-position effects, such as transposed-letter priming, are not universal. The cognitive system may perform very different types of processing on a sequence of letters depending on factors that are unrelated to peripheral orthographic characteristics but related to the deep structural properties of the printed stimuli. Assuming that identical neurobiological constraints govern reading performance in any language, these findings suggest that neurobiological constraints interact with the idiosyncratic statistical properties of a given writing system to determine the preciseness or fuzziness of letter-position coding. This chapter reviews the evidence for this interaction and discusses the implications for theories of reading and for modeling visual word recognition.
Second language comprehension is generally not as efficient and effective as native language comprehension. In the present study, we tested the hypothesis that lower-level processes such as lexical support for phonetic perception are a contributing factor to these differences. For native listeners, it has been shown that the perception of ambiguous acoustic–phonetic segments is driven by lexical factors (Samuel, Psychological Science, 12, 348–351, 2001). Here, we tested whether nonnative listeners can use lexical context in the same way. Native Hebrew speakers living in Israel were tested with American English stimuli. When subtle acoustic cues in the stimuli worked against the lexical context, these nonnative speakers showed no evidence of lexical guidance of phonetic perception. This result conflicts with the performance of native speakers, who demonstrate lexical effects on phonetic perception even with conflicting acoustic cues. When stimuli without any conflicting cues were used, the native Hebrew subjects produced results similar to those of native English speakers, showing lexical support for phonetic perception in their second language. In contrast, native Arabic speakers, who were less proficient in English than the native Hebrew speakers, showed no ability to use lexical activation to support phonetic perception, even without any conflicting cues. These results reinforce previous demonstrations of lexical support of phonetic perception and demonstrate how proficiency modulates the use of lexical information in driving phonetic perception.
* Statistical learning (SL) theory is challenged by modality/stimulus-specific effects.
* We argue that SL is shaped by both modality-specific constraints and domain-general principles.
* SL relies on modality-specific neural networks and partially shared neural networks.
* Studies of individual differences provide targeted insights into mechanisms of SL.