EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications up until 2022 (with some early 2023s) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language articles, please email us!
Chuanli Zang; Zhichao Zhang; Manman Zhang; Federica Degno; Simon P. Liversedge
In: Journal of Memory and Language, vol. 128, pp. 1–14, 2023.
The issue of whether lexical processing occurs serially or in parallel has been a central and contentious issue in respect of models of eye movement control in reading for well over a decade. A critical question in this regard concerns whether lexical parafoveal-on-foveal effects exist in reading. Because Chinese is an unspaced and densely packed language, readers may process parafoveal words to a greater extent than they do in spaced alphabetic languages. In two experiments using a novel Stroop boundary paradigm (Rayner, 1975), participants read sentences containing a single-character color-word whose preview was manipulated (identity or pseudocharacter, printed in black [no-color], or in a color congruent or incongruent with the character meaning). Two boundaries were used, one positioned two characters before the target and one immediately to the left of the target. The previews changed from black to color and then back to black as the eyes crossed the first and then the second boundary respectively. In Experiment 1 four color-words (red, green, yellow and blue) were used and in Experiment 2 only red and green color-words were used as targets. Both experiments showed very similar patterns such that reading times were increased for colored compared to no-color previews indicating a parafoveal visual interference effect. Most importantly, however, there were no robust interactive effects. Preview effects were comparable for congruent and incongruent color previews at the pretarget region when the data were combined from both experiments. These results favour serial processing accounts and indicate that even under very favourable experimental conditions, lexical semantic parafoveal-on-foveal effects are minimal.
Yao Yao; Katrina Connell; Stephen Politzer-Ahles
In: Bilingualism: Language and Cognition, pp. 1–14, 2023.
Differential affective processing has been widely documented for bilinguals: L1 affective words elicit higher levels of arousal and stronger emotionality ratings than L2 affective words (Pavlenko, 2012). In this study, we focus on two closely related Chinese languages, Mandarin and Cantonese, whose affective lexicons are highly overlapping, with shared lexical items that only differ in pronunciation across languages. We recorded L1 Cantonese – L2 Mandarin bilinguals' pupil responses to auditory tokens of Cantonese and Mandarin affective words. Our results showed that Cantonese–Mandarin bilinguals had stronger pupil responses when the affective words were pronounced in Cantonese (L1) than when the same words were pronounced in Mandarin (L2). The effect was most evident in taboo words and among bilinguals with lower L2 proficiency. We discuss the theoretical implications of the findings in the frameworks of exemplar theory and models of the bilingual lexicon.
Danhui Wang; Man Zeng; Han Zhao; Lei Gao; Shan Li; Zibei Niu; Xuejun Bai; Xiaolei Gao
Effects of syllable boundaries in Tibetan reading Journal Article
In: Scientific Reports, vol. 13, no. 314, pp. 1–10, 2023.
Interword spaces exist in the texts of many languages that use alphabetic writing systems, and as a kind of word boundary information they play an important role in the reading process. Tibetan also uses an alphabetic writing system, but its text has no spaces between words as word boundary markers; instead, syllables are separated by tshegs, which are superscript dots. It is therefore interesting to investigate the role of tshegs and what effect replacing tshegs with spaces has on Tibetan reading. To answer these questions, Experiment 1 had 72 Tibetan undergraduates read sentences in three syllable-boundary conditions (normal, spaced, and untsheged). However, because deleting or replacing tshegs changes the spatial distribution of information in Tibetan sentences across conditions, this difference may have had a potential impact on the results. To rule out this confounding factor, in Experiment 2, 58 undergraduates read sentences in both untsheged and alternating-color conditions. Overall, the global and local analyses revealed that tshegs, spaces, and alternating-color markers as syllable boundaries can help readers segment syllables in Tibetan reading. Both spaces and tshegs are effective visual syllable segmentation cues, and spaces are more effective visual syllable segmentation cues than tshegs.
Florence Van Meenen; Nicolas Masson; Leen Catrysse; Liesje Coertjens
In: Learning and Instruction, vol. 84, pp. 1–11, 2023.
Little is known on how students process peer feedback (PF) and use it to improve their work. We asked 59 participants to read the feedback of two peers on a fictional essay and to revise it, while we recorded their gaze behaviour. Regarding the PF processing subphase, discrepant PF led to more transitions, but only for participants who reported the discrepancy afterwards. Counterintuitively, participants who did not report the discrepancy, showed longer first-pass reading times. Concerning the PF use subphase, dwell time on essay correlated positively with the quality of the revised essays assessed by professors. Participants with a high-quality revision spent more time addressing higher order comments, corrected one or two lower order aspects at a time and proofread in the end, in which they went beyond the suggestions provided in the PF. These insights can be used when designing training to foster students' uptake of (discrepant) PF.
Adam J. Parker; Milla Räsänen; Timothy J. Slattery
In: Applied Cognitive Psychology, pp. 1–13, 2023.
When displaying text on a page or a screen, only a finite number of characters can be presented on a single line. If the text exceeds that finite value, then text wrapping occurs. Often this process results in longer, more difficult to process words being positioned at the start of a line. We conducted an eye movement study to examine how this artefact of text wrapping affects passage reading. This allowed us to answer the question: should word difficulty be used when determining line breaks? Thirty-nine participants read 20 passages where low-frequency target words were either line-initial or line-final. There was no statistically reliable effect of our manipulation on passage reading time or comprehension despite several effects at a local level. Regarding our primary research question, the evidence suggests that word difficulty may not need to be accounted for when determining line breaks and assigning words to new lines.
Justin B. Kueser; Ryan Peters; Arielle Borovsky
In: Journal of Experimental Child Psychology, vol. 226, pp. 1–19, 2023.
Verb meaning is challenging for children to learn across varied events. This study examined how the taxonomic semantic similarity of the nouns in novel verb learning events in a progressive alignment learning condition differed from the taxonomic dissimilarity of nouns in a dissimilar learning condition in supporting near (similar) and far (dissimilar) verb generalization to novel objects in an eye-tracking task. A total of 48 children in two age groups (23 girls; younger: 21–24 months
Rony Lemel; Lilach Shalev; Gal Nitsan; Boaz M. Ben-David
In: Research in Developmental Disabilities, vol. 133, pp. 1–15, 2023.
Background: Cognitive skills such as sustained attention, inhibition and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition on multi-talker babble (MTB) background for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD. Aims: Gauging the effects of ADHD on segregating the spoken target-word from its sound-sharing competitor, in MTB and under working-memory (WM) load. Methods and procedures: Twenty-four yaADHD and 22 matched controls who differed in sustained attention (SA) but not in WM were asked to follow spoken instructions presented on MTB to touch a named object, while retaining one (low-load) or four (high-load) digit/s for later recall. Their eye fixations were tracked. Outcomes and results: In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD. Conclusions and implications: ADHD slows speech processing in adverse listening conditions and increases hesitation, as speech unfolds in time. These effects, detected only by online eyetracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD. What this paper adds: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment and social interactions. It involves several executive functions, such as inhibition and sustained attention. Impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD.
The current study is the first to investigate online speech processing as the word unfolds in time using eyetracking for young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared to matched controls. The performance of yaADHD indicated increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations, on the single word level, could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications on their quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on WM standardized tests, WM load appears to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eyetracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion.
Katrine Falcon Søby; Evelyn Arko Milburn; Line Burholt Kristensen; Valentin Vulchanov; Mila Vulchanova
In: Applied Psycholinguistics, pp. 1–28, 2023.
How do native speakers process texts with anomalous learner syntax? Second-language learners of Norwegian, and other verb-second (V2) languages, frequently place the verb in third position (e.g., *Adverbial-Subject-Verb), although it is mandatory for the verb in these languages to appear in second position (Adverbial-Verb-Subject). In an eye-tracking study, native Norwegian speakers read sentences with either grammatical V2 or ungrammatical verb-third (V3) word order. Unlike previous eye-tracking studies of ungrammaticality, which have primarily addressed morphosyntactic anomalies, we exclusively manipulate word order with no morphological or semantic changes. We found that native speakers reacted immediately to ungrammatical V3 word order, indicated by increased fixation durations and more regressions out on the subject, and subsequently on the verb. Participants also recovered quickly, already on the following word. The effects of grammaticality were unaffected by the length of the initial adverbial. The study contributes to future models of sentence processing, which should be able to accommodate various types of “noisy” input, that is, non-standard variation. Together with new studies of the processing of other L2 anomalies in Norwegian, the current findings can help language instructors and students prioritize which aspects of grammar to focus on.
Vladislava Staroverova; Anastasiya Lopukhina; Nina Zdorova; Nina Ladinskaya; Olga Vedenina; Sofya Goldina; Anastasiia Kaprielova; Ksenia Bartseva; Olga Dragoy
In: Journal of Experimental Child Psychology, vol. 226, pp. 1–11, 2023.
Studies on German and English have shown that children and adults can rely on phonological and orthographic information from the parafovea during reading, but this reliance differs between ages and languages. In the current study, we investigated the development of phonological and orthographic parafoveal processing during silent reading in Russian-speaking 8-year-old children, 10-year-old children, and adults using the gaze-contingent boundary paradigm. The participants read sentences with embedded nouns that were presented in original, pseudohomophone, control for pseudohomophone, transposed-letter, and control for transposed-letter conditions in the parafoveal area to assess phonological and orthographic preview benefit effects. The results revealed that all groups of participants relied only on orthographic but not phonological parafoveal information. These findings indicate that 8-year-old children already preprocess parafoveal information similarly to adults.
Miriam Rivero-Contreras; Paul E. Engelhardt; David Saldaña
In: Learning and Instruction, vol. 84, pp. 1–13, 2023.
The Easy-to-Read guidelines recommend visual support and lexical simplification to facilitate text processing, but few studies have empirically verified the efficacy of these guidelines. This study examined the influence of these recommendations on sentence processing by examining eye movements at the text- and word-level in adult readers. We tested 30 non-university adults (low education level) and 30 university adults (high education level). The experimental task consisted of 60 sentences. Half were accompanied by an image and half were not, and half contained a low-frequency word and half a high-frequency word. Results showed that visual support and lexical simplification facilitated processing in both groups of adults, and non-university adults were significantly slower than university adults at sentence processing. However, lexical simplification resulted in faster processing in the non-university adults' group. Conclusions focus on the mechanisms in which both adaptations benefit readers, and practical implications for reading comprehension.
Camilo R. Ronderos; Ernesto Guerra; Pia Knoeferle
In: Language and Cognition, vol. 15, no. 1, pp. 1–28, 2023.
Several studies have investigated the comprehension of decontextualized English nominal metaphors. However, not much is known about how contextualized, non-nominal, non-English metaphors are processed, and how this might inform existing theories of metaphor comprehension. In the current work, we investigate the effects of context and of sequential order for an under-studied type of construction: German verb–object metaphors. In two visual-world, eye-tracking experiments, we manipulated whether a discourse context biased a spoken target utterance toward a metaphoric or a literal interpretation. We also manipulated the order of verb and object in the target utterances (e.g., Stefan interviewt eine Hyäne, ‘Stefan interviews a hyena', verb→object; and Stefan wird eine Hyäne interviewen, ‘Stefan will a hyena interview', object→verb). Experiment 1 shows that contextual cues interacted with sequential order, mediating the processing of verb–object metaphors: When the context biased toward a metaphoric interpretation, participants readily understood the object metaphorically for the verb→object sequence, whereas they likely first understood it literally for the object→verb sequence. Crucially, no such effect of sequential order was found when context biased toward a literal interpretation. Experiment 2 suggests that differences in processing found in Experiment 1 were brought on by the interaction of discourse context and sequential order and not by sequential order alone. We propose ways in which existing theoretical views could be extended to account for these findings. Overall, our study shows the importance of context during figurative language comprehension and highlights the need to test the predictions of metaphor theories on non-English and non-nominal metaphors.
Scott S. Hsieh; David A. Cook; Akitoshi Inoue; Hao Gong; Parvathy Sudhir Pillai; Matthew P. Johnson; Shuai Leng; Lifeng Yu; Jeff L. Fidler; David R. Holmes III; Rickey E. Carter; Cynthia H. McCollough; Joel G. Fletcher
In: Radiology, vol. 306, no. 2, pp. 1–10, 2023.
Background: Substantial interreader variability exists for common tasks in CT imaging, such as detection of hepatic metastases. This variability can undermine patient care by leading to misdiagnosis. Purpose: To determine the impact of interreader variability associated with (a) reader experience, (b) image navigation patterns (eg, eye movements, workstation interactions), and (c) eye gaze time at missed liver metastases on contrast-enhanced abdominal CT images. Materials and Methods: In a single-center prospective observational trial at an academic institution between December 2020 and February 2021, readers were recruited to examine 40 contrast-enhanced abdominal CT studies (eight normal, 32 containing 91 liver metastases). Readers circumscribed hepatic metastases and reported confidence. The workstation tracked image navigation and eye movements. Performance was quantified by using the area under the jackknife alternative free-response receiver operating characteristic (JAFROC-1) curve and per-metastasis sensitivity and was associated with reader experience and image navigation variables. Differences in area under JAFROC curve were assessed with the Kruskal-Wallis test followed by the Dunn test, and effects of image navigation were assessed by using the Wilcoxon signed-rank test. Results: Twenty-five readers (median age, 38 years; IQR, 31–45 years; 19 men) were recruited and included nine subspecialized abdominal radiologists, five nonabdominal staff radiologists, and 11 senior residents or fellows. Reader experience explained differences in area under the JAFROC curve, with abdominal radiologists demonstrating greater area under the JAFROC curve (mean, 0.77; 95% CI: 0.75, 0.79) than trainees (mean, 0.71; 95% CI: 0.69, 0.73) (P = .02) or nonabdominal subspecialists (mean, 0.69; 95% CI: 0.60, 0.78) (P = .03). Sensitivity was similar within the reader experience groups (P = .96).
Image navigation variables that were associated with higher sensitivity included longer interpretation time (P = .003) and greater use of coronal images (P < .001). The eye gaze time was at least 0.5 and 2.0 seconds for 71% (266 of 377) and 40% (149 of 377) of missed metastases, respectively. Conclusion: Abdominal radiologists demonstrated better discrimination for the detection of liver metastases on abdominal contrast-enhanced CT images. Missed metastases frequently received at least a brief eye gaze. Higher sensitivity was associated with longer interpretation time and greater use of liver display windows and coronal images.
Laura Fernández-Arroyo; Nuria Sagarra; Kaylee Fernández
In: The Mental Lexicon, pp. 1–26, 2023.
Language experience is essential for SLA. Yet, studies comparing the role of L2 proficiency and L2 use on L2 processing are scant, and there are no studies examining how these variables modulate learners' ability to generalize grammatical associations to new instances. This study investigates whether L2 proficiency and L2 use affect L2 stress-tense suffix associations (a stressed syllable cuing a present suffix, and an unstressed syllable cuing a preterit suffix) using eye-tracking. Spanish monolinguals and English learners of Spanish varying in L2 proficiency and L2 use saw two verbs (e.g., firma-firmó ‘(s)he signs/signed'), heard a sentence containing one of the verbs, and chose the verb they had heard. Both groups looked at target verbs above chance before hearing the suffix, but the monolinguals did so more accurately and earlier than the learners. The learners recognized past verbs faster than present verbs, were faster with higher than lower L2 proficiency, and later with higher than lower L2 use. Finally, higher L2 proficiency yielded earlier morphological activation but higher L2 use produced later morphological activation, indicating that L2 proficiency and L2 use affect L2 word processing differently. We discuss the contribution of these findings to language acquisition and processing models, as well as models of general cognition.
Monica Barbir; Mireille J. Babineau; Anne-Caroline Fiévet; Anne Christophe
Rapid infant learning of syntactic–semantic links Journal Article
In: Proceedings of the National Academy of Sciences, vol. 120, no. 1, pp. 1–6, 2023.
In the second year of life, infants begin to rapidly acquire the lexicon of their native language. A key learning mechanism underlying this acceleration is syntactic bootstrapping: the use of hidden cues in grammar to facilitate vocabulary learning. How infants forge the syntactic–semantic links that underlie this mechanism, however, remains speculative. A hurdle for theories is identifying computationally light strategies that have high precision within the complexity of the linguistic signal. Here, we presented 20-mo-old infants with novel grammatical elements in a complex natural language environment and measured their resultant vocabulary expansion. We found that infants can learn and exploit a natural language syntactic–semantic link in less than 30 min. The rapid speed of acquisition of a new syntactic bootstrap indicates that even emergent syntactic–semantic links can accelerate language learning. The results suggest that infants employ a cognitive network of efficient learning strategies to self-supervise language development.
Anthony Beh; Paul V. McGraw; Denis Schluppeck
In: Vision Research, vol. 204, pp. 1–14, 2023.
Vision loss is a common, devastating complication of cerebral strokes. In some cases the complete contra-lesional visual field is affected, leading to problems with routine tasks and, notably, the ability to read. Although visual information crucial for reading is imaged on the foveal region, readers often extract useful parafoveal information from the next word or two in the text. In hemianopic field loss, parafoveal processing is compromised, shrinking the visual span and resulting in slower reading speeds. Recent approaches to rehabilitation using perceptual training have been able to demonstrate some recovery of useful visual capacity. As gains in visual sensitivity were most pronounced at the border of the scotoma, it may be possible to use training to restore some of the lost visual span for reading. As restitutive approaches often involve prolonged training sessions, it would be beneficial to know how much recovery is required to restore reading ability. To address this issue, we employed a gaze-contingent paradigm using a low-pass filter to blur one side of the text, functionally simulating a visual field defect. The degree of blurring acts as a proxy for visual function recovery that could arise from restitutive strategies, and allows us to evaluate and quantify the degree of visual recovery required to support normal reading fluency in patients. Because reading ability changes with age, we recruited a group of younger participants, and another with older participants who are closer in age to risk groups for ischaemic strokes. Our results show that changes in patterns of eye movement observed in hemianopic loss can be captured using this simulated reading environment. This opens up the possibility of using participants with normal visual function to help identify the most promising strategies for ameliorating hemianopic loss, before translation to patient groups.
Carmen Julia Coloma; Ernesto Guerra; Zulema De Barbieri; Andrea Helo
In: International Journal of Speech-Language Pathology, pp. 1–13, 2023.
Purpose: Article-noun disagreement in spoken language is a marker of children with developmental language disorder (DLD). However, the evidence is less clear regarding article comprehension. This study investigates article comprehension in monolingual Spanish-speaking children with and without DLD. Method: Eye tracking methodology used in a longitudinal experimental design enabled the examination of real-time article comprehension. At time 1, the children were 40 monolingual Spanish-speaking preschoolers (20 with DLD and 20 with typical language development [TLD]). A year later (time 2), 27 of these children (15 with DLD and 12 with TLD) were evaluated. Children listened to simple phrases while inspecting a four-object visual context. The article in the phrase agreed in number and gender with only one of the objects. Result: At time 1, children with DLD did not use articles to identify the correct image, while children with TLD anticipated the correct picture. At time 2, both groups used the articles' morphological markers, but children with DLD showed a slower and weaker preference for the correct referent compared to their age-matched peers. Conclusion: These findings suggest a later emergence, but a similar developmental trajectory, of article comprehension in children with DLD compared to their peers with TLD.
Michael Hahn; Frank Keller
In: Cognition, vol. 230, pp. 1–25, 2023.
Research on human reading has long documented that reading behavior shows task-specific effects, but it has been challenging to build general models predicting what reading behavior humans will show in a given task. We introduce NEAT, a computational model of the allocation of attention in human reading, based on the hypothesis that human reading optimizes a tradeoff between economy of attention and success at a task. Our model is implemented using contemporary neural network modeling techniques, and makes explicit and testable predictions about how the allocation of attention varies across different tasks. We test this in an eyetracking study comparing two versions of a reading comprehension task, finding that our model successfully accounts for reading behavior across the tasks. Our work thus provides evidence that task effects can be modeled as optimal adaptation to task demands.
Christina M. Blomquist; Rochelle S. Newman; Jan Edwards
In: Journal of Experimental Child Psychology, vol. 227, pp. 1–10, 2023.
Although there is ample evidence documenting the development of spoken word recognition from infancy to adolescence, it is still unclear how development of word-level processing interacts with higher-level sentence processing, such as the use of lexical–semantic cues, to facilitate word recognition. We investigated how the ability to use an informative verb (e.g., draws) to predict an upcoming word (picture) and suppress competition from similar-sounding words (pickle) develops throughout the school-age years. Eye movements of children from two age groups (5–6 years and 9–10 years) were recorded while the children heard a sentence with an informative or neutral verb (The brother draws/gets the small picture) in which the final word matched one of a set of four pictures, one of which was a cohort competitor (pickle). Both groups demonstrated use of the informative verb to more quickly access the target word and suppress cohort competition. Although the age groups showed similar ability to use semantic context to facilitate processing, the older children demonstrated faster lexical access and more robust cohort suppression in both informative and uninformative contexts. This suggests that development of word-level processing facilitates access of top-down linguistic cues that support more efficient spoken language processing. Whereas developmental differences in the use of semantic context to facilitate lexical access were not explained by vocabulary knowledge, differences in the ability to suppress cohort competition were explained by vocabulary. This suggests a potential role for vocabulary knowledge in the resolution of lexical competition and perhaps the influence of lexical competition dynamics on vocabulary development.
Christina Blomquist; Bob McMurray
The development of lexical inhibition in spoken word recognition Journal Article
In: Developmental Psychology, vol. 59, no. 1, pp. 186–206, 2023.
As a spoken word unfolds over time, similar sounding words (cap and cat) compete until one word “wins”. Lexical competition becomes more efficient from infancy through adolescence. We examined one potential mechanism underlying this development: lexical inhibition, by which activated candidates suppress competitors. In Experiment 1, younger (7–8 years) and older (12–13 years) children heard words (cap) in which the onset was manipulated to briefly boost competition from a cohort competitor (cat). This was compared to a condition with a nonword (cack) onset that would not inhibit the target. Words were presented in a visual world task during which eye movements were recorded. Both groups showed less looking to the target when perceiving the competitor-splice relative to the nonword-splice, showing engagement of lexical inhibition. Exploratory analyses of linguistic adaptation across the experiment revealed that older children demonstrated consistent lexical inhibition across the experiment and younger children did not, initially showing no effect in the first half of trials and then a robust effect in the latter half. In Experiment 2, adults also displayed consistent lexical inhibition in the same task. These findings suggest that younger children do not consistently engage lexical inhibition in typical listening but can quickly bring it online in response to certain linguistic experiences. Computational modeling showed that age-related differences are best explained by increased engagement of inhibition rather than growth in activation. These findings suggest that continued development of lexical inhibition in later childhood may underlie increases in efficiency of spoken word recognition.
Rolando Bonandrini; Eraldo Paulesu; Daniela Traficante; Elena Capelli; Marco Marelli; Claudio Luzzatti
In: Neuropsychologia, vol. 180, pp. 1–16, 2023.
Despite its widespread use to measure functional lateralization of language in healthy subjects, the neurocognitive bases of the visual field effect in lateralized reading are still debated. Crucially, the lack of knowledge on the nature of the visual field effect is accompanied by a lack of knowledge on the relative impact of psycholinguistic factors on its measurement, thus potentially casting doubts on its validity as a functional laterality measure. In this study, an eye-tracking-controlled tachistoscopic lateralized lexical decision task (Experiment 1) was administered to 60 right-handed and 60 left-handed volunteers and word length, orthographic neighborhood, word frequency, and imageability were manipulated. The magnitude of the visual field effect was bigger in right-handed than in left-handed participants. Across the whole sample, a visual field-by-frequency interaction was observed, whereby a comparatively smaller effect of word frequency was detected in the left visual field/right hemisphere (LVF/RH) than in the right visual field/left hemisphere (RVF/LH). In a subsequent computational study (Experiment 2), efficient (LH) and inefficient (RH) activation of lexical orthographic nodes was modelled by means of the Naïve Discriminative Learning approach. Computational data simulated the effect of visual field and its interaction with frequency observed in Experiment 1. Data suggest that the visual field effect can be biased by word frequency. Less distinctive connections between orthographic cues and lexical/semantic output units in the RH than in the LH can account for the emergence of the visual field effect and its interaction with word frequency.
Haiyan Wang; Matthew Walenski; Kaitlyn Litcofsky; Jennifer E. Mack; M. Marsel Mesulam; Cynthia K. Thompson
Verb production and comprehension in primary progressive aphasia Journal Article
In: Journal of Neurolinguistics, vol. 64, pp. 1–18, 2022.
Studies of word class processing have found verb retrieval impairments in individuals with primary progressive aphasia (Bak et al., 2001; Cappa et al., 1998; Cotelli et al., 2006; Hillis, Heidler-Gary, et al., 2006; Hillis, Oh, & Ken, 2004; Marcotte et al., 2014; Rhee, Antiquena, & Grossman, 2001; Silveri & Ciccarelli, 2007; Thompson, Lukic, et al., 2012) associated primarily with the agrammatic variant. However, fewer studies have focused on verb comprehension, with inconsistent results. Because verbs are critical to both production and comprehension of clauses and sentences, we investigated verb processing across domains in agrammatic, logopenic, and semantic PPA and a group of age-matched healthy controls. Participants completed a confrontation naming task for verb production and an eye-tracking word-picture matching task for online verb comprehension. All PPA groups showed impaired verb production and comprehension relative to healthy controls. Most notably, the PPA-S group performed more poorly than the other two PPA variants in both domains. Overall, the results indicate that semantic deficits in the PPA-S extend beyond object knowledge to verbs as well, adding to our knowledge concerning the nature of the language deficits in the three variants of primary progressive aphasia.
Xuling Li; Man Zeng; Lei Gao; Shan Li; Zibei Niu; Danhui Wang; Tianzhi Li; Xuejun Bai; Xiaolei Gao
In: Journal of Eye Movement Research, vol. 15, no. 5, 2022.
Two eye-tracking experiments were used to investigate the mechanism of word satiation in Tibetan reading. The results revealed that, at a low repetition level, gaze duration and total fixation duration in the semantically unrelated condition were significantly longer than in the semantically related condition; at a medium repetition level, reaction time in the semantically related condition was significantly longer than in the semantically unrelated condition; at a high repetition level, the total fixation duration and reaction time in the semantically related condition were significantly longer than in the semantically unrelated condition. However, fixation duration and reaction time showed no significant difference between the similar and dissimilar orthography at any repetition level. These findings imply that there are semantic priming effects in Tibetan reading at a low repetition level, but semantic satiation effects at greater repetition levels, which occur in the late stage of lexical processing.
Charlotte Moore; Elika Bergelson
In: Journal of Memory and Language, vol. 126, pp. 1–17, 2022.
By around 12 months, infants have well-specified phonetic representations for the nouns they understand, for instance looking less at a car upon hearing 'cur' than 'car' (Swingley and Aslin, 2002). Here we test whether such high-fidelity representations extend to irregular nouns, and regular and irregular verbs. A corpus analysis confirms the intuition that irregular verbs are far more common than irregular nouns in speech to young children. Two eyetracking experiments then test whether toddlers are sensitive to mispronunciation in regular and irregular nouns (Experiment 1) and verbs (Experiment 2). For nouns, we find a mispronunciation effect and no regularity effect in 18-month-olds. For verbs, in Experiment 2a, we find only a regularity effect and no mispronunciation effect in 18-month-olds, though toddlers' poor comprehension overall limits interpretation. Finally, in Experiment 2b we find a mispronunciation effect and no regularity effect in 26-month-olds. The interlocking roles of lexical class and regularity for wordform representations and early word learning are discussed.
Brice Olivier; Anne Guérin-Dugué; Jean-Baptiste Durand
In: Journal of Eye Movement Research, vol. 15, no. 4, pp. 1–19, 2022.
Our objective is to analyze scanpaths acquired while participants performed a reading task aimed at answering a binary question: Is the text related or not to some given target topic? We propose a data-driven method based on hidden semi-Markov chains to segment scanpaths into phases deduced from the model states, which are shown to represent different cognitive strategies: normal reading, fast reading, information search, and slow confirmation. These phases were confirmed using different external covariates, including semantic information extracted from the texts. Analyses highlighted a strong preference of specific participants for specific strategies and, more globally, large individual variability in eye-movement characteristics, as accounted for by random effects. As a perspective, the possibility of improving reading models by accounting for possible sources of heterogeneity during reading is discussed.
M. Antúnez; P. J. López-Pérez; J. Dampuré; H. A. Barber
In: Journal of Neurolinguistics, vol. 63, 2022.
During reading, we can process words allocated to the parafoveal visual region. Our ability to extract parafoveal information is determined by the availability of attentional resources, and by how these are distributed among words in the visual field. According to the foveal load hypothesis, a greater difficulty in processing the foveal word would result in less attentional resources being allocated to the parafoveal word, thereby hindering its processing. However, contradictory results have raised questions about which foveal load manipulations may affect the processing of parafoveal words at different levels. We explored whether the semantic processing of parafoveal words can be modulated by variations in a frequency-based foveal load. When participants read word triads, modulations in the N400 component indicated that, while parafoveal words were semantically processed when foveal load was low, their meaning could not be accessed if the foveal word was more difficult to process. Therefore, a frequency-based foveal load modulates semantic parafoveal processing and a semantic preview manipulation may be a suitable baseline to test the foveal load hypothesis.
Michael G. Cutter; Ruth Filik; Kevin B. Paterson
In: Journal of Memory and Language, vol. 125, pp. 1–14, 2022.
We present a replication of Levy, Bicknell, Slattery, and Rayner (2009). In this prior study participants read sentences in which a perceptually confusable preposition (at; confusable with as) or non-confusable preposition (toward) was followed by a verb more likely to appear in the syntactic structure formed by replacing at with as (e.g. tossed) or a verb that was not more likely to appear in this structure (e.g. thrown). Readers experienced processing difficulty upon fixating verbs like tossed following at, but not toward. Levy et al. argued that this suggests readers maintained uncertainty about previously fixated words' identities. We argue that this finding has wide-ranging implications for language processing theories, and that a replication is required. On the basis of a Bayes Factor Design Analysis we conducted a replication study with 56 items and 72 participants in order to determine whether Levy et al.'s effects are replicable. Using Bayesian statistical techniques we show that in our dataset there is evidence against the existence of the interaction Levy et al. found, and thus conclude that this study is non-replicable.
Ruth E. Corps; Charlotte Brooke; Martin J. Pickering
In: Journal of Memory and Language, vol. 122, pp. 1–20, 2022.
Comprehenders often predict what they are going to hear. But do they make the best predictions possible? We addressed this question in three visual-world eye-tracking experiments by asking when comprehenders consider perspective. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. In all three experiments, participants rapidly predicted semantic associates of the verb. But participants also predicted consistently, that is, consistent with their beliefs about what the speaker would ultimately say. They predicted consistently from the speaker's perspective in Experiment 1, their own perspective in Experiment 2, and the character's perspective in Experiment 3. This consistent effect occurred later than the associative effect. We conclude that comprehenders consider perspective when predicting, but not from the earliest moments of prediction, consistent with a two-stage account.
Xianglan Chen; Hulin Ren; Xiao Ying Yan
In: Frontiers in Psychology, vol. 13, pp. 1–10, 2022.
Current cognitively oriented research on metaphor proposes that understanding metaphorical expressions is a process of building embodied simulations, which are constrained by past and present bodily experiences. However, it has also been shown that metaphor processing is also constrained by the linguistic context but, to our knowledge, there is no comparable work in the domain of metonymy. As an initial attempt to fill this gap, the present study uses eye-tracking experimentation to explore this aspect of Chinese metonymy processing. It complements previous work on how the length of preceding linguistic context influences metonymic processing by focusing on: (1) the contextual information preceding the target words; (2) the immediate spillover after the target words; and (3) whether the logical relationship between the preceding contextual information and the target word is strong or weak (a 2 × 2 between-subject experiment with target words of literal/metonymy and logic of strong/weak). Results show that readers take longer to arrive at a literal interpretation than at a metonymic one when the preceding information is in a weak logic relationship with target words, although this disparity can disappear when the logic is strong. Another finding is that both the preceding and the spillover contextual information contribute to metonymy processing, with the spillover information contributing more to the metonymic than to the literal meaning. This study further complements cognitive and pragmatic approaches to metonymy, which are centered on its conceptual nature and its role in interpretation, by drawing attention to how the components of sentences contribute to the metonymic processing of target words. Based on an experiment, a contextual model of Chinese metonymy processing is proposed.
Yi-Ting Ting Chen; Ming-Chou Chou Ho
In: Learning and Individual Differences, vol. 93, pp. 1–9, 2022.
Background: Extant eye-tracking studies suggest that foreign-language learners tend to read the native language captions while watching foreign-language videos. However, it remains unclear how the captions affect the learners' eye movements when watching Math videos. Purpose: While watching teaching videos, we seek to determine how the lesson type (English or Math), cognitive load (high or low), and caption type (meaningful, no captions, or meaningless) affect the dwell times and fixation counts on the captions. Methods: One hundred and eighty undergraduate students were randomly and equally assigned to six (2 lesson type × 3 caption type) conditions. Each participant watched two short teaching videos (one low load and one high load). After watching each video, a comprehension test and three self-reported items (fatigue, effort, and difficulty) regarding this particular video were given. Results: We found longer dwell times and higher fixation counts on the meaningful captions, compared to the meaningless captions and no captions. In the high-load condition, viewers watching an English lesson relied more on the meaningful captions than they did when watching a Math lesson. In the low-load condition, the dwell times and fixation counts on the captions were similar between the English and Math lessons. Finally, the captions did not affect the comprehension test performances after ruling out individual differences in the prior performances of English and Math. Conclusions: English language learning may rely more on the captions than is the case in learning Math. This study provides direction for designing multimedia teaching materials in the current trend of multimedia teaching.
Sun-Joo Cho; Sarah Brown-Schmidt; Paul De Boeck; Matthew Naveiras
In: Psychological Methods, vol. 27, no. 3, pp. 307–346, 2022.
Eye-tracking has emerged as a popular method for empirical studies of cognitive processes across multiple substantive research areas. Eye-tracking systems are capable of automatically generating fixation-location data over time at high temporal resolution. Often, the researcher obtains a binary measure of whether or not, at each point in time, the participant is fixating on a critical interest area or object in the real world or in a computerized display. Eye-tracking data are characterized by spatial-temporal correlations and random variability, driven by multiple fine-grained observations taken over small time intervals (e.g., every 10 ms). Ignoring these data complexities leads to biased inferences for the covariates of interest such as experimental condition effects. This article presents a novel application of a generalized additive logistic regression model for intensive binary time series eye-tracking data from a between- and within-subjects experimental design. The model is formulated as a generalized additive mixed model (GAMM) and implemented in the mgcv R package. The generalized additive logistic regression model was illustrated using an empirical data set aimed at understanding the accommodation of regional accents in spoken language processing. Accuracy of parameter estimates and the importance of modeling the spatial-temporal correlations in detecting the experimental condition effects were shown in conditions similar to our empirical data set via a simulation study.
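The core idea of the abstract above — modeling intensive binary fixation data as a nonlinear function of time plus a condition effect on the logit scale — can be illustrated with a minimal sketch. This is not the authors' mgcv/GAMM code: it replaces penalized smooths and random effects with a simple polynomial time trend, and the simulated data, coefficient values, and function names are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' model): logistic
# regression on intensive binary fixation data, with a nonlinear
# time trend (t, t^2) and a condition indicator, fitted by Newton-Raphson.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a binary "fixating the target?" sample every 10 ms, two conditions.
t = np.tile(np.linspace(0.0, 1.0, 100), 2)      # normalized time within trial
cond = np.repeat([0.0, 1.0], 100)               # experimental condition indicator
true_logit = -1.0 + 3.0 * t - 1.5 * t**2 + 0.8 * cond
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Design matrix: intercept, polynomial time trend, condition effect.
X = np.column_stack([np.ones_like(t), t, t**2, cond])

def fit_logistic(X, y, n_iter=25):
    """Newton-Raphson estimation of logistic regression coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted fixation probabilities
        W = p * (1.0 - p)                       # IRLS weights
        H = X.T @ (X * W[:, None])              # observed information (Hessian)
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

beta = fit_logistic(X, y)
print(beta)  # last coefficient estimates the condition effect (true value 0.8)
```

A full GAMM would additionally use spline bases with smoothness penalties and per-participant random effects, which is what modeling the spatial-temporal correlation amounts to in the article's framework.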
Derya Cokal; Patrick Sturt
The real-time status of strong and weak islands Journal Article
In: PLoS ONE, vol. 17, no. 2, pp. 1–25, 2022.
In two eye-tracking reading experiments, we used a variant of the filled gap technique to investigate how strong and weak islands are processed on a moment-to-moment basis during comprehension. Experiment 1 provided a conceptual replication of previous studies showing that real time processing is sensitive to strong islands. In the absence of an island, readers experienced processing difficulty when a pronoun appeared in a position of a predicted gap, but this difficulty was absent when the pronoun appeared inside a strong island. Experiment 2 showed an analogous effect for weak islands: A processing cost was seen for a pronoun in the position of a predicted gap in a that-complement clause, but this cost was absent in a matched whether clause, which constitutes a weak island configuration. Overall, our results are compatible with the claim that active dependency formation is suspended, or reduced, in both weak and strong island structures.
Fengjiao Cong; Baoguo Chen
Parafoveal orthographic processing in bilingual reading Journal Article
In: International Journal of Bilingual Education and Bilingualism, vol. 25, pp. 3698–3710, 2022.
Reading is a very complex task in which readers obtain information to promote reading from not only the fixated word located in the foveal area but also non-fixated words located in the parafoveal area. We aimed to investigate the second language (L2) parafoveal orthographic (letter identity and letter position) processing mechanism adopting the eye-tracking technique and boundary paradigm. We set up four previews for each target: (1) the identity preview (e.g. reporter → reporter), (2) the transposed-letter preview (e.g. repotrer → reporter), (3) the substituted-letter condition (e.g. repokcer → reporter), and (4) the unrelated preview (e.g. chemaful → reporter). There are three main findings. First, L2 readers could extract and utilize the parafoveal orthographic information shared by the preview and the target to affect the late L2 processing stage. Second, when there was only a small difference between the preview and the target, L2 readers did not notice the subtle difference in the parafovea. Third, the identity and position of an internal single letter have little effect on L2 reading compared with the similarity of the whole word in the parafoveal area. Future L2 reading frameworks should be developed to explain these new findings.
Fengjiao Cong; Baoguo Chen
In: Quarterly Journal of Experimental Psychology, vol. 75, no. 10, pp. 1932–1947, 2022.
We conducted three eye movement experiments to investigate the mechanism for coding letter positions in a person's second language during sentence reading; we also examined the role of morphology in this process with a more rigorous manipulation. Given that readers obtain information not only from currently fixated words (i.e., the foveal area) but also from upcoming words (i.e., the parafoveal area) to guide their reading, we examined both when the targets were fixated (Exp. 1) and when the targets were seen parafoveally (Exps. 2 and 3). First, we found the classic transposed letter (TL) effect in Exp. 1, but not in Exp. 2 or 3. This implies that flexible letter position coding exists during sentence reading. However, this was limited to words located in the foveal area, suggesting that L2 readers whose L2 proficiency is not as high as skilled native readers are not able to extract and utilise the parafoveal letter identity and position information of a word, whether the word length is long (Exp. 2) or short (Exp. 3). Second, we found morphological information to influence the magnitude of the TL effect in Exp. 1. These results provide new eye movement evidence for the flexibility of L2 letter position coding during sentence reading, as well as the interactions between the different internal representations of words in this process. Future L2 reading frameworks should integrate word recognition and eye movement control models.
Erin Conwell; Gregor Horvath; Allyson Kuznia; Stephen J. Agauas
In: Language, Cognition and Neuroscience, pp. 1–12, 2022.
Apparently homophonous sequences contain acoustic information that differentiates their meanings [Gahl. (2008). Time and thyme are not homophones: The effect of lemma frequency on word durations in spontaneous speech. Language, 84(3), 474–496; Quené. (1992). Durational cues for word segmentation in Dutch. Journal of Phonetics, 20(3), 331–350]. Adults use this information to segment embedded homophones [e.g. ham vs. hamster; Salverda et al. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51–89] in fluent speech. Whether children also do this is unknown, as is whether listeners of any age use such information to disambiguate lexical homophones. In two experiments, 48 English-speaking adults and 48 English-speaking 7 to 10-year-old children viewed sets of four images and heard sentences containing phonemically identical sequences while their eye movements were continuously tracked. As in previous research, adults showed greater fixation of target meanings when the acoustic properties of an embedded homophone were consistent with the target than when they were consistent with the alternate interpretation. They did not show this difference for lexical homophones. Children's behaviour was similar to that of adults, indicating that the use of subphonemic information in homophone processing is consistent over development.
Ruth E. Corps; Meijian Liao; Martin J. Pickering
In: Bilingualism: Language and Cognition, pp. 1–13, 2022.
Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
Lei Cui; Chuanli Zang; Xiaochen Xu; Wenxin Zhang; Yuhan Su; Simon P. Liversedge
In: Quarterly Journal of Experimental Psychology, vol. 75, no. 1, pp. 18–29, 2022.
We report a boundary paradigm eye movement experiment to investigate whether the predictability of the second character of a two-character compound word affects how it is processed prior to direct fixation during reading. The boundary was positioned immediately prior to the second character of the target word, which itself was either predictable or unpredictable. The preview was either a pseudocharacter (nonsense preview) or an identity preview. We obtained clear preview effects in all conditions, but more importantly, the probability of skipping the second character of the target word, and of skipping the whole target word from the pretarget region, was greater when it was predictable than when it was not predictable from the preceding context. Interactive effects for later measures on the whole target word (gaze duration and go-past time) were also obtained. These results demonstrate that predictability information from preceding sentential context and information regarding the likely identity of upcoming characters are used concurrently to constrain the nature of lexical processing during natural Chinese reading.
Xiaohui Cui; Fabio Richlan; Wei Zhou
In: Brain Structure and Function, vol. 227, no. 8, pp. 2609–2621, 2022.
While parafoveal word processing plays an important role in natural reading, the underlying neural mechanism remains unclear. The present study investigated the neural basis of parafoveal processing during Chinese word reading with the co-registration of eye-tracking and functional magnetic resonance imaging (fMRI) using fixation-related fMRI analysis. In the gaze-contingent boundary paradigm, preview conditions (words that are identical, orthographically similar, and unrelated to target words), pre-target word frequency and target word frequency were manipulated. When fixating the pre-target word, the identical preview condition elicited lower brain activation in the left fusiform gyrus relative to unrelated and orthographically similar preview conditions and there were significant interactions of preview condition and pre-target word frequency on brain activation of the left middle frontal gyrus, left fusiform gyrus and supplementary motor area. When fixating the target word, there was a significant main effect of preview condition on brain activation of the right fusiform gyrus and a significant interaction of preview condition and pre-target word frequency on brain activation of the left middle frontal gyrus. These results suggest that fixation-related brain activation provides immediate measures and new perspectives to understand the mechanism of parafoveal processing in self-paced reading.
Xin Cui; Xiaoming Jiang; Hongwei Ding
Affective prosody guides facial emotion processing Journal Article
In: Current Psychology, pp. 1–12, 2022.
Previous studies have reported the "emotional congruency effect (ECE)" in cross-modal emotion processing, claiming that multimodal congruent emotional signals will enhance emotion processing, yet few studies have shown how this effect is dynamically processed over time and whether it is achieved in the same way across language and cultural backgrounds. We adopted the eye-tracking technique to investigate whether and how the audio emotional signal influences the visual processing of emotional faces according to ECE. We explored this issue by asking thirty-two native Mandarin speakers to scan a visual array of four types of emotional faces while listening to the affective prosody matching one of the four emotions. To eliminate the potential confounding from lexico-semantic information, the affective prosody is pronounced in meaningless di-syllable clusters. Results of the experiment indicate that (1) participants paid more attention to happy faces at first glance and their attention shifted to angry and sad faces over time. (2) Consistent with findings in English-speaking settings, ECE appeared in Mandarin-speaking settings, but took effect earlier in happy faces and persisted across all emotions as the signal unfolded. Based on the results, we conclude that the processing time differs across emotion types and therefore ECE takes effect at different temporal points according to the emotion type. Finally, we suggest that language and cultural experience may shape the processing time of different emotions.
Isabelle Dautriche; Louise Goupil; Kenny Smith; Hugh Rabagliati
In: Psychological Science, vol. 33, no. 11, pp. 1842–1856, 2022.
We studied the fundamental issue of whether children evaluate the reliability of their language interpretation, that is, their confidence in understanding words. In two experiments, 2-year-olds (Experiment 1: N = 50; Experiment 2: N = 60) saw two objects and heard one of them being named; both objects were then hidden behind screens and children were asked to look toward the named object, which was eventually revealed. When children knew the label used, they showed increased postdecision persistence after a correct compared with an incorrect anticipatory look, a marker of confidence in word comprehension (Experiment 1). When interacting with an unreliable speaker, children showed accurate word comprehension but reduced confidence in the accuracy of their own choice, indicating that children's confidence estimates are influenced by social information (Experiment 2). Thus, by the age of 2 years, children can estimate their confidence during language comprehension, long before they can talk about their linguistic skills.
Catherine Davies; Vincent Porretta; Kremena Koleva; Ekaterini Klepousniotou
Speaker-specific cues influence semantic disambiguation Journal Article
In: Journal of Psycholinguistic Research, vol. 51, pp. 933–955, 2022.
Addressees use information from specific speakers' previous discourse to make predictions about incoming linguistic material and to restrict the choice of potential interpretations. In this way, speaker specificity has been shown to be an influential factor in language processing across several domains, e.g., spoken word recognition, sentence processing, and pragmatics. However, its influence on semantic disambiguation has received little attention to date. Using an exposure-test design and visual world eye tracking, we examined the effect of speaker-specific literal vs. nonliteral style on the disambiguation of metaphorical polysemes such as 'fork', 'head', and 'mouse'. Eye movement data revealed that when interpreting polysemous words with a literal and a nonliteral meaning, addressees showed a late-stage preference for the literal meaning in response to a nonliteral speaker. We interpret this as reflecting an indeterminacy in the intended meaning in this condition, as well as the influence of meaning dominance cues at later stages of processing. Response data revealed that addressees then ultimately resolved to the literal target in 90% of trials. These results suggest that addressees consider a range of senses in the earlier stages of processing, and that speaker style is a contextual determinant in semantic processing.
Charles P. Davis; Inge Marie Eigsti; Roisin Healy; Gitte H. Joergensen; Eiling Yee
In: PLoS ONE, vol. 17, no. 7, pp. 1–20, 2022.
Sensorimotor-based theories of cognition predict that even subtle developmental motor differences, such as those characterizing autism spectrum disorder (ASD), impact how we represent the meaning of manipulable objects (e.g., faucet). Here, we test 85 neurotypical participants, who varied widely on the Adult Autism Spectrum Quotient (AQ), a measure intended to capture variability in ASD characteristics in the general adult population (participant scores were all below the clinical threshold for autism). Participants completed a visual world eyetracking task designed to assess the activation of conceptual representations of manipulable objects. Participants heard words referring to manually manipulable objects (e.g., faucet) while we recorded their eye movements to arrays of four objects: the named object, a related object typically manipulated similarly (e.g., jar), and two unrelated objects. Consistent with prior work, we observed more looks to the related object than to the unrelated ones (i.e., a manipulation-relatedness effect). This effect likely reflects the overlapping conceptual representations of objects sharing manipulation characteristics (e.g., faucet and jar), due to embodied sensorimotor properties being part of their representations. Critically, we observed—among typically developed young adults—that as AQ scores increased, manipulation-relatedness effects decreased. In contrast, in a visual control condition, in which a target object was paired with related objects of a similar shape (e.g., snake and rope), relatedness effects increased with AQ scores. The results show that AQ scores can predict variation in how object-concept representations are activated for typically developed individuals. More speculatively, they are consistent with the hypothesis that in individuals with ASD, differences in object-concept representations emerge at least in part via differences in sensorimotor experience.
Yevgeni Berzak; Chie Nakamura; Amelia Smith; Emily Weng; Boris Katz; Suzanne Flynn; Roger Levy
In: Open Mind: Discoveries in Cognitive Science, vol. 6, pp. 41–50, 2022.
We present CELER (Corpus of Eye Movements in L1 and L2 English Reading), a broad coverage eye-tracking corpus for English. CELER comprises over 320,000 words, and eye-tracking data from 365 participants. Sixty-nine participants are L1 (first language) speakers, and 296 are L2 (second language) speakers from a wide range of English proficiency levels and five different native language backgrounds. As such, CELER has an order of magnitude more L2 participants than any currently available eye movements dataset with L2 readers. Each participant in CELER reads 156 newswire sentences from the Wall Street Journal (WSJ), in a new experimental design where half of the sentences are shared across participants and half are unique to each participant. We provide analyses that compare L1 and L2 participants with respect to standard reading time measures, as well as the effects of frequency, surprisal, and word length on reading times. These analyses validate the corpus and demonstrate some of its strengths. We envision CELER to enable new types of research on language processing and acquisition, and to facilitate interactions between psycholinguistics and natural language processing (NLP).
Elisabeth Beyersmann; Signy Wegener; Nenagh Kemp
In: Journal of Media Psychology, pp. 1–11, 2022.
The use of emojis in digital communication has become increasingly popular, but how emojis are processed and integrated in reading processes remains underexplored. This study used eye-tracking to monitor university students' (n = 47) eye movements while reading single-line text messages with a face emoji embedded medially. Messages contained a semantically congruent face emoji, a semantically incongruent face emoji, or a dash (e.g., That's good news - tell me more). Results revealed that emoji congruency did not influence early fixation measures (first fixation duration and gaze duration), nor the probability of regressions. However, there was a significant congruency effect in total reading time and trial dwell time, showing that incongruence incurred a processing cost. The present results extend previously reported semantic congruency effects in sentence reading to the processing of emojis. This result suggests that the semantic content conveyed by face emojis is integrated with sentence context late in processing. We further found that the use of congruent emojis improved the relationship between sender and receiver: Ratings collected separately suggested that message senders were liked better if they included congruent than incongruent emojis. Overall, emojis attracted attention: Participants were twice as likely to fixate on emojis than on dashes, and to fixate on emojis for longer.
Elisabeth Beyersmann; Signy Wegener; Valentina N. Pescuma; Kate Nation; Danielle Colenbrander; Anne Castles
In: Quarterly Journal of Experimental Psychology, pp. 1–12, 2022.
Do readers benefit from their knowledge of the phonological form and meaning of stems when seeing them embedded in morphologically complex words for the first time in print? This question was addressed using a word learning paradigm. Participants were trained on novel spoken word stems and their meanings (“tump”). Following training, participants then saw the novel stems for the first time in print, either in combination with a real affix (tumpist, tumpor) or with a non-affix (tumpel, tumpain). Untrained items were also included to test whether the affix effect was modulated by the prior training of the spoken word stems. First, the complex words were embedded in meaningful sentences which participants read as their eye movements were recorded (first orthographic exposure). Second, participants were asked to read aloud and spell each individual complex novel word (second orthographic exposure). Participants spent less time fixating on words that included trained stems compared with untrained stems. However, the training effect did not change depending on whether stems were accompanied by a real affix or a non-affix. In the reading aloud and spelling tasks, there was no effect of training, suggesting that the effect of oral vocabulary training did not extend beyond the initial print exposure. The results indicate that familiarity with spoken stems influences how complex words containing those stems are processed when being read for the first time. Our findings highlight the flexibility and adaptability of the morphological processing system to novel complex words during the first print exposure.
Elisabeth Beyersmann; Signy Wegener; Jasmine Spencer; Anne Castles
In: Psychonomic Bulletin & Review, pp. 1–12, 2022.
This study used a novel word-training paradigm to examine the integration of spoken word knowledge when learning to read morphologically complex novel words. Australian primary school children in Grades 3–5 were taught the oral form of a set of novel morphologically complex words (e.g., /vɪbɪŋ/, /vɪbd/, /vɪbz/), with a second set serving as untrained items. Following oral training, participants saw the printed form of the novel word stems for the first time (e.g., vib), embedded in sentences, while their eye movements were monitored. Half of the stems were spelled predictably and half were spelled unpredictably. Reading times were shorter for orally trained stems with predictable than unpredictable spellings, and this difference was greater for trained than untrained items. These findings suggest that children were able to form robust orthographic expectations of the embedded morphemic stems during spoken word learning, which may have occurred automatically without any explicit control of the applied mappings, despite still being in the early stages of reading development. Following the sentence reading task, children completed a reading-aloud task where they were exposed to the novel orthographic forms for a second time. The findings are discussed in the context of theories of reading acquisition.
Isha Bhutada; Peggy Skelly; Jonathan Jacobs; Jordan Murray; Aasef G. Shaikh; Fatema F. Ghasia
In: Journal of the Neurological Sciences, vol. 442, pp. 1–13, 2022.
Introduction: Reading is a vision-reliant task, requiring sequential eye movements. Binocularly discordant input results in visual sensory and oculomotor dysfunction in amblyopia, which may contribute to reading difficulties. This study aims to determine the contributions of fixation eye movement (FEM) abnormalities, clinical type and severity of amblyopia to reading performance under binocular and monocular viewing conditions. Methods: Twenty-three amblyopic patients and nine healthy controls were recruited. Eye movements elicited during fixation and reading of preselected passages were collected for each subject using infrared video-oculography. Subjects were classified as having no nystagmus (n = 9), fusion maldevelopment nystagmus (FMN
Nicoletta Biondo; Marielena Soilemezidi; Simona Mancini
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 48, no. 7, pp. 1001–1018, 2022.
The ability to think about nonpresent time is a crucial aspect of human cognition. Both the past and future imply a temporal displacement of an event outside the “now.” They also intrinsically differ: The past refers to inalterable events; the future to alterable events, to possible worlds. Are the past and future processed similarly or differently? In this study, we addressed this question by investigating how Spanish speakers process past/future time reference violations during sentence processing, while recording eye movements. We also investigated the role of verbs (in isolation; within sentences) and adverbs (deictic; nondeictic) during time processing. Existing accounts propose that past processing, which requires a link to discourse, is more complex than future processing, which—like the present—is locally bound. Our findings show that past and future processing differs, especially at early stages of verb processing, but this difference is not limited to the presence/absence of discourse linking. We found earlier mismatch effects for past compared to future time reference in incongruous sentences, in line with previous studies. Interestingly, it took longer to categorize the past than the future tense when verbs were presented in isolation. However, it took longer to categorize the future than the past when verbs were presented in congruous sentences, arguably because the future implies alterable worlds. Finally, temporal adverbs were found to play an important role in reinspection and reanalysis triggered by the presence of undefined time frames (nondeictic adverbs) or incongruences (mismatching verbs).
Stefan Blohm; Stefano Versace; Sanja Methner; Valentin Wagner; Matthias Schlesewsky; Winfried Menninghaus
Reading poetry and prose: Eye movements and acoustic evidence Journal Article
In: Discourse Processes, vol. 59, no. 3, pp. 159–183, 2022.
We examined genre-specific reading strategies for literary texts and hypothesized that text categorization (literary prose vs. poetry) modulates both how readers gather information from a text (eye movements) and how they realize its phonetic surface form (speech production). We recorded eye movements and speech while college students (N = 32) orally read identical texts that we categorized and formatted as either literary prose or poetry. We further varied the text position of critical regions (text-initial vs. text-medial) to compare how identical information is read and articulated with and without context; this allowed us to assess whether genre-specific reading strategies make differential use of identical context information. We observed genre-dependent differences in reading and speaking tempo that reflected several aspects of reading and articulation. Analyses of regions of interests revealed that word-skipping increased particularly while readers progressed through the texts in the prose condition; speech rhythm was more pronounced in the poetry condition irrespective of the text position. Our results characterize strategic poetry and prose reading, indicate that adjustments of reading behavior partly reflect differences in phonetic surface form, and shed light onto the dynamics of genre-specific literary reading. They generally support a theory of literary comprehension that assumes distinct literary processing modes and incorporates text categorization as an initial processing step.
Liam P. Blything; Maialen Iraola Azpiroz; Shanley Allen; Regina Hert; Juhani Järvikivi
In: Journal of Child Language, vol. 49, no. 5, pp. 930–958, 2022.
In two visual world experiments we disentangled the influence of order of mention (first vs. second mention), grammatical role (subject vs. object), and semantic role (proto-agent vs. proto-patient) on 7- to 10-year-olds' real-time interpretation of German pronouns. Children listened to SVO or OVS sentences containing active accusative verbs (küssen "to kiss") in Experiment 1 (N = 72), or dative object-experiencer verbs (gefallen "to like") in Experiment 2 (N = 64). This was followed by the personal pronoun er or the demonstrative pronoun der. Interpretive preferences for er were most robust when high prominence cues (first mention, subject, proto-agent) were aligned onto the same entity; and the same applied to der for low prominence cues (second mention, object, proto-patient). These preferences were reduced in conditions where cues were misaligned, and there was evidence that each cue independently influenced performance. Crucially, individual variation in age predicted adult-like weighting preferences for semantic cues (Schumacher, Roberts & Järvikivi, 2017).
In: Journal of Experimental Child Psychology, vol. 219, pp. 1–21, 2022.
Prediction is posited to support fluent comprehension of speech—but how and when do young listeners, who encounter unfamiliar and novel events with high frequency, learn to deploy predictive processing strategies in these unfamiliar circumstances? The current work used a discourse-based event teaching paradigm to explore how English-speaking school-aged children (aged 5;0–8;11 [years;months]; N = 92) generalize from their (experimentally controlled) experience to generate real-time linguistic predictions about novel events during an eye-tracked sentence recognition task. The findings reveal developmental differences in how the initial structure of event exposure supports generalization. Specifically, real-time extension was supported by viewing multiple instances of events involving varied agents in the younger children (5–6 years), whereas older children (7–8 years) extended when they experienced repetition of events with identical agents. The findings support accounts of predictive processing suggesting that learners generate predictions in a variety of less predictable circumstances and suggest practical directions to support early learning and language processing skills.
Emily A. Burg; Tanvi D. Thakkar; Ruth Y. Litovsky
In: Frontiers in Neuroscience, vol. 16, pp. 1–13, 2022.
Introduction: Bilateral cochlear implants (BiCIs) can facilitate improved speech intelligibility in noise and sound localization abilities compared to a unilateral implant in individuals with bilateral severe to profound hearing loss. Still, many individuals with BiCIs do not benefit from binaural hearing to the same extent that normal hearing (NH) listeners do. For example, binaural redundancy, a speech intelligibility benefit derived from having access to duplicate copies of a signal, is highly variable among BiCI users. Additionally, patients with hearing loss commonly report elevated listening effort compared to NH listeners. There is some evidence to suggest that BiCIs may reduce listening effort compared to a unilateral CI, but the limited existing literature has not shown this consistently. Critically, no studies to date have investigated this question using pupillometry to quantify listening effort, where large pupil sizes indicate high effort and small pupil sizes indicate low effort. Thus, the present study aimed to build on existing literature by investigating the potential benefits of BiCIs for both speech intelligibility and listening effort. Methods: Twelve BiCI adults were tested in three listening conditions: Better Ear, Poorer Ear, and Bilateral. Stimuli were IEEE sentences presented from a loudspeaker at 0° azimuth in quiet. Participants were asked to repeat back the sentences, and responses were scored by an experimenter while changes in pupil dilation were measured. Results: On average, participants demonstrated similar speech intelligibility in the Better Ear and Bilateral conditions, and significantly worse speech intelligibility in the Poorer Ear condition. Despite similar speech intelligibility in the Better Ear and Bilateral conditions, pupil dilation was significantly larger in the Bilateral condition. Discussion: These results suggest that the BiCI users tested in this study did not demonstrate binaural redundancy in quiet. 
The large interaural speech asymmetries demonstrated by participants may have precluded them from obtaining binaural redundancy, as shown by the inverse relationship between the two variables. Further, participants did not obtain a release from effort when listening with two ears versus their better ear only. Instead, results indicate that bilateral listening elicited increased effort compared to better ear listening, which may be due to poor integration of asymmetric inputs.
Alexander P. Burgoyne; Sari Saba-Sadiya; Lauren Julius Harris; Mark W. Becker; Jan W. Brascamp; David Z. Hambrick
Revisiting the self-generation effect in proofreading Journal Article
In: Psychological Research, pp. 1–16, 2022.
The self-generation effect refers to the finding that people's memory for information tends to be better when they generate it themselves. Counterintuitively, when proofreading, this effect may make it more difficult to detect mistakes in one's own writing than in others' writing. We investigated the self-generation effect and sources of individual differences in proofreading performance in two eye-tracking experiments. Experiment 1 failed to reveal a self-generation effect. Experiment 2 used a studying manipulation to induce overfamiliarity for self-generated text, revealing a weak but non-significant self-generation effect. Overall, word errors (i.e., wrong words) were detected less often than non-word errors (i.e., misspellings), and function word errors were detected less often than content word errors. Fluid intelligence predicted proofreading performance, whereas reading comprehension, working memory capacity, processing speed, and indicators of miserly cognitive processing did not. Students who made more text fixations and spent more time proofreading detected more errors.
Jon Burnsky; Franziska Kretzschmar; Erika Mayer; Adrian Staub
In: Language, Cognition and Neuroscience, pp. 1–22, 2022.
Two eye movement/EEG co-registration experiments investigated effects of predictability, visual contrast, and parafoveal preview in normal reading. Replicating previous studies, in Experiment 1 contrast and predictability additively influenced fixation durations, and in Experiment 2 invalid preview eliminated the predictability effect on early eye movement measures. In both experiments, predictability influenced the amplitude of the N400 component of the fixation-related potential. In Experiment 1, visual contrast did not influence the N400, and in Experiment 2, the effect of predictability on the N400 was larger with invalid preview, in opposition to the eye movement pattern. The N400 may reflect a late process of accessing conceptual representations while the duration of the eyes' fixation on a word is sensitive to the difficulty of perceptual encoding and early stages of word recognition. The effects of predictability on both fixation duration and the N400 suggest an influence of this variable at two distinct processing stages.
Jon W. Carr; Valentina N. Pescuma; Michele Furlan; Maria Ktori; Davide Crepaldi
In: Behavior Research Methods, vol. 54, no. 1, pp. 287–310, 2022.
A common problem in eye-tracking research is vertical drift—the progressive displacement of fixation registrations on the vertical axis that results from a gradual loss of eye-tracker calibration over time. This is particularly problematic in experiments that involve the reading of multiline passages, where it is critical that fixations on one line are not erroneously recorded on an adjacent line. Correction is often performed manually by the researcher, but this process is tedious, time-consuming, and prone to error and inconsistency. Various methods have previously been proposed for the automated, post hoc correction of vertical drift in reading data, but these methods vary greatly, not just in terms of the algorithmic principles on which they are based, but also in terms of their availability, documentation, implementation languages, and so forth. Furthermore, these methods have largely been developed in isolation with little attempt to systematically evaluate them, meaning that drift correction techniques are moving forward blindly. We document ten major algorithms, including two that are novel to this paper, and evaluate them using both simulated and natural eye-tracking data. Our results suggest that a method based on dynamic time warping offers great promise, but we also find that some algorithms are better suited than others to particular types of drift phenomena and reading behavior, allowing us to offer evidence-based advice on algorithm selection.
Sarah Chabal; Sayuri Hayakawa; Viorica Marian
In: Cognition, vol. 222, pp. 1–17, 2022.
In the present study, we provide compelling evidence that viewing objects automatically activates linguistic labels and that this activation is not due to task-specific memory demands. In two experiments, eye-movements of English speakers were tracked while they identified a visual target among an array of four images, including a phonological competitor (e.g., flower-flag). Experiment 1 manipulated the capacity to subvocally rehearse the target label by imposing linguistic, spatial, or no working memory load. Experiment 2 manipulated the need to encode target objects by presenting target images either before or concurrently with the search display. While the timing and magnitude of competitor activation varied across conditions, we observed consistent evidence of language activation regardless of the capacity or need to maintain object labels in memory. We propose that language activation is automatic and not contingent upon working memory capacity or demands, and conclude that objects' labels influence visual search.
Suphasiree Chantavarin; Emily Morgan; Fernanda Ferreira
In: Cognitive Science, vol. 46, no. 9, pp. 1–43, 2022.
Prior research has shown that various types of conventional multiword chunks are processed faster than matched novel strings, but it is unclear whether this processing advantage extends to variant multiword chunks that are less formulaic. To determine whether the processing advantage of multiword chunks accommodates variations in the canonical phrasal template, we examined the robustness of the processing advantage (i.e., predictability) of binomial phrases with non-canonical conjunctions (e.g., salt and also pepper; salt as well as pepper). Results from the cloze study (Experiment 1) showed that there was a high tendency of producing the canonical conjunct (pepper), even in the binomials that contained non-formulaic conjunctions. Consistent with these findings, results from two eye tracking studies (Experiments 2a and 2b) showed that canonical conjuncts were read faster than novel conjuncts that were matched on word length (e.g., paprika), even in the binomials with variant conjunctions. This robust online processing advantage was replicated in a self-paced reading study that compared all three conjunction types (Experiment 3). Taken together, these findings show that binomials with variant function words also receive facilitated processing relative to matched novel strings, even though both types of strings are neither conventional nor relatively frequent. Exploratory analyses revealed that this processing speed advantage was driven by the lexical–semantic association between the canonical conjuncts (salt–pepper), rather than lexical and phrasal frequency. Overall, these results highlight flexibility in the processing of multiword chunks that current models of multiword storage and processing must take into account.
Jarkko Hautala; Ladislao Salmerón; Asko Tolvanen; Otto Loberg; Paavo Leppänen
In: Reading and Writing, vol. 35, no. 8, pp. 1787–1813, 2022.
The associations among readers' cognitive skills (general cognitive ability, reading skills, and attentional functioning), task demands (easy versus difficult questions), and process measures (total fixation time on relevant and irrelevant paragraphs) were investigated to explain task-oriented reading accuracy and efficiency (number of scores in a given time unit). Structural equation modeling was applied to a large dataset collected with sixth-grade students, which included samples of dysfluent readers and those with attention difficulties. The results are in line with previous findings regarding the dominant role of general cognitive ability in the accuracy of task-oriented reading. However, efficiency in task-oriented reading was mostly explained by the shorter viewing times of both paragraph types (i.e., relevant and irrelevant), which were modestly explained by general cognitive ability and reading fluency. These findings suggest that high efficiency in task orientation is obtained by relying on a selective reading strategy when reading both irrelevant and relevant paragraphs. The selective reading strategy seems to be specifically learned, and this potentially applies to most students, even those with low cognitive abilities.
Andrea Helo; Ernesto Guerra; Carmen Julia Coloma; María Antonia Reyes; Pia Rämä
In: Language Learning and Development, vol. 18, no. 3, pp. 324–351, 2022.
Visually situated spoken words activate phonological, visual, and semantic representations guiding overt attention during visual exploration. We compared the activation of these representations in children with and without developmental language disorder (DLD) across four eye-tracking experiments, with a particular focus on visual (shape) representations. Two types of trials were presented in each experiment. In Experiment 1, participants heard a word while seeing (1) an object visually associated with the spoken word (i.e., shape competitor) together with a phonologically related object (i.e., cohort competitor), or (2) a shape competitor with an unrelated object. In Experiments 2 and 3, participants heard a word while seeing (1) a shape competitor with an object semantically related to the spoken word (i.e., semantic competitor), or (2) a shape competitor with an unrelated object. In Experiment 4, children heard a word while seeing a semantic competitor with (1) the visual referent of the spoken word or (2) with an unrelated object. The visual context was previewed for three seconds before the spoken word, except for Experiment 2, where it appeared at the onset of the spoken word (i.e., no preview). The results showed that when a preview was provided both groups were equally attracted by cohort and semantic competitors and preferred the shape competitors over the unrelated objects. However, shape preference disappeared in the DLD group when no preview was provided and when the shape competitor was presented with a semantic competitor. Our results indicate that children with DLD have a less efficient retrieval of shape representation during word recognition compared to typically developing children.
Kristi Hendrickson; Keith Apfelbaum; Claire Goodwin; Christina Blomquist; Kelsey Klein; Bob McMurray
In: Quarterly Journal of Experimental Psychology, vol. 75, no. 9, pp. 1653–1673, 2022.
Word recognition occurs across two sensory modalities: auditory (spoken words) and visual (written words). While each faces different challenges, they are often described in similar terms as a competition process by which multiple lexical candidates are activated and compete for recognition. While there is a general consensus regarding the types of words that compete during spoken word recognition, there is less consensus for written word recognition. The present study develops a novel version of the Visual World Paradigm (VWP) to examine written word recognition and uses this to assess the nature of the competitor set during word recognition in both modalities using the same experimental design. For both spoken and written words, we found evidence for activation of onset competitors (cohorts, e.g., cat, cap) and words that contain the same phonemes or letters in reverse order (anadromes, e.g., cat, tack). We found no evidence of activation for rhymes (e.g., cat, hat). The results across modalities were quite similar, with the exception that for spoken words, cohorts were more active than anadromes, whereas for written words activation was similar. These results suggest a common characterisation of lexical similarity across spoken and written words: temporal or spatial order is coarsely coded, and onsets may receive more weight in both systems. However, for spoken words, temporary ambiguity during the moment of processing gives cohorts an additional boost during real-time recognition.
Kristi Hendrickson; Danielle Ernest
The recognition of whispered speech in real-time Journal Article
In: Ear and Hearing, vol. 43, no. 2, pp. 554–562, 2022.
Objectives: Whispered speech offers a unique set of challenges to speech perception and word recognition. The goals of the present study were twofold: First, to determine how listeners recognize whispered speech. Second, to inform major theories of spoken word recognition by considering how recognition changes when major cues to phoneme identity are reduced or largely absent compared with normal voiced speech. Design: Using eye tracking in the Visual World Paradigm, we examined how listeners recognize whispered speech. After hearing a target word (normal or whispered), participants selected the corresponding image from a display of four - a target (e.g., money), a word that shares sounds with the target at the beginning (cohort competitor, e.g., mother), a word that shares sounds with the target at the end (rhyme competitor, e.g., honey), and a phonologically unrelated word (e.g., whistle). Eye movements to each object were monitored to measure (1) how fast listeners process whispered speech, and (2) how strongly they consider lexical competitors (cohorts and rhymes) as the speech signal unfolds. Results: Listeners were slower to recognize whispered words. Compared with normal speech, listeners displayed slower reaction times to click the target image, were slower to fixate the target, and fixated the target less overall. Further, we found clear evidence that the dynamics of lexical competition are altered during whispered speech recognition. Relative to normal speech, words that overlapped with the target at the beginning (cohorts) displayed slower, reduced, and delayed activation, whereas words that overlapped with the target at the end (rhymes) exhibited faster, more robust, and longer lasting activation. Conclusion: When listeners are confronted with whispered speech, they engage in a "wait-and-see" approach. 
Listeners delay lexical access, and by the time they begin to consider what word they are hearing, the beginning of the word has largely come and gone, and activation for cohorts is reduced. However, delays in lexical access actually increase consideration of rhyme competitors; the delay pushes lexical activation to a point later in processing, and the recognition system puts more weight on the word-final overlap between the target and the rhyme.
In: Linguistics, vol. 60, no. 6, pp. 1785–1810, 2022.
The processing of multi-word units and complex words has been one of the main issues of psycholinguistic research in the last decades. However, there is still no consensus on how multi-word units, complex words, and their internal constituents are accessed in language processing. Current models of linguistic theory and language processing generally assume that there is no interconnection between the morphosyntactic information of a lexical unit and its phonetic realization. Recent studies challenge this assumption and suggest a relationship between the morphosyntactic, lexical, and pragmatic information of specific lexemes or morphemes and the phonetic signal. The present study adds to these current studies in psycholinguistics and morphophonetics by investigating the French preposition de 'of' as a constituent in different construction types. While de occurs regularly as a free lexeme in syntactic structures, it also appears as a bound constituent in lexicalized and grammaticalized constructions. First, this study presents an analysis of French de in eye-tracking data from a reading task with French native speakers. Second, this study presents a statistical analysis of acoustic durations of de from an experimental reading task. The results suggest that the constituent de shows certain peculiarities in its processing and acoustic realization as a constituent in a certain construction type. The results are discussed with regard to current theoretical approaches to the processing of multi-word units, n-grams, and complex words.
Annina K. Hessel; Sascha Schroeder
In: Reading and Writing, vol. 35, pp. 2287–2312, 2022.
Successful reading comprehension—especially in a second language (L2)—relies on the ability to monitor one's comprehension, that is, to notice comprehension breaks and make repairs. Comprehension monitoring may be limited given effortful word processing but may also be supported through active reading. The current study addresses to what extent word processing difficulty reduces adolescents' ability to monitor their comprehension in their L2, and whether readers can compensate limitations given sufficient executive control. We conducted an eye-tracking experiment in which 34 adolescent L2 learners (aged 13–17 years) read short expository texts containing two within-subject manipulations. First, comprehension monitoring was tested through inconsistencies, for example, when the topic changed from Spanish to Russian vis-à-vis consistent controls. Second, word processing difficulty was altered by inserting either shorter and higher-frequency words such as want, or longer and lower-frequency words such as prefer. We additionally measured participants' executive control. Outcome variables were reading times on the whole texts and the words manipulated for inconsistency and word processing difficulty. We found evidence of successful moment-to-moment monitoring, as visible in adolescents' increased rereading of inconsistent compared to consistent information. We also found that adolescents adapted their monitoring differently to word processing difficulty, depending on their executive control: while adolescents with weaker control reduced their monitoring given higher word processing difficulty, adolescents with stronger control monitored their comprehension more (instead of less) on difficult texts. These findings provide insights into how L2 comprehension monitoring arises in the interplay of lower-level processing load and active reading processes.
Florian Hintz; Cesko C. Voeten; Odette Scharenborg
In: Psychonomic Bulletin & Review, pp. 1–15, 2022.
Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition – especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor, overlapping with the English target in phonological onset (e.g., window), a Dutch competitor, overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half were presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. For target-absent trials, we observed that the likelihood for fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference of one's native language during non-native spoken-word recognition under adverse conditions.
Markus J. Hofmann; Mareike A. Kleemann; André Roelke-Wellmann; Christian Vorstius; Ralph Radach
In: Cognitive Processing, vol. 23, pp. 309–318, 2022.
While most previous studies of “semantic” priming confound associative and semantic relations, here we use a simple co-occurrence-based approach to examine “pure” semantic priming, while experimentally controlling for associative relations. We define associative relations by the co-occurrence of words in the sentences of a large text corpus. Contextual-semantic feature overlap, in contrast, is defined by the number of common associates that the prime shares with the target. Then we revisit the spreading activation theory and examine whether a long vs. short time available for semantic feature activation leads to early vs. late viewing time effects on the target words of a sentence reading experiment. We independently manipulate contextual-semantic feature overlap of two primes with one target word in sentences of the form pronoun, verb prime, article, adjective prime and target noun, e.g., "She rides the gray elephant." The results showed that long-SOA (verb-noun) overlap reduces early single- and first-fixation durations of the target noun, and short-SOA (adjective-noun) overlap reduces late go-past durations. This result pattern can be explained by the spreading activation theory: The semantic features of the prime words need some time to become sufficiently active before they can reliably affect target processing. Therefore, the verb can act on the target noun's early eye-movement measures presented three words later, while the adjective is presented immediately prior to the target—thus a difficult adjective-noun semantic integration leads to a late sentence re-examination of the preceding words.
Lingshan Huang; Jingyang Jiang
In: Journal of Cognitive Psychology, vol. 34, no. 5, pp. 607–621, 2022.
The present study examined how working memory (WM) affects unfamiliar word processing during L2 reading comprehension among L2 learners with different proficiency levels. Forty-four participants were divided into the higher proficiency group (n = 22) and the lower proficiency group (n = 22). All of them read an English text with 17 target unfamiliar words while their eye movements were tracked. After reading, they completed an L2 reading comprehension test and a WM test. The results showed that WM significantly correlated with L2 reading comprehension in the higher proficiency group. In addition, the effect of WM capacity on L2 reading comprehension performance was mediated by unfamiliar words' first fixation duration in the higher proficiency group. Results of the study revealed the different mechanisms of unfamiliar word processing among learners with different proficiency levels from the cognitive dimension of WM.
Lingshan Huang; Jinghui Ouyang; Jingyang Jiang
In: Learning and Individual Differences, vol. 95, pp. 1–12, 2022.
The present study combines both online (eye-tracking) and offline (reading comprehension test) measures to investigate the relationships among word processing, working memory (WM) and second language (L2) reading comprehension performance. Forty-eight Chinese students read an English text with 17 unfamiliar words while their eye movements were recorded with two different settings (L1-glossed and non-glossed). A reading comprehension test and a reading span task were respectively used to evaluate participants' L2 reading comprehension performance and WM capacity. The results indicated that L2 reading comprehension performance was related to first fixation duration (FFD) on unfamiliar words, and the FFD on unfamiliar words was related to participants' WM capacity. Moreover, the effect of unfamiliar words' FFD on L2 reading comprehension performance can be moderated by participants' WM capacity. These relationships were not found when the unfamiliar words were glossed by L1. The results expand our understanding of the role of WM in unfamiliar word processing during L2 reading comprehension.
Jiwon Hwang; Chikako Takahashi; Hyunah Baek; Alex Hong Lun Yeung; Ellen Broselow
In: Bilingualism, vol. 25, pp. 816–826, 2022.
This study compared the ability of English monolinguals and Mandarin-English bilinguals to make use of English contrastive prosody not only in natural speech but also in masked speech, in which the only available information was prosodic. In contrast to earlier studies (Choi et al., 2019; Choi, 2021; Tong et al., 2015) which found that L1 tone language speakers outperformed native speakers in tasks involving the use of pitch to identify stress position in English, we did not find a similar advantage for Mandarin-English bilinguals in the interpretation of English contrastive prosody, even under conditions that enforced reliance on pitch contours. These findings are consistent with other studies suggesting that the integration of prosodic information into discourse is an area of particular difficulty for L2 speakers.
Megan Elizabeth Deibel; Jocelyn R. Folk
In: Journal of Psycholinguistic Research, vol. 51, pp. 1121–1142, 2022.
The present study evaluated if lexical expertise, defined as the quality and quantity of a reader's word representations, influenced college students' ability to learn novel homophones while reading. In two experiments, novel homophones (e.g. ‘brale') and novel nonhomophones (e.g. ‘gloobs') were embedded in sentences. In Experiment 1, novel homophones had low-frequency familiar word mates, and in Experiment 2 they had high-frequency familiar word mates. Learning was assessed with meaning and spelling recognition post-tests. Although eye movements during reading did not differ between the word types, participants had more difficulty learning the spellings of the novel homophones compared to the novel nonhomophones in Experiments 1 and 2. In contrast, participants only had difficulty learning the meaning of novel homophones when they had low-frequency mates. Higher levels of lexical expertise were related to higher learning rates of novel homophone spellings only when the novel homophones had a high-frequency mate. Phonology is activated when novel words are encountered and can interfere with learning under certain circumstances.
Xizi Deng; Ashley Farris-Trimble; H. Henny Yeung
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–16, 2022.
Lexical access is highly contextual. For example, vowel (rime) information is prioritized over tone in the lexical access of isolated words in Mandarin Chinese, but these roles are flipped in constraining contexts. The time course of these contextual effects remains unclear, and so here we tracked the real-time eye gaze of native Mandarin speakers in a visual-world paradigm. While listening to a noun classifier, before the target noun was even uttered, gaze to the target noun was already greater than looking to phonologically unrelated distractors. Critically, there was also more distraction from a cohort competitor (tone information) than a segmental competitor (vowel information) in more semantically constraining contexts. Results confirm that phonological activation in Mandarin lexical access is highly sensitive to context, with tone taking priority over vowel information even before a target word is heard. Results suggest that phonological activation in real-time lexical access may be highly context-specific across languages.
Félix Desmeules-Trudel; Marc F. Joanisse
In: Acta Psychologica, vol. 226, pp. 1–11, 2022.
One of the challenges in second-language learning is learning unfamiliar word forms, especially when this involves novel phoneme contrasts. The present study examines how real-time processing of newly-learned words and phonemes in a second language is impacted by the structure of learning (discrimination training) and whether asking participants to complete the same task after a 16–21 h delay favours subsequent word recognition. Specifically, using a visual world eye tracking paradigm, we assessed how English listeners processed newly-learned words containing non-native French front-rounded [y] compared to native-sounding vowels, both immediately after training and the following day. Some learners were forced to discriminate between vowels that are perceptually similar for English listeners, [y]-[u], while others were not. We found significantly better word-level processing on a variety of indices after an overnight delay. We also found that training [y] words paired with [u] words (vs. [y]-Control pairs) led to a greater decrease in reaction times during the word recognition task over the two testing sessions. Discrimination training using perceptually similar sounds had facilitative effects on second language word learning with novel phonemic information, and real-time processing measures such as eyetracking provided valuable insights into how individuals learn words and phonemes in a second language.
Sara Dhaene; Nicolas Dirix; Hélène Van Marcke; Evy Woumans
In: Bilingualism: Language and Cognition, vol. 25, pp. 444–458, 2022.
Research among bilinguals suggests a foreign language effect for various tasks requiring a more systematic processing style. For instance, bilinguals seem less prone to heuristic reasoning when solving problem statements in their foreign (FL) as opposed to their native (NL) language. The present study aimed to determine whether such an effect might also be observed in the detection of semantic anomalies. Participants were presented NL and FL questions with and without anomalies while their eye movements were recorded. Overall, they failed to detect the anomaly in more than half of the trials. Furthermore, more illusions occurred for questions presented in the FL, indicating an FL disadvantage. Additionally, eye movement analyses suggested that reading patterns for anomalies are predominantly similar across languages. Our results therefore substantiate theories suggesting that FL use induces cognitive load, causing increased susceptibility to illusions due to partial semantic processing.
Denis Drieghe; Robert Chan Seem
Parafoveal processing of repeated words during reading
In: Psychonomic Bulletin & Review, vol. 29, pp. 1451–1460, 2022.
In an eye-tracking experiment during reading, we examined the repetition effect, whereby words that are repeated in the same paragraph receive shorter fixation durations. Target words that were either high-frequency or low-frequency words and of which the parafoveal preview was either correct or with all letters replaced were embedded three times in the same paragraph. Shorter fixation times and higher skipping rates were observed for high-frequency compared to low-frequency words, words for which the parafoveal preview was correct versus incorrect, and as the word was being repeated more often. An interaction between frequency and repetition indicated that the reduction in fixation times due to repetition was more pronounced for low-frequency words. We also observed influences of word repetition on parafoveal processing, as repeated words were skipped more often. An interaction between parafoveal preview and repetition indicated an absent repetition effect when the preview was incorrect, but this effect was short lived, as it was restricted to the first fixation duration on the target word.
Ciara Egan; Anna Siyanova-Chanturia; Paul Warren; Manon W. Jones
In: Quarterly Journal of Experimental Psychology, vol. 76, no. 2, pp. 231–247, 2022.
For skilled readers, idiomatic language confers faster access to overall meaning compared with non-idiomatic language, with a processing advantage for figurative over literal interpretation. However, currently very little research exists to elucidate whether atypical readers—such as those with developmental dyslexia—show such a processing advantage for figurative interpretations of idioms, or whether their reading impairment implicates subtle differences in semantic access. We wanted to know whether an initial figurative interpretation of similes, for both typical and dyslexic readers, is dependent on familiarity. Here, we tracked typical and dyslexic readers' eye movements as they read sentences containing similes (e.g., as cold as ice), orthogonally manipulated for novelty (e.g., familiar: as cold as ice, novel: as cold as snow) and figurativeness (e.g., literal: as cold as ice [low temperature], figurative: as cold as ice [emotionally distant]), with figurativeness being defined by the sentence context. Both participant groups exhibited a processing advantage for familiar and figurative similes over novel and literal similes. However, compared with typical readers, participants with dyslexia had greater difficulty processing similes both when they were unfamiliar and when the context biased the simile meaning towards a literal rather than a figurative interpretation. Our findings suggest a semantic processing anomaly in dyslexic readers, which we discuss in light of recent literature on sentence-level semantic processing.
Eric Failes; Mitchell S. Sommers
In: Frontiers in Psychology, vol. 13, pp. 1–15, 2022.
Several recent studies have demonstrated context-based, high-confidence misperceptions in hearing, referred to as false hearing. These studies have unanimously found that older adults are more susceptible to false hearing than are younger adults, which the authors have attributed to an age-related decline in the ability to inhibit the activation of a contextually predicted (but incorrect) response. However, no published work has investigated this activation-based account of false hearing. In the present study, younger and older adults listened to sentences in which the semantic context provided by the sentence was either unpredictive, highly predictive and valid, or highly predictive and misleading with relation to a sentence-final word in noise. Participants were tasked with clicking on one of four images to indicate which image depicted the sentence-final word in noise. We used eye-tracking to investigate how activation, as revealed in patterns of fixations, of different response options changed in real-time over the course of sentences. We found that both younger and older adults exhibited anticipatory activation of the target word when highly predictive contextual cues were available. When these contextual cues were misleading, younger adults were able to suppress the activation of the contextually predicted word to a greater extent than older adults. These findings are interpreted as evidence for an activation-based model of speech perception and for the role of inhibitory control in false hearing.
Xi Fan; Ronan G. Reilly
In: Acta Psychologica, vol. 230, pp. 1–13, 2022.
This paper explores the processes underlying eye movement control in Chinese reading among a population of young 4th and 5th grade readers. Various proposals to explain the underlying mechanisms involved in eye movement control are examined, and the paper concludes that the most likely account is a two-factor process whereby the character is the main driver of longer saccades while the word plays a role in shorter ones. A computational model is proposed to provide an integrated account of the dynamic interaction of these two factors.
Claudia Felser; Janna Deborah Drummer
In: Journal of Psycholinguistic Research, vol. 51, pp. 763–788, 2022.
Pronouns can sometimes covary with a non c-commanding quantifier phrase (QP). To obtain such 'telescoping' readings, a semantic representation must be computed in which the QP's semantic scope extends beyond its surface scope. Non-native speakers have been claimed to have more difficulty than native speakers deriving such non-isomorphic syntax-semantics mappings, but evidence from processing studies is scarce. We report the results from an eye-movement monitoring experiment and an offline questionnaire investigating whether native and non-native speakers of German can link personal pronouns to non c-commanding QPs inside relative clauses. Our results show that both participant groups were able to obtain telescoping readings offline, but only the native speakers showed evidence of forming telescoping dependencies during incremental parsing. During processing the non-native speakers focused on a discourse-prominent, non-quantified alternative antecedent instead. The observed group differences indicate that non-native comprehenders have more difficulty than native comprehenders computing scope-shifted representations in real time.
Sara Fernández Cuenca; Jill Jegerski
In: Studies in Second Language Acquisition, pp. 1–30, 2022.
The present study investigated the second language processing of grammatical mood in Spanish. Eye-movement data from a group of advanced proficiency second language users revealed nativelike processing with irregular verb stimuli but not with regular verb stimuli. A comparison group of native speakers showed the expected effect with both types of stimuli, but these were slightly more robust with irregular verbs than with regular verbs. We propose that the role of verb form regularity was due to the greater visual salience of Spanish subjunctive forms with irregular verbs versus regular verbs and possibly also due to less efficient processing of rule-based regular inflectional morphology versus whole irregular word forms. In any case, the results suggest that what appeared to be difficulty with sentence processing could be traced back to word-level processes, which appeared to be the primary area of difficulty. This outcome seems to go against theories that suggest that L2 sentence processing is shallow.
Rana Abu-Zhaya; Inbal Arnon; Arielle Borovsky
In: Cognitive Science, vol. 46, no. 3, pp. 1–32, 2022.
Meaning in language emerges from multiple words, and children are sensitive to multi-word frequency from infancy. While children successfully use cues from single words to generate linguistic predictions, it is less clear whether and how they use multi-word sequences to guide real-time language processing and whether they form predictions on the basis of multi-word information or pairwise associations. We address these questions in two visual-world eye-tracking experiments with 5- to 8-year-old children. In Experiment 1, we asked whether children generate more robust predictions for the sentence-final object of highly frequent sequences (e.g., “Throw the ball”), compared to less frequent sequences (e.g., “Throw the book”). We further examined if gaze patterns reflect event knowledge or phrasal frequency by comparing the processing of phrases that have the same event structure but differ in multi-word content (e.g., “Brush your teeth” vs. “Brush her teeth”). In the second study, we employed a training paradigm to ask if children are capable of generating predictions from novel multi-word associations while controlling for the overall frequency of the sequences. While the results of Experiment 1 suggested that children primarily relied on event associations to generate real-time predictions, those of Experiment 2 showed that the same children were able to use recurring novel multi-word sequences to generate real-time linguistic predictions. Together, these findings suggest that children can draw on multi-word information to generate linguistic predictions, in a context-dependent fashion, and highlight the need to account for the influence of multi-word sequences in models of language processing.
Victoria I. Adedeji; Martin R. Vasilev; Julie A. Kirkby; Timothy J. Slattery
Return‑sweep saccades in oral reading
In: Psychological Research, vol. 86, no. 6, pp. 1804–1815, 2022.
Recent research on return-sweep saccades has improved our understanding of eye movements when reading paragraphs. However, these saccades, which take our gaze from the end of one line to the start of the next line, have been studied only within the context of silent reading. Articulatory demands and the coordination of the eye–voice span (EVS) at line boundaries suggest that the execution of this saccade may be different in oral reading. We compared launch and landing positions of return-sweeps, corrective saccade probability and fixations adjacent to return-sweeps in skilled adult readers while reading paragraphs aloud and silently. Compared to silent reading, return-sweeps were launched from closer to the end of the line and landed closer to the start of the next line when reading aloud. The probability of making a corrective saccade was higher for oral reading than silent reading. These findings indicate that oral reading may compel readers to rely more on foveal processing at the expense of parafoveal processing. We found an interaction between reading modality and fixation type on fixation durations. The reading modality effect (i.e., increased fixation durations in oral compared to silent reading) was greater for accurate line-initial fixations and marginally greater for line-final fixations compared to intra-line fixations. This suggests that readers may use the fixations adjacent to return-sweeps as natural pause locations to modulate the EVS.
Miriam Aguilar; Pilar Ferré; José A. Hinojosa; José M. Gavilán; Josep Demestre
In: Language, Cognition and Neuroscience, vol. 37, no. 10, pp. 1303–1310, 2022.
The universality of locality is a long-standing debate that has endured in psycholinguistics in spite of the challenges. The non-local preference of attachment in Relative Clauses (RCs) with double antecedent (DP1-of-DP2-RC) reported in a subset of languages (i.e. Spanish) represented an important challenge that locality-based accounts had to address. The forces responsible for attachment preferences turned out to be multifactorial, with relevant roles for prosody, referentiality, lexical semantics and Pseudo-Relative availability. In the present eye-tracking study, we explore the timing of disambiguation in Spanish DP1-of-DP2-RC structures placed in preverbal and post-verbal positions, while also controlling for the previously mentioned influencing factors. Our results are straightforward: an early processing cost arises when the RC is disambiguated non-locally, irrespective of the position. The implications of this work contribute to a better understanding of parsing processes and suggest that locality is at the centre of the forces that influence RC attachment.
Rania Al-Aqarbeh; Mohammed Al-Malahmeh
This study investigates the sensitivity of grammatical resumption to islands in wh-interrogative and relative clause dependencies in Southern Jordanian Arabic (JA). An offline acceptability judgment task and an eye-tracking reading experiment were conducted. The results reveal that resumption in southern JA exhibits sensitivity to strong islands, such as adjunct islands, in both dependencies. The findings also suggest that the southern JA parser posits a resumptive pronoun (RP) inside islands that allow resumption. However, the parser does not predict an RP inside islands that disallow resumption. Furthermore, quantitative data show that wh-interrogative and relative clause dependencies pattern similarly in their sensitivity to islands.
Svetlana Alexeeva; Vladislav Zubov; Alena Konina
In: Primenjena Psihologija, vol. 15, no. 2, pp. 199–236, 2022.
The current study aims to test the assumption that a specially designed Cyrillic font, LexiaD, can assist adolescents with reading problems and facilitate their reading experience. LexiaD was compared with the widely used Arial font. Two groups of adolescents with dyslexia (N = 34) and without dyslexia (N = 28) silently read 144 sentences from the Russian Sentence Corpus (Laurinavichyute et al., 2019), some of which were presented in LexiaD, and others in Arial, while their eye movements were recorded. LexiaD did not show the desired effect for adolescents at the beginning of the experiment: Arial outperformed it in reading speed in both participant groups. However, by the end of the experiment, LexiaD showed a better performance. Although the speed of the higher-level cognitive processing (e.g., lexical access) in both fonts did not differ significantly, the feature extraction was found to be better in LexiaD than in Arial. Thus, we found some positive effect of LexiaD when participants with and without dyslexia got accustomed to it. A follow-up study with an explicit exposure session is needed to confirm this conclusion.
Maryam A. AlJassmi; Kayleigh L. Warrington; Victoria A. McGowan; Sarah J. White; Kevin B. Paterson
In: Attention, Perception, and Psychophysics, vol. 84, no. 1, pp. 10–24, 2022.
Contextual predictability influences both the probability and duration of eye fixations on words when reading Latinate alphabetic scripts like English and German. However, it is unknown whether word predictability influences eye movements in reading similarly for Semitic languages like Arabic, which are alphabetic languages with very different visual and linguistic characteristics. Such knowledge is nevertheless important for establishing the generality of mechanisms of eye-movement control across different alphabetic writing systems. Accordingly, we investigated word predictability effects in Arabic in two eye-movement experiments. Both produced shorter fixation times for words with high compared to low predictability, consistent with previous findings. Predictability did not influence skipping probabilities for (four- to eight-letter) words of varying length and morphological complexity (Experiment 1). However, it did for short (three- to four-letter) words with simpler structures (Experiment 2). We suggest that word-skipping is reduced, and affected less by contextual predictability, in Arabic compared to Latinate alphabetic reading, because of specific orthographic and morphological characteristics of the Arabic script.
Nadja Althaus; Sandra Kotzor; Swetlana Schuster; Aditi Lahiri
In: Cognition, vol. 222, pp. 1–22, 2022.
This study is concerned with how vowel alternation, in combination with and without orthographic reflection of the vowel change, affects lexical access and the discrimination of morphologically related forms. Bengali inflected verb forms provide an ideal test case, since present tense verb forms undergo phonologically conditioned, predictable vowel raising. The mid-to-high alternations, but not the low-to-mid ones, are represented in the orthography. This results in three different cases: items with no change (NoDiff), items with a phonological change not represented in the orthography (PronDiff) and items for which both phonology and orthography change (OrthPronDiff). To determine whether these three cases differ in terms of lexical access and discrimination, we conducted two experiments. Experiment 1 was a cross-modal lexical decision task with auditory primes (1st PERSON and 3rd PERSON forms, e.g. [lekhe] or [likhi]) and visual targets (verbal noun; e.g. [lekha]). Experiment 2 uses eye tracking in a fragment completion task, in which auditory fragments (first syllable of 1st or 3rd PERSON form, e.g. [le-] from [lekhe]) were to be matched to one of two visual targets (full 1st and 3rd PERSON forms, [lekhe] vs. [likhi] in Bengali script). While the lexical decision task, a global measure of lexical access, did not show a difference between the cases, the eye-tracking experiment revealed effects of both phonology and orthography. Discrimination accuracy in the OrthPronDiff condition (vowel alternation represented in the orthography) was high. In the PronDiff condition, where phonologically differing forms are represented by the same graphemes, manual responses were at chance, although eye movements revealed that match and non-match were discriminated. 
Thus, our results indicate that phonological alternations which are not represented in spelling are difficult to process, whereas having orthographically distinct forms boosts discrimination performance, implying orthographically influenced mental phonological representations.
Simona Amenta; Jana Hasenäcker; Davide Crepaldi; Marco Marelli
In: Psychonomic Bulletin & Review, pp. 1–12, 2022.
A key issue in language processing is how we recognize and understand words in sentences. Research on sentence reading indicates that the time we need to read a word depends on how (un)expected it is. Research on single word recognition shows that each word also has its own recognition dynamics based on the relation between its orthographic form and its meaning. It is not clear, however, how these sentence-level and word-level dynamics interact. In the present study, we examine the joint impact of these sources of information during sentence reading. We analyze existing eye-tracking and self-paced reading data (Frank et al., 2013, Behavior Research Methods, 45, 1182–1190) to investigate the interplay of sentence-level prediction (operationalized as Surprisal) and word Orthography-Semantics Consistency in activating word meaning in sentence processing. Results indicate that both Surprisal and Orthography-Semantics Consistency exert an influence on several reading measures. The shape of the observed interaction differs, but the results give compelling indication for a general trade-off between expectations based on sentence context and cues to meaning from word orthography.
Rhona M. Amos; Kilian G. Seeber; Martin J. Pickering
In: Cognition, vol. 220, pp. 1–16, 2022.
We report the results of an eye-tracking study which used the Visual World Paradigm (VWP) to investigate the time-course of prediction during a simultaneous interpreting task. Twenty-four L1 French professional conference interpreters and twenty-four L1 French professional translators untrained in simultaneous interpretation listened to sentences in English and interpreted them simultaneously into French while looking at a visual scene. Sentences contained a highly predictable word (e.g., The dentist asked the man to open his mouth a little wider). The visual scene comprised four objects, one of which depicted either the target object (mouth; bouche), an English phonological competitor (mouse; souris), a French phonological competitor (cork; bouchon), or an unrelated word (bone; os). We considered 1) whether interpreters and translators predict upcoming nouns during a simultaneous interpreting task, 2) whether interpreters and translators predict the form of these nouns in English and in French and 3) whether interpreters and translators manifest different predictive behaviour. Our results suggest that both interpreters and translators predict upcoming nouns, but neither group predicts the word-form of these nouns. In addition, we did not find significant differences between patterns of prediction in interpreters and translators. Thus, evidence from the visual-world paradigm shows that prediction takes place in simultaneous interpreting, regardless of training and experience. However, we were unable to establish whether word-form was predicted.
Filip Andras; Marta Rivera; Teresa Bajo; Paola E. Dussias; Daniela Paolieri
In: International Journal of Bilingualism, vol. 26, no. 4, pp. 405–425, 2022.
Aims and Objectives: The cognate facilitation effect (CFE) is a robust effect in language production and visual word comprehension, but evidence for the CFE during auditory comprehension is still scarce. This study aimed to explore the CFE during auditory comprehension of a second language (L2) while manipulating proficiency in the L2 and cognate type, two variables known to influence the CFE. Methodology: Low and highly proficient Spanish–English bilinguals listened to individual words in their L2, English, that shared high, low, or no phonological overlap (PO) with their native language, Spanish. We designed a visual world paradigm task that consisted of selecting an image as a spoken word unfolded in time while eye movements were recorded. Data and Analysis: Response times revealed a clear CFE in low proficiency bilinguals, while this effect was absent in highly proficient bilinguals. The eye-tracking (ET) data showed late coactivation of low-PO words and, surprisingly, no coactivation of high-PO words in low proficiency bilinguals. Highly proficient bilinguals showed no clear pattern of language coactivation in the ET data. The English monolingual control group showed no effects during the critical time window. Conclusions: These results are interpreted within the framework of L2 processing models. At low levels of proficiency, the PO between translations facilitates access to meaning. On the other hand, highly proficient bilinguals no longer benefit from the PO between translations, at least for concrete and simple nouns. Originality: The findings demonstrate a clear CFE in auditory comprehension. Proficiency in L2 and PO modulated the effect, as shown in the response time and ET data, respectively. Implications: These findings suggest that at low levels of L2 proficiency, learners more easily access conceptual information if the auditory input is similar to their native language. Nevertheless, as proficiency increases, this facilitation disappears.
Sally Andrews; Aaron Veldre; Roslyn Wong; Lili Yu; Erik D. Reichle
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–24, 2022.
Facilitated identification of predictable words during online reading has been attributed to the generation of predictions about upcoming words. However, highly predictable words are relatively infrequent in natural texts, raising questions about the utility and ubiquity of anticipatory prediction strategies. This study investigated the contribution of task demands and aging to predictability effects for short natural texts from the Provo corpus. The eye movements of 49 undergraduate students (mean age 21.2) and 46 healthy older adults (mean age 70.8) were recorded while they read these passages in two conditions: (a) reading for meaning to answer occasional comprehension questions; (b) proofreading to detect "transposed letter" lexical errors (e.g., clam instead of calm) in intermixed filler passages. The results suggested that the young adults, but not the older adults, engaged anticipatory prediction strategies to detect semantic errors in the proofreading condition, but neither age group showed any evidence of costs of prediction failures. Rather, both groups showed facilitated reading times for unexpected words that appeared in a high constraint within-sentence position. These findings suggest that predictability effects for natural texts reflect partial, probabilistic expectancies rather than anticipatory prediction of specific words.
Martín Antúnez; Sara Milligan; Juan Andrés Hernández-Cabrera; Horacio A. Barber; Elizabeth R. Schotter
In: Psychophysiology, vol. 59, no. 4, pp. 1–19, 2022.
Prior research suggests that we may access the meaning of parafoveal words during reading. We explored how semantic-plausibility parafoveal processing takes place in natural reading through the co-registration of eye movements (EM) and fixation-related potentials (FRPs), using the boundary paradigm. We replicated previous evidence of semantic parafoveal processing from highly controlled reading situations, extending those findings to more ecologically valid reading scenarios. Additionally, in exploring the time course of plausibility preview effects, we found distinct but complementary evidence from the EM and FRP measures. The FRP measures, which showed a different trend than the EM evidence, revealed that plausibility preview effects may be long-lasting. We highlight the importance of a co-registration set-up in ecologically valid scenarios to disentangle the mechanisms related to semantic-plausibility parafoveal processing.
Keith S. Apfelbaum; Claire Goodwin; Christina Blomquist; Bob McMurray
In: Quarterly Journal of Experimental Psychology, vol. 76, no. 1, pp. 196–219, 2022.
Efficient word recognition depends on the ability to overcome competition from overlapping words. The nature of the overlap depends on the input modality: spoken words have temporal overlap from other words that share phonemes in the same positions, whereas written words have spatial overlap from other words with letters in the same places. It is unclear how these differences in input format affect the ability to recognise a word and the types of competitors that become active while doing so. This study investigates word recognition in both modalities in children between 7 and 15. Children complete a visual-world paradigm eye-tracking task that measures competition from words with several types of overlap, using identical word lists between modalities. Results showed correlated developmental changes in the speed of target recognition in both modalities. In addition, developmental changes were seen in the efficiency of competitor suppression for some competitor types in the spoken modality. These data reveal some developmental continuity in the process of word recognition independent of modality but also some instances of independence in how competitors are activated. Stimuli, data, and analyses from this project are available at: https://osf.io/eav72.
Eléonore Arbona; Kilian G. Seeber; Marianne Gullberg
In: Bilingualism: Language and Cognition, pp. 1–15, 2022.
Manual co-speech gestures can facilitate language comprehension, but do they influence language comprehension in simultaneous interpreters, and if so, is this influence modulated by simultaneous interpreting (SI) and/or by interpreting experience? In a picture-matching task, 24 professional interpreters and 24 professional translators were exposed to utterances accompanied by semantically matching representational gestures, semantically unrelated pragmatic gestures, or no gestures while viewing passively (interpreters and translators) or during SI (interpreters only). During passive viewing, both groups were faster with semantically related than with semantically unrelated gestures. During SI, interpreters showed the same result. The results suggest that language comprehension is sensitive to the semantic relationship between speech and gesture, and facilitated when speech and gestures are semantically linked. This sensitivity is not modulated by SI or interpreting experience. Thus, despite simultaneous interpreters' extreme language use, multimodal language processing facilitates comprehension in SI the same way as in all other language processing.
Emma Axelsson; Nur Najihah Othman; Nayantara Kansal
In: Infant Behavior and Development, vol. 69, pp. 1–16, 2022.
When hearing a novel word, children typically rule out familiar objects and assume a speaker is referring to a novel object. This strategy is known as fast mapping, and young children use this with a high degree of accuracy. However, not all children engage in fast mapping to the same extent and temperament can play a role. Shyness is associated with poorer fast mapping and less attention to target objects, which is associated with poorer retention (Hilton et al., 2019; Hilton & Westermann, 2017). We further investigated the relationship between temperament and fast mapping by presenting 2.5-year-old children with 8 familiar target fast mapping trials and 4 novel target trials presented twice. We considered two temperamental dimensions: approachability due to its similarity to shyness; and reactivity, which could predict children's capacity to engage during fast mapping. We found an association between approachability and fast mapping accuracy the second time children fast-mapped novel targets, and approachability was associated with greater retention accuracy. Reactivity predicted proportions of target looking during fast mapping with less reactive temperament scores associated with greater focus on targets. This provides support for a relationship between two dimensions of temperament and fast mapping and retention. Approachability may be associated with a further opportunity to fast map and memory for novel words, and/or how willing children are to guess the targets. Reactivity may be associated with the capacity to focus during word learning situations. Different aspects of temperament could have implications for children's capacity to disambiguate and learn words.
Najla Azaiez; Otto Loberg; Jarmo A. Hämäläinen; Paavo H. T. Leppänen
In: Frontiers in Neuroscience, vol. 16, pp. 1–18, 2022.
Neural correlates in reading and speech processing have been addressed extensively in the literature. While reading skills and speech perception have been shown to be associated with each other, their relationship remains debatable. In this study, we investigated reading skills, speech perception, and their correlates with brain source activity in auditory and visual modalities. We used high-density event-related potentials (ERPs), fixation-related potentials (FRPs), and the source reconstruction method. The analysis was conducted on 12–13-year-old schoolchildren who had different reading levels. Brain ERP source indices were computed from frequently repeated Finnish speech stimuli presented in an auditory oddball paradigm. Brain FRP source indices were also computed for words within sentences presented in a reading task. The results showed significant correlations between speech ERP sources and reading scores at the P100 (P1) time range in the left hemisphere and the N250 time range in both hemispheres, and a weaker correlation for visual word processing N170 FRP source(s) in the posterior occipital areas, in the vicinity of the visual word form area (VWFA). Furthermore, significant brain-to-brain correlations were found between the two modalities, where the speech brain sources of the P1 and N250 responses correlated with the reading N170 response. The results suggest that speech processes are linked to reading fluency and that brain activations to speech are linked to visual brain processes of reading. These results indicate that a relationship between language and reading systems is present even after several years of exposure to print.
Hyunah Baek; Wonil Choi; Peter C. Gordon
In: Quarterly Journal of Experimental Psychology, pp. 1–14, 2022.
In written Korean, spaces appear between phrasal units (“eojeols”). In Experiment 1, participants read sentences in which space information had been manipulated. Results indicated that removing spaces or replacing them with a symbol hindered reading, but this effect was not as disruptive as previously found in English. Experiment 2 presented sentences varying in the proportion of eojeols that ended with postpositional particles as well as the presence/absence of spaces. Results showed that space removal interfered with reading, but its effects were weaker when the sentence contained more postpositional particles. This suggests that postpositional particles provide an extra cue to word segmentation in Korean texts. These findings are discussed in relation to the unique characteristics of the Korean writing system and to the models of eye-movement control during reading in different languages.
Chiara Barattieri di San Pietro; Giovanni Girolamo; Claudio Luzzatti; Marco Marelli
In: Journal of Psycholinguistic Research, vol. 51, no. 6, pp. 1371–1391, 2022.
People with schizophrenia spectrum disorders (SSD) show anomalies in language processing with respect to “who is doing what” in an action. This linguistic behavior is suggestive of an atypical representation of the formal concepts of “Agent” in the lexical representation of a verb, i.e., its thematic grid. To test this hypothesis, we administered a silent-reading task with sentences including a semantic violation of the animacy trait of the grammatical subject to 30 people with SSD and 30 healthy control participants (HCs). When the anomalous grammatical subject was the Agent of the event, a significant increase of Gaze Duration was observed in HCs, but not in SSDs. Conversely, when the anomalous subject was a Theme, SSDs displayed an increased probability of go-back movements, unlike HCs. These results are suggestive of a higher tolerability for anomalous Agents in SSD compared to the normal population. The fact that SSD participants did not show a similar tolerability for anomalous Themes rules out the issue of an attention deficit. We suggest that general communication abilities in SSD might benefit from explicit training on deep linguistic structures.
Alisa Baron; Katrina Connell; Zenzi M. Griffin
In: Frontiers in Psychology, vol. 13, pp. 1–10, 2022.
This study investigated grammatical gender processing in school-age Spanish-English bilingual children using a visual world paradigm with a 4-picture display where the target noun was heard with a gendered article that was either in a context where all distractor images were the same gender as the target noun (same gender; uninformative) or in a context where all distractor images were the opposite gender than the target noun (different gender; informative). We investigated 32 bilingual children (ages 5;6–8;6) who were exposed to Spanish since infancy and began learning English by school entry. Along with the eye-tracking experiment, all children participated in a standardized language assessment and told narratives in English and Spanish, and parents reported on their child's current Spanish language use. The differential proportion fixations to target (target − averaged distractor fixations) were analyzed in two time regions with linear mixed-effects models (LME). Results show that prior to the target word being spoken, these bilingual children did not use the gendered articles to actively anticipate upcoming nouns. In the subsequent time region (during the noun), it was shown that there are differences in the way they use feminine and masculine articles, with a lack of use of the masculine article and a potential facilitatory use of the feminine article for children who currently use more Spanish than English. This asymmetry in the use of gendered articles in processing is modulated by current Spanish language use and trends with results found for bilingual and second-language learning adults.
Floor Berg; Jelle Brouwer; Thomas B. Tienkamp; Josje Verhagen; Merel Keijzer
In: Frontiers in Psychology, vol. 13, pp. 1–17, 2022.
Introduction: It has been proposed that bilinguals' language use patterns are differentially associated with executive control. To further examine this, the present study relates the social diversity of bilingual language use to performance on a color-shape switching task (CSST) in a group of bilingual university students with diverse linguistic backgrounds. Crucially, this study used language entropy as a measure of bilinguals' language use patterns. This continuous measure reflects a spectrum of language use in a variety of social contexts, ranging from compartmentalized use to fully integrated use. Methods: Language entropy for university and non-university contexts was calculated from questionnaire data on language use. Reaction times (RTs) were measured to calculate global RT and switching and mixing costs on the CSST, representing conflict monitoring, mental set shifting, and goal maintenance, respectively. In addition, this study innovatively recorded a potentially more sensitive measure of set shifting abilities, namely, pupil size during task performance. Results: Higher university entropy was related to slower global RT. Neither university entropy nor non-university entropy was associated with switching costs as manifested in RTs. However, bilinguals with more compartmentalized language use in non-university contexts showed a larger difference in pupil dilation for switch trials in comparison with non-switch trials. Mixing costs in RTs were reduced for bilinguals with higher diversity of language use in non-university contexts. No such effects were found for university entropy. Discussion: These results point to the social diversity of bilinguals' language use as being associated with executive control, but the direction of the effects may depend on social context (university vs. non-university). Importantly, the results also suggest that some of these effects may only be detected by using more sensitive measures, such as pupil dilation. The paper discusses theoretical and practical implications regarding the language entropy measure and the cognitive effects of bilingual experiences more generally, as well as how methodological choices can advance our understanding of these effects.
Danil Fokin; Stefan Blohm; Elena Riekhakaynen
Reading Russian poetry: An expert-novice study Journal Article
In: Journal of Eye Movement Research, vol. 13, no. 3, pp. 1–13, 2022.
Studying the role of expertise in poetry reading, we hypothesized that poets' expert knowledge comprises genre-appropriate reading and comprehension strategies that are reflected in distinct patterns of reading behavior. We recorded eye movements while two groups of native speakers (n = 10 each) read selected Russian poetry: an expert group of professional poets who read poetry daily, and a control group of novices who read poetry less than once a month. We conducted mixed-effects regression analyses to test for effects of group on first-fixation durations, first-pass gaze durations, and total reading times per word while controlling for lexical and text variables. First-fixation durations exclusively reflected lexical features, and total reading times reflected both lexical and text variables; only first-pass gaze durations were additionally modulated by readers' level of expertise. Whereas gaze durations of novice readers became faster as they progressed through the poems, and differed between line-final words and non-final ones, poets retained a steady pace of first-pass reading throughout the poems and within verse lines. Additionally, poets' gaze durations were less sensitive to word length. We conclude that readers' level of expertise modulates the way they read poetry. Our findings support theories of literary comprehension that assume distinct processing modes which emerge from prior experience with literary texts.
Max R. Freeman; Viorica Marian
Visual word recognition in bilinguals Journal Article
In: Studies in Second Language Acquisition, vol. 44, no. 3, pp. 1–29, 2022.
A bilingual's language system is highly interactive. When hearing a second language (L2), bilinguals access native-language (L1) words that share sounds across languages. In the present study, we examine whether input modality and L2 proficiency moderate the extent to which bilinguals activate L1 phonotactic constraints (i.e., rules for combining speech sounds) during L2 processing. Eye movements of English monolinguals and Spanish-English bilinguals were tracked as they searched for a target English word in a visual display. On critical trials, displays included a target that conflicted with the Spanish vowel-onset rule (e.g., spa), as well as a competitor containing the potentially activated e onset (e.g., egg). The rule violation was processed either in the visual modality (Experiment 1) or audio-visually (Experiment 2). In both experiments, bilinguals with lower L2 proficiency made more eye movements to competitors than fillers. Findings suggest that bilinguals who have lower L2 proficiency access L1 phonotactic constraints during L2 visual word processing with and without auditory input of the constraint-conflicting structure (e.g., spa). We conclude that the interactivity between a bilingual's two languages is not limited to words that share form across languages, but also extends to sublexical, rule-based structures.
Cheryl Frenck-Mestre; Hyeree Choo; Ana Zappa; Julia Herschensohn; Seung Kyung Kim; Alain Ghio; Sungryung Koh
In: Brain Sciences, vol. 12, pp. 1–28, 2022.
Previous experimental studies have reported clear differences between native speakers and second language (L2) learners as concerns their capacity to extract and exploit morphosyntactic information during online processing. We examined the online processing of nominal case morphology in Korean by native speakers and L2 learners by contrasting canonical (SOV) and scrambled (OSV) structures, across auditory (Experiment 1) and written (Experiment 2) formats. Moreover, we compared different instances of nominal case marking: accusative (NOM-ACC) and dative (NOM-DAT). During auditory processing, Koreans showed incremental processing based on case information, with no effect of scrambling or specific case marking. In contrast, the L2 group showed no evidence of predictive processing and was negatively impacted by scrambling, especially for the accusative. During reading, both Koreans and the L2 group showed a cost of scrambling on first pass reading times, specifically for the dative. Lastly, L2 learners showed better comprehension for scrambled dative than accusative structures across formats. The current set of results show that format, the specific case marking, and word order all affect the online processing of nominal case morphology.