Reading and Language Eye-Tracking Publications
All EyeLink eye-tracker reading and language research publications up to 2025 (with some early 2026s) are listed below by year. You can search the eye-tracking publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we have missed any EyeLink reading or language research articles, please email us!
2026
Zebo Lan; Meihua Guo; Nina Liu; Guoli Yan; Valerie Benson Language experience and reading ability modulate word recognition in deaf readers Journal Article In: Journal of Deaf Studies and Deaf Education, vol. 31, pp. 41–57, 2026. Abstract: For most deaf readers, learning to read is a challenging task. Visual word recognition is crucial during reading; however, little is known about the cognitive mechanisms of Chinese deaf readers during visual word recognition. In the present study, two experiments explored the activation of orthographic, phonological, and sign language representations during Chinese word recognition. Eye movements were recorded as participants read sentences containing orthographically similar words, homophones, sign language–related words, or unrelated words. All deaf readers showed shorter reading times for orthographically similar words compared to unrelated words. However, when reading ability was controlled, the homophone advantage was observed only for deaf readers with more oral language experience, whereas the sign language advantage was observed only for deaf readers with more sign language experience. When language experience was controlled, deaf readers with higher reading fluency had more stable orthographic and sign language representations than those with lower reading fluency. Deaf college readers with more oral language experience activate word meanings through orthographic and phonological representations, whereas deaf college readers with more sign language experience activate word meanings through orthographic and sign language representations, reflecting a unique cognitive mechanism that reading ability moderates.
Xuran Cao; Yaxin Du; Yuhan Jiang; Run Zhang; Jingxin Wang Aging and semantic transparency effects in Chinese reading: Evidence from eye movements Journal Article In: BMC Psychology, vol. 14, no. 1, pp. 1–14, 2026. Abstract: Background: Semantic transparency is typically defined in terms of compositionality, the extent to which the meaning of a word can be predicted from the meaning of each of its constituents, which is crucial for the processing of compound words. Studies employing behavioral, eye-tracking, and neuroimaging techniques have identified common effects of semantic transparency. Transparent words, defined as those in which the word itself and its morphemes exhibit a high degree of semantic relatedness, facilitate word recognition. Semantic transparency effects have been well observed in alphabetic languages. However, the effects of semantic transparency on Chinese readers are largely unknown, as is whether healthy aging modulates them. The present study investigated these questions by analyzing semantic transparency effects in both young and older adults under conditions of normal reading and preview. Methods: The eye movements of young (18–25 years) and older (60+ years) Chinese readers were recorded under conditions of normal reading and preview. Results: (1) Transparent words facilitated word recognition, and valid preview clues did not benefit readers recognizing transparent words. (2) Age groups showed no differences in the processing of compound words. However, older adults had greater difficulty recognizing opaque words under preview conditions than younger adults. Conclusions: Compound words are stored in the mental lexicon as mixed representations, in which transparent words are represented by morphemes and opaque words are represented by whole words. Semantic transparency effects exhibit cross-age consistency; they do not rely on valid preview information but instead stem from foveal in-depth processing. However, age differences in semantic integration become apparent in parafoveal preview processing, which demands greater cognitive resources, with older adults experiencing greater difficulty recognizing opaque words than younger adults.
Alexia Galati; Rick Dale; Camila Alviar; Moreno I. Coco Task goals constrain the alignment in eye-movements and speech during interpersonal coordination Journal Article In: Journal of Memory and Language, vol. 146, pp. 1–18, 2026. Abstract: Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; Pickering & Garrod, 2004), support this view by building on tasks that require monitoring a partner's perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a "divide and conquer" strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners' eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions.
Huanhuan Yin; Martin J. Pickering Predicting words across languages depends on language context: Evidence from visual world eye-tracking Journal Article In: Journal of Memory and Language, vol. 146, pp. 1–12, 2026. Abstract: There is good evidence that monolingual comprehenders can predict the form of upcoming words, and also that bilinguals often activate words from both languages in parallel during bottom-up language comprehension. But it is unclear whether bilinguals predict the form of upcoming words in the language that they are not hearing, and whether such predictions depend on whether or not they have recently encountered that language. We investigated these questions in two visual-world eye-tracking experiments by asking whether Mandarin Chinese (L1)-English (L2) bilinguals pre-activate Mandarin phonological representations of predictable words during English comprehension. Participants heard English sentences containing a highly predictable word while viewing a display. They fixated more on a competitor object whose Mandarin name was a homophone of the Mandarin translation of the predictable word than an unrelated object when both languages were used (Experiment 2) but not when just English was used (Experiment 1). Our findings suggest that bilinguals predict across languages when both languages are contextually relevant but not otherwise.
Manman Zhang; Zhichao Zhang; Fang Li; Xuejun Bai; Chuanli Zang; Simon P. Liversedge Exploring effects of foveal load and preview restrictions for single and multiple parafoveal words in Chinese reading Journal Article In: Journal of Memory and Language, vol. 146, pp. 1–14, 2026. Abstract: Two experiments are reported that used the boundary paradigm to investigate how foveal lexical processing load (high/low frequency) of a pre-target word influences parafoveal processing of upcoming target word(s) with either zero-, one-, two- or three-character, or full preview in Chinese reading. In Experiment 1, the three characters comprised a single word as the target while in Experiment 2 they formed multiple words (two or three words). Pre-target word analyses showed an effective foveal load manipulation with low frequency pre-targets being fixated for longer than high frequency pre-targets in both experiments. Both experiments showed robust preview extent effects at the target words, such that fixation times increased, and landing positions shortened dramatically with reduced preview extent. Modulatory influences of foveal load effects were obtained on both fixation times and landing positions at the target region. These effects themselves were consistent, but reduced, for parafoveal character strings comprised of multiple words relative to a single word, consistent with the MCU hypothesis (Zang, 2019). Our findings demonstrate that increased foveal load reduces the disruptive influence of restrictive parafoveal windows and reduces preview extent in relation to saccadic targeting. The current findings align at a very basic level with the Foveal Load Hypothesis (Henderson & Ferreira, 1990), though the results indicate that a more nuanced theoretical account is necessary to capture all aspects of the results in respect of Chinese reading.
Matt D. Anderson; Emily A. Cooper; Jorge Otero-Millan A method for measuring closed-loop latency in gaze-contingent rendering without extra equipment Journal Article In: Behavior Research Methods, vol. 58, no. 1, pp. 1–12, 2026. Abstract: In gaze-contingent rendering, the visual stimulus rendered on a display changes based on where the observer is looking. This technique allows researchers to achieve dynamic control over stimulus placement on the retina in the presence of eye movements and is often used to investigate how sensory processing and perception vary across the visual field. Precise stimulus placement using gaze-contingent rendering depends on minimizing the temporal latency between a change in the observer's gaze position, measured using an eye tracker, and the corresponding change to the stimulus. This latency, however, can be challenging to measure reliably. Here, we present a simple method for measuring system latency that requires no additional hardware beyond the eye tracker and display, which are already part of the gaze-contingent system. Two small circles are rendered on the display to simulate the appearance of two pupils. The eye tracker is pointed towards the display to record both pupils simultaneously. One pupil is drawn based on a pre-determined trajectory, for example, moving up and down at a constant speed. The second pupil is "gaze-contingent": it is drawn based on the measured position of the first pupil. The time-lag at which the position of the second pupil matches the first pupil gives the closed-loop latency of the entire system. To validate this method, we added artificial rendering delays to our system and produced measured latencies that precisely corresponded to predictions, given the refresh rate of the display. This method provides a simple, low-cost way of precisely quantifying gaze-contingent rendering latencies, with no additional hardware required.
Valentina Apresjan; Alexander V. Orlov; Kirill Koncha; Vladislava Staroverova; Anastasia Lopukhina Metaphor in the mental lexicon: Investigating different types of polysemy via eye-tracking and behavioral experiments Journal Article In: Metaphor and Symbol, vol. 41, no. 1, pp. 5–38, 2026. Abstract: This study investigates the mental representation and processing of two types of metaphorical senses in Russian polysemous verbs and adjectives using eye-tracking, sensicality judgment, and semantic clustering tasks. The metaphorical senses under study differ in their semantic proximity to the literal sense, with "proximal" metaphors (e.g. "raise prices") retaining more semantic components, and "distal" metaphors being semantically bleached (e.g. "raise alarm"). Metaphors differed in their mental representations and processing patterns based on semantic proximity and part of speech. In semantic clustering, proximal metaphors were miscategorized with literal senses more often than distal metaphors. Proximal metaphors in adjectives were more often miscategorized with literal senses, while in verbs they were miscategorized with distal metaphors. In sensicality judgment, verbs showed longer reaction times for proximal metaphors, while adjectives demonstrated higher accuracy for distal metaphors compared to literal senses. In eye-tracking, adjectival distal metaphors triggered more regressions on disambiguating nouns than literal senses. Our findings suggest that distal metaphors are stored and processed as distinct, non-compositional units, while proximal metaphors overlap with literal senses and are processed compositionally. Proximal metaphors in adjectives are closer to literal senses, while in verbs they are closer to distal metaphors, explained by different semantic derivation mechanisms.
Frances G. Cooley; Karen Emmorey; Emily Saunders; Elizabeth R. Schotter In: Behavior Research Methods, vol. 58, no. 1, pp. 1–14, 2026. Abstract: Eye-tracking corpora have advanced our understanding of reading processes by providing large-scale datasets of naturalistic reading behavior. However, existing corpora have almost exclusively sampled from typically hearing readers of spoken languages. Here, we present the Signers' Eye-movements in English Reading (SEER) Corpus, a dataset of eye-movement behaviors from 41 skilled deaf adult readers who are early signers of American Sign Language (ASL), as well as a comparative group of 101 typically hearing monolingual English readers. Participants read 200 English sentences presented one at a time. In addition to eye-tracking data, the corpus includes detailed participant information: a standardized measure of reading proficiency, spelling recognition, and nonverbal intelligence for all participants. Information for the deaf participants includes ASL comprehension scores, age of ASL acquisition, and phonological awareness scores (for a subset of participants). We report comparative analyses of reading behaviors at both the word level and sentence level. We also examine group differences in the effects of word length, frequency, and surprisal on local measures. The results indicate stronger effects of length and surprisal, but equivalent frequency effects (on content words) for deaf compared to hearing readers. The SEER Corpus offers researchers the opportunity to test hypotheses about reading development and efficiency in bimodal bilinguals who are first language users of ASL and skilled readers of English, supporting broader investigations of visual language processing. The corpus is preregistered and publicly available (https://doi.org/10.17605/OSF.IO/7P4F2) to facilitate replication, cross-study comparisons, and exploration of preliminary hypotheses in this understudied population.
Anne Françoise Chambrier; Philippe Terrier; Paolo Ruggeri; David Müller; Myrto Atzemian; Catherine Thevenot; Marco Pedrotti Eye movements when reading Arabic numbers in sentences Journal Article In: Acta Psychologica, vol. 262, pp. 1–11, 2026. Abstract: We examined eye movements in 49 adults as they read aloud or silently rounded and non-rounded Arabic numbers embedded in texts. We compared the patterns of eye movements to those obtained when participants read words and pseudowords matched in length to the numbers. The results revealed that non-rounded numbers elicited more fixations, longer fixation durations, and an increased number of saccades with shorter amplitudes compared to words, with pseudowords and rounded numbers falling in between. This reflects the cognitively demanding step-by-step processing required for number reading. However, this effect was moderated for non-rounded numbers in silent reading, suggesting that without an oralization requirement, participants engaged in more superficial reading. This interpretation was further supported by a higher error rate on a comprehension task administered after reading when the questions were related to the magnitude of the numbers read. Additionally, participants made more leftward saccades when reading numbers compared to words and pseudowords, indicating that despite numbers being oralized from left to right, they must be, to some extent, scanned from right to left to determine the value and therefore the denomination of the various digits. These findings shed light on the cognitive mechanisms underlying number reading.
Anne Friede; Albrecht Inhoff; Christian Vorstius; Ralph Radach Word difficulty determines the accuracy of regressive saccades in reading Journal Article In: Psychonomic Bulletin & Review, vol. 33, no. 1, pp. 1–13, 2026. Abstract: The current experiment was conducted to study effects of lexical word difficulty on the control of long-range regressive saccades. Participants read single-line sentences in German for comprehension and checked for a spelling error that was inserted when the eyes had reached the end of the line. When words were more difficult in terms of orthographic irregularity and lower frequency, this dramatically increased the accuracy of regressions back to these words. If the target was missed, fewer additional saccades and less time were needed until the eyes fixated the target word. The data suggest that more effortful word processing is related to a better representation in visual–spatial memory, enabling more effective programming of regressions.
Patrick Haller; Cui Ding; Maja Stegenwallner-Schütz; David R. Reich; Iva Koncic; Silvia Makowski; Lena A. Jäger Replicate me if you can: Assessing measurement reliability of individual differences in reading across measurement occasions and methods Journal Article In: Cognitive Science, vol. 50, no. 1, pp. 1–50, 2026. Abstract: Psycholinguistic theories traditionally assume similar cognitive mechanisms across different speakers. However, more recently, researchers have begun to recognize the need to consider individual differences when explaining human cognition. An increasing number of studies have investigated how individual differences influence human sentence processing. Implicitly, these studies assume that individual-level effects can be replicated across experimental sessions and different assessment methods such as eye-tracking and self-paced reading. However, this assumption is challenged by the Reliability Paradox. Thus, a crucial first step for a principled investigation of individual differences in sentence processing is to establish their measurement reliability, that is, the correlation of individual-level effects across multiple measurement occasions and methods. In this work, we present the first naturalistic eye movement corpus of reading data with four experimental sessions from each participant (two eye-tracking sessions and two self-paced reading sessions). We deploy a two-task Bayesian hierarchical model to assess the measurement reliability of individual differences in a range of psycholinguistic phenomena that are well-established at the population level, namely, effects of word length, lexical frequency, surprisal, dependency length, and number of to-be-integrated dependents. While our results indicate high reliability across measurement occasions for the word length effect, reliability is only moderate for higher-level psycholinguistic predictors such as lexical frequency, dependency distance, and the number of to-be-integrated dependencies, and even low for surprisal. Moreover, even after accounting for spillover effects, we observe only low to moderate reliability at the individual level across methods (eye-tracking and self-paced reading) for most predictors, and poor reliability for predictors of syntactic integration. These findings underscore the importance of establishing measurement reliability before drawing inferences about individual differences in sentence processing.
Hyunwoo Kim; Kitaek Kim; Haerim Hwang Effects of goals and strategies on predictive processing: A visual world eye-tracking study on honorific agreement in Korean Journal Article In: Linguistics, pp. 1–35, 2026. Abstract: There is ongoing debate about whether prediction is driven solely by bottom-up associative links or is modulated by top-down goals and strategies. The current study attempts to address this issue by investigating the role of top-down factors in Korean speakers' predictive processing of honorific agreement. Two visual-world eye-tracking experiments were conducted, analyzing participants' anticipatory eye movements while manipulating two top-down factors. In Experiment 1, we assigned participants to two groups with different instructions, asking one group to listen to sentences and answer referent-selection questions, and the other group to actively predict the upcoming referent. Experiment 2 manipulated the validity of predictive cues by interspersing experimental items with fillers containing consistent or inconsistent continuations. Results from Experiment 1 showed that participants instructed to actively anticipate the referent used honorific information more quickly to make predictions than the comprehension-only group. In Experiment 2, the group exposed to predictive linguistic stimuli showed an earlier and stronger prediction effect compared to the group exposed to stimuli with no prediction validity. These results suggest that comprehenders engage in different degrees of prediction according to the current demands of task goals and strategies. We discuss these findings in light of recent theories of predictive language processing.
Marzie Samimifar; Federica Bulgarelli Decoding child speech in silence and noise: The type of background noise shapes adults' processing Journal Article In: Attention, Perception & Psychophysics, vol. 88, no. 1, pp. 1–22, 2026. Abstract: Processing speech that is non-canonical (i.e., child-produced speech) and/or presented in background noise can pose challenges for listeners. We investigated how listening to child-produced speech affects young adults' word recognition under varying noise conditions. Participants (n = 121) completed a two-picture eye-tracking task in one of three conditions: no background noise, pink background noise, and real-world background noise from LENA recordings. Participants heard a child or adult (Speaker-Age) direct attention to a generic (e.g., keys) or child-specific (e.g., potty; Item-Type) item. We examined the effect of Speaker-Age and Item-Type on participants' looking time. In no background noise, increases in target looking were high, with greater increases when adults produced generic items. Both pink noise and real-world noise increased task difficulty, but patterns of results varied as a function of speaker gender. For female speech, background noise resulted in an effect of Speaker-Age, with participants increasing their looking time more for adult relative to child speech. The type of background noise did not influence this pattern. For male speech, there was an effect of Speaker-Age in the opposite direction, with participants increasing their looking time more for child relative to adult speech. For male speech, real-world background noise resulted in higher increases in target looking for child-specific items. Together, results suggest that child-produced speech may be more difficult to process than female-adult produced speech in noise, and that listeners can use background noise to predict who will speak and what they might speak about under more challenging conditions, such as processing male speech.
Marina Serrano-Carot; Bernhard Angele Spanish readers skip articles regardless of gender and number agreement Journal Article In: Journal of Eye Movement Research, vol. 19, no. 1, pp. 1–30, 2026. Abstract: Articles are among the most frequently encountered words during reading; however, it is not clear how deeply they are usually processed. This study examines whether native Spanish speakers use parafoveal article–noun agreement information to guide eye movements during reading. Using the gaze-contingent boundary paradigm, we manipulated the parafoveal preview of articles across two experiments. In Experiment 1, we manipulated gender agreement between the previews readers received of definite articles and the subsequent nouns (e.g., la mesa vs. el* mesa). In Experiment 2, we manipulated grammatical gender and number agreement between parafoveal article previews and the subsequent nouns jointly (e.g., los* mesa vs. una mesa). We found no evidence that parafoveal article–noun gender or number agreement affected article skipping probability, suggesting that initial parafoveal processing of articles does not extend to their grammatical properties. However, we observed increased total viewing time on the noun following mismatching previews, suggesting that, while the decision of whether to skip an article is taken largely without considering the grammatical properties of the upcoming words, readers do need more time to recover from the grammatical mismatch afterwards. We discuss the results in the context of current models of eye-movement control during reading.
Amanda Rose Yuile; Justin B. Kueser; Claney Outzen; Sharon Christ; Risa Stiegler; Mary Carson Adams; Barbara Brown; Arielle Borovsky Lexical vocabulary acquisition through multimodal annotation: An eye-tracking study with Chinese learners' dictionaries Journal Article In: Developmental Science, vol. 29, no. 1, pp. 1–18, 2026. Abstract: Toddlers better retain novel object-label mappings from taxonomic categories they have more knowledge of. Separately, words for concepts with more perceptual features are learned earlier than words for concepts with fewer perceptual features. Because these factors have only been examined separately, it is unclear whether the effects of taxonomic density stem from differences in structured taxonomic knowledge or simply reflect lower-level differences in perceptual similarity among concepts. We asked how taxonomic structure and perceptual information jointly contribute to word learning at 24 months old in an ostensive word learning task. We found that semantic category knowledge facilitated word learning. We also found that the availability of perceptual features served as additional support for word learning by children with smaller expressive vocabularies. This indicates that structured taxonomic knowledge is a better predictor of word learning compared to lower-level perceptual features at 24 months old. However, perceptual cues may provide additional support for vocabulary growth at the start of development. Summary: We explore how semantic category knowledge and perceptual features jointly influence novel word learning at 24 months old in an ostensive word learning context. Novel word learning was facilitated within semantic categories the toddlers knew more about, when controlling for the availability of perceptual information. Toddlers with smaller productive vocabularies used perceptual features as additional support for word learning, but those with larger vocabularies did not. These findings show that structured taxonomic knowledge is a better predictor of word learning at 24 months old compared to lower-level perceptual information.
2025
Sahand Amir-Asgari; Stefan Georgiev; Manuel Ruiss; Sotiris Plainis; Caroline Pilwachs; Oliver Findl In: BMC Ophthalmology, vol. 25, no. 1, pp. 1–10, 2025. Abstract: Background: To compare oculomotor behavior and reading performance along with conventional visual outcomes in patients following implantation of an enhanced depth of focus (EDOF) intraocular lens (IOL) versus a monofocal IOL. Methods: In this prospective, exploratory, randomized, clinical trial, patients underwent either bilateral implantation with a non-diffractive EDOF IOL DFT015 (Acrysof IQ Vivity, Alcon, USA) or an aspheric monofocal IOL SN60WF (Acrysof IQ, Alcon, USA). 106 eyes of 53 patients with bilateral age-related cataract were evaluated (EDOF IOL group: n = 25; monofocal IOL group: n = 28). At 3 months after surgery, along with visual acuity at various distances, halometry, and patient-reported quality of vision, silent reading performance and oculomotor behavior were assessed at 66 cm with an infrared eye tracker under photopic and mesopic light levels. Data analysis included computation of reading speed and a range of oculomotor indices. Results: Median mesopic silent reading speed in words per minute (wpm) and fixation duration in milliseconds (ms) were 205 wpm and 277 ms for the EDOF group and 168 wpm (p = 0.04) and 301 ms (p = 0.02) for the monofocal control IOL group, respectively. No significant differences were observed under photopic conditions. Binocular means expressed in logarithm of the minimum angle of resolution for postoperative corrected distance, uncorrected distance, uncorrected intermediate, and uncorrected near visual acuities were −0.04 ± 0.07, 0.00 ± 0.07, 0.12 ± 0.09, and 0.27 ± 0.13 for the EDOF group and −0.08 ± 0.06 (p = 0.08), −0.04 ± 0.07 (p = 0.10), 0.19 ± 0.13 (p = 0.01), and 0.41 ± 0.14 (p < 0.001) for the control group, respectively. Postoperative halo size and visual disturbances were similar in both groups. Conclusions: Silent reading speed at 66 cm was improved with the EDOF IOL compared to the monofocal IOL only at mesopic light levels, mainly due to the improvement in average fixation duration. Further studies are needed to confirm these preliminary findings.
Yaqian Borogjoon Bao; Xingshan Li; Victor Kuperman The eye movement database of passage reading in vertically written traditional Mongolian Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–12, 2025. Abstract: This paper introduces an eye-tracking corpus of passage reading data in the vertical writing system of traditional Mongolian. This corpus extends the Multilingual Eye Movement Corpus (MECO) database and includes data from 66 native readers of traditional Mongolian script reading 12 texts comprising 99 sentences and 2,592 words. This traditional Mongolian MECO corpus aims to address the research gap in reading studies on understudied languages. As one of the very few actively used vertical writing systems, these data offer unique insights into the cognitive and visual processing demands of vertical reading. The paper provides reliability estimates for the data and reports lexical benchmark effects of word frequency and length. Additionally, the corpus provides a valuable opportunity for cross-linguistic comparisons of eye movement data, especially with horizontal writing systems, contributing to a better understanding of how reading direction influences cognitive processing.
Yevgeni Berzak; Jonathan Malmaud; Omer Shubi; Yoav Meiri; Ella Lion; Roger Levy OneStop: A 360-participant English eye tracking dataset with different reading regimes Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–15, 2025. Abstract: We present OneStop Eye Movements, a large-scale corpus of eye movements in reading, in which native (L1) speakers read newswire texts in English and answer reading comprehension questions. OneStop has 152 hours of eye movement recordings from 360 participants for 2.6 million word tokens, more data than all the existing public broad-coverage English L1 eye tracking datasets combined. The eye movement data was collected for extensively piloted reading comprehension materials comprising 486 reading comprehension questions and auxiliary text annotations geared towards behavioral analyses of reading comprehension. Importantly, OneStop includes multiple reading regimes: ordinary reading, information seeking, repeated reading of the same text, and reading simplified text. The combination of unprecedented size, high-quality reading comprehension materials, and multiple reading scenarios aims to enable new research avenues in the study of reading and human language processing. It further aims to facilitate the integration of eye tracking data in Natural Language Processing (NLP), Artificial Intelligence (AI), Human Computer Interaction (HCI), and educational applications.
Lijuan Chen; Wenjia Zuo; Xiaodong Xu How quotation types shape classic novel reading in Chinese: A comparison between human eye-movements and large language models Journal Article In: Behavioral Sciences, vol. 15, no. 12, pp. 1–21, 2025. @article{Chen2025d,Quotations play a central role in shaping narrative perspective, as they guide readers' adoption and shifting of character and narrator viewpoints. While direct speech (DS) is often assumed to enhance vividness and emotional engagement, its cognitive demands relative to free direct speech (FDS) and free indirect speech (FIS) remain unclear, particularly in Chinese classical literature. Using eye-tracking, we investigated how Chinese readers process DS, FDS, and FIS in the Four Great Classical Novels, manipulating perspective congruency through address terms versus proper names. The results revealed two key findings. First, DS consistently incurred longer fixation times than FIS, demonstrating its higher processing cost. Second, congruency effects emerged robustly across all quotation types (including FIS) in later measures, suggesting that in the specific context of classical Chinese novels, FIS does not exhibit the dual-voice effect proposed in narrative theory for this particular manipulation. Complementary analyses with large language models (LLMs) further showed that DS yielded higher surprisal and entropy than both FDS and FIS, indicating greater contextual unpredictability. By integrating human eye-movement evidence with computational modeling, this study provides evidence about the cognitive processing of DS in Chinese classical texts and raises questions about the universality of dual-voice processing in FIS across different languages and text types. |
Mingjing Chen; Li Chih Wang; Sisi Liu; Duo Liu The role of format familiarity and semantic transparency in Chinese reading: Evidence from eye movements Journal Article In: BMC Psychology, vol. 13, no. 1, pp. 1–15, 2025. @article{Chen2025h,Unlike alphabetic languages, Chinese is an ideographic language that does not contain spaces between words. Chinese readers must develop unique segmentation strategies for word recognition and reading comprehension. This study explored the role of format familiarity and semantic transparency in Chinese reading, reflecting the segmentation strategy and word processing characteristics in Chinese reading. Forty undergraduates read Chinese in familiar and unfamiliar formats, with target words being either semantically transparent or semantically opaque. We used an EyeLink 1000 to measure readers' eye movement indices, which can reflect processing characteristics of word recognition in Chinese reading. The following findings were made: (1) Familiarity with the text format affects Chinese reading performance. The fixation time in the familiar direction is short, the skipping rate is high, and the processing efficiency is higher when the fixation point is close to the word center; (2) Semantic transparency affects the segmentation strategy and word processing in Chinese reading. Chinese readers have shorter fixation times, higher reading efficiency, and a fixation point closer to the word center when reading semantically transparent words. This supported the combined access model. (3) There is a significant interaction in the early eye movement indicators, representing word processing characteristics in the early stage of Chinese reading. Specifically, the semantic-transparency effect appeared under a familiar rather than an unfamiliar format. The format familiarity effect was found in the early processing indexes of transparent words rather than opaque words. 
In the familiar format, since the meaning of the morpheme and the whole word of transparent words is consistent, readers tend to segment and process them as whole words. Due to the lack of reading experience, the reading difficulty increases in the unfamiliar format. To reduce the difficulty and promote comprehension, readers change their segmentation strategy and tend to segment transparent words by character. The word segmentation process slowed, and the format-familiarity effect did not show in the early indexes under unfamiliar format. More importantly, the separability of the lexical processing stages showed in the interaction of different indexes, which means that word segmentation and lexical recognition in Chinese reading may not be completely synchronized, supporting the Chinese E-Z reader model. |
Karen Emmorey; Emily M. Akers; Emily Saunders; Marzieh Bannazadeh; Elizabeth Droubi; Frances G. Cooley; Elizabeth R. Schotter Assessing the effects of sign language experience versus deafness on the leftward reading span Journal Article In: Cognitive Science, vol. 49, no. 12, pp. 1–21, 2025. @article{Emmorey2025,Both deafness and sign language experience impact the distribution of visual attention, and either factor could affect reading span size, the area around fixation from which useful information is obtained. In contrast to the typical asymmetrical span (smaller on the left), deaf signers have a larger leftward span than skill-matched hearing readers. We investigated whether this enhanced span is due to changes in visual attention associated with early deafness or sign language experience (right-handed signs fall in the left periphery). A gaze-contingent moving-window paradigm was used to assess the leftward reading span of hearing early signers, deaf early signers, and hearing nonsigners with similar reading abilities. The size of the leftward span for deaf and hearing signers was the same (10 characters) and was larger than that of hearing nonsigners (4 characters). Thus, sign language experience appears to be at least one source of the larger leftward span in deaf signers. However, deaf signers were more efficient readers than both hearing groups (faster reading rate, more skipped words, fewer regressions), suggesting that their greater reading efficiency does not stem solely from a larger leftward span. |
Justin T. Fleming; Matthew B. Winn Seeing a talker's mouth reduces the effort of perceiving speech and repairing perceptual mistakes for listeners with cochlear implants Journal Article In: Ear and Hearing, vol. 46, no. 6, pp. 1502–1518, 2025. @article{Fleming2025,Objectives: Seeing a talker's mouth improves speech intelligibility, particularly for listeners who use cochlear implants (CIs). However, the impacts of visual cues on listening effort for listeners with CIs remain poorly understood, as previous studies have focused on listeners with typical hearing (TH) and featured stimuli that do not invoke effortful cognitive speech perception challenges. This study directly compared the effort of perceiving audiovisual speech between listeners who use CIs and those with TH. Visual cues were hypothesized to yield more relief from listening effort in a cognitively challenging speech perception condition that required listeners to mentally repair a missing word in the auditory stimulus. Eye gaze was simultaneously measured to examine whether the tendency to look toward a talker's mouth would increase during these moments of uncertainty about the speech stimulus. Design: Participants included listeners with CIs and an age-matched group of participants with typical age-adjusted hearing (N = 20 in both groups). The magnitude and time course of listening effort were evaluated using pupillometry. In half of the blocks, phonetic visual cues were severely degraded by selectively blurring the talker's mouth, which preserved stimulus luminance so visual conditions could be compared using pupillometry. Each block included a mixture of trials in which the sentence audio was intact, and trials in which a target word in the auditory stimulus was replaced by noise; the latter required participants to mentally reconstruct the target word upon repeating the sentence. 
Pupil and gaze data were analyzed using generalized additive mixed-effects models to identify the stretches of time during which effort or gaze strategy differed between conditions. Results: Visual release from effort was greater and lasted longer for listeners with CIs compared with those with TH. Within the CI group, visual cues reduced effort to a greater extent when a missing word needed to be repaired than when the speech was intact. Seeing the talker's mouth also improved speech intelligibility for listeners with CIs, including reducing the number of incoherent verbal responses when repair was required. The two hearing groups deployed different gaze strategies when perceiving audiovisual speech. CI listeners looked more at the mouth overall, even when it was blurred, while TH listeners tended to increase looks to the mouth in the moment following a missing word in the auditory stimulus. Conclusions: Integrating visual cues from a talker's mouth not only improves speech intelligibility but also reduces listening effort, particularly for listeners with CIs. For listeners with CIs (but not those with TH), these visual benefits are magnified when a missed word needs to be mentally corrected—a common occurrence during everyday speech perception for individuals with hearing loss. These results underscore the importance of including participants with hearing loss in listening effort studies and suggest caution in assuming results from TH listeners will generalize to those with hearing loss. They also highlight the potential clinical relevance of visual speech information, for counseling patients and families and potentially for the development of audiovisual strategies to reduce listening effort. |
Chanyuan Gu; Samuel A. Nastase; Zaid Zada; Ping Li Reading comprehension in L1 and L2 readers: Neurocomputational mechanisms revealed through large language models Journal Article In: npj Science of Learning, vol. 10, no. 1, pp. 1–13, 2025. @article{Gu2025,While evidence has accumulated to support the argument of shared computational mechanisms underlying language comprehension between humans and large language models (LLMs), few studies have examined this argument beyond native-speaker populations. This study examines whether and how alignment between LLMs and human brains captures the homogeneity and heterogeneity in both first-language (L1) and second-language (L2) readers. We recorded brain responses of L1 and L2 English readers of texts and assessed reading performance against individual difference factors. At the group level, the two groups displayed comparable model-brain alignment in widespread regions, with similar unique contributions from contextual embeddings. At the individual level, multiple regression models revealed the effects of linguistic abilities on alignment for both groups, but effects of attentional ability and language dominance status for L2 readers only. These findings provide evidence that LLMs serve as cognitively plausible models in characterizing homogeneity and heterogeneity in reading across human populations. |
Xin Huang; Hezul Tin Yan Ng; Chien Ho Lin; Ming Yan; Olaf Dimigen; Werner Sommer; Urs Maurer How the dominant reading direction changes parafoveal processing: A combined EEG/eye-tracking study Journal Article In: Psychophysiology, vol. 62, no. 12, pp. 1–22, 2025. @article{Huang2025e,Reading directions vary across writing systems. Through long-term experience, readers adjust their visual systems to the dominant reading direction in their writing systems. However, little is known about the neural correlates underlying these adjustments because different writing systems do not just differ in reading direction, but also in visual and linguistic properties. Here, we took advantage of the fact that Chinese is read to different degrees in left-to-right or top-to-bottom directions in different regions. We investigated visual word processing in participants from Taiwan (both top-to-bottom and left-to-right directions) and from mainland China (only left-to-right direction). We used combined EEG/eye-tracking with a saccade-contingent parafoveal preview manipulation to investigate how the dominant reading direction shapes neural visual processing while participants read 5-word lists. Fixation-related potentials (FRPs) showed a reduced late N1 effect (preview positivity), but this effect was modulated by prior experience with a specific reading direction. Results replicated previous findings that valid previews facilitate visual word processing, as indicated by reduced FRP activation. Critically, the results provide the first neuroelectric evidence that this facilitation effect depends on experience with a given reading direction. The findings provide insight into how cultural experience shapes the way people process visual information and demonstrate how a person's everyday visual experience can influence how the brain processes parafoveal information. |
Jookyoung Jung; Andrea Révész; Matthew J. Stainer; Ana Pellicer-Sánchez; Yoojin Chung; Danni Shi The impact of gaze-contingent textual enhancement on L2 collocation learning from computer-mediated reading tasks Journal Article In: TESOL Quarterly, vol. 59, no. 4, pp. 2035–2060, 2025. @article{Jung2025a,This study examined if gaze-contingent textual enhancement could be used as an interactive focus-on-form device to promote learning of second language (L2) collocations from computer-mediated reading tasks. Seventy-five Chinese ESL users read three English texts that contained twelve target collocations, presented under one of three conditions: no highlighting, proactive highlighting (target collocations highlighted in advance), and gaze-contingent highlighting (target collocations highlighted when looked at). Participants' eye movements were captured during the reading task, and collocation form recall and recognition tests were administered immediately after and 2 weeks later. Additionally, five participants from each group took part in a stimulated recall session, eliciting their thoughts while reading. The results indicated that both highlighting techniques increased total fixation duration and count on the target collocations and improved collocation form recall and recognition scores in the posttests. Gaze-contingent highlighting demonstrated a more durable impact on the collocation recall test compared to proactive highlighting. The stimulated recall comments also revealed that gaze-contingent highlighting tended to promote attentive processing of the target collocations. These findings suggest that highlighting is a useful focus-on-form technique in task-based reading contexts, with gaze-contingent highlighting yielding potential benefits in terms of L2 collocation learning. |
Oren Kadosh; Benjamin Menashe; Yael Gera; Michal Ben-Shachar; Yoram S. Bonneh Oculomotor chronometry of spoken word structure processing Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–9, 2025. @article{Kadosh2025,Oculomotor inhibition (OMI) is the momentary inhibition of involuntary eye movements, such as saccades and blinks, following sensory stimulation. Higher-level cognitive processes, such as processing the structure of written words, have been shown to affect the duration and magnitude of the OMI. In this study we tested whether OMI measures are influenced by the processing of spoken word structures. Participants listened to Hebrew words and pseudowords presented auditorily while we recorded their microsaccades, eye blinks, and pupil dilation. Spoken pseudowords were divided into two groups based on their underlying linguistic structure, with one half containing real roots and the other half containing invented roots. Results show a greater OMI for Real-root pseudowords as compared to Invented-root pseudowords, replicating the morpheme interference effect found previously for written stimuli. OMI measures, including microsaccade and eye blink latencies, as well as pupil dilation peak latency, were consistently greater for real-root pseudowords compared to invented-root pseudowords. These findings demonstrate the sensitivity of OMI to the cognitive processing of spoken word structure, even in the absence of visual stimuli or visually directed task. The results highlight the potential role of oculomotor responses as a marker of higher-order linguistic processing. |
Lin Li; Min Gao; Xue Sui; Xiaolei Gao; Ralph Radach Retrieval of information to the left of the current fixation during reading Chinese Journal Article In: BMC Psychology, vol. 13, no. 1, pp. 1–13, 2025. @article{Li2025h,This study investigates whether readers can extract the glyphs and lexical information of Chinese words from the area to the left of the current fixation, using the eye-tracking technique with a boundary paradigm. Participants were asked to read Chinese sentences carefully, with an invisible boundary between an adjective and a noun of a well-matched attributive clause. As readers' fixation crossed the boundary, the adjective to the left of the boundary was replaced with a similar or a dissimilar pseudoword (Experiment 1), or with a word of appropriate meaning but different frequency (Experiment 2). We found that readers spent more time reading the target word under the dissimilar mask condition than the similar condition in both early- and late-stage processing indexes of eye movement. In addition, readers spent more time processing different-frequency adjective mask conditions in the late-stage processing measures of eye movement. The results suggested that readers can begin to acquire glyph information from early stages and can acquire lexical information at the late stage, from the left of the current fixation during Chinese sentence reading. The implications for oculomotor gradient processing models are discussed. |
Shun Liu; Wenpeng Hu; Xiqin Liu Different effects of verbal and visual working memory loads on language prediction Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Liu2025o,Mounting studies suggest that working memory (WM) plays a crucial role in language prediction, but how varying types of WM loads influence language prediction remains unclear. This study investigated whether verbal and visual WM loads differentially impact language predictions during speech comprehension. Using a dual-task paradigm combined with eye-tracking in a visual world setting, we asked 48 participants to complete a sentence comprehension task under concurrent WM load conditions. Participants were divided into two groups, one of which performed a visual dots memory task and the other completed a visual words memory task, with memory load being applied in half of the trials. Results revealed anticipatory gaze towards target objects, suggesting the prediction of upcoming linguistic information. Notably, early fixations during the tonal cue window indicated tonal prediction in spoken sentence processing. Furthermore, WM load significantly disrupted participants' language prediction effects, highlighting the involvement of working memory resources in this process. Importantly, the verbal memory task imposed a more severe disruption to language prediction than the visual memory task, suggesting differential roles of WM subtypes in linguistic prediction. This offers novel insights into how verbal WM and visual-spatial WM differentially influence predictive language processing. |
Anton Malko; Sasha Wilmoth; Thivina Thanabalan; Ina Bornkessel-Schlesewsky; Rachel Nordlinger; Matthias Schlesewsky; Evan Kidd Real-time thematic role assignment in Pitjantjatjara: An eye-tracking study Journal Article In: Language, Cognition and Neuroscience, pp. 1–21, 2025. @article{Malko2025,Languages differ in how core argument roles are marked and in the cues guiding their real-time comprehension. This study investigated thematic role assignment in Pitjantjatjara – an Australian Pama-Nyungan ergative language with free word order. Using visual world eye-tracking, we analysed whether a noun phrase's humanness, case marking and position in the sentence guide its interpretation as agent or patient of an event. Confirmatory analyses indicated that these properties do not affect thematic role processing at the noun phrase itself. Exploratory analyses suggested that transitivity expectations play an important role. When the visual scene depicted more typical human agents, the influence of linguistic factors was observed later in the trial: speakers committed to the thematic role faster when all cues pointed toward the same interpretation. However, visual events that violated expectations (animals/inanimate objects acting on humans) strongly attracted participants' visual attention, attenuating the influence of linguistic input. |
Emma Krane Mathisen; Nicholas Allott; Camilo R. Ronderos Cognitive mechanisms in simile and metaphor comprehension Journal Article In: Language and Cognition, vol. 17, pp. 1–19, 2025. @article{Mathisen2025,This study investigates whether metaphors and similes are processed the same way or not. Comparison accounts of metaphor claim that metaphors and similes use the same cognitive mechanisms because metaphors are implicit similes, while Categorization accounts claim that the two figures of speech require different cognitive mechanisms. It is unclear which position has the most support. We address this by introducing the distinction between single and extended metaphors to this debate. Several experiments have shown that a metaphor preceded by another metaphor is read faster than a single metaphor. If similes in extended and non-extended contexts display a similar processing difference, this would support views saying that metaphors and similes are processed the same way. If not, it would be more in line with the view that they are processed differently. Using an eye-tracking reading paradigm, we find that the difference between processing single and extended metaphors does not hold in the case of simile comprehension. This is more compatible with Categorization accounts than with Comparison accounts; if the cognitive mechanism behind metaphor and simile processing is the same, we would expect there to be a comparable processing difference between metaphors and similes in the single and extended conditions. |
Sarah Michel; Céline Pozniak; Saveria Colonna Reading new morpho-syntactic forms: The case of gender-inclusive writing in French Journal Article In: Journal of French Language Studies, vol. 35, pp. 1–30, 2025. @article{Michel2025,This study investigates the reading of novel morpho-syntactic forms, specifically gender-inclusive writing in French. Inclusive writing aims to address the generic use of the masculine form, which often encourages male mental representations over female or non-binary ones. The study focuses on contracted forms using the mid-dot, such as étudiant·e·s, which have become widespread in French despite ongoing public debate. Four experiments using eye-tracking and self-paced reading methods compared reading times for inclusive, masculine, and feminine forms. Experiment 1 found no robust difference in reading times between inclusive forms ending in “·e” and their feminine counterparts, suggesting familiarity with this form. Experiment 2 showed that inclusive forms ending in “·ne”, such as comédien·ne·s, were read more slowly than their feminine counterparts, possibly due to phonological effects. Experiment 3 tested highly pronounceable inclusive forms like auteur·rice·s, which were read more slowly initially, but this effect was short-lasting. Experiment 4 compared more or less pronounceable forms, such as chanteur·euse·s and chanteur·se·s, respectively, confirming that the degree of pronounceability affects reading times. Overall, the study concluded that the reading time for contracted inclusive forms depends on familiarity and the degree of pronounceability. |
Dorsa Mir Norouzi; Norah M. Nyangau; Yi Zhong Wang; Lori M. Dao; Cynthia L. Beauchamp; David R. Stager; Jeffrey S. Hunter; Krista R. Kelly Slow binocular reading during rapid serial visual presentation (RSVP) in children with amblyopia and the role of fixation instability Journal Article In: Vision Research, vol. 237, pp. 1–7, 2025. @article{Norouzi2025,Children with amblyopia read slower than their peers during binocular viewing. Ocular motor dysfunction typical of amblyopia may cause slow reading. It is unclear whether this is due to fixation instability or increased forward saccades. We examined whether removing the requirement of inter-word saccades helps children with amblyopia read at a similar rate as controls using a rapid serial visual presentation (RSVP) task. We also assessed whether reading rate was related to fixation instability. Children with amblyopia (n = 32) and control (n = 30) children ages 8–12 years silently read sentences presented in RSVP (single word presentation at screen center) during binocular viewing. Exposure time per sentence changed with a 2-down 1-up staircase to obtain reading speed thresholds (log words/minute [WPM]). Eye movements were tracked to determine fellow eye (FE) and amblyopic eye (AE) fixation stability during RSVP reading. Children with amblyopia read slower than controls (2.75 ± 0.47 log WPM vs 3.06 ± 0.40 log WPM), and had increased AE fixation instability (0.21 ± 0.39 log deg² vs −0.20 ± 0.18 log deg²) and increased FE fixation instability (−0.03 ± 0.34 log deg² vs −0.20 ± 0.15 log deg²) during RSVP reading. Reading rate in amblyopic children with good FE stability (n = 11) did not differ from controls and was faster than those with poor FE stability (n = 21). Children with poor FE stability read slower than controls. Removing the need for inter-word saccades (i.e., RSVP reading) did not help children with amblyopia read at control speeds. 
Our data support FE fixation instability as a source of slow reading in amblyopia. |
Adam J. Parker; Muchan Tao; Martin R. Vasilev In: Psychonomic Bulletin and Review, vol. 32, no. 6, pp. 3055–3066, 2025. @article{Parker2025c,Return-sweeps, which move the reader's gaze from the end of one line to the beginning of the next, typically result in shorter line-final fixations and longer accurate line-initial fixations compared to intra-line fixations. The mechanisms underlying these differences have been the subject of debate. To assess the linguistic and oculomotor contributions to these return-sweep fixation differences, we compared the eye movements of 41 participants during reading and z-string scanning, an oculomotor control condition that is devoid of useful linguistic content. Our results indicate that line-final fixations are shorter than intra-line fixations, while accurate line-initial fixations are longer than intra-line fixations, under both tasks, underscoring the significant role of the oculomotor system in determining fixation durations across tasks. Notably, the reduction in line-final fixation durations compared to intra-line fixations did not differ between tasks. This suggests that oculomotor coordination or visual processing, rather than linguistic processing, drives shorter line-final fixations. In contrast, the difference in the increase in duration for accurate line-initial fixations between reading and z-string scanning implies that longer accurate line-initial fixations are likely a result of lexical processing, oculomotor coordination, and visual processing. These findings advance our understanding of eye movement control by highlighting the combined influence of linguistic and oculomotor processes on return-sweep fixation durations. |
Valentina N. Pescuma; Kohei Haneda; Aine Ito; Katja Maquate; Pia Knoeferle Eye-tracking context formality effects in German and Japanese sentence processing Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 44030, 2025. @article{Pescuma2025,Context information can rapidly affect language processing. Yet, whether this holds for social information like respect and formality (e.g., talking to a teacher vs. student), and across languages adhering differently to social hierarchy (German vs. Japanese) is unclear. We manipulated (in)congruence of formality (match vs. mismatch) in German and formality-style (matching or mismatching social respect) in Japanese. Participants encountered context-target sentence pairs (mis)matching in formality (German), or within-sentence formality-style (mis)match (Japanese). For Japanese, another factor 'style' (exalted vs. humble) created this mismatch (e.g., exalted style matches actions by a teacher but not student). For German, (in)congruence between subject and verb inflection served as a morphosyntactic baseline: we expected rapid effects. Longer first-pass times for mismatches than matches at the first mismatching word would indicate immediate formality processing. In German, formality mismatch effects emerged later, and in later measures, compared to first-pass morphosyntactic mismatch effects. In Japanese, formality-style effects emerged at the first mismatching word in a "late" measure (only for exalted style), in first fixation duration, post-verbally. Our results contribute to characterizing formality effects across languages that differ in their adherence to social hierarchy, and in which social markers in language are (Japanese) versus are not (German) part of grammar. |
Claire Prendergast What young children's processing and understanding of compound words can tell us about their pragmatic development Journal Article In: Psychology of Language and Communication, vol. 29, no. 1, pp. 1–34, 2025. @article{Prendergast2025,What can we learn by observing how children process and interpret compound terms? By integrating both linguistic and pragmatic factors, typically studied in isolation, the current study revealed children's growing adherence to linguistic norms, but also their increasing openness to unconventional reference. Across three experiments employing a picture selection task for referent selection, young children were presented with lexicalized and novel exocentric and endocentric compound nouns. Examining age-related differences in referent selection, Experiment 1 (baseline), found a preference for conventional and semantically transparent referents, increasing with age. Experiment 2 showed that an individual speaker influenced referent selection across both age groups, with 5-year-olds showing more accommodation of the speaker's intended meaning. Experiment 3, examining gaze behaviour, indicated that both 3- and 5-year-olds decompose lexicalized compound terms similarly to novel compounds. This research highlights the interplay between language and social development, showcasing key stages in children's pragmatic development. |
Ying Que; Yueyuan Zheng; Janet H. Hsiao; Xiao Hu Using eye movements, electrodermal activities, and heart rates to predict different types of cognitive load during reading with background music Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Que2025a,The triarchic model of cognitive load postulates three types of cognitive load—extraneous, intrinsic, and germane load. While various approaches have been proposed to measure the three types of cognitive load, most measurements are intrusive. To address this issue, we leveraged multimodal learning analytics to collect eye movement (EM), electrodermal activity (EDA), heart rate (HR), and heart rate variability (HRV) from non-intrusive sensors and investigate whether they could predict the three types of cognitive load. We examined extraneous load (created by adding background music (BGM)), intrinsic load (created by text complexity), and germane load (reflected by comprehension accuracy) in a novel reading context with self-selected preferred BGM. One hundred and two (102) non-native English speakers were recruited. Half of them read English passages with BGM, while the other half read in silence. Results of logistic regression indicated that EM measures were predictive of the three load types, while HR/HRV measures predicted extraneous and germane load. Our findings provide evidence supporting the triarchic structure of cognitive load theory and implications for the design of non-intrusive measurement of cognitive load. |
Samuel Shaki; Oria Pitem; Martin H. Fischer Lexical priming of space depends on how deeply you think about it Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Shaki2025,There is a long debate about how the meaning of words cues our spatial attention. For implicitly spatial words such as “ROOF” or “BASEMENT”, it was recently shown that processing both the cue word and a subsequent spatial target stimulus was necessary for spatial congruity effects to emerge. Here we challenge this work by documenting that word cues alone suffice to induce congruity effects if they are processed deeply. Sixty-three healthy adults detected vertically displaced targets after looking at centrally presented cue words under three counterbalanced instructions, imposing increasing processing depth: Lexical decision, non-spatial categorization, and spatial categorization. Target detection speed revealed spatial congruity effects for both spatial and non-spatial categorization but not for lexical decision. An interpretation in terms of covert attention deployment was corroborated by concomitant vertical displacements of eye gaze. Our results reveal minimal requirements for covert and overt semantic cueing of spatial attention. |
Noam Siegelman; Sascha Schroeder; Yaqian Borogjoon Bao; Cengiz Acartürk; Niket Agrawal; Lena S. Bolliger; Jan Brasser; César Campos-Rojas; Denis Drieghe; Dušica Filipović Đurđević; Sofya Goldina; Romualdo Ibáñez Orellana; Lena A. Jäger; Ómar I. Jóhannesson; Anurag Khare; Nik Kharlamov; Hanne B. S. Knudsen; Árni Kristjánsson; Charlotte E. Lee; Jun Ren Lee; Marina P. T. Leite; Simona Mancini; Nataša Mihajlović; Ksenija Mišić; Miloslava Orekhova; Olga Parshina; Milica Popović Stijačić; Athanassios Protopapas; David R. Reich; Anurag Rimzhim; Rui Rothe-Neves; Thais M. M. Sá; Andrea Santana-Covarrubias; Irina Sekerina; Heida M. Sigurdardottir; Anna Smirnova; Priyanka Srivastava; Elisangela N. Teixeira; Ivana Ugrinic; Kerem Alp Usal; Karolina Vakulya; Ark Verma; João M. M. Vieira; Denise H. Wu; Jin Xue; Sunčica Zdravković; Junjing Zhuo; Laoura Ziaka; Victor Kuperman Wave 2 of the Multilingual Eye-Movement Corpus (MECO): New text reading data across languages Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–14, 2025. @article{Siegelman2025,This paper reports the Wave 2 expansion of the Multilingual Eye-Movement Corpus (MECO), a collaborative multi-lab project collecting eye-tracking data on text reading in a variety of languages. The present expansion comes with new eye-tracking data of N = 654 from 13 languages, collected in 16 labs over 15 countries, including in several languages that have little to no representation in current eye-tracking studies on reading. MECO also contains demographic, language use, and other individual differences data. This paper makes available the first-language reading data of MECO Wave 2 and incorporates reliability estimates of all tests at the participant and item level, as well as other methods of data validation. It also reports the descriptive statistics on all languages, including comparisons with prior similar data, and outlines directions for potential reuse. |
Catharina Tibken; Simon P. Tiffin-Richards Reading behavior as an indicator of comprehension monitoring when reading expository texts Journal Article In: Metacognition and Learning, vol. 20, no. 1, pp. 1–29, 2025. @article{Tibken2025,Comprehension of expository texts is an important prerequisite for self-regulated learning. Processes of passive validation and metacognitive monitoring are thought to be involved in building a coherent situation model of a text. Inconsistency tasks are often used to measure these processes. Several studies have shown longer reading times for inconsistent sentences than for consistent sentences. However, it remains unclear whether the additional time arises from passive disruptions of the reading process when encountering an inconsistency or from metacognitive processes of reanalysis of previous text. To address this issue, we recorded the reading behavior of 96 university students with an eye-tracker while they read inconsistent and consistent expository texts. We analyzed first-pass reading (first-pass reading time, lookbacks) and reanalysis (rereading time, revisits) at the level of the (in)consistent target word, at the sentence-final word of the target sentence, and in the pre-target text. Our results did not strongly support the hypothesis that immediate changes in reading behavior when inconsistencies are first encountered influence the detection and processing of inconsistencies. Our results partially supported the hypothesis that processes of text reanalysis, specifically of the source of inconsistency, increase the probability of identifying an inconsistency. The findings indicate that a purposeful reanalysis of passages that appear inconsistent to readers improves situation model construction for (short) expository texts about conceptually difficult topics. Learning from texts thus requires metacognitive comprehension monitoring beyond passive validation processes. |
Lijuan Wang; Steven Frisson; Yali Pan; Ole Jensen Fast hierarchical processing of orthographic and semantic parafoveal information during natural reading Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–12, 2025. @article{Wang2025f,In reading, information from parafoveal words is extracted before direct fixation; however, it is debated whether this processing is restricted to orthographic features or also encompasses semantics. Moreover, the neuronal mechanisms supporting parafoveal processing remain poorly understood. We co-registered MEG and eye-tracking data in a natural reading paradigm to uncover the timing and brain regions involved in parafoveal processing. Representational similarity analysis revealed that parafoveal orthographic neighbours (e.g., “writer” vs. “waiter”) showed higher representational similarity than non-neighbours (e.g., “writer” vs. “police”), emerging ~68 ms after fixation onset on the preceding word (e.g., “clever”) in the visual word form area. Similarly, parafoveal semantic neighbours (e.g., “writer” vs. “author”) exhibited increased representational similarity at ~137 ms in the left inferior frontal gyrus. Importantly, the degree of orthographic and semantic parafoveal processing was correlated with individual reading speed. Our findings suggest fast hierarchical processing of parafoveal words across distinct brain regions, enhancing reading efficiency. |
Tao Wang; Yue Wang; Haibo Hu; Xing Wang; Shengdong Chen; Yiming Yang An eye-movement database of bilingual language control for Chinese-English bilinguals Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–7, 2025. @article{Wang2025l,The current absence of an eye-tracking database that explores bilingual language control and how intra-sentence code-switching types influence the language control process limits our deeper understanding of bilingual control mechanisms. To address this issue, we present a database containing eye-movement recordings collected during a silent reading task combined with a language-switching paradigm. The database contains typical eye-movement measures for 160 Chinese words and their translation-equivalent English words, from 40 high-proficiency and 40 low-proficiency participants, across 1280 Chinese, English, and intra-sentential code-switching sentences. This database enables researchers to test the impact of both intra-sentential code-switching and second-language proficiency on bilingual language control and the underlying cognitive mechanisms. |
Allyson Copeland; Brennan R. Payne Co-registered eye-movements and brain potentials reveal multiple effects of context across the visual field in natural reading Journal Article In: Psychophysiology, vol. 62, no. 11, 2025. @article{Copeland2025,This study investigates how expectancy and plausibility influence behavioral and neural measures of language processing during naturalistic reading comprehension. Prior event-related potential (ERP) studies show evidence of distinct post-N400 positivities to violations of semantic expectancy and plausibility using artificial serial presentation but have yet to establish these phenomena during naturalistic reading. Therefore, we recorded simultaneous eye movements and EEG while participants read highly constraining sentences with expected, unexpected (but plausible), and anomalous target words. Time locked to the pre-target word, we observed a contextually graded parafoveal N400 effect. The N400 was facilitated (i.e., reduced) when the word was subsequently fixated, suggesting trans-saccadic integration of semantic features. At target fixation, we also observed a late anteriorly distributed positivity to unexpected target words and a posteriorly distributed positivity to anomalous target words, effects that were not clearly present when time locked to the pre-target word. Eye-tracking (ET) measures show that readers were sensitive to both expectancy and plausibility at target fixation. In conclusion, we show that readers can begin accessing semantic information in parafoveal vision, but higher-level semantic processing may require the orchestration of both parafoveal and foveal representations. |
Cécile Fabio; Christoph Kayser Mixed evidence for the rhythmicity of auditory perceptual judgements in humans Journal Article In: eLife, vol. 14, pp. 1–24, 2025. @article{Fabio2025,Numerous studies advocate for a rhythmic mode of perception. However, the evidence in the context of hearing remains inconsistent. We propose that the divergent conclusions drawn from previous work stem from conceptual and methodological issues. These include ambiguous assumptions regarding the origin of rhythmicity, variations in tasks and attentional demands, and differing analytical approaches for statistical testing. To address these points, we conducted a series of experiments in which human participants performed auditory tasks involving monaural targets presented against binaural white noise backgrounds, while also recording eye movements. These experiments varied in whether stimuli were presented randomly or required motor initialisation, the necessity of memory across trials, and the manipulation of attentional demands. Our findings challenge the notion of universal rhythmicity in hearing, but support the existence of paradigm- and ear-specific fluctuations in sensitivity and biases at multiple frequencies. The rhythmicity for sounds in the left and right ears appears independent among participants, and the rhythmicity in performance is possibly linked to oculomotor activity and attentional requirements. Overall, these results may help to resolve conflicting conclusions drawn in previous work and provide specific avenues for further studies into the rhythmicity of auditory perception. |
Andrea Helo; María Teresa Martin-Aragoneses; Ernesto Guerra; Carmen Julia Coloma In: Journal of Communication Disorders, vol. 118, pp. 1–14, 2025. @article{Helo2025,This study investigated comprehension of prepositions (“con” vs “sin” and “bajo” vs “sobre”; in English: ‘with' vs ‘without' and ‘under'/‘below' vs ‘on') and prepositional locutions (“delante de” vs “detrás de” and “dentro de” vs “fuera de”; in English: ‘in front of' vs ‘behind' and ‘in'/‘inside' vs ‘out of') (hereafter, PPL) in Spanish-speaking preschoolers with Developmental Language Disorder (DLD) compared to their typically developing peers. We used an experimental approach assessing visual preference in real-time through eye tracking. The results showed that both groups demonstrated comprehension of the evaluated PPL, as evidenced by their visual preference for the image matching the sentence heard. However, children with DLD took longer to identify the correct image and tended to display a weaker preference pattern, although differences were not significant. Moreover, the prepositional locutions ‘in front of' and ‘behind'—which are typically acquired later among the morphological items assessed—were particularly challenging for children with DLD, who did not show a consistent visual preference for these prepositions. These findings suggest that, with respect to prepositions, children with DLD follow a trajectory similar to that of typically developing children, though characterized by a mild developmental delay. At a practical level, subtle difficulties in processing prepositions might have meaningful effects in everyday contexts, where rapid comprehension of language input is critical for children's participation and learning at school. |
Chen-En Ho; Jie Li Tsai Reader differences in navigating English–Chinese sight interpreting/translation Journal Article In: Poznan Studies in Contemporary Linguistics, vol. 61, no. 4, pp. 457–481, 2025. @article{Ho2025,Reading is key in sight interpreting/translation (SiT), a task that in this study involved reading and orally rendering text at one's own pace in a diplomatic interpreting scenario. However, little attention has been given to how different reading processes are used. This study bridges this gap by investigating SiT reading processes, using silent reading (SR) and reading aloud (RA) for comparison to understand how reading varies between tasks and participants. Experienced interpreters, interpreting trainees, and untrained bilinguals were recruited to conduct SR, RA, and SiT. Their eye movement data underwent cluster analysis based on fixation duration and saccade length plus direction. Five distinct reading processes were identified – skimming, rauding, two levels of problem-solving, and anchoring. While the overall reading pattern is similar, nuanced differences tell groups and tasks apart. Owing to space limitations, this paper reports only the findings centring on the participants. Significant differences exist only between the trained (i.e., interpreters and trainees combined) and untrained cohorts in three processes, namely skimming, rauding (normal reading), and problem-solving, almost exclusively in SiT. Our findings attest to the multifaceted nature of SiT reading processes and offer an alternative to accounts that associate fixation duration solely with cognitive load, helping us better understand SiT reading. |
Vegas Hodgins; Mehrgol Tiv; Chaimaa El Mouslih; Karla Tarín; Naima Mansuri; Antonio Iniesta; Debra Titone Bilingual irony processing during natural reading: A within-participant look at L1 versus L2 effects using eye-movement measures Journal Article In: Memory and Cognition, vol. 53, pp. 2493–2508, 2025. @article{Hodgins2025,Ironic utterances (i.e., when people intend the opposite of what they say) are often more difficult to understand than literal utterances during natural reading (reviewed in Olkoniemi & Kaakinen, Canadian Journal of Experimental Psychology, 75, 99-106, 2021). Moreover, ironic compliments (“Good job!” spoken upon a failure) tend to be even more challenging compared to ironic criticisms (“Terrible job!” spoken upon a success) (Pexman & Olineck, Discourse Processes, 33, 199-217, 2002). Relevant here, understanding irony is thought to require mentalizing capacity, which may be impacted by bilingual language experience (Tiv et al., Memory & Cognition, 51, 253-272, 2023) and differ for first and second language reading (L1 and L2, respectively). In this study, bilingual adults read sentences containing ironic compliments, criticisms, and matched literal statements in both their L1 and their L2 (blocked and counterbalanced), enabling a rigorous within-participant evaluation of L1 versus L2 irony processing. Linear mixed-effects modelling demonstrated the increased difficulty of ironic compliments during reading but indicated no group-level, within-participant L1 versus L2 irony differences. However, a significant effect of bilingual language experience emerged, in that individual differences in how readers distribute use of their L1 and L2 (i.e., language entropy) patterned with faster go-past times for ironic sentences during L1 reading. These findings cohere with the idea that bilingual language experience may relate to mentalizing processes that underlie irony resolution (e.g., Tiv et al., Memory & Cognition, 51, 253-272, 2023). |
Clare Kirtley; Christopher Murray; Phillip B. Vaughan; Benjamin W. Tatler Coming up next: The extent of the perceptual window in comic reading Journal Article In: Cognitive Science, vol. 49, no. 11, pp. 1–31, 2025. @article{Kirtley2025,Recent models of sequential narratives suggest that readers form predictions about upcoming panels as they read. However, previous work has considered these predictions only in terms of currently viewed information. In the current studies, we investigate to what extent readers are using information from un-fixated panels in comic stories. Using the moving-window paradigm, we studied whether reading behavior was disrupted when upcoming panels were unavailable to the reader, in short comic strips (Experiment 1) and multipage comics (Experiment 2). Both studies showed the greatest disruption to reading when all peripheral information was removed, but such changes persisted when only partial peripheral information was available. The results indicate that readers are making use of information from at least two panels ahead of the current fixation location. We consider these findings in relation to the PINS model of comic reading, and how the role of peripheral information might be further explored. |
Maksim Markevich; Anastasiia Streltsova The influence of text genre on eye movement patterns during reading Journal Article In: Journal of Eye Movement Research, vol. 18, no. 6, pp. 1–22, 2025. @article{Markevich2025,Successful reading comprehension depends on many factors, including text genre. Eye-tracking studies indicate that genre shapes eye movement patterns at a local level. Although the reading of expository and narrative texts by adolescents has been described in the literature, the reading of poetry by adolescents remains understudied. In this study, we used scanpath analysis to examine how genre and comprehension level influence global eye movement strategies in adolescents (N = 44). Thus, the novelty of this study lies in the use of scanpath analysis to measure global eye movement strategies employed by adolescents while reading narrative, expository, and poetic texts. Two distinct reading patterns emerged: a forward reading pattern (linear progression) and a regressive reading pattern (frequent lookbacks). Readers tended to use regressive patterns more often with expository and poetic texts, while forward patterns were more common with a narrative text. Comprehension level also played a significant role, with readers with a higher level of comprehension relying more on regressive patterns for expository and poetic texts. The results of this experiment suggest that scanpaths effectively capture genre-driven differences in reading strategies, underscoring how genre expectations may shape visual processing during reading. |
Tracy E. Reuter; Lauren L. Emberson Relative contributions of predictive vs. associative processes to infant looking behavior during language comprehension Journal Article In: Journal of Child Language, vol. 52, no. 6, pp. 1225–1248, 2025. @article{Reuter2025,Numerous developmental findings suggest that infants and toddlers engage predictive processing during language comprehension. However, a significant limitation of this research is that associative (bottom-up) and predictive (top-down) explanations are not readily differentiated. Following adult studies that varied predictiveness relative to semantic-relatedness to differentiate associative vs. predictive processes, the present study used eye-tracking to begin to disentangle the contributions of bottom-up and top-down mechanisms to infants' real-time language processing. Replicating prior results, infants (14-19 months old) use successive semantically-related words across sentences (e.g., eat, yum, mouth) to predict upcoming nouns (e.g., cookie). However, we also provide evidence that using successive semantically-related words to predict is distinct from the bottom-up activation of the word itself. In a second experiment, we investigate the potential effects of repetition on the findings. This work is the first to reveal that infant language comprehension is affected by both associative and predictive processes. |
Laura Schwalm; Ralph Radach; Victor Kuperman The metrics of regressive saccades during reading in 13 written languages Journal Article In: Vision Research, vol. 236, pp. 1–12, 2025. @article{Schwalm2025,A well-documented phenomenon in research on eye movement control during reading is the systematic relationship between the landing positions of forward saccades and target word characteristics. However, the behaviour of regressive saccades, which move the eyes in the opposite direction, remains less explored. This study delves into the landing positions of regressive saccades, emphasizing the distinction between intra-word and inter-word regressions, across diverse languages. Using data from the MECO L1 project, which includes eye-tracking data from 589 participants across 13 languages, we scrutinize the precise landing positions of regressions vis-à-vis forward saccades. Our analysis shows a robust effect of launch distance on landing positions for progressive saccades, with undershoots increasing as launch distance grows and overshoots with shorter launch distances. In contrast, regressive inter-word saccades show only minimal variation in landing positions, typically landing near the centre of the target word regardless of launch distance or word length. Intra-word regressions, however, display a pattern similar to progressive saccades, where the landing position is influenced by launch distance, tending to overshoot the optimal viewing position as the launch site moves away from the word's end. This pattern is consistent across all languages. These findings support the notion of cross-linguistic universality in oculomotor control mechanisms during reading, particularly the precision of regressive saccades. They align with the spatial coding hypothesis, suggesting that precise spatial memory of word positions guides regressive saccades. |
Tanvi Thakkar; Jarett Knoepker; Stephen R. Dennison; Joseph P. Roche; Ruth Y. Litovsky Spatial separation enhances speech intelligibility but increases listening effort with session-dependent variability in pupillometric measures Journal Article In: Frontiers in Neuroscience, vol. 19, pp. 1–13, 2025. @article{Thakkar2025,Introduction: The current understanding of the cognitive load of listening effort has been advanced by combining speech intelligibility and pupillometry measures. However, the reliability of pupil dilation metrics in complex listening scenarios like spatial release from masking (SRM) remains uncertain. This study investigated how spatial separation of sound sources impacts listening effort (via peak pupil dilation, PPD) and speech intelligibility. Methods: Speech intelligibility and listening effort were simultaneously measured under co-located and symmetric, spatially-separated conditions at varying signal-to-noise ratios (SNRs). Results: Results showed that although spatial separation improved speech intelligibility, it did not yield a corresponding reduction in listening effort. Instead, listening effort increased as SNR became more challenging. Furthermore, test–retest reliability was moderate-to-high for speech intelligibility but only moderate-to-low for PPD, with greater consistency observed at more challenging SNRs. Discussion: These findings indicate that obtaining stable PPD measures within an SRM paradigm can be challenging. Test-session reliability is weak when combining SRM paradigms with measures of listening effort, which may reduce statistical power due to factors such as sample size, number of trials, and sessions tested. This is further limited by the relatively small and homogeneous sample of young, typical-hearing adults. Future studies should include a larger and more diverse participant group to assess the generalizability of these results. |
Andi Wang; Ana Pellicer-Sánchez Exploring L2 learners' processing of unknown words during subtitled viewing through self-reports Journal Article In: International Review of Applied Linguistics in Language Teaching, vol. 63, no. 4, pp. 2379–2408, 2025. @article{Wang2025d,Studies have shown the benefits of subtitled viewing for incidental vocabulary learning, but the effects of different subtitling types varied across studies. The effectiveness of different types of subtitled viewing could be related to how unknown vocabulary is processed during viewing. However, no studies have investigated L2 learners' processing of unknown words in viewing beyond exploring learners' attention allocation. The present research followed a qualitative approach to explore L2 learners' processing of unknown words during subtitled viewing under three conditions (i.e., captions, L1 subtitles, and bilingual subtitles) by tapping into learners' reported awareness of the unknown words and the vocabulary processing strategies used to engage with unknown words. According to stimulated recall data (elicited by eye-tracking data) from 45 intermediate-to-advanced-level Chinese learners of English, captions led to increased awareness of the unknown words. Moreover, the types of strategies learners used to cope with unknown vocabulary were determined by subtitling type. |
Micaela Wiseman; Rachel Yep; Madeline Wood Alexander; Christopher B. Pople; Lucas Perri; Georgia Gopinath; Maria Vasileiadi; Jessica Robin; Michael J. Spilka; William Simpson; Yana Yunusova; Douglas P. Munoz; Brian C. Coe; Donald Brien; Sean Nestor; Nir Lipsman; Peter Giacobbe; Jennifer S. Rabin Objective speech measures capture depressive symptoms and associated cognitive difficulties Journal Article In: Translational Psychiatry, pp. 1–9, 2025. @article{Wiseman2025,Psychiatry lacks objective biomarkers for assessing depression, relying instead on subjective measures, such as the Hamilton Depression Rating Scale (HAMD-17). This study examined whether speech features could serve as objective markers of depressive symptoms and its associated cognitive difficulties. Sixty-six individuals with major depressive disorder (MDD) and 54 non-depressed control participants completed a speech assessment, responding to the prompt: “Please tell me how you are feeling today.” Linguistic (valence, emotional intensity, agency) and acoustic (pitch, pitch variance, speech rate, time spent pausing) features were derived from natural language processing. These speech features were analyzed individually and collectively as a composite score representing overall speech disturbance. A subset of participants (40 with MDD, 38 controls) also completed a validated executive function task. ANCOVA models compared speech features between groups. Linear regression models examined associations between speech features, depression severity (HAMD-17), and performance on an executive function task. Compared to controls, individuals with MDD used language that was more negatively valenced, emotionally intense, and less agentic. They also demonstrated lower pitch, slower speech rate, and more time spent pausing. The composite speech score also differed between groups. Speech features and executive function were not associated with depression severity, as measured by the HAMD-17. However, several speech features were associated with executive function. Taken together, these findings suggest that speech features may provide a scalable, objective method for detecting depressive symptoms and associated executive difficulties. |
Aakash Agrawal; Stanislas Dehaene From retinotopic to ordinal coding: Dissecting the cortical stages of visual word recognition Journal Article In: PNAS, vol. 122, no. 43, pp. 1–11, 2025. @article{Agrawal2025,Fluent reading requires the brain to precisely encode the positions of letters within words, distinguishing for instance FORM and FROM across variations in size, position, and font. Early visual areas, however, are known to encode retinotopic positions, and how these representations get transformed into a position-invariant neural code remains unclear. Building upon a computational model of reading, we used 7T functional MRI and magnetoencephalography (MEG) to reveal a cortical hierarchy in which early visual areas (V1–V4) predominantly encode retinotopic information, whereas higher-level regions, including the visual word form area, transition to an ordinal letter-position code. MEG analyses confirm that retinotopic encoding emerges early (60 to 200 ms), followed by a shift toward ordinal representations in later time windows (220 to 450 ms). Despite this transition, word position remained a dominant factor across all time points, suggesting a concurrent coding of both retinotopic and abstract positional information. These findings uncover the spatiotemporal dynamics by which the human brain transforms visual input into structured prelexical representations, shedding light on the cortical stages of reading and their developmental and clinical implications. |
Yuen Lai Chan; Xi Cheng; Chi Shing Tse Eye on ambiguity: Effects of valence and valence ambiguity on silent word reading and surprise memory recall using pupillometry Journal Article In: Psychonomic Bulletin & Review, vol. 32, no. 5, pp. 2158–2166, 2025. @article{Chan2025a,This study investigates the impact of valence and valence ambiguity on silent word reading and memory recall using pupillometry. While emotional stimuli are found to influence pupil dilation, there have been mixed findings for the effects of valence in the literature. This study aimed to examine this effect by controlling for extraneous lexical variables (e.g., word and character frequency) and considering valence ambiguity as a distinct factor in linear mixed effects modelling analyses. Native Cantonese-speaking university students (N = 94) engaged in a silent reading task of 90 two-character Chinese words, with their pupillary responses being recorded, followed by a surprise memory recall test. The words varied in valence (negative, neutral, positive) and valence ambiguity (high, low). Analyses revealed that valence ambiguity increased pupil dilation, providing support for the deeper and more elaborated processing associated with words with higher valence ambiguity. While there was no significant effect of valence on pupil dilation, the valence × valence ambiguity interaction showed that negative words with higher ambiguity elicited greater pupil dilation than those with lower ambiguity. Memory recall performance was enhanced by valence ambiguity, independent of word valence, indicating that words with higher valence ambiguity foster more elaborated memory encoding even when it is incidental. These findings further our understanding of pupil dilation in emotional processing during silent word reading and the role of valence ambiguity during memory encoding. |
Xuemei Chen; Xiaoyang Qiu; Suiping Wang The role of working memory in structural priming during language comprehension: Evidence from a visual-world paradigm Journal Article In: Psychonomic Bulletin & Review, vol. 32, no. 5, pp. 2375–2388, 2025. @article{Chen2025m,Many studies have found that structural priming in production relies on cognitive resources (e.g., working memory), suggesting a resource-constrained mechanism of syntactic processing. To investigate the mechanism of structural priming in comprehension (automatic vs. resource-constrained), we conducted two eye-tracking experiments to test the role of working memory (i.e., a number series recall task between prime and target exerting high or low working memory load) in structural priming during visual-world comprehension. The priming effect is evaluated by the proportion of looks to predicted referents for two critical time windows in target sentence processing: the target verb and the first syllable of the first postverbal noun. When prime and target involved different verbs (Experiment 1), structural priming in both time windows was similar between the high- and low-load conditions. When prime and target involved the same verbs (Experiment 2), structural priming in the time window of the first syllable of the first noun phrase was weaker in the high-load than in the low-load condition. Within the time window of the first syllable of the first noun phrase, a lexical boost effect occurred in the low-load condition but not in the high-load condition. Overall, structural priming in comprehension is partially automatic, while lexically mediated structural priming is modulated by working memory, supporting the implicit learning theory. |
Veronica D'Alesio; Anna Teresa Porrini; Matteo Greco; Andrea Moro An investigation of nominal copular sentences in three reading paradigms: Acceptability judgments, self-paced reading, and eye-tracking Journal Article In: Lingua, vol. 326, pp. 1–19, 2025. @article{DAlesio2025,This work aims to investigate the elaboration of (nominal) copular sentences in three different experimental paradigms involving a reading task: an acceptability judgment, a self-paced reading and an eye-tracking experiment. Nominal copular sentences (NCs), such as [DP1 The picture of the wall] is [DP2 the cause of the riot], represent a challenging phenomenon for, at least, two reasons: (i) they can be divided into two subtypes, namely canonical and inverse NCs, related to the different order of the DPs (respectively, [DPsubject is DPpredicate] in canonical form vs. [DPpredicate is DPsubject] in inverse form); (ii) these two subtypes are associated with one and the same type of string [DP is DP], although their underlying structure is completely different. Our results show that no differences emerge in the off-line paradigm, i.e. in the acceptability judgments. On the other hand, the self-paced reading task and the eye-tracking experiments show an asymmetry between these two types of NCs, with higher processing costs for inverse NCs. More specifically, the DPsubject is looked at more often and for longer times in inverse NCs. Moreover, when comparing the DPsubject to the DPpredicate in postverbal position in the eye-tracking experiment, sentence structure emerged as a good predictor of total reading time and regression path duration, even after taking into account the length and frequency of the words used. These results strongly support the hypothesis that syntactic structure is a primary factor in generating a different reading pattern between the same string of lexical types of items. |
Zheng Hong Guan; Sunny S. J. Lin; Ying Chih Chen In: Educational Technology and Society, vol. 28, no. 4, pp. 183–204, 2025. @article{Guan2025,Information literacy is crucial in learning from multiple digital texts. Understanding when and how cognitive processes are taxed in developing information literacy is urgent. Previous research mainly used log data, think-aloud protocols, or note-taking to explore digital reading processes, but fine-grained cognitive processes need further investigation. This study combines eye-tracking technology, click times, and essay writing to examine in-depth multiple-text reading. Forty post-secondary novices read multiple history texts and wrote essays expressing their opinions. They read two topics—one familiar and one unfamiliar—and were instructed to write either an argument or a summary. Each topic had four texts connected through hyperlinks, including three paragraphs: background, source, and content. Eye-movement data revealed that during early reading, novices allocated attention to different paragraphs depending on the task instruction. For the familiar topic, the argument group selectively reread content paragraphs longer for integration, while the summary group evenly distributed rereading time across paragraphs. Both groups had more source-content back-and-forth saccade counts. The argument group had more click times for hyperlink selection than the summary group. In their essays, the argument group produced more text-based inferences and higher-quality writing for both topics. Conversely, the summary group demonstrated the poorest comprehension quality for the familiar topic. This study provides educators with guidance on selecting appropriate reading materials for diverse students. Educators may assign argumentative tasks for familiar topics to deepen comprehension, and summary tasks for unfamiliar topics to reduce cognitive load and support learning. These insights contribute to cultivating information literacy through multiple-text reading. |
Yu-Jeh Liu; Mounya Elhilali Sound identity, salience, and perceived importance in complex auditory environments Journal Article In: The Journal of the Acoustical Society of America, vol. 158, no. 4, pp. 3489–3502, 2025. @article{Liu2025s,Human listeners effortlessly identify salient sounds in their environments, yet the relationship between sound class identity, auditory salience, and perceived importance in complex auditory scenes remains poorly understood. In this study, we investigate these connections with scores derived from subject responses using a scoring mechanism, combined with auditory salience and pupillometry data. By leveraging both psychophysical experiments as well as a large-scale annotated dataset, our findings reveal biased responses and higher importance rankings for specific sound classes, such as alarm sounds and speech, and highlight a consistent perceptual ordering of sounds based on their identity. Salience judgments and pupillary responses further support this distinction, showing that the level of heightened arousal follows the same sound class order. The results underscore the influence of semantic mappings on both bottom-up and top-down sensory processing, suggesting that sound identity plays a crucial role in shaping perceptual judgment and neural responses. Despite dataset limitations, our findings offer insights into auditory scene analysis and provide a novel framework for understanding how auditory perception prioritizes sounds based on both their inherent properties and learned semantic associations. |
Marina Norkina; Daria Chernova; Svetlana Alexeeva; Maria Harchevnik In: Journal of Eye Movement Research, vol. 18, no. 5, pp. 1–27, 2025. @article{Norkina2025,Oculomotor reading behavior is influenced by both universal factors, like the “big three” of word length, frequency, and contextual predictability, and language-specific factors, such as script and grammar. The aim of this study was to examine the influence of the “big three” factors on L2 reading focusing on a typologically distant L1/L2 pair with dramatic differences in script and grammar. A total of 41 native Chinese-speaking learners of Russian (levels A2-B2) and 40 native Russian speakers read a corpus of 90 Russian sentences for comprehension. Their eye movements were recorded with EyeLink 1000+. We analyzed both early (gaze duration and skipping rate) and late (regression rate and rereading time) eye movement measures. As expected, the “big three” effects influenced oculomotor behavior in both L1 and L2 readers, being more pronounced for L2, but substantial differences were also revealed. Word frequency in L1 reading primarily influenced early processing stages, whereas in L2 reading it remained significant in later stages as well. Predictability had an immediate effect on skipping rates in L1 reading, while L2 readers only exhibited it in late measures. Word length was the only factor that interacted with L2 language exposure which demonstrated adjustment to alphabetic script and polymorphemic word structure. Our findings provide new insights into the processing challenges of L2 readers with typologically distant L1 backgrounds. |
Eleni Peristeri; Michaela Nerantzini; Timothy C. Papadopoulos; Spyridoula Varlokosta Autistic children's reading comprehension revisited through eye-tracking: Evidence from bridging inferencing Journal Article In: Research in Autism, vol. 128, pp. 1–14, 2025. @article{Peristeri2025,Pragmatic language impairments are universally observed in Autism Spectrum Disorders. Inferencing, i.e., combining information within text and using background knowledge to go beyond what is explicitly stated in the text to make a conjecture, has been a challenging pragmatic domain for autistic children. Most studies that have investigated inferencing in autism have used behavioral measurements. The objective of the current study was to assess inferencing in autistic and age-matched typically-developing children by employing eye-tracking to capture children's ‘in-the-moment' eye gaze behaviors while reading short passages. We also investigated links between children's inferencing and executive function skills. The study included 19 autistic children and 19 age-matched typically-developing children. Groups were administered an eye-tracking task that assessed children's inferencing skills while reading short vignettes that differed in a critical word that supported inferencing or not. Children were asked to read the vignettes and then answer questions that were either primed or not by the inference. The two groups were also assessed on executive functions, including working memory and attention. We found that autistic children exhibited lower comprehension accuracy in passages not primed by inferencing as compared to those that were primed, and also spent more looking time on primed passages than the typically-developing children. Moreover, while inferencing in typically-developing children was significantly related to their executive function skills, no such relations were observed for the autistic group. The overall findings show that reading comprehension for the autistic children was reduced when questions did not anchor to previous discourse through bridging inferencing. Finally, inferencing in the autistic group did not rely on executive functions to the same extent as in typically-developing children. |
Sotiris Plainis; Angeliki Gleni Soft toric contact lens correction of moderate astigmatism improves digital reading performance and oculomotor behaviour: An eye movement study Journal Article In: Contact Lens and Anterior Eye, vol. 48, no. 5, pp. 1–9, 2025. @article{Plainis2025,Purpose: Reading on digital devices is a crucial daily activity. In this study, oculomotor behaviour and reading performance were evaluated in patients with low to moderate astigmatism corrected with spherical or toric lenses, before and after a short reading task on a tablet. Methods: Silent reading performance and visual acuity (VA) of twenty-four volunteers (age: 30 ± 8 yrs) were assessed binocularly with IReST passages (0.3 logMAR print size) for two contrast levels (100 % and 10 %) at 40 cm screen distance. Participants were corrected for their binocular myopic astigmatism using daily disposable contact lenses (PRECISION1, Alcon Laboratories) in either single vision (spherical) or toric design (of 0.75D or 1.25D cylinder). Recordings were repeated after a 10-minute reading activity on a tablet. Eye movements were monitored with an infrared eye tracker. Data analysis included computation of reading speed and a range of oculomotor indices. Results: Average VA improved at near with toric compared to spherical lens correction at both contrast levels (high: 0.06 ± 0.07 logMAR |
Aysen Tuzcu The effects of input modality on novel L2 vocabulary processing during reading: A closer look into attention and awareness Journal Article In: Language Learning & Technology, vol. 29, no. 3, pp. 30–48, 2025. @article{Tuzcu2025,Digital tools such as audiobooks enable language learners to read and listen to a text simultaneously. Access to such bimodal input has been argued to enhance L2 vocabulary learning from reading by directing learners' attention to novel words and facilitating the development of form-meaning links (Long, 2017). However, research on the cognitive processes underlying vocabulary learning from bimodal input remains limited. This study compared L2 readers' attention and awareness levels of novel words under reading-only (RO) and reading-while-listening (RWL) conditions. Sixty-three L2 English speakers were randomly assigned to RO and RWL groups and read a 9500-word text containing 24 target pseudowords over two days while their eye movements were recorded. Attention was assessed through eye-tracking data, while awareness levels and learning of the target pseudowords were assessed using retrospective verbal reports and a form recognition test. Results indicated similar levels of attention and awareness across both groups. However, a positive relationship between attention and awareness emerged only in the RO group. These results suggest that while bimodal input may not significantly affect L2 readers' attention or awareness levels, it influences the relationship between these constructs. Additionally, both attention and awareness predicted word form learning, indicating their importance in vocabulary learning. |
João Vieira; Elisângela Teixeira; Erica Rodrigues; Hayward J. Godwin; Denis Drieghe When function words carry content Journal Article In: Quarterly Journal of Experimental Psychology, vol. 78, no. 10, pp. 2235–2248, 2025. @article{Vieira2025a,Studies on eye movements during reading have primarily focussed on the processing of content words (CWs), such as verbs and nouns. Those few studies that have analysed eye movements on function words (FWs), such as articles and prepositions, have reported that FWs are typically skipped more often and, when fixated, receive fewer and shorter fixations than CWs. However, those studies were often conducted in languages where FWs contain comparatively little information (e.g., the in English). In Brazilian Portuguese (BP), FWs can carry gender and number marking. In the present study, we analysed data from the RASTROS corpus of natural reading in BP and examined the effects of word length, predictability, frequency and word class on eye movements. Very limited differences between FWs and CWs were observed, mostly restricted to the skipping rates of short words, such that FWs were skipped more often than CWs. For fixation times, differences were either nonexistent or restricted to atypical FWs, such as low frequency FWs, warranting further research. As such, our results are more compatible with studies showing limited or no differences in processing speed between FWs and CWs when influences of word length, frequency and predictability are taken into account. |
Andi Wang The integration of auditory and textual input in vocabulary learning from subtitled viewing: An eye-tracking study Journal Article In: Language Learning & Technology, vol. 29, no. 3, pp. 70–91, 2025. @article{Wang2025c,Numerous studies have documented the benefits of watching audio-visual materials with on-screen text for L2 vocabulary learning (Montero Perez, 2022). The provision of both auditory and textual input allows learners to link auditory and written forms (or L1 meanings) of unknown words during viewing, which could potentially facilitate vocabulary learning. However, little is known about the dynamics of text-audio synchrony in subtitled viewing and how the processing of written words in relation to the audio may lead to vocabulary learning. Eighty-one intermediate-to-advanced Chinese learners of English watched an English documentary with one of three on-screen texts (i.e., captions, L1 subtitles, and bilingual subtitles), while their eye movements were monitored. Participants' awareness of 17 unknown words and vocabulary learning gains were assessed via stimulated recalls and three vocabulary tests. Results revealed that captions facilitated text-audio synchronisation, whereas L1 subtitles generally led to reading ahead and skipping. Bilingual subtitles enabled synchronisation of L1 translations with L2 audio but often resulted in skipping L2 forms. Most text-audio processing behaviours led to moderate predicted probabilities of vocabulary learning and participants' reported awareness, with no significant within-group difference, except for the processing of L2 unknown words in bilingual subtitles. |
Roslyn Wong; Aaron Veldre Anticipatory prediction in older readers Journal Article In: Memory and Cognition, vol. 53, no. 7, pp. 2312–2331, 2025. @article{Wong2025,It is well-established that skilled, young-adult readers rely on predictive processing during online language comprehension; however, fewer studies have investigated whether this extends to healthy, older adults (60 + years). The aim of the present research was to assess whether older readers make use of lexical prediction by investigating whether they demonstrate processing costs for incorrect predictions in a controlled experimental design. The eye movements of a sample of older adults (60–86 years) were recorded as they read strongly and weakly constraining sentences containing a predictable word or an unpredictable alternative that was either semantically related or unrelated. To determine whether predictive processing depends on the stimuli presentation format, a second experiment presented the same materials in a self-paced reading task in which each word of a sentence appears one at a time at the readers' own pace. Older adults showed processing benefits for expected input on eye-movement measures of reading. They also showed processing costs for unexpected input across both methodologies, but only when semantically unrelated to the best completion. Taken together, the results suggest that the use of predictive processes remains relatively preserved with age. The implications of these findings for understanding whether prediction is a fundamental component of online language comprehension are discussed. |
Lauren S. Baron; Anna M. Ehrhorn; Peter Shlanta; Jane Ashby; Bethany A. Bell; Suzanne M. Adlof Orthographic influences on phonological processing in children with and without reading difficulties: An eye-tracking study Journal Article In: Reading and Writing, vol. 38, no. 7, pp. 1925–1948, 2025. @article{Baron2025,Phonological processing is an important contributor to decoding and spelling difficulties, but it does not fully explain word reading outcomes for all children. As orthographic knowledge is acquired, it influences phonological processing in typical readers. In the present study, we examined whether orthography affects phonological processing differently for children with current reading difficulties (RD), children with a history of reading difficulties who are currently presenting with typical word reading skills (Hx), and children with typical development and no history of reading difficulties (TD). School-aged children completed a phonological awareness task containing spoken words and pictures while eye movements were recorded. In this task, children had to pair a spoken stimulus word with one of four pictures that ended with the same sound. Within the task, stimulus-target picture pairs varied in the congruency and consistency of the orthographic and phonological mappings of their final consonant sounds. Eye movements revealed that children with typical word reading (the Hx and TD groups) showed better discrimination of the target from the foils compared to peers with underdeveloped word reading skills. All children were more accurate when stimulus-target pairs were congruent and consistent than when they were incongruent or inconsistent. Orthography plays an important role in the completion of phonological awareness tasks, even in the absence of written words and for children with a wide range of reading abilities. Results highlight the importance of considering orthography during interventions for phonological awareness and word reading. |
Robert J. P. M. Chamalaun; Tijn Schmitz; Mirjam T. C. Ernestus Silent morphological information in a word's spelling also affects natural reading behavior Journal Article In: Morphology, vol. 35, no. 3, pp. 417–448, 2025. @article{Chamalaun2025,Previous research suggests that, when performing experimental tasks, readers rely on the morphological information incorporated in a word's spelling even if this information is not reflected in the word's pronunciation. We investigated whether readers do so as well when they read text for comprehension under more natural conditions, that is, when participants read tweets, which were not composed for experimentation, and when they can read and skip words in the order they would like to. Two eye-tracking experiments were conducted to investigate whether participants who read for comprehension suffer from reading a homophone of the intended word, which differs from the intended word in the morphological information in the spelling. Experiment 1 focused on Dutch homophone pairs of the first and the third person singular present tense, for which previous studies have shown that confusion of the forms leads to longer self-paced reading times. Experiment 2 focused on Dutch homophone pairs of the third person singular present tense and the past participle, for which several studies could not find that readers rely on the silent morphological information in the spelling. For both pairs of homophones, we found that reading is delayed by the incorrect homophone. This shows that, even under more naturalistic reading conditions, readers process words not only by phonological encoding, but also by directly extracting morphological information from the spelling. A proper orthographic representation of morphologically complex words is thus important for the reading of natural texts. |
Suphasiree Chantavarin; Tommi Tsz Cheung Leung The effects of interword spacing and morphological complexity in reading Thai: An eye-tracking study Journal Article In: Language and Cognition, vol. 17, pp. 1–28, 2025. @article{Chantavarin2025,This study examined how word identification is influenced by interword spacing and morphological complexity in Thai, a script without interword spacing. While previous research supported the facilitative effect of interword spacing on Thai word identification, they did not account for the potential effects of the words' morphological structure. The challenge of word identification becomes more pronounced when readers have to identify compound words (e.g., bathroom) when reading sentences without interword spacing. In an eye-tracking experiment that manipulated interword spacing (unspaced, spaced) and noun type (bimorphemic compound, monomorphemic) in Thai sentences, we confirmed previous findings that interword spacing has a facilitative effect on word identification, as evidenced by shorter first fixation duration, gaze duration and total fixation time. Furthermore, we observed an interaction effect indicating that interword spacing had a larger facilitative effect on the identification of compounds compared to monomorphemic words. Our results also revealed that the morphological structure of Thai words can influence saccadic movements, e.g., the first fixation landing position was closer to the beginning of compounds than to simple words. We suggest that the orthography-language interface, a language-specific feature, should be considered a major component in eye movement models of reading. |
Aine Ito Effects of task instructions on predictive eye movements and word recognition during second language sentence comprehension Journal Article In: Language Learning, vol. 75, no. 3, pp. 801–831, 2025. @article{Ito2025,This study tested whether encouraging prediction enhances prediction in second language (L2) speakers. L2 English speakers listened to English sentences like The woman … will read/buy one of the newspapers while viewing the target (a newspaper) and distractor objects (a rose, a bowl, and a mango) on a screen and clicked on the target as quickly as possible. The target was predictable (read) or unpredictable (buy) from the verb meaning. Participants looked at the target longer and were quicker to move the mouse to it when instructed to predict sentence continuation than when they were merely instructed to comprehend sentences. This result held true both when the target was predictable and when it was unpredictable. Furthermore, only when instructed to predict did the participants make more clicking errors when the target was unpredictable than predictable, which suggested that encouraging prediction can interfere with word recognition accuracy in unpredictable contexts due to reduced cognitive resources or failed predictions. |
Kyra L. Krass; Joslyn S. Hoang; Gitte H. Joergensen; Gerry T. M. Altmann Anticipatory eye movements revisited: From affordances and actions to consequences and states Journal Article In: Brain Research, vol. 1863, pp. 1–12, 2025. @article{Krass2025,The initial demonstration of anticipatory eye movements during sentence processing (Altmann & Kamide, 1999) found anticipatory looks towards the object that afforded the action referred to in the unfolding sentence. Here, we show that the object that affords the action is in fact dispreferred in the context of an object that instead affords the consequence of the action. We ran two studies to both confirm this bias and determine its malleability depending on the task. Our data suggest that looking behaviors (anticipatory or otherwise) are governed by the ubiquitous goal bias found in other cognitive domains. We offer a revised account of attentional biases in sentence processing that captures both action-based and goal-based biases in a unified approach to anticipatory event-based processes in language processing. |
Haiting Lan; Sixin Liao; Jan-Louis Kruger; Michael J. Richardson Processing written language in video games: An eye-tracking study on subtitled instructions Journal Article In: Journal of Eye Movement Research, vol. 18, no. 5, pp. 1–25, 2025. @article{Lan2025a,Written language is a common component among the multimodal representations that help players construct meanings and guide actions in video games. However, how players process texts in video games remains underexplored. To address this, the current exploratory eye-tracking study examines how players processed subtitled instructions and resultant game performance. Sixty-four participants were recruited to play a videogame set in a foggy desert, where they were guided by subtitled instructions to locate, corral, and contain robot agents (targets). These instructions were manipulated into three modalities: visual-only (with subtitled instructions only), auditory only (with spoken instructions), and visual–auditory (with both subtitled and spoken instructions). The instructions were addressed to participants (as relevant subtitles) or their AI teammates (as irrelevant subtitles). Subtitle-level results of eye movements showed that participants primarily focused on the relevant subtitles, as evidenced by more fixations and higher dwell time percentages. Moreover, the word-level results indicate that participants showed lower skipping rates, more fixations, and higher dwell time percentages on words loaded with immediate action-related information, especially in the absence of audio. No significant differences were found in player performance across conditions. The findings of this study contribute to a better understanding of subtitle processing in video games and, more broadly, text processing in multimedia contexts. Implications for future research on digital literacy and computer-mediated text processing are discussed. |
Yang Lei; Linyan Liu; Jie Chen; Chan Tang; Siyi Fan; Yongqiang Cai; Guosheng Ding Distinctive human dynamics of semantic uncertainty: Contextual bias accelerates lexical disambiguation Journal Article In: Behavioral Sciences, vol. 15, no. 9, pp. 1–16, 2025. @article{Lei2025,This study investigated the dynamic resolution of lexical–semantic ambiguity during sentence comprehension, focusing on how uncertainty evolves as contextual information accumulates. Using time-resolved eye-tracking and a novel entropy-based measure derived from group-level semantic choice distributions, we quantified semantic uncertainty at a fine-grained temporal resolution for ambiguous words. By parametrically manipulating the semantic bias strength of the sentence context, we examined how context guides disambiguation over time. The results showed that semantic uncertainty declined gradually over temporal segments and dropped sharply following the onset of ambiguous words, reflecting both incremental integration and syntactic anchoring. A stronger contextual bias led to faster reductions in uncertainty, with effects following a near-linear trend. These findings support dynamic semantic processing models that assume continuous, context-sensitive convergence toward intended meanings. In contrast, a pretrained Chinese BERT model (RoBERTa-wwm-ext) showed similar overall trends in uncertainty reduction but lacked sensitivity to contextual bias. This discrepancy suggests that, while language models can approximate human-level disambiguation broadly, they fail to capture fine-grained semantic modulation driven by context. These findings provide a novel empirical characterization of disambiguation dynamics and offer a new methodological approach to capturing real-time semantic uncertainty. The observed divergence between human and model performance may inform future improvements to language models and contributes to our understanding of possible architectural differences between human and artificial semantic systems. |
Diane Mézière; Johanna K. Kaakinen; Emilia Ranta; Karin Kukkonen; Jonathan Smallwood; Jaana Simola Do eye movements reflect readers' thoughts during reading? Evidence from multidimensional experience sampling and eye movements Journal Article In: Consciousness and Cognition, vol. 134, pp. 1–17, 2025. @article{Meziere2025a,While reading narrative texts, readers' attention often fluctuates from the text (e.g., immersion) to text-unrelated thoughts (e.g., mind-wandering). Research on mind-wandering and immersion suggests that they influence the reading process differently. In this article, we examine the types of thoughts readers have while reading a literary text. Specifically, we investigated the effect of immersion and mind-wandering on eye-movement behaviour during reading. Fifty-six participants read extracts from a novel while their eye movements were monitored. Participants' thoughts were probed using multidimensional experience sampling. We identified four types of thought: Immersion, Mind-wandering, Sub-Vocalization, and Social Episodic Thoughts. We then ran General Additive Mixed Models (GAMMs) to examine the relationship between these thought types and eye movements. Results show that eye movements are influenced by the types of thoughts readers experience while reading literary texts. These results have important implications for the way that mind-wandering is typically investigated, particularly in reading research. |
Brian Nestor; Ayah Elaboudi; Sara Milligan; Elizabeth R. Schotter Parafoveally perceived orthographic cues facilitate foveal semantic processing: Evidence from event-related potentials Journal Article In: Brain and Language, vol. 268, pp. 1–10, 2025. @article{Nestor2025,Readers extract information from words viewed parafoveally, but it is unclear whether this processing is limited to orthography or if it extends to lexico-semantic content. In the current ERP study, we measured the N400 responses to words that were perceived parafoveally and/or foveally using the RSVP-with-flankers paradigm and a parafoveal masking manipulation. We compared anomalous orthographically related (neighbor) and unrelated (non-neighbor) words to expected words to determine whether the N400 responses were driven by orthographic and/or semantic processing. We observed a large parafoveal N400 effect in response to the non-neighbors (versus expected), and a smaller, later parafoveal N400 for neighbors, suggesting that the parafoveal response is largely orthographic in nature. We also observed a significant reduction in foveal N400 magnitude when non-neighbor words were previously visible parafoveally (but not for the foveal N400 response to neighbors), suggesting that facilitation of foveal processing is driven by parafoveal detection of orthographic violations. |
Dipak P. Upadhyaya; Gokce Cakir; Stefano Ramat; Jeffrey Albert; Aasef Shaikh; Satya S. Sahoo; Fatema Ghasia A multihead attention deep learning algorithm to detect amblyopia using fixation eye movements Journal Article In: Ophthalmology Science, vol. 5, no. 5, pp. 1–13, 2025. @article{Upadhyaya2025,Objective: To develop an attention-based deep learning (DL) model based on eye movements acquired during a simple visual fixation task to detect amblyopic subjects across different types and severity from controls. Design: An observational study. Subjects: We recruited 40 controls and 95 amblyopic subjects (anisometropic = 32; strabismic = 29; and mixed = 34) at the Cleveland Clinic from 2020 to 2024. Methods: Binocular horizontal and vertical eye positions were recorded using infrared video-oculography during binocular and monocular viewing. Amblyopic subjects were classified as those without nystagmus (n = 42) and those with nystagmus with fusion maldevelopment nystagmus (FMN) or nystagmus that did not meet the criteria of FMN or infantile nystagmus syndrome (n = 53). A multihead attention-based transformer encoder model was trained and cross-validated on deblinked and denoised eye position data acquired during fixation. Main Outcome Measures: Detection of amblyopia across types (anisometropia, strabismus, or mixed) and severity (treated, mild, moderate, or severe) and subjects with and without nystagmus was evaluated with area under the receiver-operator characteristic curves, area under the precision–recall curve (AUPRC), and accuracy. Results: Area under the receiver-operator characteristic curves for classification of subjects per type were 0.70 ± 0.16 for anisometropia (AUPRC: 0.72 ± 0.08), 0.78 ± 0.15 for strabismus (AUPRC: 0.81 ± 0.16), and 0.80 ± 0.13 for mixed (AUPRC: 0.82 ± 0.15). 
Area under the receiver-operator characteristic curves for classification of amblyopia subjects per severity were 0.77 ± 0.12 for treated/mild (AUPRC: 0.76 ± 0.18), and 0.78 ± 0.09 for moderate/severe (AUPRC: 0.79 ± 0.16). The area under the receiver-operator characteristic curve for classification of subjects with nystagmus was 0.83 ± 0.11 (AUPRC: 0.81 ± 0.18), and the area under the receiver-operator characteristic curve for those without nystagmus was 0.75 ± 0.15 (AUPRC: 0.76 ± 0.09). Conclusions: The multihead transformer DL model classified amblyopia subjects regardless of the type, severity, and presence of nystagmus. The model's ability to identify amblyopia using eye movements alone demonstrates the feasibility of using eye-tracking data in clinical settings to perform objective classifications and complement traditional amblyopia evaluations. Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article. |
Chao-Jung Wu; Chia-Yu Liu An eye-tracking study of college students' infographic-reading processes Journal Article In: Journalism & Mass Communication Quarterly, vol. 102, no. 3, pp. 853–883, 2025. @article{Wu2025,We know little about how readers, especially readers with various characteristics, incorporate materials with highly synthesized words and graphs like infographics. We collected eye movements from 95 college students as they read infographics and categorized them into high-/low-score groups based on comprehension scores. Participants initially inspected the word areas that corresponded to the graph areas with the highest perceptual salience. The high-score group showed greater total fixation duration (TFD), TFD ratios of graphs, and transition numbers between words and graphs, indicating more processing of infographics. The low-score group showed greater TFD ratios of words and saccade amplitudes, indicating information-searching behavior. |
Zuzanna Fuchs; Emma Kealey; Esra Eldem-Tunç; Leo Mermelstein; Linh Pham; Anna Runova; Yue Chen; Metehan Oğuz; Seoyoon Hong; Catherine Pan; J. K. Subramony Measuring adult heritage language lexical proficiency for studies on facilitative processing of gender Journal Article In: Languages, vol. 10, no. 8, pp. 1–16, 2025. @article{Fuchs2025c,The present study analyzes individual differences in the facilitative processing of grammatical gender by heritage speakers of Spanish, asking whether these differences correlate with lexical proficiency. Results from an eye-tracking study in the Visual World Paradigm replicate prior findings that, as a group, heritage speakers of Spanish show facilitative processing of gender. Importantly, in a follow-up within-group analysis, we test whether three measures of lexical proficiency—oral picture-naming, verbal fluency, and LexTALE—predict individual performance. We find that lexical proficiency, as measured by LexTALE, predicts overall word recognition; however, we observe no effects of the other measures and no evidence that lexical proficiency modulates the strength of the facilitative effect. Our results highlight the importance of carefully selecting tools for proficiency assessment in experimental studies involving heritage speakers, underscoring that the absence of evidence for an effect of proficiency based on a single measure should not be taken as evidence of absence. |
Shiyu He; Dagmar Divjak; Petar Milin Optimising participant grouping methods in bilingualism studies Journal Article In: Linguistic Approaches to Bilingualism, vol. 15, no. 4, pp. 487–517, 2025. @article{He2025a,This research addresses two major challenges in studying second language acquisition and bilingualism: reducing overlap in predictor variables and correctly classifying participants into language proficiency levels. Too many relevant predictors can harm statistical analysis due to an increased chance of overlap, known as multicollinearity. To tackle this, we use Principal Component Analysis (PCA) on selected predictors to identify proficiency indicators, combining the length of stay in the UK and language test scores. Additionally, traditional methods, especially IELTS-based proficiency classifications, often miss subtle differences in language skills, particularly when they fail to consider how long participants have been exposed to the target language. We counter this by using non-hierarchical Cluster Analysis (NCA) for a grounded, data-driven way of detecting distinct language proficiency groups. This new approach is demonstrated on a dataset of eye movements from reading tasks, collected from Chinese–English bilinguals in the UK. |
Deborah N. Jakobi; Thomas Kern; David R. Reich; Patrick Haller; Lena A. Jäger PoTeC: A German naturalistic eye-tracking-while-reading corpus Journal Article In: Behavior Research Methods, vol. 57, no. 8, pp. 1–37, 2025. @article{Jakobi2025,The Potsdam Textbook Corpus (PoTeC) is a naturalistic eye-tracking-while-reading corpus containing data from 75 participants reading 12 scientific texts. PoTeC is the first naturalistic eye-tracking-while-reading corpus that contains eye movements from domain experts as well as novices in a within-participant manipulation: It is based on a 2×2×2 fully crossed factorial design, which includes the participants' level of studies and the participants' discipline of studies as between-subjects factors and the text domain as a within-subjects factor. The participants' reading comprehension was assessed by a series of text comprehension questions and their domain knowledge was tested by text-independent background questions for each of the texts. The materials are annotated for a variety of linguistic features at different levels. We envision PoTeC to be used for a wide range of studies including but not limited to analyses of expert and non-expert reading strategies. The corpus, all the accompanying data at all stages of the preprocessing pipeline, and all code used to preprocess the data are made available via GitHub: https://github.com/DiLi-Lab/PoTeC and OSF: https://osf.io/dn5hp/. The data is furthermore integrated into the open-source package pymovements, which can be used in Python and R: https://github.com/aeye-lab/pymovements. |
Gregory D. Keating Normalization of timed measures in bilingualism research Journal Article In: Linguistic Approaches to Bilingualism, vol. 15, no. 4, pp. 518–537, 2025. @article{Keating2025,The time it takes an individual to respond to a probe (e.g., a word, picture, or question) or to read a word or phrase provides useful insights into cognitive processes. Consequently, timed measures are a staple in bilingualism research. However, timed measures usually violate assumptions of linear models, one being normal distribution of the residuals. Power transformations are a common solution but which of the many possible transformations to apply is often guesswork. Box and Cox (1964) developed a procedure to estimate the best-fitting normalizing transformation, coefficient lambda (λ), that is easy to run using standard R packages. This practical primer demonstrates how to perform the Box-Cox transformation in R using as a testbed the distractor items from a recent eye-tracking study on sentence reading in speakers of Spanish as a majority and a heritage language. The analyses show (a) that the exponents selected via the Box-Cox procedure reduce positive skewness as well as or better than the natural log; (b) that the best-fitting value of λ varies based on factors such as group and, in the case of eye-movement data, the measure of interest; and (c) that the choice of transformation sometimes impacts p values for model estimates. |
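The Box-Cox procedure described in Keating's primer is also available outside R. The sketch below is a minimal Python equivalent using SciPy's `stats.boxcox`, which estimates the best-fitting λ by maximum likelihood and returns the transformed data; the positively skewed "reading times" here are simulated lognormal values, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated positively skewed reading times in ms (stand-in for real
# eye-tracking measures, which typically show this kind of right skew)
rt = rng.lognormal(mean=6.2, sigma=0.5, size=500)

# stats.boxcox with no lmbda argument estimates the best-fitting
# lambda by maximum likelihood and returns the transformed data with it
transformed, lam = stats.boxcox(rt)

print(f"lambda = {lam:.3f}")
print(f"skewness before = {stats.skew(rt):.3f}, after = {stats.skew(transformed):.3f}")
```

Because lognormal data are exactly normalized by the log transform, the estimated λ here should land near 0, and the residual skewness after transformation should be much smaller than before.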
Julie Kirwan; Deniz Başkent; Anita Wagner The time course of the pupillary response to auditory emotions in pseudospeech, music, and vocalizations Journal Article In: Trends in Hearing, vol. 29, pp. 1–14, 2025. @article{Kirwan2025,Emotions can be communicated through visual and dynamic characteristics such as smiles and gestures, but also through auditory channels such as laughter, music, and human speech. Pupil dilation has become a notable marker for visual emotion processing; however, the pupil's sensitivity to emotional sounds, specifically speech, remains largely underexplored. This study investigated the processing of emotional pseudospeech, that is, speech-like sentences devoid of semantic content. We measured participants' pupil dilations while they listened to pseudospeech, music, and human vocalizations, and subsequently performed an emotion recognition task. Our results showed that emotional pseudospeech can trigger increases of pupil dilation compared to neutral pseudospeech, supporting the use of pupillometry as a tool for indexing prosodic emotion processing in the absence of semantics. However, pupil responses to pseudospeech were smaller and slower than the responses evoked by human vocalizations. The pupillary response was not sensitive enough to distinguish between emotion categories in pseudospeech, but pupil dilations to music and vocalizations reflected some emotion-specific pupillary curves. The valence of the stimulus had a stronger overall influence on pupil size than arousal. These results highlight the potential for pupillometry in studying auditory emotion processing and provide a foundation for contextualizing pseudospeech alongside other affective auditory stimuli. |
Min Liu; Sainan Li; Zhu Meng; Yongsheng Wang; Chuanli Zang; Guoli Yan; Simon P. Liversedge Development of orthographic, phonological and semantic parafoveal processing in Chinese reading Journal Article In: Quarterly Journal of Experimental Psychology, pp. 1–23, 2025. @article{Liu2025k,Parafoveal pre-processing of upcoming words is a key aspect of fluent reading. A comparative analysis of how children's orthographic, phonological and semantic parafoveal processing changes with age has not been investigated to date. In the present study, three eye movement experiments used the boundary paradigm to characterize the nature of change in orthographic, phonological and semantic parafoveal processing across children in Grades 2 to 5 (n = 366, Tianjin Primary School) and adults (n = 90, Tianjin Normal University) during natural Chinese reading. In each experiment we manipulated preview type (identical, related or unrelated preview). The results showed that effective orthographic parafoveal processing occurred in all our participant groups; however, effective phonological and semantic parafoveal processing was somewhat delayed, occurring in the third or fourth grade through to adults. We suggest that the differential developmental time course of orthographic relative to phonological and semantic parafoveal processing likely arises because the phonological and semantic characteristics of a written character are accessed via the character's orthographic code. Orthographic parafoveal processing, therefore, likely takes developmental precedence over phonological and semantic parafoveal processing. Together, the results provide a quite comprehensive picture of how a fundamental aspect of reading, parafoveal processing, develops with age. |
Longxia Lou; Ximei Feng; Zehao Liang; Zhi-fang Liu; Zhijun Zhang Contextual plausibility effects among older adults in Chinese free-view reading: Co-registration of eye-tracking and electroencephalography Journal Article In: Perceptual and Motor Skills, pp. 1–22, 2025. @article{Lou2025,With regard to how aging affects contextual plausibility in Chinese natural silent reading, research remains limited. To address the methodological limitations inherent in using eye-tracking measures and event-related potentials separately, we aimed to apply co-registration of eye-tracking with electroencephalography (EEG) in Chinese natural silent reading. Old and young adults were asked to read sentences that contained either semantically congruent or semantically violated words. We failed to replicate any differences in contextual plausibility effects between the older and younger adults on eye-tracking measures of early word processing (including first fixation duration, gaze duration, probability of skipping, and refixation). However, larger plausibility effects for the old adult readers were observed in the measures of regression probability and total reading time. No reliable age-related differences in plausibility effects were observed for brain response amplitudes in the parafoveal and foveal intervals (from −400 to 200 ms). Both eye-tracking and EEG data demonstrated that contextual plausibility in early word processing was preserved among the old adults, while the Chinese old adult readers made greater efforts to reanalyze semantic information during free-view reading. |
Hartmut Meister; Moritz Wächtler; Pascale Sandmann; Ruth Lang-Roth; Khaled H. A. Abdel-Latif Audiovisual perception of sentence stress in cochlear implant recipients Journal Article In: Audiology Research, vol. 15, no. 4, pp. 1–27, 2025. @article{Meister2025,Background/Objectives: Sentence stress as part of linguistic prosody plays an important role for verbal communication. It emphasizes particularly important words in a phrase and is reflected by acoustic cues such as the voice fundamental frequency. However, visual cues, especially facial movements, are also important for sentence stress perception. Since cochlear implant (CI) recipients are limited in their use of acoustic prosody cues, the question arises as to what extent they are able to exploit visual features. Methods: Virtual characters were used to provide highly realistic but controllable stimuli for investigating sentence stress in groups of experienced CI recipients and typical-hearing (TH) peers. In addition to the proportion of correctly identified stressed words, task load was assessed via reaction times (RTs) and task-evoked pupil dilation (TEPD), and visual attention was estimated via eye tracking. Experiment 1 considered congruent combinations of auditory and visual cues, while Experiment 2 presented incongruent stimuli. Results: In Experiment 1, CI users and TH participants performed similarly in the congruent audiovisual condition, while the former were better at using visual cues. RTs were generally faster in the AV condition, whereas TEPD revealed a more detailed picture, with TH subjects showing greater pupil dilation in the visual condition. The incongruent stimuli in Experiment 2 showed that modality use varied individually among CI recipients, while TH participants relied primarily on auditory cues. Conclusions: Visual cues are generally useful for perceiving sentence stress. As a group, CI users are better at using facial cues than their TH peers. 
However, CI users show individual differences in the reliability of the various cues. |
Laura Nadal; Iria Bello Viruega Counter-argumentation and the rigidity of procedural meaning: An experimental study on the Italian connective tuttavia Journal Article In: Studia Linguistica, vol. 79, no. 2, pp. 435–447, 2025. @article{Nadal2025,The counter-argumentative Italian connective tuttavia (Eng. “however”) marks the disruption of a causal chain by introducing an unexpected conclusion that cancels inferences drawn from a previous premise. Thus, under neutral conditions, tuttavia connects two argumentatively anti-oriented members of discourse. Conversely, if those members hold a coordinated relationship, the premise would be argumentatively insufficient to allow a discursive relationship of counter-argumentation, resulting in discursive incoherence. An eye-tracking reading experiment compared the processing costs generated by an utterance in which tuttavia was inserted in a neutral argumentative structure with those of a causal relationship between two co-oriented discourse members. Results showed that, faced with this type of argumentative insufficiency, readers seem to desist from processing as the cognitive effort of an incompatible assumption may be excessive. |
Dingyi Niu; Zijian Xie; Jiaqi Liu; Chen Wang; Ze Zhang Visual word segmentation cues in Tibetan reading: Comparing dictionary-based and psychological word segmentation Journal Article In: Journal of Eye Movement Research, vol. 18, no. 4, pp. 1–18, 2025. @article{Niu2025,This study utilized eye-tracking technology to explore the role of visual word segmentation cues in Tibetan reading, with a particular focus on the effects of dictionary-based and psychological word segmentation on reading and lexical recognition. The experiment employed a 2 × 3 design, comparing six conditions: normal sentences, dictionary word segmentation (spaces), psychological word segmentation (spaces), normal sentences (green), dictionary word segmentation (color alternation), and psychological word segmentation (color alternation). The results revealed that word segmentation with spaces (whether dictionary-based or psychological) significantly improved reading efficiency and lexical recognition, whereas color alternation showed no substantial facilitative effect. Psychological and dictionary word segmentation performed similarly across most metrics, though psychological segmentation slightly outperformed in specific indicators (e.g., sentence reading time and number of fixations), and dictionary word segmentation slightly outperformed in other indicators (e.g., average saccade amplitude and number of regressions). The study further suggests that Tibetan reading may involve cognitive processes at different levels, and the basic units of different levels of cognitive processes may not be consistent. These findings hold significant implications for understanding the cognitive processes involved in Tibetan reading and for optimizing the presentation of Tibetan text. |
Zeynep G. Özkan; Jukka Hyönä; Maria Fernández-López; Manuel Perea How does vertical reading affect saccade programming and lexical processing in the Roman script? Journal Article In: Psychological Research, vol. 89, no. 4, pp. 1–10, 2025. @article{Oezkan2025,Although computational models of eye movement control in reading have focused on horizontal text layouts, vertically oriented text is also encountered in daily life in the Roman script. To examine the interplay between saccade programming and lexical processing under vertical reading in the Roman script, we manipulated (1) the layout of words in a sentence (horizontal vs. vertical) and (2) word frequency (high vs. low). In the vertical layout, the words themselves remained in standard orientation but were arranged vertically (one below the other). Eye-movement measures at the sentence level (e.g., total reading time, number of fixations) showed a cost for the vertical arrangement, primarily reflected in longer fixation durations rather than a greater number of fixations. Critically, at the target-word level, the word-frequency effect, which increased in later eye-fixation measures (gaze duration, total time), remained similar in size across both layouts. The additive pattern of word frequency and text layout, supported by Bayes factors, suggests that slower saccade programming in the vertical format does not substantially impact lexical processing. While lexical processing can influence saccade programming, delays in saccade programming do not, in turn, alter lexical processing, a pattern that constrains current models of eye movement control in reading. |
Jens Roeser; Mark Torrance; Thom Baguley Semantic contrast ahead: Contrast guides pre-planning in complex noun-phrase production Journal Article In: Language and Cognition, vol. 17, pp. 1–23, 2025. @article{Roeser2025,Whether or not pre-planning extends beyond the initial noun in a noun phrase depends, in part, on the phrase's dependency structure. Dependency structure disambiguates, in many contexts, the noun phrase's reference. In the present experiment (N = 64), we demonstrate that advance planning is affected by the extent to which a dependency supports semantic disambiguation. Participants produced noun phrases in response to picture arrays. Syntax and lexemes were held constant, but semantic scope was manipulated by varying the contrastive functions of the first and the second noun. Evidence from eye-movement data revealed a stronger tendency for early planning in the extended scope condition. This is evidence that pre-planning requirements of structurally complex noun phrases are, in at least some contexts, determined by semantic functions. |
Katherine Rowley; Eva Gutierrez-Sigut; Mairéad MacSweeney; Gabriella Vigliocco Reading with deaf eyes: Automatic activation of speech-based phonology during word recognition is task dependent Journal Article In: PLoS ONE, vol. 20, no. 8, pp. 1–27, 2025. @article{Rowley2025,Literacy levels are highly variable within the deaf population and, compared to the general population, on average, reading levels are lower. As speech-based phonological coding is a known predictor of reading success in hearing individuals, much research has focussed on deaf readers' processing of speech-based phonological codes during word recognition and reading as a possible explanation for the widespread reading difficulties in the deaf population. Although results are mixed, there is recent growing evidence that deaf and hearing readers process speech-based phonological codes differently. Furthermore, some studies indicate that phonological ability may not be a strong correlate of literacy skills in deaf, adult readers. Here, we investigate orthographic, semantic, and phonological processing during single word reading in deaf (N = 20) and hearing (N = 20) adult readers, who were matched on reading level. Specifically, we tracked deaf and hearing readers' eye-movements using an adaptation of the visual world paradigm using written words and pictures. We found that deaf and hearing readers activate orthographic and semantic information following a similar time-course. However, there were differences in the way the groups processed phonology, with deaf readers making less use of phonological information. Crucially, as both groups were matched for reading level, reduced phonological processing did not appear to impact reading skill in deaf readers. |
Marta Tagliani; Daniele Panizza; Chiara Melloni The role of the negated information in processing negation: A visual world study Journal Article In: Journal of Psycholinguistic Research, vol. 54, no. 4, pp. 1–24, 2025. @article{Tagliani2025a,This paper reports the results of a visual world study that explored the processing of sentential negation by Italian adults. To fulfill this aim, we assessed the perceptual prominence of the negated information (that is, the propositional content in the scope of negation). Specifically, we employed an identification task in a visual world set-up in which participants had to listen to affirmative and negative sentences (e.g., Aladdin [is/is not] closing the door…) while looking at visual scenes with the number of pictures matching the negated content varying from one to three. Negation processing was investigated across different propositional and perceptual dimensions by including three types of items (cartoons, black and white, and coloured shapes). We found that i) participants were always slower in target identification with negative sentences vs. their affirmative counterparts; ii) the perceptual prominence of the negated information reduced this processing penalty. These results support the idea that the representation of the negated information plays an active role in negative sentence comprehension, in compliance with two-step-based accounts of negation processing. Nonetheless, the computation of sentential negation displays some degree of flexibility and is in part modulated by the visual and linguistic information provided. |
João Vieira; Elisângela Teixeira; Hayward J. Godwin; Denis Drieghe The processing of the definite article in Brazilian Portuguese: When ‘the’ carries gender and number marking Journal Article In: Quarterly Journal of Experimental Psychology, pp. 1–14, 2025. @article{Vieira2025,Research on eye movements during reading has shown that function words receive fewer and shorter fixations than content words. However, recent studies suggest that when matched in frequency, length, and predictability, such differences disappear. Two studies in English still indicate a special status of the article ‘the’. Angele and Rayner, using the gaze-contingent boundary paradigm, found that ungrammatical previews of ‘the’ were skipped more often than grammatical content words, while Staub et al. found that repeated articles were noticed less often than repeated content words. We extended both studies to Brazilian Portuguese (BP), where articles carry more syntactic information (gender and number) than in English. In a gaze-contingent boundary experiment, we found that the preview of an ungrammatical definite article was skipped more often than the grammatical continuation, suggesting the mechanism of automatically skipping articles is also present in BP. Because this mechanism does not seem to be influenced by the extra information articles carry in BP compared to English, it is likely that it is the high frequency of the articles that is triggering word skipping as opposed to a special function word status. However, in the second experiment, repeated articles were noticed nearly as frequently as content words, presumably because the additional syntactic information articles carry in BP is connected to the sentence's structure in a more complex way than, for instance, English. So, in an artificial task, such as repetition detection during reading, differences between articles and content words can manifest themselves. |
Bernhard Angele; Zeynep Gunes Ozkan; Marina Serrano-Carot; Jon Andoni Duñabeitia How low can you go? Tracking eye movements during reading at different sampling rates Journal Article In: Behavior Research Methods, vol. 57, no. 7, pp. 1–25, 2025. @article{Angele2025,Eye-movement research has revolutionized our understanding of reading, but the use of eye-tracking techniques in investigating the reading process is still limited by the cost of high-precision eye-tracking, which limits research to laboratories with sufficient resources. It is important to evaluate to what extent cognitive processes during reading can be measured with less expensive eye-tracking devices. One such way may be to use devices with a lower sampling rate, which are much less expensive than high-sampling rate eye-trackers. We recorded readers' eye movements during reading at different sampling rates and show that it is possible to measure the classic effect of word frequency on fixation duration, reflecting ongoing cognitive processing during reading, at sampling rates ranging from 250 to 2000 Hz. We simulate even lower sampling rates and show that, with a sufficiently large sample size, it is possible to detect the effect of word frequency even at very low sampling rates (30–125 Hz). Our results demonstrate that, in principle, low sampling rates are not an obstacle to studying the effects of cognitive processing during reading. |
Lena S. Bolliger; Patrick Haller; Isabelle C. R. Cretton; David R. Reich; Tannon Kew; Lena A. Jäger EMTeC: A corpus of eye movements on machine-generated texts Journal Article In: Behavior Research Methods, vol. 57, no. 7, pp. 1–45, 2025. @article{Bolliger2025,The Eye movements on Machine-generated Texts Corpus (EMTeC) is a naturalistic eye-movements-while-reading corpus of 107 native English speakers reading machine-generated texts. The texts are generated by three large language models using five different decoding strategies, and they fall into six different text-type categories. EMTeC entails the eye movement data at all stages of pre-processing, i.e., the raw coordinate data sampled at 2000 Hz, the fixation sequences, and the reading measures. It further provides both the original and a corrected version of the fixation sequences, accounting for vertical calibration drift. Moreover, the corpus includes the language models' internals that underlie the generation of the stimulus texts: the transition scores, the attention scores, and the hidden states. The stimuli are annotated for a range of linguistic features both at text and at word level. We anticipate EMTeC to be utilized for a variety of use cases such as, but not restricted to, the investigation of reading behavior on machine-generated text and the impact of different decoding strategies; reading behavior on different text types; the development of new pre-processing, data filtering, and drift correction algorithms; the cognitive interpretability and enhancement of language models; and the assessment of the predictive power of surprisal and entropy for human reading times. The data at all stages of pre-processing, the model internals, and the code to reproduce the stimulus generation, data pre-processing, and analyses can be accessed via https://github.com/DiLi-Lab/EMTeC/. |
Leigh B. Fernandez; Lauren V. Hadley; Aybora Koç; John C. B. Gamboa; Shanley E. M. Allen Is there a cost when predictions are not met? A VWP study investigating L1 and L2 speakers Journal Article In: Quarterly Journal of Experimental Psychology, vol. 78, no. 7, pp. 1237–1259, 2025. @article{Fernandez2025d,Research has found that both first language (L1) and second language (L2) speakers make predictions about upcoming linguistic information, with predictive behaviour being impacted by individual differences and methodological factors. However, it is not clear whether a cost is incurred when a prediction is made, but not met. L2 speakers have less experience with their L2 and parsing can be cognitively demanding, which together may lead L2 speakers to incur prediction costs differently relative to L1 speakers. In this study using the visual world paradigm, we test whether L1 and L2 speakers predict in the same way, within the same time frame, and incur the same costs if predictions are not met. We also explore the role of proficiency and speech rate. We found that both groups predict in a similar way and within a similar time frame. In addition, neither group incurred a prediction cost when the target was the most likely alternative, though L2 speakers take longer to shift their attention to the target object when predictions are not met. We argue that this reflects a slowing of lexical access rather than a specific cost of prediction. We only found prediction differences when speech rate was included in the analysis, highlighting the importance of attending to speech rate in studies using the visual world paradigm. Overall, this study supports research showing that both L1 and L2 speakers may make multiple partial predictions about upcoming information rather than predicting one specific lexical candidate while inhibiting less likely lexical candidates. |
Ying Fu; Simon P. Liversedge; Xuejun Bai; Maleeha Moosa; Chuanli Zang Word length and frequency effects in natural Chinese reading: Evidence for character representations in lexical identification Journal Article In: Quarterly Journal of Experimental Psychology, vol. 78, no. 7, pp. 1438–1449, 2025. @article{Fu2025a,Word length and frequency are two of the “big three” factors that affect eye movements in natural reading. Although these factors have been extensively investigated, all previous studies manipulating word length have been confounded with changes in visual complexity (longer words have more letters and are more visually complex). We controlled stroke complexity across one-character (short) and two-character (long) high- and low-frequency Chinese words (to avoid complexity confounds) and recorded readers' eye movements during sentence reading. Both word length and frequency yielded strong main effects for fixation time measures. For saccadic targeting and skipping probability, word length effects, but not word frequency effects, occurred. Critically, the interaction was not significant regardless of stroke complexity, indicating that word length and frequency independently influence lexical identification and saccade target selection during Chinese reading. The results provide evidence for character-level representations during Chinese word recognition in natural reading. |
Elisa Gavard; Valérie Chanoine; Franziska Geringswald; Jean-Luc Anton; Eddy Cavalli; Johannes C. Ziegler Neural networks for semantic and syntactic prediction and visual-motor statistical learning in adult readers with and without dyslexia Journal Article In: Neurobiology of Language, vol. 6, pp. 1–34, 2025. @article{Gavard2025,Prediction has become a key concept for understanding language comprehension, language production, and more recently reading. Recent studies suggest that predictive mechanisms in reading may be related to domain-general statistical learning (SL) abilities that support the extraction of regularities from sequential input. Both mechanisms have been discussed in relation to developmental dyslexia. Some suggest that SL is impaired in dyslexia with negative effects on the ability to make linguistic predictions. Others suggest that dyslexic readers rely to a greater extent on semantic and syntactic predictions to compensate for lower-level deficits. Here, we followed these two research questions in a single study. We therefore assessed the effects of semantic and syntactic prediction in reading and SL abilities in a population of university students with dyslexia and a group of typical readers using fMRI. The SL task was a serial reaction time (SRT) task that was performed inside and outside the scanner. The predictive reading task was performed in the scanner and used predictive versus nonpredictive semantic and syntactic contexts. Our results revealed distinct neural networks underlying semantic and syntactic predictions in reading, group differences in predictive processing in the left precentral gyrus and right anterior insula, and an association between predictive reading and SL, particularly in dyslexic readers. These findings contribute to our understanding of the interplay between SL, predictive processing, and compensation in dyslexia, providing new insights into the neural mechanisms that support reading. |
Elena Gessa; Chiara Valzolgher; Elena Giovanelli; Massimo Vescovi; Chiara Visentin; Nicola Prodi; Eloise Di Blasi; Viola Sadler; Francesco Pavani Speech-reading on the lips as a cognitive resource to understand speech in noise Journal Article In: Experimental Brain Research, vol. 243, no. 7, pp. 1–12, 2025. @article{Gessa2025,In challenging acoustic scenarios, speech processing is often linked to listening effort, which can be described as the balance between cognitive demands and motivation to understand speech. In such conditions, people usually rely on several behavioral strategies to support speech understanding and reduce listening effort (e.g., speech-reading behavior). Still, it is not clear what cognitive mechanisms underlie the use of behavioral strategies for listening. We hypothesized that the cognitive and motivational dimensions of listening effort may also drive speech-reading strategies spontaneously adopted in challenging conditions. Normal-hearing adults (N = 64) performed an audiovisual speech-recognition task in noise, in combination with a concurrent mnemonic task with low vs. high working memory engagement to set cognitive demands. Motivation was manipulated between subjects through fixed or performance-related monetary rewards. Speech-reading was tracked via eye movements, and pupil dilation served as a physiological measure of listening effort, confirming manipulation effectiveness. We found that exerted listening effort intensifies speech-reading behavior, with motivation playing a key role in this behavioral adaptation to enhanced cognitive demands. These findings document the association between internal mental processes and behavioral adaptation in the speech domain. |
Victor Kuperman How does language distance affect reading fluency and comprehension in English as second language? Journal Article In: Studies in Second Language Acquisition, vol. 47, no. 3, pp. 757–773, 2025. @article{Kuperman2025,Acquisition of reading skill in a second language (L2) requires development and coordinated use of multiple component skills. This acquisition is less effortful the more similar the first language (L1) of the L2 learner is to that L2. While ways to quantify the L1–L2 distance are well defined in the current literature, the theoretical status of this distance in models of L2 reading acquisition is under-specified. This paper tests whether the L1–L2 distance influences English reading fluency and comprehension directly, via the mediation of component skills of reading, or both. We used text reading data and tests of component skills of English reading from the Multilingual Eye-movement Corpus database, representing advanced L2 readers of English from 18 distinct language backgrounds. Mediation analyses show that the L1–L2 distance has both a direct and an indirect effect on English reading fluency and eye movements, yet it has no effect on reading comprehension. These findings are novel in that they specify the mechanism through which the L1–L2 distance affects L2 reading acquisition. |
Marianna Kyriacou; Cecilie Rummelhoff; Franziska Köder Irony processing in adults with ADHD: Evidence from eye-tracking and executive attention tasks Journal Article In: Journal of Attention Disorders, vol. 29, no. 9, pp. 724–744, 2025. @article{Kyriacou2025a,Objective: ADHD is a neurodevelopmental disorder that impacts pragmatic communication abilities in children, including their understanding of verbal irony. This study aims to investigate whether adults with ADHD experience similar challenges in interpreting ironic statements, and to examine the role of executive attention abilities in accounting for any observed differences. Methods: 52 adults with ADHD and 55 neurotypical controls participated in an eye-tracking experiment. They read stories that included either literal or ironic statements and answered targeted comprehension questions. We used measures of working memory and fluid intelligence as independent indices of executive attention. Results: The results showed that adults with ADHD were as accurate as the control group in comprehending irony. However, they experienced an additional processing cost, indicated by increased reading times for ironic statements. While fluid intelligence improved comprehension accuracy in the control group, it did not have the same effect for participants with ADHD. Importantly, higher working memory capacity in adults with ADHD was associated with faster processing times, making their irony processing comparable to that of the control group. Conclusion: Our findings underscore the subtle challenges adults with ADHD face in processing irony and highlight the crucial role of working memory in enhancing performance. These insights stress the importance of considering individual cognitive capacities and their interaction with ADHD symptoms to better understand how ADHD impacts pragmatic abilities in adulthood. |
