Non-Human Primate Eye-Tracking Publications
All EyeLink eye tracker non-human primate research publications up until 2024 (with some early 2025s) are listed below by year. You can search the eye-tracking publications using keywords such as Temporal Cortex, Macaque, Antisaccade, etc. You can also search for individual author names. If we missed any EyeLink non-human primate articles, please email us!
2024 |
Steven G. Luke; Celeste Tolley; Adriana Gutierrez; Cole Smith; Toni Brown; Kate Woodruff; Olivia Ford The perceptual span in dyslexic reading and visual search Journal Article In: Dyslexia, vol. 30, no. 4, pp. 1–25, 2024. @article{Luke2024, Many studies have attempted to identify the root cause of dyslexia. Different theories of dyslexia have proposed either a phonological, attentional, or visual deficit. While research has used eye-tracking to study dyslexia, only two previous studies have used the moving-window paradigm to explore the perceptual span in dyslexic reading, and none have done so in visual search. The present study analysed the perceptual span using both reading and visual search tasks to identify language-independent attentional impairments in dyslexics. We found equivocal evidence that the perceptual span was impaired in dyslexic reading and no evidence of impairment in visual search. However, dyslexic participants did show deficits in the visual search task, with lower search accuracy and shorter saccades compared with controls. These results lend support for a visual, rather than attentional or phonological, account of dyslexia. |
Anqi Lyu; Andrew E. Silva; Benjamin Thompson; Larry Abel; Allen M. Y. Cheong In: Frontiers in Neuroscience, vol. 18, pp. 1–9, 2024. @article{Lyu2024a, Visual cortex anodal transcranial direct current stimulation (a-tDCS) has been shown to reduce crowding in normal peripheral vision and may improve the reading of English words in patients with macular degeneration. Given the different visual requirements of reading English words and Chinese characters, the effect of a-tDCS on peripheral reading performance in English might differ from Chinese. This study recruited 20 participants (59–73 years of age) with normal vision and tested the hypothesis that a-tDCS would improve the reading of Chinese characters presented at 10° eccentricity compared with sham stimulation. Chinese sentences of different print sizes and exposure durations were presented one character at a time, 10° below or to the left of fixation. The individual critical print size (CPS) – the smallest print size eliciting the maximum reading speed (MRS) – was determined. Reading accuracies for characters presented 0.2 logMAR smaller than the individually fitted CPS were measured at four time points: before, during, 5 min after, and 30 min after receiving active or sham visual cortex a-tDCS. Participants completed both the active and sham sessions in a random order following a double-blind, within-subject design. No effect of active a-tDCS on reading accuracy was observed, implying that a single session of a-tDCS did not improve Chinese character reading in normal peripheral vision. This may suggest that a-tDCS does not significantly reduce the crowding elicited within a single Chinese character. However, the effect of a-tDCS on between-character crowding is yet to be determined. |
Siqi Lyu; Jung-Yueh Tu; Chien-Jer Charles Lin Structural position affects topic transition: An eye tracking study Journal Article In: Language and Linguistics, vol. 25, no. 1, pp. 56–79, 2024. @article{Lyu2024, In an eye-tracking study, we used Chinese double-subject construction [NPa NPb PREDICATE] (e.g., [nage jiezhi]NPa [sheji]NPb [hen tebie]PREDICATE ‘that ring design very special') in a concessive construction like suiran…dan… ‘although…but…' to investigate how the syntactic position of the topic NP (i.e., that ring) affects the comprehension of topic transition in the subsequent clause. We contrasted topics located at a higher pre-connective topic position (e.g., that ring although) and those located at a post-connective subject position (e.g., although that ring). Topic transition was manipulated as either using a subtopic (e.g., workmanship of that ring) or a new topic (e.g., the wedding dress) in the second clause of concession. We found a main effect of topic transition in a batch of eye-movement measures showing that subtopic transition was preferred over new-topic transition. More importantly, we found interactions on total reading time and total fixations at the topic-suiran region and on total fixations at the post-critical region, with post hoc tests revealing a larger cost of topic transition in the high-topic condition than in the low-topic condition. The results suggest that when a topic NP is located at a higher topic position (i.e., above the connective), it binds the topics of both clauses and induces greater cost when the topics do not form a consistent chain. When the topic NP is located at a local (i.e., post-connective) position, the processing of topic shift or resolution of topic conflict in the second clause is less costly because the second topic is not syntactically bound by the higher topic. Together, the results support a prominent status of the before-connective position in Chinese discourse. Furthermore, they indicate that syntactically induced topicality constrains the processing of topic transition in the subsequent discourse. |
Xingcheng Ma; Dechao Li Effect of word order asymmetry on the cognitive load of English–Chinese sight translation Journal Article In: Translation and Interpreting Studies, vol. 19, no. 1, pp. 105–131, 2024. @article{Ma2024f, This article examines word order asymmetry as one prominent obstacle in the cognitive process of English–Chinese sight translation. A within-subject experiment was designed for 23 MA translation students who sight translated sentences with different degrees of structural asymmetry from English into Chinese in both single sentence and discourse contexts. To measure cognitive load, participants' eye movements during translation were recorded using an eye tracker. Three major findings were generated: (1) The effect of word order asymmetry was confirmed on both sentence-based and word-based processing; (2) Contextual information did not contribute to less effortful processing in the discourse context (as indicated by more fixations and longer regressions); (3) Segmentation was used far more frequently than restructuring to address asymmetric structures. We expect these findings will enrich our understanding of the cognitive mechanisms involved in interpreting between languages that are structurally very different and help inform training practices. |
Naser Al Madi Advancing dynamic-time warp techniques for correcting eye tracking data in reading source code Journal Article In: Journal of Eye Movement Research, vol. 17, no. 1, pp. 1–19, 2024. @article{Madi2024, Background: Automated eye tracking data correction algorithms such as Dynamic-Time Warp always made a trade-off between the ability to handle regressions (jumps back) and distortions (fixation drift). At the same time, eye movement in code reading is characterized by non-linearity and regressions. Objective: In this paper, we present a family of hybrid algorithms that aim to handle both regressions and distortions with high accuracy. Method: Through simulations with synthetic data, we replicate known eye movement phenomena to assess our algorithms against the Warp algorithm as a baseline. Furthermore, we utilize two real datasets to evaluate the algorithms in correcting data from reading source code and see if the proposed algorithms generalize to correcting data from reading natural language text. Results: Our results demonstrate that most proposed algorithms match or outperform baseline Warp in correcting both synthetic and real data. Also, we show the prevalence of regressions in reading source code. Conclusion: Our results highlight our hybrid algorithms as an improvement to Dynamic-Time Warp in handling regressions. |
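For readers unfamiliar with the baseline technique named above, the sketch below shows plain dynamic time warping between fixation y-coordinates and text-line y-coordinates. It illustrates only the general Warp-style idea, not the authors' hybrid algorithms; the function name and toy data are invented for illustration.

```python
# Minimal, hypothetical sketch of Warp-style line assignment via dynamic time warping.
# Not the authors' algorithm; the function name and the toy data are invented.
import numpy as np

def dtw_assign_lines(fix_y, line_y):
    """Align fixation vertical positions to text-line positions with textbook DTW."""
    n, m = len(fix_y), len(line_y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(fix_y[i - 1] - line_y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # several fixations on one line
                                 cost[i, j - 1],      # a line with no fixations
                                 cost[i - 1, j - 1])  # advance together
    # Backtrack to recover the warping path (fixation index -> line index).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

fixations = [102, 98, 110, 151, 148, 205]   # toy vertical fixation positions (px)
lines = [100, 150, 200]                      # toy known line positions (px)
print(dtw_assign_lines(fixations, lines))
```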
Jens Madsen; Lucas C. Parra Bidirectional brain-body interactions during natural story listening Journal Article In: Cell Reports, vol. 43, no. 4, pp. 1–28, 2024. @article{Madsen2024, Narratives can synchronize neural and physiological signals between individuals, but the relationship between these signals, and the underlying mechanism, is unclear. We hypothesized a top-down effect of cognition on arousal and predicted that auditory narratives will drive not only brain signals but also peripheral physiological signals. We find that auditory narratives entrained gaze variation, saccade initiation, pupil size, and heart rate. This is consistent with a top-down effect of cognition on autonomic function. We also hypothesized a bottom-up effect, whereby autonomic physiology affects arousal. Controlled breathing affected pupil size, and heart rate was entrained by controlled saccades. Additionally, fluctuations in heart rate preceded fluctuations of pupil size and brain signals. Gaze variation, pupil size, and heart rate were all associated with anterior-central brain signals. Together, these results suggest bidirectional causal effects between peripheral autonomic function and central brain circuits involved in the control of arousal. |
Federica Magnabosco; Olaf Hauk An eye on semantics: A study on the influence of concreteness and predictability on early fixation durations Journal Article In: Language, Cognition and Neuroscience, vol. 39, no. 3, pp. 302–316, 2024. @article{Magnabosco2024, We used eye-tracking during natural reading to study how semantic control and representation mechanisms interact for the successful comprehension of sentences, by manipulating sentence context and single-word meaning. Specifically, we examined whether a word's semantic characteristic (concreteness) affects first fixation and gaze durations (FFDs and GDs) and whether it interacts with the predictability of a word. We used a linear mixed effects model including several possible psycholinguistic covariates. We found a small but reliable main effect of concreteness and replicated a predictability effect on FFDs, but we found no interaction between the two. The results parallel previous findings of additive effects of predictability (context) and frequency (lexical level) in fixation times. Our findings suggest that the semantics of a word and the context created by the preceding words additively influence early stages of word processing in natural sentence reading. |
Sasu Mäkelä; Jan Kujala; Pauliina Ojala; Jukka Hyönä; Riitta Salmelin Naturalistic reading of multi-page texts elicits spatially extended modulation of oscillatory activity in the right hemisphere Journal Article In: Scientific Reports, vol. 14, no. 1, pp. 1–11, 2024. @article{Maekelae2024a, The study of the cortical basis of reading has greatly benefited from the use of naturalistic paradigms that permit eye movements. However, due to the short stimulus lengths used in most naturalistic reading studies, it remains unclear how reading of texts comprising more than isolated sentences modulates cortical processing. To address this question, we used magnetoencephalography to study the spatiospectral distribution of oscillatory activity during naturalistic reading of multi-page texts. In contrast to previous results, we found abundant activity in the right hemisphere in several frequency bands, whereas reading-related modulation of neural activity in the left hemisphere was quite limited. Our results show that the role of the right hemisphere may be importantly emphasized as the reading process extends beyond single sentences. |
Hillarie Man; Adam J. Parker; J. S. H. Taylor Flexible letter-position coding in Chinese-English L2 bilinguals: Evidence from eye movements Journal Article In: Quarterly Journal of Experimental Psychology, vol. 77, no. 12, pp. 2497 –2515, 2024. @article{Man2024, Theories suggest that efficient recognition of English words depends on flexible letter-position coding, demonstrated by the fact that transposed-letter primes (e.g., JUGDE-judge) facilitate written word recognition more than substituted-letter primes (e.g., JUFBE-judge). The multiple route model predicts that reading experience should drive more flexible letter-position coding as readers transition from decoding words letter-by-letter to recognising words as wholes. This study therefore examined whether letter-position is coded flexibly in second-language English sentence reading for native Chinese speakers, and if this is influenced by English proficiency. Eye movements were measured while 54 adult native Chinese speakers read English sentences including either a real word (e.g., cheaply), a transposed-letter nonword (e.g., “chepaly”), or a substituted-letter nonword (e.g., “chegely”). Flexible letter-position coding was observed in initial and later processing stages—reading times were longer for substituted-letter than transposed-letter nonwords. In addition, reading times were longer in both initial and later processing stages for transposed-letter nonwords than real words, indicating that, despite encoding letter-position flexibly, readers processed letter-position. Although pre-registered frequentist analyses suggested that English proficiency did not predict overall reading times, Bayes Factors indicated that there was evidence for such a relationship. It is therefore likely that this proficiency analysis suffered from low power. Finally, neither frequentist nor Bayes Factor analyses suggested that English proficiency influenced the difference in reading times between different target word types, i.e., the nature of letter-position coding. Overall, these results suggest that highly proficient L2 learners code letter-position flexibly. |
K. Maquate; Angela Patarroyo; Angelina Ioannidou-Tsiomou; Pia Knoeferle Age differences in spoken language comprehension: Verb-argument and formality-register congruence influence real-time sentence processing Journal Article In: Discourse Processes, pp. 1–23, 2024. @article{Maquate2024, Using the Visual World Paradigm, we investigated participants' processing of formality register and verb-argument (in)congruent sentences. Crucially, we tested whether individual differences influence sentence processing by taking participants' age (18–45 years) and their social status (high vs. low) into account. Participants listened to German context sentences that set up formal (e.g. Elegantly dressed says Peter:) or informal (e.g. Sloppily dressed rambles Peter:) situations while they looked at images that were associated either with a formal (e.g. a pair of fancy shoes and chic clothes) or informal (e.g. a pair of old shoes and casual clothes) context. Following the context sentence, they listened to a German target sentence (e.g. I'm soon tying my shoes [colloquial]). The verb in the target sentence imposed semantic constraints on its arguments (e.g. tie has a good semantic fit with shoes but fits less well with clothes). The on-screen images represented candidate post-verbal referents (e.g. shoes or clothes), creating semantic congruence between the verb constraints and two out of four candidate referents. This verb-argument congruence factor was crossed with congruence between the formality of the context sentence and the (informal vs. more formal) register of the post-verbal argument (e.g. shoes [standard] vs. shoes [colloquial]). Our results show that participants take the formality of the context into account to inform anticipation of matching images on the screen. Moreover, the older the participants were, the more they took the formality of the context into account. All participants made use of the verb's restrictions: They anticipated and integrated the named object noun argument. Crucially, only younger but not middle-aged participants made use of the context sentence formality to further inform expectations of verb-argument congruence. Participants' social status did not influence register and verb-argument sentence processing. |
Ayelet McKyton; Deena Elul; Netta Levin Seeing in the dark: High-order visual functions under scotopic conditions Journal Article In: iScience, vol. 27, no. 2, pp. 1–12, 2024. @article{McKyton2024, It is unknown how and to what degree people function visually in almost complete darkness, where only rod photoreceptors are active (scotopic conditions). To explore this, we first tested scotopic acuity and crowding. We demonstrated the ∼1° foveal scotoma and found that crowding increases with eccentricity, resulting in optimal scotopic discrimination 2° into the periphery. We then investigated whether these limitations affect high-level foveal tasks. We recorded eye movements while testing reading and upright/inverted face matching under photopic and scotopic conditions. Under scotopic conditions, participants read accurately and showed a face inversion effect. Temporally, fixation durations were longer. Spatially, surprisingly, participants did not avert their gaze 2° into the periphery. Instead, they fixated on similar locations as under photopic conditions, locations that were shown to correlate with global perception. We propose that this result suggests global perception governs under scotopic conditions, and we discuss how receptive-field properties support this conclusion. |
Drew J. McLaughlin; Jackson S. Colvett; Julie M. Bugg; Kristin J. Van Engen Sequence effects and speech processing: cognitive load for speaker-switching within and across accents Journal Article In: Psychonomic Bulletin & Review, vol. 31, no. 1, pp. 1–11, 2024. @article{McLaughlin2024, Prior work in speech processing indicates that listening tasks with multiple speakers (as opposed to a single speaker) result in slower and less accurate processing. Notably, the trial-to-trial cognitive demands of switching between speakers or switching between accents have yet to be examined. We used pupillometry, a physiological index of cognitive load, to examine the demands of processing first (L1) and second (L2) language-accented speech when listening to sentences produced by the same speaker consecutively (no switch), a novel speaker of the same accent (within-accent switch), and a novel speaker with a different accent (across-accent switch). Inspired by research on sequential adjustments in cognitive control, we aimed to identify the cognitive demands of accommodating a novel speaker and accent by examining the trial-to-trial changes in pupil dilation during speech processing. Our results indicate that switching between speakers was more cognitively demanding than listening to the same speaker consecutively. Additionally, switching to a novel speaker with a different accent was more cognitively demanding than switching between speakers of the same accent. However, there was an asymmetry for across-accent switches, such that switching from an L1 to an L2 accent was more demanding than vice versa. Findings from the present study align with work examining multi-talker processing costs, and provide novel evidence that listeners dynamically adjust cognitive processing to accommodate speaker and accent variability. We discuss these novel findings in the context of an active control model and auditory streaming framework of speech processing. |
Bob McMurray; Francis X. Smith; Marissa Huffman; Kristin Rooff; John B. Muegge; Charlotte Jeppsen; Ethan Kutlu; Sarah Colby Underlying dimensions of real-time word recognition in cochlear implant users Journal Article In: Nature Communications, vol. 15, no. 1, pp. 1–19, 2024. @article{McMurray2024, Word recognition is a gateway to language, linking sound to meaning. Prior work has characterized its cognitive mechanisms as a form of competition between similar-sounding words. However, it has not identified dimensions along which this competition varies across people. We sought to identify these dimensions in a population of cochlear implant users with heterogeneous backgrounds and audiological profiles, and in a lifespan sample of people without hearing loss. Our study characterizes the process of lexical competition using the Visual World Paradigm. A principal component analysis reveals that people's ability to resolve lexical competition varies along three dimensions that mirror prior small-scale studies. These dimensions capture the degree to which lexical access is delayed (“Wait-and-See”), the degree to which competition fully resolves (“Sustained-Activation”), and the overall rate of activation. Each dimension is predicted by different auditory skills and demographic factors (onset of deafness, age, cochlear implant experience). Moreover, each dimension predicts outcomes (speech perception in quiet and noise, subjective listening success) over and above auditory fidelity. Higher degrees of Wait-and-See and Sustained-Activation predict poorer outcomes. These results suggest the mechanisms of word recognition vary along a few underlying dimensions which help explain variable performance among listeners encountering auditory challenge. |
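As a rough illustration of the dimensionality-reduction step described in this abstract (a sketch only, not the authors' analysis; the toy fixation curves and variable names are invented), per-participant target-fixation curves from a Visual World Paradigm task can be reduced to a few scores with a principal component analysis:

```python
# Hypothetical sketch, not the authors' pipeline: summarise each participant's
# target-fixation curve with a small number of principal component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy data: 40 participants x 100 time bins of fixation proportions to the target.
curves = np.clip(rng.normal(0.5, 0.15, size=(40, 100)).cumsum(axis=1) / 50, 0, 1)

pca = PCA(n_components=3)
scores = pca.fit_transform(StandardScaler().fit_transform(curves))
print(pca.explained_variance_ratio_)  # share of variance captured per dimension
print(scores.shape)                   # (participants, 3): one score per dimension
```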
Gregory D. Keating Normalization of timed measures in bilingualism research: Make it optimal with the Box-Cox transformation Journal Article In: Linguistic Approaches to Bilingualism, pp. 1–20, 2024. @article{Keating2024, The time it takes an individual to respond to a probe (e.g., a word, picture, or question) or to read a word or phrase provides useful insights into cognitive processes. Consequently, timed measures are a staple in bilingualism research. However, timed measures usually violate assumptions of linear models, one being normal distribution of the residuals. Power transformations are a common solution but which of the many possible transformations to apply is often guesswork. Box and Cox (1964) developed a procedure to estimate the best-fitting normalizing transformation, coefficient lambda (λ), that is easy to run using standard R packages. This practical primer demonstrates how to perform the Box-Cox transformation in R using as a testbed the distractor items from a recent eye-tracking study on sentence reading in speakers of Spanish as a majority and a heritage language. The analyses show (a) that the exponents selected via the Box-Cox procedure reduce positive skewness as well as or better than the natural log; (b) that the best-fitting value of λ varies based on factors such as group and, in the case of eye-movement data, the measure of interest; and (c) that the choice of transformation sometimes impacts p values for model estimates. |
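The primer itself demonstrates the procedure in R; purely to illustrate the same idea, the best-fitting λ can also be estimated in Python with SciPy. The reaction times below are simulated, not data from the study.

```python
# Illustrative only: estimate the Box-Cox lambda for a positively skewed timed
# measure. Simulated "reaction times"; not the study's R code or data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt = rng.lognormal(mean=6.3, sigma=0.4, size=500)   # skewed reaction times (ms)

transformed, lam = stats.boxcox(rt)                  # lambda chosen by maximum likelihood
print(f"estimated lambda: {lam:.2f}")
print(f"skew before: {stats.skew(rt):.2f}, after: {stats.skew(transformed):.2f}")
```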
Hyoju Kim; Annie Tremblay; Taehong Cho Perceptual cue weighting matters in real-time integration of acoustic information during spoken word recognition Journal Article In: Cognitive Science, vol. 48, pp. 1–38, 2024. @article{Kim2024a, This study investigates whether listeners' cue weighting predicts their real-time processing of asynchronous acoustic information as the speech signal unfolds over time. It does so by testing the time course of acoustic cue integration in the processing of Seoul Korean stop contrasts by native listeners. The current study further tests whether listeners' cue weighting is associated with cue integration at the individual level. Seoul Korean listeners' (n = 62) perceptual weightings of voice onset time (VOT, available earlier in time) and onset fundamental frequency of the following vowel (F0, available later in time) to perceive Korean stop contrasts were measured with a speech perception task (Experiment 1), and the timing of VOT integration in lexical access was examined with a visual-world eye-tracking task (Experiment 2). The group results revealed that the timing of VOT integration is predicted by listeners' reliance on F0, with delayed integration of VOT in target-competitor pairs where F0 is a primary cue to process the stop contrast. At the individual level, listeners who relied more on F0 than on VOT showed later integration of VOT, further elucidating the relationship between cue weighting and the time course of cue integration. These results suggest that listeners' real-time processing of asynchronous acoustic information in lexical activation is modulated by the informativeness of perceptual cues. As such, this study provides a nuanced perspective for a better understanding of listeners' moment-by-moment processing of acoustic information in spoken word recognition. |
Hyunwoo Kim; Kitaek Kim; Joonhee Kim The role of processing goals in second language predictive processing Journal Article In: Studies in Second Language Acquisition, pp. 1–21, 2024. @article{Kim2024b, This study investigates how second language (L2) learners engage in prediction based on their processing goals. While prediction is a prominent feature of human sentence comprehension in first–language speakers, it remains less understood when and how L2 learners engage in predictive processing. By conducting a visual–world eye–tracking experiment involving Chinese–speaking L2 learners of Korean, we tested the hypothesis that L2 learners determine whether to engage in prediction by evaluating the costs and benefits of anticipatory processing. The experiment specifically focused on the impact of a top–down comprehension goal for L2 learners' predictive use of an honorific form in Korean by providing them with different types of task instruction. Our results indicated that all groups engaged in predictive processing in early and entire predictive regions. However, in the late predictive region, L2 learners presented with a prediction–oriented task, but not those with a simple comprehension task, actively generated expectations about the honorific status of an upcoming referent. These findings lend support to the utility account of L2 prediction, suggesting that L2 learners' engagement in prediction depends on their current goals and strategies for processing efficiency. |
Julie A. Kirkby Parafoveal processing and transposed-letter effects in developmental dyslexic reading Journal Article In: Dyslexia, vol. 31, pp. 1–25, 2024. @article{Kirkby2024, During reading, adults and children independently parafoveally encode letter identity and letter position information using a flexible letter position encoding mechanism. The current study examined parafoveal encoding of letter position and letter identity for dyslexic children. Eye movements were recorded during a boundary-change paradigm. Parafoveal previews were either an identity preview (e.g., nearly), a transposed-letter preview (e.g., enarly) or a substituted-letter preview (e.g., acarly). Dyslexic readers showed a preview benefit for identity previews, indicating that orthographic information was encoded parafoveally. Furthermore, dyslexic readers benefitted from transposed-letter previews more than substituted-letter previews, demonstrating that letter identity was encoded independently to letter position during parafoveal processing. Although a transposed-letter effect was found for dyslexic readers, they demonstrated a reduced sensitivity to detect transposed letters in later measures of reading, that is, go-past times, relative to that found for typically developing readers. We conclude that dyslexic readers, with less rich and fully specified lexical representations, have a reduced sensitivity to transpositions of the first two letters of the upcoming word in preview. These findings are compatible with the view that orthographic representations of dyslexic children are not sufficiently specified. |
Oren Kobo; Aya Meltzer-Asscher; Jonathan Berant; Tom Schonberg Classification of depression tendency from gaze patterns during sentence reading Journal Article In: Biomedical Signal Processing and Control, vol. 93, pp. 1–9, 2024. @article{Kobo2024, Background: Depression is a common and disabling mental health disorder, which impacts hundreds of millions of people worldwide. Current diagnosis methods rely almost solely on self-report and are prone to subjectivity and biases. In recent years, computational psychiatry has employed advanced sensing technology, utilizing rich data, to train accurate algorithms to detect depression from passive, non-invasive physiological markers. Gaze-tracking is used to collect cognitive data with high temporal resolution and offers a surrogate to underlying processes such as attention distribution, making it particularly useful for classification of attention-related cognitive abnormalities, including depression. Methods: We used data from gaze-tracking while participants were engaged in sentence reading to build a classifier for depression tendency. We created sentences constructed to highlight expected attention biases in depression. We recorded gaze data during reading from a sample of 101 participants and analyzed the data as a raw time-series. We used the validated PHQ-9 questionnaire to obtain depression levels per participant. Results: Using LSTMs (Long Short-Term Memory Artificial Neural Network) and Random Forest analysis techniques we were able to reach above chance classification (60+%) of depression tendency levels from the gaze patterns. Limitations: A replication with more participants is needed. Data was collected among undergraduate students and was conducted only in Hebrew. Individual assessment was not validated against clinical data. Conclusions: The results can lead to potential data-driven and accessible diagnosis tools that will support and monitor depression treatment and rehabilitation. |
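As a loose sketch of the classification approach summarised in this abstract (not the authors' model; the gaze features, the PHQ-9 cut-off, and the data below are all invented), one of the two reported techniques, a Random Forest, can be trained on per-participant gaze summaries:

```python
# Rough illustration only: classify a hypothetical "depression tendency" label
# from gaze summary features with a Random Forest. Features, cut-off, and data
# are invented; the study also used LSTMs on raw gaze time-series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 101
# Hypothetical per-participant features, e.g. mean fixation duration, regression
# rate, dwell time on negatively valenced words, mean saccade amplitude.
X = rng.normal(size=(n, 4))
phq9 = rng.integers(0, 27, size=n)            # simulated PHQ-9 scores
y = (phq9 >= 10).astype(int)                   # assumed binary cut-off for illustration

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on random toy data
```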
Anna Krason; Erica L. Middleton; Matthew E. P. Ambrogi; Malathi Thothathiri Conflict adaptation in aphasia: Upregulating cognitive control for improved sentence comprehension Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 67, no. 11, pp. 4411–4430, 2024. @article{Krason2024, PURPOSE: This study investigated conflict adaptation in aphasia, specifically whether upregulating cognitive control improves sentence comprehension. METHOD: Four individuals with mild aphasia completed four eye tracking sessions with interleaved auditory Stroop and sentence-to-picture matching trials (critical and filler sentences). Auditory Stroop congruency (congruent/incongruent across a male/female voice saying "boy"/"girl") was crossed with sentence congruency (syntactically correct sentences that are semantically plausible/implausible), resulting in four experimental conditions (congruent auditory Stroop followed by incongruent sentence [CI], incongruent auditory Stroop followed by incongruent sentence [II], congruent auditory Stroop followed by congruent sentence [CC], and incongruent auditory Stroop followed by congruent sentence [IC]). Critical sentences were always preceded by auditory Stroop trials. At the end of each session, a five-item questionnaire was administered to assess overall well-being and fatigue. We conducted individual-level mixed-effects regressions on reaction times and growth curve analyses on the proportion of eye fixations to target pictures during incongruent sentences. RESULTS: One participant showed conflict adaptation indicated by faster reaction times on active sentences and more rapid growth in fixations to target pictures on passive sentences in the II condition compared to the CI condition. Incongruent auditory Stroop also modulated active-sentence processing in an additional participant, as indicated by eye movements. CONCLUSIONS: This is the first study to observe conflict adaptation in sentence comprehension in people with aphasia. The extent of adaptation varied across individuals. Eye tracking revealed subtler effects than overt behavioral measures. The results extend the study of conflict adaptation beyond neurotypical adults and suggest that upregulating cognitive control may be a potential treatment avenue for some individuals with aphasia. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.27056149. |
Emmanouil Ktistakis; Angeliki Gleni; Miltiadis K. Tsilimbaris; Sotiris Plainis Comparing silent reading performance for single sentences and paragraphs: An eye movement-based analysis Journal Article In: Clinical and Experimental Optometry, vol. 107, no. 4, pp. 449–456, 2024. @article{Ktistakis2024, Clinical relevance: Reading performance in clinical practice is commonly evaluated by reading ‘aloud' and ‘as fast as possible' single sentences. Assessing comprehensive silent reading performance using passages, composed of multiple sentences, is the preferred reading mode in real-life reading conditions. Background: The purpose of this study was to compare eye movement-based silent reading performance for standardised short sentences and paragraphs. Methods: A group of 15 young volunteers (age range: 22–36 years) read silently and comprehensively in two sessions: (a) a paragraph with continuous text and (b) standardised short sentences. Text print size was 0.4 logMAR (1.0 M at 40 cm distance). Eye movements during reading were recorded using video oculography (EyeLink II, SR Research Ltd). Data analysis included computation of reading speed, fixation duration, the number of fixations, saccadic amplitude and percentage of regressions. Moreover, frequency distributions of fixation durations were analysed with ex-Gaussian fittings. Results: Repeatability coefficient in silent reading speed was found better for the paragraph (66 wpm) than for short sentences (88 wpm). The superiority in repeatability coefficient for the corresponding eye movement parameters, i.e. fixation duration (35 vs 73 ms), regressions (10.1 vs. 22.3%) and fixations per word (0.21 vs. 0.37 fpw), was even more pronounced. In addition, a statistically significant improvement with the paragraph was found in average fixation duration (19 ± 26 ms |
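For readers unfamiliar with the ex-Gaussian fitting mentioned in this abstract, one common option (an assumption here, not necessarily the software used in the study) is SciPy's exponentially modified normal distribution, which is parameterised by K = τ/σ rather than τ directly:

```python
# Illustrative ex-Gaussian fit to simulated fixation durations; not the study's code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
durations = rng.normal(200, 30, 1000) + rng.exponential(60, 1000)  # toy durations (ms)

K, mu, sigma = stats.exponnorm.fit(durations)  # exponnorm returns (K, loc, scale)
tau = K * sigma                                # recover the exponential component
print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")
```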
Justin B. Kueser; Arielle Borovsky; Patricia Deevy; Mine Muezzinoglu; Claney Outzen; Laurence B. Leonard Verb vocabulary supports event probability use in developmental language disorder Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 67, no. 5, pp. 1490–1513, 2024. @article{Kueser2024, Purpose: Children with developmental language disorder (DLD) tend to interpret noncanonical sentences like passives using event probability (EP) information regardless of structure (e.g., by interpreting "The dog was chased by the squirrel" as "The dog chased the squirrel"). Verbs are a major source of EP information in adults and children with typical development (TD), who know that "chase" implies an unequal relationship among participants. Individuals with DLD have poor verb knowledge and verb-based sentence processing. Yet, they also appear to rely more on EP information than their peers. This paradox raises two questions: (a) How do children with DLD use verb-based EP information alongside other information in online passive sentence interpretation? (b) How does verb vocabulary knowledge support EP information use? Method: We created novel EP biases by showing animations of agents with consistent action tendencies (e.g., clumsy vs. helpful actions). We then used eye tracking to examine how this EP information was used during online passive sentence processing. Participants were 4- to 5-year-old children with DLD (n = 20) and same-age peers with TD (n = 20). Results: In Experiment 1, children with DLD quickly integrated verb-based EP information with morphosyntax close to the verb but failed to do so with distant morphosyntax. In Experiment 2, the quality of children's sentence-specific verb vocabulary knowledge was positively associated with the use of EP information in both groups. Conclusion: Depending on the morphosyntactic context, children with DLD and TD used EP information differently, but verb vocabulary knowledge aided its use. |
Jan Kujala; Sasu Mäkelä; Pauliina Ojala; Jukka Hyönä; Riitta Salmelin Beta- and gamma-band cortico-cortical interactions support naturalistic reading of continuous text Journal Article In: European Journal of Neuroscience, vol. 59, no. 2, pp. 238–251, 2024. @article{Kujala2024, Large-scale integration of information across cortical structures, building on neural connectivity, has been proposed to be a key element in supporting human cognitive processing. In electrophysiological neuroimaging studies of reading, quantification of neural interactions has been limited to the level of isolated words or sentences due to artefacts induced by eye movements. Here, we combined magnetoencephalography recording with advanced artefact rejection tools to investigate both cortico-cortical coherence and directed neural interactions during naturalistic reading of full-page texts. Our results show that reading versus visual scanning of text was associated with wide-spread increases of cortico-cortical coherence in the beta and gamma bands. We further show that the reading task was linked to increased directed neural interactions compared to the scanning task across a sparse set of connections within a wide range of frequencies. Together, the results demonstrate that neural connectivity flexibly builds on different frequency bands to support continuous natural reading. |
Victor Kuperman; Sascha Schroeder; Daniil Gnetov Word length and frequency effects on text reading are highly similar in 12 alphabetic languages Journal Article In: Journal of Memory and Language, vol. 135, pp. 1–15, 2024. @article{Kuperman2024, Reading research robustly finds that shorter and more frequent words are recognized faster and skipped more often than longer and less frequent words. An empirical question that has not been tested yet is whether languages within the same writing system would produce similarly strong length and frequency effects or whether typological differences between written languages would cause those effects to vary systematically in their magnitude. We analyzed text reading eye-movement data in 12 alphabetic languages from the Multilingual Eye-Movement Corpus (MECO). The languages varied substantially in their word length and frequency distributions as a function of their orthographic depth and morpho-syntactic type. Yet, the effects of word length and frequency on fixation durations and skipping rate were highly similar in size between the languages. This finding suggests a high degree of cross-linguistic universality in the readers' behavioral response to linguistic complexity (indexed by word length) and the amount of experience with the word (indexed by word frequency). These findings run counter to influential theories of single word recognition, which predict orthographic depth of a language to modulate the size of these benchmark effects. They also facilitate development of cross-linguistically generalizable computational models of eye-movement control in reading. |
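Below is a generic sketch of the kind of mixed-effects regression commonly fit to length and frequency effects in eye-movement corpora; the formula, variable names, and simulated data are assumptions for illustration, not the authors' exact specification or the MECO data.

```python
# Generic sketch: log gaze duration regressed on word length and log frequency,
# with random intercepts for participants. Variable names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(30), 200),
    "length": rng.integers(2, 12, 6000),
    "log_freq": rng.normal(3, 1, 6000),
})
df["log_gaze"] = 5.4 + 0.03 * df["length"] - 0.05 * df["log_freq"] + rng.normal(0, 0.3, 6000)

model = smf.mixedlm("log_gaze ~ length + log_freq", df, groups=df["subject"]).fit()
print(model.summary())  # fixed effects give the length and frequency slopes
```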
Ethan Kutlu; Jamie Klein-Packard; Charlotte Jeppsen; J. Bruce Tomblin; Bob McMurray The development of real-time spoken and written word recognition derives from changes in ability, not maturation Journal Article In: Cognition, vol. 251, pp. 1–28, 2024. @article{Kutlu2024, In typical adults, recognizing both spoken and written words is thought to be served by a process of competition between candidates in the lexicon. In recent years, work has used eye-tracking in the visual world paradigm to characterize this competition process over development. It has shown that both spoken and written word recognition continue to develop through adolescence (Rigler et al., 2015). It is still unclear what drives these changes in real-time word recognition over the school years, as there are dramatic changes in language, the onset of reading instruction, and gains in domain general function during this time. This study began to address these issues by asking whether changes in real-time word recognition derive from changes in overall language and reading ability or reflect more general age-related development. This cross-sectional study examined 278 school-age children (Grades 1–3) using the Visual World Paradigm to assess both spoken and written word recognition, along with multiple measures of language, reading and phonology. A structural equation model applied to these ability measures found three factors representing language, reading, and phonology. Multiple regression analyses were used to understand how these three factors relate to real-time spoken and written word recognition as well as a non-linguistic variant of the VWP intended to capture decision speed, eye-movement factors, and other non-language/reading differences. We found that for both spoken and written word recognition, the speed of activating target words in both domains was more closely tied to the relevant ability (e.g., reading for written word recognition) than was age. We also examined competition resolution (how fully competitors were suppressed late in processing). Here, spoken word recognition showed only small, developmental effects that were only related to phonological processing, suggesting links to developmental language disorder. However, in written word recognition, competitor resolution showed large impacts of development which were strongly linked to reading. This suggests the dimensionality of real-time lexical processing may differ across domains. Importantly, neither spoken nor written word recognition is fully described by changes in non-linguistic skills assessed with non-linguistic VWP, and the non-linguistic VWP was linked to differences in language and reading. These findings suggest that spoken and written word recognition continue past the first year of life and are mostly driven by ability and not only by overall maturation. |
Marianna Kyriacou Not batting an eye: Figurative meanings of L2 idioms do not interfere with literal uses Journal Article In: Languages, vol. 9, no. 32, pp. 1–15, 2024. @article{Kyriacou2024, Encountering idioms (hit the sack = “go to bed”) in a second language (L2) often results in a literal-first understanding (“literally hit a sack”). The figurative meaning is retrieved later, subject to idiom familiarity and L2 proficiency, and typically at a processing cost. Intriguingly recent findings report the overextension of idiom use in inappropriate contexts by advanced L2 users, with greater L2 proficiency somewhat mitigating this effect. In this study, we tested the tenability of this finding by comparing eye-movement patterns for idioms used literally, vs. literal control phrases (hit the dirt) in an eye-tracking-while-reading paradigm. We hypothesised that if idiom overextension holds, processing delays should be observed for idioms, as the (over)activated but contextually irrelevant figurative meanings would cause interference. In contrast, unambiguous control phrases should be faster to process. The results demonstrated undifferentiated processing for idioms used literally and control phrases across measures, with L2 proficiency affecting both similarly. Therefore, the findings do not support the hypothesis that advanced L2 users overextend idiom use in inappropriate contexts, nor that L2 proficiency modulates this tendency. The results are also discussed in light of potential pitfalls pertaining to idiom priming under typical experimental settings. |
Hend Lahoud; Zohar Eviatar; Hamutal Kreiner Eye-movement patterns in skilled Arabic readers: Effects of specific features of Arabic versus universal factors Journal Article In: Reading and Writing, vol. 37, no. 5, pp. 1079–1108, 2024. @article{Lahoud2024, This study aims to shed light on the contribution of universal versus language-specific factors to reading. We examined eye movements of Arabic readers and analyzed effects specific to Arabic such as perceptual complexity, diglossia and morphology, in addition to universal factors such as word length and frequency. Twenty native Arabic speakers read continuous texts in Modern Standard Arabic (MSA) while their eye movements were monitored. A corpus-based analysis was carried out to test effects specific to Arabic and effects of the benchmark eye movement factors. We found that perceptually more complex words received longer fixation durations; moreover, differences in processing words unique to MSA versus words shared between MSA and spoken Arabic Vernacular were found. This is the first indication for these effects during an eye movement reading task. However, the effect of morphological length was not significant when included in the model with all predictors. Lastly, the benchmark factors were significant, showing effects for word length, word frequency and part of speech. Short and frequent words are processed faster than longer and less frequent words. Function words are often skipped. We conclude that the eye movements of Arabic readers reflect proficient reading, yet they also exhibit an on-going challenge in processing the written language. |
Anna Laurinavichyute; Anastasia Ziubanova; Anastasiya Lopukhina Eye-movement suppression in the visual world paradigm Journal Article In: Open Mind, vol. 8, no. 2011, pp. 1012–1036, 2024. @article{Laurinavichyute2024, Eye movements in the visual world paradigm are known to depend not only on linguistic input but on such factors as task, pragmatic context, affordances, etc. However, the degree to which eye movements may depend on task rather than on linguistic input is unclear. The present study for the first time tests how task constraints modulate eye movement behavior in the visual world paradigm by probing whether participants could refrain from looking at the referred image. Across two experiments with and without comprehension questions (total N = 159), we found that when participants were instructed to avoid looking at the referred images, the probability of fixating these reduced from 58% to 18% while comprehension scores remained high. Although language-mediated eye movements could not be suppressed fully, the degree of possible decoupling of eye movements from language processing suggests that participants can withdraw at least some looks from the referred images when needed. If they do so to different degrees in different experimental conditions, comparisons between conditions might be compromised. We discuss some cases where participants could adopt different viewing behaviors depending on the experimental condition, and provide some tentative ways to test for such differences. |
Charlotte E. Lee; Hayward J. Godwin; Denis Drieghe The jingle fallacy in comprehension tests for reading Journal Article In: PLoS ONE, vol. 19, no. 7, pp. 1–22, 2024. @article{Lee2024, The Jingle fallacy is the false assumption that instruments which share the same name measure the same underlying construct. In this experiment, we focus on the comprehension subtests of the Nelson Denny Reading Test (NDRT) and the Wechsler Individual Achievement Test (WIAT-II). 91 university students read passages for comprehension whilst their eye movements were recorded. Participants took part in two experimental blocks of which the order was counterbalanced, one with higher comprehension demands and one with lower comprehension demands. We assumed that tests measuring comprehension would be able to predict differences observed in eye movement patterns as a function of varying comprehension demands. Overall, readers were able to adapt their reading strategy to read more slowly, making more and longer fixations, coupled with shorter saccades when comprehension demands were higher. Within an experimental block, high scorers on the NDRT were able to consistently increase their pace of reading over time for both higher and lower comprehension demands, whereas low scorers approached a threshold where they could not continue to increase their reading speed or further reduce the number of fixations to read a text, even when comprehension demands were low. Individual differences based on the WIAT-II did not explain similar patterns. The NDRT comprehension test was therefore more predictive of differences in the reading patterns of skilled adult readers in response to comprehension demands than the WIAT-II (which also suffered from low reliability). Our results revealed that these different comprehension measures should not be used interchangeably, and researchers should be cautious when choosing reading comprehension tests for research. |
Charlotte E. Lee; Ascensión Pagán; Hayward J. Godwin; Denis Drieghe Individual differences and the transposed letter effect during reading Journal Article In: PLoS ONE, vol. 19, no. 2, pp. 1–21, 2024. @article{Lee2024a, When a preview contains substituted letters (SL; markey) word identification is more disrupted for a target word (monkey), compared to when the preview contains transposed letters (TL; mnokey). The transposed letter effect demonstrates that letter positions are encoded more flexibly than letter identities, and is a robust finding in adults. However, letter position encoding has been shown to gradually become more flexible as reading skills develop. It is unclear whether letter position encoding flexibility reaches maturation in skilled adult readers, or whether some differences in the magnitude of the TL effect remain in relation to individual differences in cognitive skills. We examined 100 skilled adult readers who read sentences containing a correct, TL or SL preview. Previews were replaced by the correct target word when the reader's gaze triggered an invisible boundary. Cognitive skills were assessed and grouped based on overlapping variance via Principal Components Analysis (PCA) and subsequently used to predict eye movement measures for each condition. Consistent with previous literature, adult readers were found to generally encode letter position more flexibly than letter identity. Very few differences were found in the magnitude of TL effects between adults based on individual differences in cognitive skills. The flexibility of letter position encoding appears to reach maturation (or near maturation) in skilled adult readers. |
Hsing Hao Lee; Karleigh Groves; Pablo Ripollés; Marisa Carrasco Audiovisual integration in the McGurk effect is impervious to music training Journal Article In: Scientific Reports, vol. 14, no. 1, pp. 1–9, 2024. @article{Lee2024b, The McGurk effect refers to an audiovisual speech illusion where the discrepant auditory and visual syllables produce a fused percept between the visual and auditory component. However, little is known about how individual differences contribute to the McGurk effect. Here, we examined whether music training experience—which involves audiovisual integration—can modulate the McGurk effect. Seventy-three participants completed the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire to evaluate their music expertise on a continuous scale. Gold-MSI considers participants' daily-life exposure to music learning experiences (formal and informal), instead of merely classifying people into different groups according to how many years they have been trained in music. Participants were instructed to report, via a 3-alternative forced choice task, “what a person said”: /Ba/, /Ga/ or /Da/. The experiment consisted of 96 audiovisual congruent trials and 96 audiovisual incongruent (McGurk) trials. We observed no significant correlations between the susceptibility of the McGurk effect and the different subscales of the Gold-MSI (active engagement, perceptual abilities, music training, singing abilities, emotion) or the general musical sophistication composite score. Together, these findings suggest that music training experience does not modulate audiovisual integration in speech as reflected by the McGurk effect. |
Hyeri Lee; Yoomi Choi; Jee Eun Sung Age-related changes in connected speech production: Evidence from eye-tracking in the culturally adapted picture description task Journal Article In: Frontiers in Psychology, vol. 15, pp. 1–14, 2024. @article{Lee2024d, Purpose: Age-related changes in connected speech production remain a subject of debate, yielding inconsistent findings across various tasks and measures. This study aimed to investigate the effects of aging on picture description tasks using two types of pictures: a standardized picture (the Beach picture) and a culturally and linguistically modified picture tailored for Korean speakers (the Han River picture). Method: Twenty-four young adults and 22 older adults participated in two picture description tasks while their eye movements were recorded. Word-level linguistic variables were used to assess informativeness (Correct Information Units per minute) and productivity (noun and verb counts per utterance) of connected speech production. Eye-movement measures were employed to evaluate real-time cognitive processing associated with planning connected speech (pre-speech fixation counts and durations; eye fixations before the speech onset of each utterance). Results and conclusions: The findings revealed age-related declines in linguistic measures, with older adults exhibiting decreased CIUs per minute and smaller counts of nouns and verbs per utterance. Age-related changes in eye movement measures were evident in that older adults displayed longer pre-speech fixation durations. Unlike younger adults, older adults exhibited higher pre-speech fixation counts on the Han River picture compared to the Beach picture, suggesting cognitive challenges in performing the task that requires producing more words and detailed descriptions. These results suggest that aging is associated with reduced informativeness and productivity of connected speech, as well as a decline in cognitive processing efficiency. |
Victoria Lai Cheng Lei; Teng Ieng Leong; Cheok Teng Leong; Lili Liu; Chi Un Choi; Martin I. Sereno; Defeng Li; Ruey Song Huang Phase-encoded fMRI tracks down brainstorms of natural language processing with subsecond precision Journal Article In: Human Brain Mapping, vol. 45, no. 2, pp. 1–23, 2024. @article{Lei2024, Natural language processing unfolds information over time as spatially separated, multimodal, and interconnected neural processes. Existing noninvasive subtraction-based neuroimaging techniques cannot simultaneously achieve the spatial and temporal resolutions required to visualize ongoing information flows across the whole brain. Here we have developed rapid phase-encoded designs to fully exploit the temporal information latent in functional magnetic resonance imaging data, as well as overcoming scanner noise and head-motion challenges during overt language tasks. We captured real-time information flows as coherent hemodynamic waves traveling over the cortical surface during listening, reading aloud, reciting, and oral cross-language interpreting tasks. We were able to observe the timing, location, direction, and surge of traveling waves in all language tasks, which were visualized as “brainstorms” on brain “weather” maps. The paths of hemodynamic traveling waves provide direct evidence for dual-stream models of the visual and auditory systems as well as logistics models for crossmodal and cross-language processing. Specifically, we have tracked down the step-by-step processing of written or spoken sentences first being received and processed by the visual or auditory streams, carried across language and domain-general cognitive regions, and finally delivered as overt speeches monitored through the auditory cortex, which gives a complete picture of information flows across the brain during natural language functioning. |
Lin Li; Lingshan Bao; Zhuoer Li; Sha Li; Jingyi Liu; Pin Wang; Kayleigh L. Warrington; Sarah Gunn; Kevin B. Paterson Efficient word segmentation is preserved in older adult readers: Evidence from eye movements during Chinese reading Journal Article In: Psychology and Aging, vol. 39, no. 3, pp. 215–230, 2024. @article{Li2024b, College-aged readers use efficient strategies to segment and recognize words in naturally unspaced Chinese text. Whether this capability changes across the adult lifespan is unknown, although segmenting words in unspaced text may be challenging for older readers due to visual and cognitive declines in older age, including poorer parafoveal processing of upcoming characters. Accordingly, we conducted two eye movement experiments to test for age differences in word segmentation, each with 48 young (18–30 years) and 36 older (65+ years) native Chinese readers. Following Zhou and Li (2021), we focused on the processing of “incremental” three-character words, like 幼儿园 (meaning “kindergartens”), which contain an embedded two-character word (e.g., 幼儿, meaning “children”). In Experiment 1, either the three-character word or its embedded word was presented as the target word in sentence contexts where the three-character word always was plausible, and the embedded word was either plausible or implausible. Both age groups produced similar plausibility effects, suggesting age constancy in accessing the embedded word early during ambiguity processing before ultimately assigning an incremental word analysis. Experiment 2 provided further evidence that both younger and older readers access the embedded word early during ambiguity processing, but rapidly select the appropriate (incremental) word. Crucially, the findings suggest that word segmentation strategies do not differ with age. |
Nan Li; Suiping Wang; Florian Kornrumpf; Werner Sommer; Olaf Dimigen Parafoveal and foveal N400 effects in natural reading: A timeline of semantic processing from fixation-related potentials Journal Article In: Psychophysiology, vol. 61, no. 5, pp. 1–25, 2024. @article{Li2024c, The depth at which parafoveal words are processed during reading is an ongoing topic of debate. Recent studies using RSVP-with-flanker paradigms have shown that implausible words within sentences elicit an N400 component while they are still in parafoveal vision, suggesting that the semantics of parafoveal words can be accessed to rapidly update the sentence representation. To study this effect in natural reading, we combined the coregistration of eye movements and EEG with the deconvolution modeling of fixation-related potentials (FRPs) to test whether semantic plausibility is processed parafoveally during Chinese sentence reading. For one target word per sentence, both its parafoveal and foveal plausibility were orthogonally manipulated using the boundary paradigm. Consistent with previous eye movement studies, we observed a delayed effect of parafoveal plausibility on fixation durations that only emerged on the foveal word. Crucially, in FRPs aligned to the pretarget fixation, a clear N400 effect emerged already based on parafoveal plausibility, with more negative voltages for implausible previews. Once participants fixated the target, we again observed an N400 effect of foveal plausibility. Interestingly, this foveal N400 was absent whenever the preview had been implausible, indicating that when a word's (im)plausibility is already processed in parafoveal vision, this information is not revised anymore upon direct fixation. Implausible words also elicited a late positive component (LPC), but exclusively when in foveal vision. Our results not only provide convergent neural and behavioral evidence for the parafoveal uptake of semantic information, but also indicate different contributions of parafoveal versus foveal information toward higher level sentence processing. |
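The deconvolution modeling of fixation-related potentials mentioned above can be illustrated, in heavily simplified form, as a time-expanded linear regression that separates overlapping fixation-locked responses. The sketch below is not the authors' pipeline (which models multiple predictors and channels); the sampling rate, fixation onsets, and single-channel data are made up for illustration.

```python
import numpy as np

srate = 100                                   # Hz
eeg = np.random.randn(60 * srate)             # 60 s of made-up single-channel EEG
onsets = np.arange(50, len(eeg) - 100, 37)    # fixation onsets (samples), overlapping in time
window = np.arange(0, 60)                     # lags 0-590 ms post-fixation, in samples

# Time-expanded design matrix: one column per post-fixation lag;
# overlapping fixations add up within the same rows.
X = np.zeros((len(eeg), len(window)))
for onset in onsets:
    for j, lag in enumerate(window):
        X[onset + lag, j] += 1.0

# Least-squares solution = overlap-corrected FRP waveform (one value per lag).
frp, *_ = np.linalg.lstsq(X, eeg, rcond=None)
```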
Tianyun Li; Agnieszka Chmiel Automatic subtitles increase accuracy and decrease cognitive load in simultaneous interpreting Journal Article In: Interpreting, vol. 26, no. 2, pp. 253–281, 2024. @article{Li2024e, This study examines the effect of real-time subtitles generated by automatic speech recognition (ASR) technology on interpreting accuracy and interpreters' cognitive load. Multiple measurements — including interpreting accuracy, the NASA-TLX for subjective ratings of cognitive load, eye-tracking and theta power as indicated by EEG recordings — were applied. Twenty-three professional simultaneous interpreters worked with a video recording of a speech presented in five conditions: a baseline without subtitles and then with subtitles of varying levels of precision (100%, 95%, 90% and 80%). The results reveal that the presence of subtitles significantly improved interpreting accuracy, with a suggested optimal precision rate of 90% or higher. The interpreters looked more at the subtitles, regardless of their level of precision, than the speaker. Contrary to our predictions, the presence of subtitles decreased, rather than increased, the cognitive load (although this outcome was shown by the EEG data only and not by the self-reported data). We conclude that the cognitive cost of processing subtitles as an additional information channel is offset by the cognitive gain achieved through visual prompting. The study highlights a complex effect of subtitles on interpreting, with such factors as subtitle presence and precision modulating the interpreters' cognitive load in such a workflow. |
Xinjing Li; Qingqing Qu Verbal working memory capacity modulates semantic and phonological prediction in spoken comprehension Journal Article In: Psychonomic Bulletin & Review, vol. 31, no. 1, pp. 249–258, 2024. @article{Li2024f, Mounting evidence suggests that people may use multiple cues to predict different levels of representation (e.g., semantic, syntactic, and phonological) during language comprehension. One question that has been less investigated is the relationship between general cognitive processing and the efficiency of prediction at various linguistic levels, such as semantic and phonological levels. To address this research gap, the present study investigated how working memory capacity (WMC) modulates different kinds of prediction behavior (i.e., semantic prediction and phonological prediction) in the visual world. Chinese speakers listened to the highly predictable sentences that contained a highly predictable target word, and viewed a visual display of objects. The visual display of objects contained a target object corresponding to the predictable word, a semantic or a phonological competitor that was semantically or phonologically related to the predictable word, and an unrelated object. We conducted a Chinese version of the reading span task to measure verbal WMC and grouped participants into high- and low-span groups. Participants showed semantic and phonological prediction with comparable size in both groups during language comprehension, with earlier semantic prediction in the high-span group, and a similar time course of phonological prediction in both groups. These results suggest that verbal working memory modulates predictive processing in language comprehension. |
Feifei Liang; Linlin Feng; Ying Liu; Xin Li; Xuejun Bai Different roles of initial and final character positional probabilities on incidental word learning during Chinese reading Journal Article In: Acta Psychologica Sinica, vol. 56, no. 3, pp. 281–294, 2024. @article{Liang2024a, In natural unspaced Chinese reading, there are no salient visual word segmentation cues (like word spaces) to demark where words begin or end, yet skilled Chinese readers process a comparable amount of text content as efficiently as English readers, processing roughly 400 characters (equal to 260 words) per minute. This raises the question of how Chinese readers engage in such word segmentation processing efficiently and effectively. Liang et al. (2015, 2017) have shown that the positional probability information associated with a character might offer a cue to the likely positions of word boundaries during Chinese incidental word learning. Given that they simultaneously manipulated the positional probabilities of both word-initial and word-final characters to make their manipulations maximally effective, it is unclear whether the initial, the final, or both constituent characters' positional probabilities contribute to the word segmentation and word identification effects during incidental word learning in Chinese reading. For this reason, in the present study, two parallel experiments were designed to directly investigate whether word-initial or word-final characters are more or less important for word segmentation and word learning in Chinese reading. Two-character pseudowords were constructed as novel words. Each novel word was embedded into six high-constraint contexts for readers to establish a novel lexical representation. In Experiment 1, we examined how a word's initial character positional probability influenced word segmentation and word identification during Chinese word learning. The initial character's positional probability of target words was manipulated as being either high or low, and the final character was kept identical across the two conditions. In Experiment 2, an analogous manipulation was made for the final character of the target word to check whether the final character positional probability of two-character words can be used as a word segmentation cue. We also included “Exposure” as a continuous variable in the model to further examine how the processing of initial and final character positional probabilities changed with exposure. In both experiments, participants had shorter reading times and made fewer fixations on targets that comprised initial and final characters with high relative to low positional probabilities, suggesting that the positional probability of both the initial and final character of a word influences segmentation commitments in novel word learning in Chinese reading. Furthermore, the effects of both the initial and final character positional probabilities of novel words decreased with exposure, showing the typical familiarity effect. However, the familiarity effect associated with the initial character had a slower time course than that associated with the final character. This finding suggests that the initial character's positional probability plays a more important role than the final character's, supporting the view that word-beginning constituents are more influential than word-final constituents during two-character word identification in Chinese reading. 
Based on these findings, the time course of processing initial and final character positional probabilities of novel words can be summarized as follows. During the early stage of word learning, the positional probabilities of both the initial and final characters are processed as segmentation cues. As lexical familiarity increases, the extent of this segmentation role decreases, first for the final character and then for the initial character. Finally, the roles of both initial and final character positional probabilities disappear as a more integrated representation of the novel word is established. |
Weiyan Liao; Janet Hui Hsiao Understanding the role of eye movement pattern and consistency in isolated English word reading through Hidden Markov Modeling Journal Article In: Cognitive Science, vol. 48, no. 9, pp. 1–29, 2024. @article{Liao2024a, In isolated English word reading, readers have the optimal performance when their initial eye fixation is directed to the area between the beginning and word center, that is, the optimal viewing position (OVP). Thus, how well readers voluntarily direct eye gaze to this OVP during isolated word reading may be associated with reading performance. Using Eye Movement analysis with Hidden Markov Models, we discovered two representative eye movement patterns during lexical decisions through clustering, which focused at the OVP and the word center, respectively. Higher eye movement similarity to the OVP-focusing pattern predicted faster lexical decision time in addition to cognitive abilities and lexical knowledge. However, the OVP-focusing pattern was associated with longer isolated single letter naming time, suggesting conflicting visual abilities required for identifying isolated letters and multi-letter words. In contrast, in both word and pseudoword naming, although clustering did not reveal an OVP-focused pattern, higher consistency of the first fixation as measured in entropy predicted faster naming time in addition to cognitive abilities and lexical knowledge. Thus, developing a consistent eye movement pattern focusing on the OVP is essential for word orthographic processing and reading fluency. This finding has important implications for interventions for reading difficulties. |
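The entropy measure of first-fixation consistency described above can be illustrated with a small sketch. The binning scheme, normalized landing positions, and data below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fixation_entropy(landing_positions, n_bins=5):
    """Shannon entropy (in bits) of first-fixation landing positions.
    landing_positions: positions normalized to [0, 1] across the word
    (0 = word beginning, 1 = word end). Lower entropy = more consistent."""
    counts, _ = np.histogram(landing_positions, bins=n_bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                                       # empty bins contribute nothing
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
consistent = rng.normal(0.35, 0.03, 200).clip(0, 1)    # clusters near the OVP
scattered = rng.uniform(0.0, 1.0, 200)                 # spread across the word
print(fixation_entropy(consistent), fixation_entropy(scattered))
```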
Huan Liu; Shifa Chen; Ruiyong Liu; Huinan Du In: Behavioral Sciences, vol. 14, no. 11, pp. 1–24, 2024. @article{Liu2024e, Implicit causality (IC) is a phenomenon where verbs supply information about the potential cause of the behavior or state to one of the antecedents (e.g., “Mary praised David because…” will continue about David, not Mary). The study examines the influence of IC information and second language (L2) proficiency on Chinese English learners' pronoun anaphoric inference. Results from an eye-tracking study showed that Chinese English learners can actively use IC information in making pronoun anaphoric inference. Additionally, compared to low-proficiency learners, high-proficiency learners spent less time on making pronoun anaphoric inference. The findings indicate that Chinese English learners can activate IC information early, before the disambiguating information appears, thus supporting the focusing account. Furthermore, L2 proficiency also affects this process. |
Yan Liu; Chuanbin Ni How semantics works in Chinese relative clause processing: Insights from eye tracking Journal Article In: Frontiers in Psychology, vol. 15, pp. 1–16, 2024. @article{Liu2024h, Recent years have witnessed much research on semantic analysis and syntactic anatomy in ordinary language processing. However, it is still a matter of considerable debate about when and how the semantic integration of single word meanings works and interacts with syntax during on-line comprehension. This study, in an eye-tracking paradigm, took 38 native speakers of Mandarin Chinese as the participants and took Chinese relative clauses as stimuli to figure out the functions of semantics by investigating the conditioning semantic factors influencing and governing the word order variation of Chinese relative clauses during different processing stages. Accordingly, this study manipulated two syntactic variables, i.e., relative clause type and the position of the numeral-classifier sequence (NCL) in the relative clause, as well as a semantic variable, i.e., the abstractness of the head noun that the relative clause modified. Specifically, the study addressed two questions: (1) when semantics is activated and interacts with syntax and (2) how semantics affects syntax during the time course of Chinese relative clause processing. The results indicated that: (1) Semantics was activated and interacted with syntax during the early and late processing stages of Chinese relative clauses, which challenged the sequential order of syntactic and semantic processes, and supported the claims of the Concurrent Processing Model. (2) The syntactic order of the Chinese relative clause was affected by the semantic information of the head noun that the clause modified. Object-extraction relative clauses (ORCs) had a conjunction preference for the order “an object relative clause preceding the numeral-classifier sequence and the head noun.” Instead, the subject-extraction relative clause (SRC) which modified a concrete noun (CN) had a co-occurrence preference for the order “numeral-classifier sequence preceding the subject relative clause and the head noun,” while the subject-extraction relative clause which modified an abstract noun (AN) had a co-occurrence preference for the order “subject relative clause preceding the numeral-classifier sequence and the head noun.” The findings of this study were evaluated in light of the perspectives of truth value semantics of the syntactic components, the semantic compatibility of numeral-classifier sequence and its modified noun as well as the discourse functions of outer modifier nominals and inner modifier nominals. |
Simon P. Liversedge; Henri Olkoniemi; Chuanli Zang; Xin Li; Guoli Yan; Xuejun Bai; Jukka Hyönä Universality in eye movements and reading: A replication with increased power Journal Article In: Cognition, vol. 242, pp. 1–19, 2024. @article{Liversedge2024, Liversedge, Drieghe, Li, Yan, Bai and Hyönä (2016) reported an eye movement study that investigated reading in Chinese, Finnish and English (languages with markedly different orthographic characteristics). Analyses of the eye movement records showed robust differences in fine grained characteristics of eye movements between languages, however, overall sentence reading times did not differ. Liversedge et al. interpreted the entire set of results across languages as reflecting universal aspects of processing in reading. However, the study has been criticized as being statistically underpowered (Brysbaert, 2019) given that only 19–21 subjects were tested in each language. Also, given current best practice, the original statistical analyses can be considered to be somewhat weak (e.g., no inclusion of random slopes and no formal comparison of performance between the three languages). Finally, the original study did not include any formal statistical model to assess effects across all three languages simultaneously. To address these (and some other) concerns, we tested at least 80 new subjects in each language and conducted formal statistical modeling of our data across all three languages. To do this, we included an index that captured variability in visual complexity in each language. Unlike the original findings, the new analyses showed shorter total sentence reading times for Chinese relative to Finnish and English readers. The other main findings reported in the original study were consistent. We suggest that the faster reading times for Chinese subjects occurred due to cultural changes that have taken place in the decade or so that lapsed between when the original and current subjects were tested. We maintain our view that the results can be taken to reflect universality in aspects of reading and we evaluate the claims regarding a lack of statistical power that were levelled against the original article. |
Alizée Lombard; Richard Huyghe; Pascal Gygax Morphological productivity and neological intuition Journal Article In: Glossa Psycholinguistics, vol. 3, no. 1, pp. 1–41, 2024. @article{Lombard2024, This paper investigates the relationship between morphological productivity and neological intuition, defined as the ability to identify novel words as such. It can be hypothesised that the more productive a word-formation process is, the less salient the neologisms it forms will be. We test this hypothesis experimentally on neologisms formed with prefixes and suffixes of variable productivity. Three experiments are conducted, involving lexical identification and reading tasks with eye tracking, to provide a comprehensive description of neological intuition. The negative correlation between productivity and neological salience is supported by experimental results, but only in the case of suffixed neologisms, as opposed to prefixed ones. The effect of affix type on neological intuition can be explained by differences in the grammatical nature of prefixes and suffixes. Broadly speaking, investigating the linguistic factors of neological intuition provides an original approach to both linguistic and psycholinguistic issues related to word structure and lexical processing. |
Adrielli Tina Lopes Rego; Joshua Snell; Martijn Meeter Language models outperform cloze predictability in a cognitive model of reading Journal Article In: PLoS Computational Biology, vol. 20, no. 9, pp. 1–24, 2024. @article{LopesRego2024, Although word predictability is commonly considered an important factor in reading, sophisticated accounts of predictability in theories of reading are lacking. Computational models of reading traditionally use cloze norming as a proxy of word predictability, but what cloze norms precisely capture remains unclear. This study investigates whether large language models (LLMs) can fill this gap. Contextual predictions are implemented via a novel parallel-graded mechanism, where all predicted words at a given position are pre-activated as a function of contextual certainty, which varies dynamically as text processing unfolds. Through reading simulations with OB1-reader, a cognitive model of word recognition and eye-movement control in reading, we compare the model's fit to eye-movement data when using predictability values derived from a cloze task against those derived from LLMs (GPT-2 and LLaMA). Root Mean Square Error between simulated and human eye movements indicates that LLM predictability provides a better fit than cloze. This is the first study to use LLMs to augment a cognitive model of reading with higher-order language processing while proposing a mechanism on the interplay between word predictability and eye movements. |
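As a rough illustration of deriving word predictability from a language model and scoring model fit with root mean square error, the sketch below uses the Hugging Face transformers GPT-2 checkpoint and made-up eye-movement values; it simplifies to single-token target words and is not the OB1-reader pipeline.

```python
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def next_word_probability(context, word):
    """P(word | context) under GPT-2, as one possible predictability estimate.
    Simplification: assumes the target word is a single BPE token."""
    ids = tok(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]      # distribution over the next token
    probs = torch.softmax(logits, dim=-1)
    word_id = tok.encode(" " + word)[0]        # leading space follows GPT-2's BPE convention
    return probs[word_id].item()

p = next_word_probability("The children went outside to", "play")

# Model fit as in the comparison above: RMSE between simulated and observed
# eye-movement measures (values here are made up).
simulated = np.array([212.0, 245.0, 198.0])
observed = np.array([220.0, 240.0, 205.0])
rmse = float(np.sqrt(np.mean((simulated - observed) ** 2)))
```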
Aimee O'Shea; Rita Cersosimo; Paul E. Engelhardt Online metaphor comprehension in adults with autism spectrum disorders: An eye tracking study Journal Article In: Journal of Autism and Developmental Disorders, pp. 1–15, 2024. @article{OShea2024, The aim of this study was to investigate novel metaphor comprehension in adults with autism spectrum disorder (ASD). Previous literature is conflicting about whether individuals with ASD have impairment in this particular type of figurative language. Participants in the study completed a visual world paradigm eye-tracking task, which involved selecting an interpretation of an auditorily presented sentence (i.e. a picture-sentence matching task), where images corresponded to literal and metaphorical interpretations. Thus, the study also investigated online processing, via reaction times and eye movements. Forty adults participated in the study (18 with ASD and 22 typically-developing controls). Each participant completed the AQ questionnaire and had their vocabulary assessed. Results showed that participants with ASD comprehended metaphorical utterances with the same accuracy as controls. However, they had significantly slower reaction times, and specifically, were approximately 800 ms slower. Analysis of eye movements revealed that participants with ASD showed significantly longer fixation times on both the target and distractor image, the latter of which suggests difficulty overcoming the literal interpretation. Consistent with some prior studies, we showed that adults with ASD are not impaired in novel metaphor comprehension, but they were clearly less efficient. Verbal abilities did not significantly relate to performance. Finally, our online processing measure (eye tracking) provided us with insights into the nature of the ASD inefficiency (i.e. a literality bias). |
Henri Olkoniemi; Diane Mézière; Johanna K. Kaakinen Comprehending irony in text: Evidence from scanpaths Journal Article In: Discourse Processes, vol. 61, no. 1-2, pp. 6–20, 2024. @article{Olkoniemi2024, Eyetracking studies have shown that readers reread ironic phrases when resolving their meaning. Moreover, it has been shown that the timecourse of processing ironic meaning is affected by reader's working memory capacity (WMC). Irony is a context-dependent phenomenon but using traditional eye-movement measures it is difficult to analyze processing beyond sentence-level. A promising method to study individual differences in irony processing at the paragraph-level is scanpath analysis. In the present experiment, we analyzed whether individual differences in WMC are reflected in scanpaths during reading ironic stories by combining data from two previous eye-tracking studies (N = 120). The results revealed three different reading patterns: fast-and-linear reading, selective reading, and nonselective rereading. The readers predominantly used the fast-and-linear reading pattern for ironic and literal stories. However, readers were less likely to use the nonselective rereading pattern with ironic than literal texts. The reading patterns for ironic stories were modulated by WMC. Results showed that scanpaths captured differences missed by standard measures, showing it to be a valuable tool to study individual differences in irony processing. |
Andreas Opitz; Denisa Bordag; Alberto Furgoni The role of linguistic factors in the retention of verbatim information in reading: An eye-tracking study on L1 and L2 German Journal Article In: Applied Psycholinguistics, vol. 45, pp. 393–417, 2024. @article{Opitz2024, We investigated the retention of surface linguistic information during reading using eye-tracking. Departing from a research tradition that examines differences between meaning retention and verbatim memory, we focused on how different linguistic factors affect the retention of surface linguistic information. We examined three grammatical alternations in German that differed in involvement of changes in morpho-syntax and/or information structure, while their propositional meaning is unaffected: voice (active vs. passive), adverb positioning, different realizations of conditional clauses. Single sentences were presented and repeated, either identical or modified according to the grammatical alternation (with controlled interval between them). Results for native (N = 60) and non-native (N = 58) German participants show longer fixation durations for modified versus unmodified sentences when information structural changes are involved (voice, adverb position). In contrast, mere surface grammatical changes without a functional component (conditional clauses) did not lead to different reading behavior. Sensitivity to the manipulation was not influenced by language (L1, L2) or repetition interval. The study provides novel evidence that linguistic factors affect verbatim retention and highlights the importance of eye-tracking as a sensitive measure of implicit memory. |
Nele Ots Planning sentences and sentence intonation in Estonian Journal Article In: Laboratory Phonology, vol. 50, no. 1, pp. 1–50, 2024. @article{Ots2024, The notion of advance planning of sentence intonation is grounded in the positive correlation between the sentence-initial intonation peaks and sentence duration. This study examined real-time sentence planning and intonation using visual world speech production. In two eye-tracking experiments, native Estonian speakers described transitive events involving multiple actors. Conceptual complexity of the resulting picture descriptions was manipulated through a pictorial design, while sentence length was controlled for by manipulating specific task characteristics. In Experiment I, conceptual complexity of the picture descriptions varied together with linguistic complexity, while linguistic complexity was held constant in Experiment II. As the conceptual complexity of utterances increased, the duration of naming gazes also increased, indicating less incremental conceptual planning. Notably, while utterance-initial intonation peaks did not correlate with the relative duration of naming gazes, they were influenced by utterance length. These findings highlight advance planning of intonation in Estonian. Furthermore, they suggest that intonation planning depends on linguistic information that is rapidly activated after establishing a comprehensive conceptual framework during earliest stages of preverbal planning. |
Nele Ots Planning intonation under cognitive constraints of speaking Book 2024. @book{Ots2024a, Pitch peaks tend to be higher at the beginning of longer utterances than in shorter ones (e.g., 'The Santa is decorating the Christmas trees' vs. 'The Santa is decorating the Christmas tree and the window'). Given that a rise in pitch frequently occurs in response to increased mental effort, we explore the link between higher pitch at the beginning of an utterance and the cognitive demands of sentence planning for speech production. To modulate the cognitive resources available for generating a message in a visual world speech production task, the study implemented a dual-task paradigm. Participants described pictures depicting events with multiple actors. In one-half of these descriptions, the participants memorized three nouns, later recalling them and answering related questions. The results demonstrate both cognitive and linguistic influences on sentence intonation. Specifically, intonation peaks at the beginning of longer utterances were higher than in shorter ones, and they were lower under the condition of memory load than under no load. Measurements of eye gaze indicated a very short processing delay at the outset of processing the picture and the sentence, which was rapidly overcome by the start of speech. The short time frame of restricted cognitive resources thus was manifested in the lowering of the intonation peaks. These findings establish a novel link between language-related memory span and sentence intonation and warrant further study to investigate the cognitive mechanisms of the planning of intonation. |
Jinger Pan; Aiping Wang; Mingsha Zhang; Yiu Kei Tsang; Ming Yan Printing words in alternating colors facilitates eye movements among young and older Chinese adults Journal Article In: Psychonomic Bulletin and Review, pp. 1–11, 2024. @article{Pan2024a, It is well known that the Chinese writing system lacks visual cues for word boundaries, such as interword spaces. However, characters must be grouped into words or phrases for understanding, and the lack of interword spaces can cause certain ambiguity. In the current study, young and older Chinese adults' eye movements were recorded during their reading of naturally unspaced sentences, where consecutive words or nonwords were printed using alternating colors. The eye movements of both the Chinese young and older adults were clearly influenced by this explicit word boundary information. Across a number of eye-movement measures, in addition to a general age-related slowdown, the results showed that both groups benefited overall from the explicit color-based word boundary and experienced interference from the nonword boundary. Moreover, the manipulations showed stronger effects among the older adults. We discuss implications for practical application. |
Jinger Pan; Ming Yan The perceptual span in traditional Chinese Journal Article In: Language and Cognition, vol. 16, no. 1, pp. 134–147, 2024. @article{Pan2024d, The present study aimed at examining the perceptual span, the visual field area for information extraction within a single fixation, during the reading of traditional Chinese sentences. Native traditional Chinese readers' eye-movements were recorded as they read sentences that were presented using a gaze-contingent technique, in which legible text was restricted within a window that moved in synchrony with the eyes, while characters outside the window were masked. Comparisons of the window conditions with a baseline condition in which no viewing constraint was applied showed that when the window revealed one previous character and three upcoming characters around the current fixation, reading speed and oculomotor activities reached peak performance. Compared to previous results with simplified Chinese reading, based on a similar set of materials, traditional Chinese exhibits a reduction of the perceptual span. We suggest that the visual complexity of a writing system likely influences the perceptual span during reading. |
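The gaze-contingent moving-window manipulation described above can be illustrated by the masking logic alone. The sketch below is a simplification (a real implementation updates the display from the tracker's online gaze samples), and the mask character, sentence, and window sizes are illustrative assumptions.

```python
def moving_window(text, fixated_index, chars_left=1, chars_right=3, mask="※"):
    """Return the display string for one gaze sample: characters within the
    window around the fixated character stay legible, the rest are masked."""
    out = []
    for i, ch in enumerate(text):
        in_window = (fixated_index - chars_left) <= i <= (fixated_index + chars_right)
        out.append(ch if in_window else mask)
    return "".join(out)

sentence = "今天天氣晴朗適合出門散步"
print(moving_window(sentence, fixated_index=4))   # 1 character left + 3 right of fixation
```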
Yali Pan; Steven Frisson; Kara D. Federmeier; Ole Jensen Early parafoveal semantic integration in natural reading Journal Article In: eLife, vol. 12, pp. 1–27, 2024. @article{Pan2024b, Humans can read and comprehend text rapidly, implying that readers might process multiple words per fixation. However, the extent to which parafoveal words are previewed and integrated into the evolving sentence context remains disputed. We investigated parafoveal processing during natural reading by recording brain activity and eye movements using MEG and an eye tracker while participants silently read one-line sentences. The sentences contained an unpredictable target word that was either congruent or incongruent with the sentence context. To measure parafoveal processing, we flickered the target words at 60 Hz and measured the resulting brain responses (i.e. Rapid Invisible Frequency Tagging, RIFT) during fixations on the pre-target words. Our results revealed a significantly weaker tagging response for target words that were incongruent with the previous context compared to congruent ones, even within 100 ms of fixating the word immediately preceding the target. This reduction in the RIFT response was also found to be predictive of individual reading speed. We conclude that semantic information is not only extracted from the parafovea but can also be integrated with the previous context before the word is fixated. This early and extensive parafoveal processing supports the rapid word processing required for natural reading. Our study suggests that theoretical frameworks of natural reading should incorporate the concept of deep parafoveal processing. |
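As a generic illustration of quantifying a response at the 60 Hz tagging frequency, the sketch below computes spectral power in a fixation-locked segment of made-up single-sensor data; the published RIFT analysis (MEG coherence with the tagging signal) is considerably more involved.

```python
import numpy as np

srate = 1000                                        # Hz
t = np.arange(0, 1.0, 1 / srate)                    # 1-s fixation-locked segment
segment = 0.5 * np.sin(2 * np.pi * 60 * t) + np.random.randn(t.size)  # made-up data

spectrum = np.abs(np.fft.rfft(segment)) ** 2
freqs = np.fft.rfftfreq(segment.size, d=1 / srate)
tagging_power = spectrum[np.argmin(np.abs(freqs - 60.0))]   # power at 60 Hz
```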
Adam J. Parker; J. S. H. Taylor; Jennifer M. Rodd Readers use recent experiences with word meanings to support the processing of lexical ambiguity: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–22, 2024. @article{Parker2024, Fluent reading comprehension demands the rapid access and integration of word meanings. This can be challenging when lexically ambiguous words have less frequent meanings (e.g., the dog meaning of boxer). Indeed, readers fixate on lexically ambiguous words that are disambiguated toward their subordinate meaning for longer than matched control words embedded within identical sentence contexts. Word-meaning priming studies have shown that participants flexibly use recent experiences with ambiguous words to guide their interpretation when these words are presented in isolation, even after substantial delays. However, word-meaning priming paradigms have almost always used artificial tasks to measure word-meaning availability and we do not therefore know how priming would support lexical processing when reading for comprehension. Thus, we conducted two eye-movement experiments to examine word-meaning priming during sentence reading. Both experiments employed a 2 (ambiguity: low-ambiguity control vs. high-ambiguity) × 2 (priming: unprimed vs. primed) within-participants design, with either a 1-min delay (Experiment 1; N = 28) or a 30-min delay (Experiment 2; N = 60) between prime and test sentences. Both experiments showed greater reductions in go-past times and total reading times following priming for high-ambiguity target words than matched low-ambiguity control words, indicating that recent encounters support the processing of word meanings during sentence reading and that this effect extends beyond the simple repetition effect observed for low-ambiguity control words. This illustrates the remarkable flexibility of the human language system in using diverse input to refine stored lexical knowledge even in skilled readers. |
Olga Parshina; Nina Zdorova; Victor Kuperman Cross-linguistic comparison in reading sentences of uniform length: Visual-perceptual demands override readers' experience Journal Article In: Quarterly Journal of Experimental Psychology, vol. 77, no. 8, pp. 1694–1702, 2024. @article{Parshina2024, Accurate saccadic targeting is critical for efficient reading and is driven by the sensory input under the eye-gaze. Yet whether a reader's experience with the distributional properties of their written language also influences saccadic targeting is an open debate. This study of Russian sentence reading follows Cutter et al.'s (2017) study in English and presents readers with sentences consisting of words of the same length. We hypothesised that if the readers' experience matters as per discrete control account, Russian readers would produce longer saccades and farther landing positions than the ones produced by English readers. On the contrary, if the saccadic targeting is primarily driven by the immediate perceptual demands that override readers' experience as per the dynamic adjustment account, the saccades of Russian and English readers would be of the same length, resulting in similar landing positions. The results in both Cutter et al. and the present study provided evidence for the latter account: Russian readers showed rapid and accurate adjustment of saccade lengths and landing positions to the highly constrained input. Crucially, the saccade lengths and landing positions did not differ between English and Russian readers even in the cross-linguistically length-matched stimuli. |
Kinga Patterson; James A. Street; Andriy Myachykov In: Journal of Cultural Cognitive Science, vol. 8, pp. 247–274, 2024. @article{Patterson2024, We present experimental evidence suggesting that frequency and literacy predict online processing and comprehension of subject-verb agreement constructions by adult native speakers of English. We measured participants' eye fixations, reaction times, and response accuracy in a forced-choice task using an audio-visual eye-tracking paradigm. Participants completed a battery of tasks, including the Literacy Rating Scale (Tarone et al., Literacy and Second Language Oracy-Oxford Applied Linguistics, Oxford University Press, 2013) and the Agreement Judgement Task (AJT; e.g., Veenstra et al., Frontiers in Psychology 5:783, 2014). The AJT involved matching an auditorily presented subject phrase to one of two images of easily distinguishable colours presented on a computer screen (e.g., stars, circles). Participants heard 42 test sentences, counterbalanced across the three types: Type 1 (e.g., ‘The stars with the circles are blue'), Type 2 (e.g., ‘The star with the circles is blue') and Type 3 (e.g., ‘The star with the circles are blue'*). Type 1 and Type 2 constructions are considerably more frequent in writing than in speech (Miller et al., Spontaneous spoken language: Syntax and discourse, Oxford University Press on Demand, 1998), with Type 2 producing more attraction errors (Bock et al., Cognitive Psychology 43:83–128, 2001; Becker, L., & Dąbrowska, E. (2020). Does experience with written language influence grammaticality intuitions? UK Cognitive Linguistics Conference: University of Birmingham [conference presentation]). Data were analysed with linear mixed effects models and generalised additive models. Results show that lower-literacy participants took longer to process sentential cues and made more attraction errors. These findings support usage-based research showing frequency and experience effects on online comprehension of canonical and non-canonical constructions (Farmer, T. A., Misyak, J. B., & Christiansen, M. H. (2012). Individual differences in sentence processing. In Cambridge handbook of psycholinguistics (pp. 353–364); Street, Language Sciences 59:192–203, 2017), detection and production of agreement attraction errors (Becker, L., & Dąbrowska, E. (2020). Does experience with written language influence grammaticality intuitions? UK Cognitive Linguistics Conference: University of Birmingham [conference presentation]) and demonstrate how linguistic and attentional processes interact (Tomlin and Myachykov, Attention and salience, Handbook of Cognitive Linguistics, 2015). They also complement corpus-based studies by providing evidence that native speakers are sensitive to observed distributions (Miller et al., Spontaneous spoken language: Syntax and discourse, Oxford University Press on Demand, 1998). |
Ana Pellicer-Sánchez; Stuart Webb; Andi Wang How does lexical coverage affect the processing of L2 texts? Journal Article In: Applied Linguistics, vol. 45, no. 6, pp. 953–972, 2024. @article{PellicerSanchez2024, Lexical coverage, i.e. the extent to which words in a text are known, is considered an important predictor of reading comprehension, with studies suggesting 98% lexical coverage leads to adequate comprehension. However, no studies to date have examined how the various lexical coverage percentages suggested in the literature are reflected by the cognitive effort involved in processing text and the attention that is devoted to the unknown vocabulary. This study used eye-tracking to examine how lexical coverage affects the processing of text (global measures) and unknown vocabulary (word-level measures), as well as the relationship between processing time on unknown vocabulary and learning. Advanced L2 learners of English read a text in one of four lexical coverage conditions (90%, 95%, 98%, 100%) while their eye movements were recorded. Knowledge of unknown pseudowords in the texts was assessed via an immediate, meaning recall post-test. Results showed that only one of the three global measures examined showed a processing advantage for the 98% condition, reflected by longer saccades and less effortful reading than the 90% and 95% conditions. Crucially, lexical coverage did not have a significant impact on the amount of attention spent on unknown vocabulary. Processing times were found to significantly predict vocabulary gains. |
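Lexical coverage itself is straightforward to compute as the percentage of running words (tokens) in a text that the reader knows. The sketch below uses a made-up known-word list and a short text containing one unknown pseudoword; it is an illustration of the concept, not the materials used in the study.

```python
import re

def lexical_coverage(text, known_words):
    """Percentage of running words (tokens) in the text that the reader knows."""
    tokens = re.findall(r"[a-z']+", text.lower())
    known = sum(1 for token in tokens if token in known_words)
    return 100 * known / len(tokens)

known = {"the", "cat", "sat", "on", "mat", "and", "dog", "slept"}
text = "The cat sat on the mat and the dog slept on the zarple."
print(f"{lexical_coverage(text, known):.1f}% coverage")   # 'zarple' is the unknown pseudoword
```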
V. N. Pescuma; K. Maquate; C. R. Ronderos; A. Ito; P. Knoeferle Register and morphosyntactic congruence during sentence processing in German: An eye-tracking study Journal Article In: Acta Psychologica, vol. 251, pp. 1–9, 2024. @article{Pescuma2024, In the present study, we used eye-tracking to investigate formality-register and morphosyntactic congruence during sentence reading. While research frequently covers participants' processing of lexical, (morpho-)syntactic, or semantic knowledge (e.g., operationalized by means of violations to which we can measure responses relative to felicitous stimuli), less attention has been devoted to the full breadth of pragmatic and context-related aspects. One such aspect is sensitivity to formality-register congruence, i.e., the match or mismatch between the register of a target word and the formality conveyed by the (linguistic) context. In particular, we investigated how congruence of linguistic register with context formality, as well as its interplay with morphosyntactic knowledge, may unfold during reading and be reflected in eye movements. In our study, 40 native German speakers read context sentences conveying a formal or informal situation, and a target sentence containing a high- or low-register verb (e.g., Engl. transl. The policeman detained the activist vs. The policeman nabbed the activist) which matched or mismatched the formality of the preceding context sentences. We additionally manipulated subject-verb agreement, with either a match (see examples above) or a mismatch thereof (e.g., Engl. transl. *The policeman detain the activist; *The policeman nab the activist). We predicted that a violation of formality-register congruence would be reflected in longer reading times at the verb and post-verbal object region, as this would be in line with previous research on context violations (e.g., Lüdtke & Kaup, 2006; Reali et al., 2015; Traxler & Pickering, 1996). We found effects of morphosyntactic congruence on late processing stages at the verb and on earlier processing stages at the post-verbal object region. As far as formality-register congruence is concerned, only late (in total reading time analysis, in the post-verbal object region) and subtle effects emerged. The results suggest that, compared to morphosyntactic violations, formality-register congruence effects emerge quite subtly and slowly during reading. |
Ruei-Fang Shiang; Chiou-Lan Chern; Hsueh-Chih Chen Embodied cognition and L2 sentence comprehension: An eye-tracking study of motor representations Journal Article In: Frontiers in Human Neuroscience, vol. 18, pp. 1–18, 2024. @article{Shiang2024, Introduction: Evidence from neuroscience and behavioral research has indicated that language meaning is grounded in our motor–perceptual experiences of the world. However, the question of whether motor embodiment occurs at the sentence level in L2 (second language) comprehension has been raised. Furthermore, existing studies on motor embodiment in L2 have primarily focused on the lexical and phrasal levels, often providing conflicting and indeterminate results. Therefore, to address this gap, the present eye-tracking study aimed to explore the embodied mental representations formed during the reading comprehension of L2 action sentences. Specifically, it sought to identify the types of motor representations formed during L2 action sentence comprehension and the extent to which these representations are motor embodied. Methods: A total of 56 advanced L2 learners participated in a Sentence–Picture Verification Task, during which their response times (RTs) and eye movements were recorded. Each sentence–picture pair depicted an action that either matched or mismatched the action implied by the sentence. Data analysis focused on areas of interest around the body effectors. Results and discussion: RTs in the mismatch condition indicated an impeding effect. Furthermore, fixations on the body effector executing an action were longer in the mismatch condition, especially in late eye-movement measures. |
Jack W. Silcox; Karen Bennett; Allyson Copeland; Sarah Hargus Ferguson; Brennan R. Payne The costs (and benefits?) of effortful listening for older adults: Insights from simultaneous electrophysiology, pupillometry, and memory Journal Article In: Journal of Cognitive Neuroscience, vol. 36, no. 6, pp. 997–1020, 2024. @article{Silcox2024, Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences—presented in quiet or in noise—that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits. |
Breno Silva; Valentina Ragni; Agnieszka Otwinowska; Agnieszka Szarkowska Cognate vs. noncognate processing and subtitle speed among advanced L2-English learners: An eye-tracking study Journal Article In: Language Learning & Technology, vol. 28, no. 1, pp. 1–25, 2024. @article{Silva2024, Existing research shows that identical cognates are read more quickly than noncognates. However, most studies focused on words presented in isolation or embedded in sentences. To address this gap, our exploratory eye-tracking study is the first to investigate the processing of cognates and noncognates in English subtitles. First, we tested whether cognates differ from noncognates in terms of word processing. Second, we explored whether gradual changes in the amount of cross-linguistic overlap predict cognate processing (and potentially learning). We recorded the eye movements of 35 L1-Polish adult participants with high L2-English proficiency while they watched videos with English subtitles displayed at three different speeds. The mixed-model analyses showed that cognates and noncognates are processed for longer in slower subtitles than in faster subtitles. Also, we found no difference in processing between cognates and noncognates. However, more similar cognates were processed longer than less similar cognates, except for identical cognates (e.g., Polish/English “minus”), which were processed the fastest. The discussion addresses several implications for L2 lexical learning via audiovisual materials and makes some recommendations for future research. |
Kimberly G. Smith; Sarah C. McWilliams; Joseph Schmidt Eye movements of persons with aphasia during connected-text reading Journal Article In: Aphasiology, pp. 1–14, 2024. @article{Smith2024a, Background: Eye movements reflect the cognitive-linguistic processing of neurotypical readers. Numerous reading-related eye movement measures are associated with language processing, including first fixation duration, gaze duration, number of fixations, word skipping, and regressions. Eye movements have also been used to examine reading in neuro-atypical populations including persons with aphasia (PWA). Aims: This study aimed to determine whether eye movement measures obtained from connected text reading differ among persons with varying types of aphasia and neurotypical individuals, as well as whether eye movement measures are associated with language processing severity and reading comprehension ability in PWA. Methods: Twenty-four PWA and twenty-four age-matched control participants completed a connected text-reading eye-tracking task. The PWA also completed assessments to evaluate overall language processing severity and reading comprehension skills and to identify specific subtypes of aphasia. Results: Persons with aphasia had shorter gaze duration, longer regression duration, and made more fixations than control participants, while no group differences emerged for first fixation duration or word skipping. Eye movement patterns did not differ among participants with anomic, Broca's, or conduction/Wernicke's aphasia. Language severity scores were a significant factor for gaze duration, while reading comprehension scores were not a significant factor for the eye movement measures examined. Conclusions: The findings support previous eye tracking literature indicating different eye movement patterns for persons with aphasia during text reading relative to neurotypical controls. The findings also highlight that the selection of eye movement measures examined, the stimuli used, and procedural considerations may impact the pattern of results. The results from this study can be used to further determine which eye movement measures may be most suited for studying language processing during reading in neuro-atypical individuals and determine whether persons with aphasia use different strategies for reading comprehension than neurotypical individuals. |
Olga Solaja; Davide Crepaldi The role of morphology in novel word learning: A registered report Journal Article In: Royal Society Open Science, vol. 11, no. 6, pp. 1–30, 2024. @article{Solaja2024, The majority of the new words that we learn every day as adults are morphologically complex; yet, we do not know much about the role of morphology in novel word learning. In this study, we tackle this issue by comparing the learning of: (i) suffixed novel words (e.g. flibness); (ii) novel words that end in non-morphological, but frequent letter chunks (e.g. fliban); and (iii) novel words with non-morphological, low-frequency endings (e.g. flibov). Words are learned incidentally through sentence reading, while the participants' eye movements are monitored. We show that morphology has a facilitatory role compared with the other two types of novel words, both during learning and in a post-learning recognition memory task. We also showed that participants attributed meaning to word parts (if flibness is a state of happiness, then flib must mean happy), but this process was not specifically triggered by the presence of a suffix (flib must also mean happy in fliban and flibov), thus suggesting that the brain tends to assume similar meanings for similar words and word parts. |
Adrian Staub The function/content word distinction and eye movements in reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 6, pp. 967–984, 2024. @article{Staub2024a, A substantial quantity of research has explored whether readers' eye movements are sensitive to the distinction between function and content words. No clear answer has emerged, in part due to the difficulty of accounting for differences in length, frequency, and predictability between the words in the two classes. Based on evidence that readers differentially overlook function word errors, we hypothesized that function words may be more frequently skipped or may receive shorter fixations. We present two very large-scale eyetracking experiments using selected sentences from a corpus of natural text, with each sentence containing a target function or content word. The target words in the two classes were carefully matched on length, frequency, and predictability, with the latter variable operationalized in terms of next-word probability obtained from the large language model GPT-2. While the experiments replicated a range of expected effects, word class did not have any clear influence on target word skipping probability, and there was some evidence for a content word advantage in fixation duration measures. These results indicate that readers' tendency to overlook function word errors is not due to reduced time spent encoding these words. The results also broadly support the implicit assumption in prominent models of eye movement control in reading that a word's syntactic category does not play an important role in decisions about when and where to move the eyes. |
Adrian Staub; Harper McMurray; Anthony Wickett Perceptual inference corrects function word errors in reading: Errors that are not noticed do not disrupt eye movements Journal Article In: Cognitive Psychology, vol. 154, pp. 1–16, 2024. @article{Staub2024, Both everyday experience and laboratory research demonstrate that readers often fail to notice errors such as an omitted or repeated function word. This phenomenon challenges central tenets of reading and sentence processing models, according to which each word is lexically processed and incrementally integrated into a syntactic representation. One solution would propose that apparent failure to notice such errors reflects post-perceptual inference; the reader does initially perceive the error, but then unconsciously 'corrects' the perceived string. Such a post-perceptual account predicts that when readers fail to explicitly notice an error, the error will nevertheless disrupt reading, at least fleetingly. We present a large-scale eyetracking experiment investigating whether disruption is detectable in the eye movement record when readers fail to notice an omitted or repeated two-letter function word in naturalistic sentences. Readers failed to notice both omission and repetition errors over 36% of the time. In an analysis that included all trials, both omission and repetition resulted in pronounced eye movement disruption, compared to reading of grammatical control sentences. But in an analysis including only trials on which readers failed to notice the errors, neither type of error disrupted eye movements on any measure. Indeed, there was evidence in some measures that reading was relatively fast on the trials on which errors were missed. It does not appear that when an error is not consciously noticed, it is initially perceived, and then later corrected; rather, linguistic knowledge influences what the reader perceives. |
Jeremy Steffman; Megha Sundara Disentangling the role of biphone probability from neighborhood density in the perception of nonwords Journal Article In: Language and Speech, vol. 67, no. 1, pp. 166–202, 2024. @article{Steffman2024, In six experiments we explored how biphone probability and lexical neighborhood density influence listeners' categorization of vowels embedded in nonword sequences. We found independent effects of each. Listeners shifted categorization of a phonetic continuum to create a higher probability sequence, even when neighborhood density was controlled. Similarly, listeners shifted categorization to create a nonword from a denser neighborhood, even when biphone probability was controlled. Next, using a visual world eye-tracking task, we determined that biphone probability information is used rapidly by listeners in perception. In contrast, task complexity and irrelevant variability in the stimuli interfere with neighborhood density effects. These results support a model in which both biphone probability and neighborhood density independently affect word recognition, but only biphone probability effects are observed early in processing. |
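Biphone probability can be illustrated as the summed log probability of adjacent phone pairs estimated from a phone-transcribed corpus. The tiny corpus, transcriptions, and add-one smoothing below are illustrative assumptions, not the norms used in the study.

```python
import math
from collections import Counter

# Tiny, made-up phone-transcribed corpus; real norms come from large databases.
corpus = ["k ae t", "b ae t", "k ih t", "s ih t", "b eh d"]
biphones = Counter()
for word in corpus:
    phones = word.split()
    biphones.update(zip(phones, phones[1:]))
total = sum(biphones.values())

def log_biphone_probability(phones):
    """Summed log probability of adjacent phone pairs (add-one smoothing)."""
    return sum(math.log((biphones[bp] + 1) / (total + len(biphones)))
               for bp in zip(phones, phones[1:]))

print(log_biphone_probability(["k", "ae", "t"]))   # higher (less negative) = more probable
print(log_biphone_probability(["b", "ih", "d"]))
```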
Pnina Stern; Tamar Kolodny; Shlomit Tsafrir; Galit Cohen; Lilach Shalev Unique patterns of eye movements characterizing inattentive reading in ADHD Journal Article In: Journal of Attention Disorders, vol. 28, no. 6, pp. 1008–1016, 2024. @article{Stern2024, Objective: We aimed to identify unique patterns of eye-movements measures reflecting inattentive reading among adults with and without ADHD. Method & Results: We recorded eye-movements during uninterrupted text reading of typically developed (TD) and ADHD adults. First, we found significantly longer reading time for the ADHD group than the TD group. Further, we detected cases in which words were reread more than twice and found that such occasions were much more frequent in participants with ADHD than in TD participants. Moreover, we discovered that the first reading pass of these words was less sensitive to the length of the word than the first pass of words read only once, indicating a less meaningful reading. Conclusion: We propose that high rate of words that were reread is a correlate of inattentive reading which is more pronounced among ADHD readers. Implications of the findings in the context of reading comprehension are discussed. |
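Counting how often a word is reread can be illustrated by tallying reading passes from an ordered sequence of fixated word indices, where a new pass begins whenever the reader returns to a word after fixating elsewhere. The data below are made up, and this is only one plausible operationalization, not the authors' scoring procedure.

```python
from collections import defaultdict

def passes_per_word(fixated_word_indices):
    """Count reading passes per word: a new pass starts whenever the reader
    returns to a word after having fixated a different word."""
    passes = defaultdict(int)
    previous = None
    for word in fixated_word_indices:
        if word != previous:                    # immediate refixations belong to the same pass
            passes[word] += 1
        previous = word
    return dict(passes)

sequence = [1, 2, 3, 3, 4, 3, 5, 6, 3, 7]       # made-up fixation-to-word sequence
counts = passes_per_word(sequence)
reread_more_than_twice = [w for w, n in counts.items() if n > 2]   # -> [3]
```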
Casey Stringer; Frances Cooley; Emily Saunders; Karen Emmorey; Elizabeth R. Schotter Deaf readers use leftward information to read more efficiently: Evidence from eye tracking Journal Article In: Quarterly Journal of Experimental Psychology, vol. 77, no. 10, pp. 2098–2110, 2024. @article{Stringer2024, Little is known about how information to the left of fixation impacts reading and how it may help to integrate what has been read into the context of the sentence. To better understand the role of this leftward information and how it may be beneficial during reading, we compared the sizes of the leftward span for reading-matched deaf signers (n = 32) and hearing adults (n = 40) using a gaze-contingent moving window paradigm with windows of 1, 4, 7, 10, and 13 characters to the left, as well as a no-window condition. All deaf participants were prelingually and profoundly deaf, used American Sign Language (ASL) as a primary means of communication, and were exposed to ASL before age eight. Analysis of reading rates indicated that deaf readers had a leftward span of 10 characters, compared to four characters for hearing readers, and the size of the span was positively related to reading comprehension ability for deaf but not hearing readers. These findings suggest that deaf readers may engage in continued word processing of information obtained to the left of fixation, making reading more efficient, and showing a qualitatively different reading process than hearing readers. |
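For illustration only, a span estimate of the kind described here can be derived by comparing mean reading rates at each window size against the no-window baseline and taking the smallest window whose rate approaches that baseline. The column names and the 5% tolerance below are assumptions, not values from the study.

import pandas as pd

def estimate_span(rates: pd.DataFrame, tolerance: float = 0.05) -> int:
    # rates needs one row per trial with columns: window ("1", "4", "7",
    # "10", "13", or "none") and wpm (words per minute). All names and the
    # tolerance criterion are illustrative assumptions.
    means = rates.groupby("window")["wpm"].mean()
    baseline = means["none"]
    sizes = sorted(int(x) for x in means.index if x != "none")
    for w in sizes:
        if means[str(w)] >= baseline * (1 - tolerance):
            return w
    return sizes[-1]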
Patrick Sturt; Nayoung Kwon Agreement attraction in comprehension: Do active dependencies and distractor position play a role? Journal Article In: Language, Cognition and Neuroscience, vol. 39, no. 3, pp. 279–301, 2024. @article{Sturt2024, Across four eye-tracking studies and one self-paced reading study, we test whether attraction in subject-verb agreement is affected by (a) the relative linear positions of target and distractor, and (b) the active dependency status of the distractor. We find an effect of relative position, with greater attraction in retro-active interference configurations, where the distractor is linearly closer to the critical verb (Subject…Distractor…V) than in pro-active interference where it is more distant (Distractor…Subject…V). However, within pro-active interference configurations, attraction was not affected by the active dependency status of the distractor: attraction effects were similarly small whether or not the distractor was waiting to complete an upcoming dependency at the critical verb, with Bayes Factor analyses showing evidence in favour of a null effect of active dependency status. We discuss these findings in terms of the decay of activation, and whether such decay is affected by maintenance of features in memory. |
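As a rough illustration of how evidence for a null effect can be quantified, the BIC approximation to the Bayes factor (Wagenmakers, 2007) compares models fit with and without the predictor of interest; this is just one possible approach and not necessarily the analysis used in the paper.

import math

def bf01_from_bic(bic_null: float, bic_alt: float) -> float:
    # BIC approximation to the Bayes factor in favour of the null model
    # (Wagenmakers, 2007): BF01 ~= exp((BIC_alt - BIC_null) / 2).
    return math.exp((bic_alt - bic_null) / 2.0)

# Example with purely illustrative BIC values from two fitted models,
# with and without the active-dependency predictor:
# bf01_from_bic(bic_null=1530.2, bic_alt=1536.9)  # ~28.5, favours the null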
Yongqiang Su; Yixun Li; Hong Li Development and validation of the simplified Chinese Author Recognition Test: Evidence from eye movements of Chinese adults in Mainland China Journal Article In: Journal of Research in Reading, vol. 47, no. 1, pp. 20–44, 2024. @article{Su2024a, Background: It is well evidenced that individuals' levels of print exposure are significantly correlated with their reading ability across languages, and an author recognition test is commonly used to measure print exposure objectively. For the first time, the current work developed and validated a Simplified Chinese Author Recognition Test (SCART) and examined its role in explaining Chinese online reading. Methods: In Study 1, we constructed the SCART for readers of simplified Chinese and validated the test using data collected from 203 young adults in Mainland China. Participants were measured on the SCART and three self-report tasks about their reading experiences and habits. Study 2 recruited an additional 68 young adults in Mainland China and measured their print exposure (with the same tasks used in Study 1), reading-related cognitive ability (working memory, rapid automatic naming, Chinese character reading, and vocabulary knowledge), and Chinese online reading via an eye-tracking passage reading task. Results: Results of Study 1 support the high reliability and validity of the SCART. Results of Study 2 indicate that SCART scores significantly predicted participants' online reading processing while controlling for subjective reading experiences and habits, and reading-related cognitive abilities. Across two studies, we found converging evidence that in-depth recognition of the authors (i.e., participants have read the books written by these authors) appears to be a better indicator of print exposure than superficial recognition of the author names. Conclusions: Taken together, this work filled a gap in the literature by providing an evidence-based, objective print exposure measure for simplified Chinese and contributes to a broader understanding of print exposure and online reading processing across different writing systems. |
Yue Sun; Sainan Li; Yancui Zhang; Jingxin Wang Parafoveal word frequency does not modulate the effect of foveal load on preview in Chinese reading: Evidence from eye movements Journal Article In: Brain Sciences, vol. 14, no. 4, pp. 1–13, 2024. @article{Sun2024e, The foveal load effect is one of the most fundamental effects in reading psychology, and also one of the most controversial issues in recent years. The foveal load effect refers to the phenomenon that the difficulty of foveal processing affects parafoveal preview. In Chinese reading, whether the foveal load effect exists, as well as whether this effect is modulated by parafoveal word frequency, remains unclear. In this study, the eye-tracking technique was used to track the eye movements of 48 subjects. Utilizing the boundary paradigm with single-character words as parafoveal words, the present study manipulated foveal word frequency (high and low), parafoveal word frequency (high and low), and two types of preview (identical preview and pseudocharacter preview) to investigate these questions. The results revealed that foveal word frequency does not influence preview, suggesting the absence of the foveal load effect when using single-character words as parafoveal words. Furthermore, parafoveal word frequency does not modulate the effect of foveal load on preview. This empirical evidence contributes to refining the understanding of the Chinese reading model. |
Agnieszka Szarkowska; Valentina Ragni; David Orrego-Carmona; Sharon Black; Sonia Szkriba; Jan-Louis Kruger; Krzysztof Krejtz; Breno Silva The impact of video and subtitle speed on subtitle reading: An eye-tracking replication study Journal Article In: Journal of Audiovisual Translation, vol. 7, no. 1, pp. 1–23, 2024. @article{Szarkowska2024, We present results of a direct replication of Liao et al.'s (2021) study on how subtitle speed and the presence of concurrent video impact subtitle reading among British and Polish viewers. Our goal was to assess the generalisability of the original study's findings, which were obtained with a cohort of Australian English speakers. The study explored both subtitle-level and word-level effects, considering the presence or absence of concurrent video and three subtitle speeds: 12 characters per second, 20 cps, and 28 cps. Overall, most of the original results were replicated, confirming that the presence of video and the speed of the subtitles have a measurable impact on processing across different viewer groups. Additionally, differences in how native and non-native speakers process subtitles emerged, in particular related to wrap-up, word frequency and word length effects. The paper describes the replication in detail, presents the findings, and discusses some of their implications. Lay summary In our study we were interested in the effects that the presence of video and various subtitle speeds have on how viewers watch subtitled videos and how they understand them. We also wanted to know if the previous results obtained in a study by Liao et al. (2021) in Australia hold true for other viewers living in different locations. With this goal in mind, we repeated Liao et al.'s (2021) study on British and Polish viewers. The study explored both subtitle-level and word-level effects, considering the presence or absence of video and three subtitle speeds: 12 characters per second, 20 cps, and 28 cps. Overall, most of the original results were confirmed, showing that the presence of video and the speed of the subtitles have an impact on processing across different viewer groups. Additionally, differences in how native and non-native speakers process subtitles emerged, in particular related to well-known linguistic effects from reading studies, such as wrap-up, word frequency and word length effects. The paper describes the replication in detail, presents the findings, and discusses some of their implications. |
Agnieszka Szarkowska; Valentina Ragni; Sonia Szkriba; Sharon Black; David Orrego-Carmona; Jan Louis Kruger In: PLoS ONE, vol. 19, no. 10, pp. 1–29, 2024. @article{Szarkowska2024a, Every day, millions of viewers worldwide engage with subtitled content, and an increasing number choose to watch without sound. In this mixed-methods study, we examine the impact of sound presence or absence on the viewing experience of both first-language (L1) and second-language (L2) viewers when they watch subtitled videos. We explore this novel phenomenon through comprehension and recall post-tests, self-reported cognitive load, immersion, and enjoyment measures, as well as gaze pattern analysis using eye tracking. We also investigate viewers' motivations for opting for audiovisual content without sound and explore how the absence of sound impacts their viewing experience, using in-depth, semi-structured interviews. Our goal is to ascertain whether these effects are consistent among L2 and L1 speakers from different language varieties. To achieve this, we tested L1-British English, L1-Australian English and L2-English (L1-Polish) language speakers (n = 168) while they watched English-language audiovisual material with English subtitles with and without sound. The findings show that when watching videos without sound, viewers experienced increased cognitive load, along with reduced comprehension, immersion and overall enjoyment. Examination of participants' gaze revealed that the absence of sound significantly affected the viewing experience, increasing the need for subtitles and thus increasing the viewers' propensity to process them more thoroughly. The absence of sound emerged as a global constraint that made reading more effortful. Triangulating data from multiple sources made it possible to tap into some of the metacognitive strategies employed by viewers to maintain comprehension in the absence of sound. We discuss the implications within the context of the growing trend of watching subtitled videos without sound, emphasising its potential impact on cognitive processes and the viewing experience. |
Enze Tang; Hongwei Ding Emotion effects in second language processing: Evidence from eye movements in natural sentence reading Journal Article In: Bilingualism, vol. 27, no. 3, pp. 460–479, 2024. @article{Tang2024, There is insufficient eye-tracking evidence on the differences in emotional word processing between first language (L1) and second language (L2) readers. This study conducted an eye-tracking experiment to investigate emotion effects in L2 sentence reading, and to explore their modulation by L2 proficiency and individual emotional states. Adapted from Knickerbocker et al. (2015), the current study recorded eye movements at both early and late processing stages when late Chinese–English bilinguals read emotion-label and neutral target words in natural L2 sentences. Results indicated that L2 readers did not show the facilitation effects of lexical affective connotations during sentence reading, and they even demonstrated processing disadvantages for L2 emotional words. Additionally, the interaction effect between L2 proficiency and emotion was consistently significant for the measure of total reading time in positive words. Measurements of participants' depressive and anxious states were not robustly correlated with eye movement measures. Our findings added new evidence to the sparse existing eye-tracking literature on L2 emotion processing, and lent support to several theoretical frameworks in bilingual research, including the emotional contexts of learning theory, the lexical quality hypothesis, and the revised hierarchical model. |
Simon P. Tiffin-Richards Cognate facilitation in bilingual reading: The influence of orthographic and phonological similarity on lexical decisions and eye-movements Journal Article In: Bilingualism: Language and Cognition, pp. 1–18, 2024. @article{TiffinRichards2024, A central finding of bilingual research is that cognates – words that share semantic, phonological, and orthographic characteristics across languages – are processed faster than non-cognate words. However, it remains unclear whether cognate facilitation effects are reliant on identical cognates, or whether facilitation simply varies along a continuum of cross-language orthographic and phonological similarity. In two experiments, German–English bilinguals read identical cognates, close cognates, and non-cognates in a lexical decision task and a sentence-reading task while their eye movements were recorded. Participants read the stimuli in their L1 German and L2 English. Converging results found comparable facilitation effects of identical and close cognates vs. non-cognates. Cognate facilitation could be described as a continuous linear effect of cross-language orthographic similarity on lexical decision accuracy and latency, as well as fixation durations. Cross-language phonological similarity modulated the continuous orthographic similarity effect in single word recognition, but not in sentence processing. |
Simon P. Tiffin-Richards In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 11, pp. 1844–1861, 2024. @article{TiffinRichards2024a, Readers of different ages and across different languages routinely process information of upcoming words in a sentence, before their eyes move to fixate them directly (parafoveal processing). However, there is inconsistent evidence of similar parafoveal processing in a reader's second language (L2). In this eye movement study, the gaze-contingent boundary paradigm (Rayner, 1975a) was used to test whether parafoveal processing of orthographic information is an integral part of both beginning and proficient L2 reading. The eye movements of beginning L2-learners (n = 53, aged 11–14 years) and highly proficient L2-users (n = 56, aged 19–65 years) were recorded while they read sentences in their first language (L1) German and L2 English. Sentences each contained a cognate target word (e.g., English: tunnel, German: Tunnel). The parafoveal preview of the targets either (a) preserved the spelling and meaning of the target (identity condition), (b) preserved letter identities but transposed the position of two adjacent letters (transposed-letter [TL] condition, e.g., tunenl/Tunenl), or (c) substituted the identity of two adjacent letters (substituted-letter condition, e.g., tunocl/Tunocl). TL previews elicited longer early first-pass reading times than identity previews in both L1 and L2 reading in children and adults, suggesting that letter position was processed parafoveally. Substituted-letter previews resulted in longer reading times than TL previews in children and adults in L1 and L2, suggesting that letter identity information was processed independently of position information. These results suggest that letter position and identity information are extracted from the parafovea during L1 and L2 reading, facilitating word recognition in children and adults. |
Nuria Sagarra; Laura Fernández-Arroyo; Cristina Lozano-Argüelles; Joseph V. Casillas In: Language Learning, vol. 74, no. 3, pp. 574–605, 2024. @article{Sagarra2024, We investigated the role of cue weighting, second language (L2) proficiency, and L2 daily exposure in L2 learning of suprasegmentals different from the first language (L1), using eye-tracking. Spanish monolinguals, English–Spanish learners, and Mandarin–Spanish learners saw a paroxytone and an oxytone verb (e.g., FIRma–firMÓ “s/he signs–signed”), listened to a sentence containing one of the verbs, and chose the one that they heard. The three languages have contrastive lexical stress, but suprasegmentals have a greater functional load in Mandarin than in English. Monolinguals predicted suffixes accurately with both stress conditions and favored oxytones, but learners predicted suffixes accurately only with oxytones, the condition activating fewer lexical competitors. Monolinguals predicted suffixes accurately sooner but at a slower rate than did learners. L2 proficiency, but not L1 or L2 exposure, facilitated L2 predictions. In conclusion, learners of a contrastive-stress L1 rely on L2 suprasegmentals to the same extent as monolinguals, regardless of their L1. Lower L2 proficiency and higher cognitive load (more lexical competitors) reduce learners' reliance on suprasegmentals. |
Emily Saunders; Jonathan Mirault; Karen Emmorey Activation of ASL signs during sentence reading for deaf readers: Evidence from eye-tracking Journal Article In: Bilingualism: Language and Cognition, pp. 1–9, 2024. @article{Saunders2024, Bilinguals activate both of their languages as they process written words, regardless of modality (spoken or signed); these effects have primarily been documented in single word reading paradigms. We used eye-tracking to determine whether deaf bilingual readers (n = 23) activate American Sign Language (ASL) translations as they read English sentences. Sentences contained a target word and one of the two possible prime words: a related prime which shared phonological parameters (location, handshape or movement) with the target when translated into ASL or an unrelated prime. The results revealed that first fixation durations and gaze durations (early processing measures) were shorter when target words were preceded by ASL-related primes, but prime condition did not impact later processing measures (e.g., regressions). Further, less-skilled readers showed a larger ASL co-activation effect. Together, the results indicate that ASL co-activation impacts early lexical access and can facilitate reading, particularly for less-skilled deaf readers. |
Daniel J. Schad; Antje Nuthmann; R Frank; Ralf Engbert Mental effort during mindless reading? Pupil fluctuations indicate internal processing during levels of inattention Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 10, pp. 1637–1649, 2024. @article{Schad2024, Mind wandering, an experience characterized by a reduced external focus of attention and an increased internal focus, has seen significant theoretical advancement in understanding its underlying cognitive processes. The levels-of-inattention hypothesis posits that in mind wandering, external attention is reduced in a graded fashion, reflecting different levels of weak versus deep attentional decoupling. However, it has remained unclear whether internal processing during mind wandering, and mindless reading in particular, requires effort and, if so, whether it is graded or distinct. To address this, we analyzed pupil size as a measure of cognitive load in the sustained-attention-to-stimulus task during text reading. We examined whether decoupled external attention is linked to an overall reduction in workload and whether internal focus of attention is graded or represents a distinct cognitive process. Overall, overlooking errors in the text was associated with a small pupil size, indicating reduced effortful processing. However, this effect varied with error type: overlooking high- or medium-level errors (weak decoupling) resulted in reduced pupil size, while overlooking low-level errors (deep decoupling) had no effect on pupil size. Moreover, detecting an error (at any processing level) elicited a task-evoked pupillary response, which was absent when it was overlooked. These findings suggest that weak decoupling reduces internal resource-demanding processing and are in line with the hypothesis that large pupils during deep decoupling may be associated with distinct states of effortful internal processing. They further support both the levels-of-inattention hypothesis and the notion that internal focus is a distinct mode of deeply decoupled processing. |
Judith Schlenter; Marit Westergaard What eye and hand movements tell us about expectations towards argument order: An eye- and mouse-tracking study in German Journal Article In: Acta Psychologica, vol. 246, pp. 1–20, 2024. @article{Schlenter2024, Previous research on real-time sentence processing in German has shown that listeners use the morphological marking of accusative case on a sentence-initial noun phrase to not only interpret the current argument as the object and patient, but also to predict a plausible agent. So far, less is known about the use of case marking to predict the semantic role of upcoming arguments after the subject/agent has been encountered. In the present study, we examined the use of case marking for argument interpretation in transitive as well as ditransitive structures. We aimed to control for multiple factors that could have influenced processing in previous studies, including the animacy of arguments, world knowledge, and the perceptibility of the case cue. Our results from eye- and mouse-tracking indicate that the exploitation of the first case cue that enables the interpretation of the unfolding sentence is influenced by (i) the strength of argument order expectation and (ii) the perceptual salience of the case cue. |
Tijn Schmitz; Jan Winkowski; Morwenna Hoeks; Rick Nouwen; Jakub Dotlačil Semantic accessibility and interference in pronoun resolution Journal Article In: Glossa Psycholinguistics, vol. 3, no. 1, pp. 1–58, 2024. @article{Schmitz2024a, The general view in syntactic literature is that binding constraints can make antecedents syntactically inaccessible. However, several studies showed that antecedents which are ruled out by syntactic binding constraints still influence online processing of anaphora in some stages, suggesting that a cue-based retrieval mechanism plays a role during anaphora resolution. As in the syntactic literature, in semantic accounts like Discourse Representation Theory (DRT), formal constraints are formulated in terms of accessibility of the antecedent. We explore the discourse inaccessibility postulated in DRT by looking at its role in pronoun resolution of inter-sentential anaphoric relations in four off-line and two eye-tracking experiments. The results of the eye-tracking experiments suggest that accessibility has an effect on pronoun resolution from early on. The study quantifies evidence of inaccessible antecedents affecting pronoun resolution and shows that almost all evidence points to the conclusion that discourse-inaccessible antecedents are ruled out for pronoun resolution in processing. The only potential counter-example to this claim is also detected, but remains only as anecdotal evidence even after combining data from both eye-tracking studies. The findings in the study show that accessibility plays a significant role in the processing of pronoun resolution in a way which is potentially challenging for the cue-based retrieval mechanism. The paper argues that discourse accessibility can help expand the theories of retrieval beyond the syntactic and sentence-level domain and provides a window into the study of interference in discourse. |
Merel C. J. Scholman; Hannah Rohde; Vera Demberg Facilitation of a lexical form or a discourse relation: Evidence from pairs of contrastive discourse markers Journal Article In: Glossa Psycholinguistics, vol. 3, no. 1, pp. 1–29, 2024. @article{Scholman2024, Research has shown that people anticipate upcoming linguistic content, but evidence regarding expectations of specific lexical markers is mixed. We use the Dutch pair of discourse markers Aan de ene kant…Aan de andere kant (‘On the one hand…On the other hand') and Enerzijds… Anderzijds (also equivalent to ‘On the one hand…On the other hand') to test whether readers generate predictions of an upcoming contrast dependency based on the lexical marker for the first contrastive segment, and whether processing of the lexical marker for the second segment is facilitated (i) when the first segment contains a lexical marker to signal the upcoming contrast, and (ii) when that marker directly matches that of the second segment. In a self-paced reading study, we show that readers do generate expectations for upcoming discourse markers, in that the presence of a marker on the first segment facilitates processing of the marker on the second segment, but that a directly matching lexical form does not yield further facilitation. In an eye-tracking study, we replicate the facilitative effect of the first marker of a lexical pair on the processing of the second marker, and show that this effect occurs in immediate processing. These results establish expectation-driven effects at the discourse level in early reading time measures, showing that comprehenders are aware of the discourse dependency established by a discourse marker and are flexible in identifying and integrating discourse relations with different markers. |
Elizabeth R. Schotter; Casey Stringer; Emily Saunders; Frances G. Cooley; Grace Sinclair; Karen Emmorey The role of perceptual and word identification spans in reading efficiency: Evidence from hearing and deaf readers Journal Article In: Journal of Experimental Psychology: General, vol. 153, no. 10, pp. 2359–2377, 2024. @article{Schotter2024, Theories of reading posit that decisions about "where" and "when" to move the eyes are driven by visual and linguistic factors, extracted from the perceptual span and word identification span, respectively. We tested this hypothesized dissociation by masking, outside of a visible window, either the spaces between the words (to assess the perceptual span, Experiment 1) or the letters within the words (to assess the word identification span, Experiment 2). We also investigated whether deaf readers' previously reported larger reading span was specifically linked to one of these spans. We analyzed reading rate to test overall reading efficiency, as well as average saccade length to test "where" decisions and average fixation duration to test "when" decisions. Both hearing and deaf readers' perceptual spans extended between 10 and 14 characters, and their word identification spans extended to eight characters to the right of fixation. Despite similar sized rightward spans, deaf readers read more efficiently overall and showed a larger increase in reading rate when leftward text was available, suggesting they attend more to leftward information. Neither rightward span was specifically related to where or when decisions for either group. Our results challenge the assumed dissociation between type of reading span and type of saccade decision and indicate that reading efficiency requires access to both perceptual and linguistic information in the parafovea. |
Ana I. Schwartz; Joseph Negron; Colin Scholl In: Bilingualism: Language and Cognition, pp. 1–12, 2024. @article{Schwartz2024, Prominent models of the bilingual lexicon do not allow for language-wide inhibition or any effect of general cognitive control on the activation of words within the lexicon. We report evidence that global language inhibitory control and general cognitive control mechanisms affect lexical retrieval during comprehension. Spanish–English bilinguals read language-pure sentences or sentences with mid-sentence switches while their eye movements were recorded. A switch cost was observed in aspects of the eye-tracking record reflecting early spread of lexical activation, as well as later measures. The switch cost was larger for L2-to-L1 switches and was not attenuated when switched words were cognates (Experiment 1). In Experiment 2, switch costs were reduced when the sentences contained a language color cue. These findings are inconsistent with the predictions of the Bilingual Interactive Activation Plus (BIA+) model but support the architecture of its predecessor, the BIA. They refute the assumption that early lexical activation is impervious to nonlinguistic cues. |
Amanda H. Seidl; Michelle Indarjit; Arielle Borovsky Touch to learn: Multisensory input supports word learning and processing Journal Article In: Developmental Science, vol. 27, no. 1, pp. 1–20, 2024. @article{Seidl2024, Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning. |
Marco S. G. Senaldi; Debra Titone Idiom meaning selection following a prior context: Eye movement evidence of L1 direct retrieval and L2 compositional assembly Journal Article In: Discourse Processes, vol. 61, no. 1-2, pp. 21–43, 2024. @article{Senaldi2024, Past work has suggested that L1 readers retrieve idioms (i.e., spill the tea) directly vs. matched literal controls (drink the tea) following unbiased contexts, whereas L2 readers process idioms more compositionally. However, it is unclear whether this occurs when a figuratively or literally biased context precedes idioms. We tested this in an eye-tracking study in which 40 English-L1 and 35 English-L2 adults read English sentences containing idioms having figurative, literal, or control prior contexts. Linear mixed-effects models revealed that L1 readers processed idioms faster after a literal preamble; however, at the disambiguation region, they processed idioms' figurative interpretations more quickly as familiarity increased, suggesting a L1 reliance on direct retrieval. In contrast, L2 readers processed idioms' figurative interpretations faster as verb decomposability increased, suggesting an L2 reliance on compositional assembly. Collectively, these results suggest that meaning selection occurs in a hybrid fashion when idioms follow a biased context. |
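Reading-time analyses of this kind are typically fit with linear mixed-effects models. A minimal Python sketch with statsmodels is shown below; the column names and file are hypothetical, the random-effects structure is simplified to by-subject intercepts, and the published analysis may well have been run with other tools (e.g., lme4 in R).

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-trial data: gaze_dur (ms) on the idiom region,
# context (figurative / literal / control), group (L1 / L2), subject id.
df = pd.read_csv("idiom_reading.csv")  # hypothetical file

# By-subject random intercepts only; crossed subject and item effects
# would require a different tool.
model = smf.mixedlm("gaze_dur ~ context * group", data=df, groups=df["subject"])
print(model.fit().summary())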
Eser Sendesen; Didem Turkyilmaz Investigation of the behavior of tinnitus patients under varying listening conditions with simultaneous electroencephalography and pupillometry Journal Article In: Brain and Behavior, vol. 14, no. 6, pp. 1–11, 2024. @article{Sendesen2024a, Objective: This study aims to control all hearing thresholds, including extended high frequencies (EHFs), presents stimuli of varying difficulty levels, and measures electroencephalography (EEG) and pupillometry responses to determine whether listening difficulty in tinnitus patients is effort or fatigue-related. Methods: Twenty-one chronic tinnitus patients and 26 matched healthy controls having normal pure-tone averages with symmetrical hearing thresholds were included. Subjects were evaluated with 0.125−20 kHz pure-tone audiometry, Montreal Cognitive Assessment Test (MoCA), Tinnitus Handicap Inventory (THI), EEG, and pupillometry. Results: Pupil dilatation and EEG alpha power during the “encoding” phase of the presented sentence in tinnitus patients were less in all listening conditions (p <.05). Also, there was no statistically significant relationship between EEG and pupillometry components for all listening conditions and THI or MoCA (p >.05). Conclusion: EEG and pupillometry results under various listening conditions indicate potential listening effort in tinnitus patients even if all frequencies, including EHFs, are controlled. Also, we suggest that pupillometry should be interpreted with caution in autonomic nervous system-related conditions such as tinnitus. |
Eser Sendesen; Didem Turkyilmaz Listening handicap in tinnitus patients by controlling extended high frequencies - Effort or fatigue? Journal Article In: Auris Nasus Larynx, vol. 51, no. 1, pp. 198–205, 2024. @article{Sendesen2024, Objective: In previous studies, the results regarding the presence of listening effort or fatigue in tinnitus patients were inconsistent. The reason for this inconsistency could be that extended high frequencies, which can cause listening handicap, were not considered. Therefore, this study aimed to evaluate listening skills in tinnitus patients by matching the hearing thresholds at all frequencies, including the extended high frequencies. Methods: Eighteen chronic tinnitus patients and thirty matched healthy controls having normal pure-tone averages with symmetrical hearing thresholds were included. Subjects were evaluated with 0.125-20 kHz pure-tone audiometry, the Montreal Cognitive Assessment test (MoCA), the Tinnitus Handicap Inventory (THI), the Matrix test, and pupillometry. Results: Pupil dilatation in the 'coding' phase of the presented sentence was smaller in tinnitus patients than in the control group (p < 0.05). There was no difference between the groups in Matrix test scores (p > 0.05). Also, there was no statistically significant correlation of the pupillometry components with either THI or MoCA (p > 0.05). Conclusion: The results were interpreted as indicating potential listening fatigue in tinnitus patients. Considering the possible listening handicap in tinnitus patients, reducing listening difficulties, especially in noisy environments, can be added to the goals of tinnitus therapy protocols. |
Eser Sendesen; Meral Didem Türkyılmaz In: Auris Nasus Larynx, vol. 51, no. 4, pp. 659–665, 2024. @article{Sendesen2024b, Objective: In previous studies, the results regarding the presence of listening effort or fatigue in tinnitus patients were inconsistent. The reason for this inconsistency could be that extended high frequencies, which can cause listening handicap, were not within normal limits. Therefore, this study aimed to evaluate listening skills in tinnitus patients by matching normal hearing thresholds at all frequencies, including the extended high frequencies. Methods: Eighteen chronic tinnitus patients and thirty matched healthy controls having normal pure-tone averages with symmetrical hearing thresholds were included. Subjects were evaluated with 0.125–20 kHz pure-tone audiometry, the Montreal Cognitive Assessment test (MoCA), the Tinnitus Handicap Inventory (THI), the Matrix test, and pupillometry. Results: Pupil dilatation in the 'coding' phase of the presented sentence was smaller in tinnitus patients than in the control group (p < 0.05). There was no difference between the groups in Matrix test scores (p > 0.05). Also, there was no statistically significant correlation of the pupillometry components with either THI or MoCA (p > 0.05). Conclusion: Even though tinnitus patients had normal hearing in the range of 0.125–20 kHz, their autonomic nervous system responses during listening differed from healthy subjects. This difference was interpreted as indicating potential listening fatigue in tinnitus patients. |
Adi Shechter; Sivan Medina; David L. Share; Amit Yashar In: Cortex, vol. 171, pp. 319–329, 2024. @article{Shechter2024, Peripheral letter recognition is fundamentally limited not by the visibility of letters but by the spacing between them, i.e., ‘crowding'. Crowding imposes a significant constraint on reading; however, the interplay between crowding and reading is not fully understood. Using a letter recognition task in varying display conditions, we investigated the effects of lexicality (words versus pseudowords), visual hemifield, and transitional letter probability (bigram/trigram frequency) among skilled readers (N = 14 and N = 13) in Hebrew – a script read from right to left. We observed two language-universal effects: a lexicality effect and a right hemifield (left hemisphere) advantage, as well as a strong language-specific effect – a left bigram advantage stemming from the right-to-left reading direction of Hebrew. The latter finding suggests that transitional probabilities are essential for parafoveal letter recognition. The results reveal that script-specific contextual information such as letter combination probabilities is used to accurately identify crowded letters. |
Jing Shen; Elizabeth Heller Murray Breathy vocal quality, background noise, and hearing loss: How do these adverse conditions affect speech perception by older adults? Journal Article In: Ear & Hearing, vol. 46, no. 2, pp. 474–482, 2024. @article{Shen2024, Objectives: Although breathy vocal quality and hearing loss are both prevalent age-related changes, their combined impact on speech communication is poorly understood. This study investigated whether breathy vocal quality affected speech perception and listening effort by older listeners. Furthermore, the study examined how this effect was modulated by the adverse listening environment of background noise and the listener's level of hearing loss. Design: Nineteen older adults participated in the study. Their hearing ranged from near-normal to mild-moderate sensorineural hearing loss. Participants heard speech material of low-context sentences, with stimuli resynthesized to simulate original, mild-moderately breathy, and severely breathy conditions. Speech intelligibility was measured using a speech recognition in noise paradigm, with pupillometry data collected simultaneously to measure listening effort. Results: Simulated severely breathy vocal quality was found to reduce intelligibility and increase listening effort. Breathiness and background noise level independently modulated listening effort. The impact of hearing loss was not observed in this dataset, which may be due to the use of individualized signal-to-noise ratios and a small sample size. Conclusion: Results from this study demonstrate the challenges of listening to speech with a breathy vocal quality. Theoretically, the findings highlight the importance of periodicity cues in speech perception in noise by older listeners. A breathy voice could be challenging to separate from the noise when the noise also lacks periodicity. Clinically, the findings suggest the need to address both listener- and talker-related factors in speech communication by older adults. |
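Pupillometric listening-effort measures such as those above are commonly computed as baseline-corrected dilation within an analysis window after sentence onset. The sketch below shows the general computation under assumed column names and window boundaries; it is an illustration, not the authors' pipeline.

import pandas as pd

def mean_dilation(samples: pd.DataFrame,
                  baseline=(-0.5, 0.0), window=(0.5, 3.0)) -> pd.Series:
    # Baseline-corrected mean pupil dilation per trial. samples needs columns
    # trial, time (s relative to sentence onset) and pupil (arbitrary units);
    # the window boundaries here are illustrative, not from the study.
    out = {}
    for trial, tr in samples.groupby("trial"):
        base = tr.loc[tr["time"].between(*baseline), "pupil"].mean()
        resp = tr.loc[tr["time"].between(*window), "pupil"].mean()
        out[trial] = resp - base
    return pd.Series(out, name="dilation")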
Heather Sheridan; Eliza Barach; Andriana L. Christofalos; Laurie Beth Feldman Emojis elicit semantic parafoveal-on-foveal (PoF) effects during reading Journal Article In: Visual Cognition, vol. 32, no. 2, pp. 151–161, 2024. @article{Sheridan2024, Semantic parafoveal-on-foveal (PoF) effects, in which the meaning of a parafoveal word influences the processing of the foveal word, indicate that readers engage in extensive parafoveal processing of upcoming words. To test if emojis elicit semantic PoF effects, we examined eye movements while participants read sentences containing a target word (e.g., coffee in "I enjoyed my tall coffee") that was followed either by a semantically congruent emoji (e.g., a coffee emoji), an incongruent emoji (e.g., a beer mug emoji), or no emoji. First-pass fixation durations were shorter on the foveal target word when the parafoveal emoji was semantically congruent rather than incongruent. Furthermore, the presence of an emoji (compared to no emoji) led to faster first-pass fixation durations for the preceding target word, which indicates that emojis can modulate the processing of preceding words. |
Weiqing Shi; Xin Jiang Predicting Chinese reading proficiency based on eye movement features and machine learning Journal Article In: Reading and Writing, pp. 1–25, 2024. @article{Shi2024b, This study explores the effectiveness of machine learning and eye movement features in predicting Chinese reading proficiency. Unlike previous research, which focused on one or two specific levels of eye movement features, this study integrates passage-, sentence- and word-level eye movement features to predict reading proficiency. By analyzing the eye movements of 71 native Chinese-speaking undergraduate students as they read nine short passages, a support vector machine was constructed to predict Chinese reading proficiency. Proficiency was determined based on performance on the Chinese achievement test in the National College Entrance Examination and scores from the cloze test. The results indicate that the model, which utilizes passage-, sentence- and word-level eye movement features comprehensively, achieves the highest prediction accuracy (81.69%, 84.71%). Nevertheless, eye movement features at the word, sentence, and passage levels each play a unique role in predicting Chinese reading proficiency. The results provide empirical support for the relationship between eye movement features at different levels and the reading proficiency of Chinese readers. The outcomes highlight the feasibility of integrating eye movement features at the passage, sentence, and word levels, and of employing support vector machine to construct a predictive model for the reading proficiency of Chinese readers. |
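A minimal sketch of the kind of support vector machine classifier described here, using scikit-learn with hypothetical feature columns, is given below; the actual feature set, labels, and tuning in the paper will differ.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per reader; the feature columns are invented examples of word-,
# sentence- and passage-level eye-movement measures.
df = pd.read_csv("eye_movement_features.csv")  # hypothetical file
features = ["mean_fix_dur", "mean_saccade_len", "regression_rate",
            "sentence_rereading_time", "passage_reading_rate"]
X, y = df[features], df["proficiency_group"]  # e.g., high vs. low proficiency

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())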
Yajiao Shi; Tongquan Zhou; Simin Zhao; Zhenghui Sun; Zude Zhu In: Language and Cognition, vol. 16, no. 4, pp. 1418–1432, 2024. @article{Shi2024c, Social hierarchical information impacts language comprehension. Nevertheless, the specific process underlying the integration of linguistic and extralinguistic sources of social hierarchical information has not been identified. For example, the Chinese social hierarchical verb shan4yang3 ('support: provide for the needs and comfort of one's elders') only allows its Agent to have a lower social status than the Patient. Using eye-tracking, we examined the precise time course of the integration of these semantic selectional restrictions of Chinese social hierarchical verbs and extralinguistic social hierarchical information during natural reading. A 2 (Verb Type: hierarchical vs. non-hierarchical) × 2 (Social Hierarchy Sequence: match vs. mismatch) design was constructed to investigate the effect of the interaction on early and late eye-tracking measures. Thirty-two participants (15 males; age range: 18-24 years) read sentences and judged the plausibility of each sentence. The results showed that violations of semantic selectional restrictions of Chinese social hierarchical verbs induced shorter first fixation duration but longer regression path duration and longer total reading time on sentence-final nouns (NP2). These differences were absent under non-hierarchical conditions. The results suggest that a mismatch between linguistic and extralinguistic social hierarchical information is immediately detected and processed. |
Ming Yan; Yiu-Kei Tsang; Jinger Pan Phonological recovery during Chinese sentence reading: Effects of rime and tone Journal Article In: Language, Cognition and Neuroscience, vol. 39, no. 4, pp. 501–512, 2024. @article{Yan2024a, The present study tested the activation of different phonological units of Chinese characters during silent sentence reading. Fifty-five participants were tested in an eye-tracking experiment. A highly predictable target character in each experimental sentence was replaced by four types of substitutes (i.e. no-violation, tone-violation, rime-violation, and double-violation). The participants exhibited a shorter total reading time in the no-violation and tone-violation conditions than in the double-violation baseline condition, whereas the rime-violation condition did not differ from the baseline. Moreover, the participants did not benefit from tonal information in addition to syllable-level phonological overlap. Our findings are consistent with a notion of late phonological activation in Chinese, and therefore suggest a direct route of lexical activation bypassing phonological mediation during visual word recognition. |
Bo Yao; Graham G. Scott; Gillian Bruce; Ewa Monteith-Hodge; Sara C. Sereno Emotion processing in concrete and abstract words: Evidence from eye fixations during reading Journal Article In: Cognition and Emotion, pp. 1–10, 2024. @article{Yao2024, We replicated and extended the findings of Yao et al. [(2018). Differential emotional processing in concrete and abstract words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(7), 1064–1074] regarding the interaction of emotionality, concreteness, and imageability in word processing by measuring eye fixation times on target words during normal reading. A 3 (Emotion: negative, neutral, positive) × 2 (Concreteness: abstract, concrete) design was used with 22 items per condition, with each set of six target words matched across conditions in terms of word length and frequency. Abstract (e.g. shocking, reserved, fabulous) and concrete (e.g. massacre, calendar, treasure) target words appeared (separately) within contextually neutral, plausible sentences. Sixty-three participants each read all 132 experimental sentences while their eye movements were recorded. Analyses using Gamma generalised linear mixed models revealed significant effects of both Emotion and Concreteness on all fixation measures, indicating faster processing for emotional and concrete words. Additionally, there was a significant Emotion × Concreteness interaction which, critically, was modulated by Imageability in early fixation time measures. Emotion effects were significantly larger in higher-imageability abstract words than in lower-imageability ones, but remained unaffected by imageability in concrete words. These findings support the multimodal induction hypothesis and highlight the intricate interplay of these factors in the immediate stages of word processing during fluent reading. |
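For orientation, a Gamma regression on fixation durations can be sketched in Python as below; this is a fixed-effects-only GLM with invented column names, not the Gamma generalised linear mixed model (with participant and item random effects) reported in the paper.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-fixation data: gaze_dur (ms), emotion (negative / neutral /
# positive), concreteness (abstract / concrete), imageability (continuous).
df = pd.read_csv("fixation_times.csv")  # hypothetical file

# Default Gamma link in statsmodels is the inverse power link; the published
# model additionally included random effects, which a plain GLM cannot fit.
fit = smf.glm("gaze_dur ~ emotion * concreteness * imageability",
              data=df, family=sm.families.Gamma()).fit()
print(fit.summary())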
Panpan Yao; David Hall; Hagit Borer; Linnaea Stockall Dutch–Mandarin learners' online use of syntactic cues to anticipate mass vs. count interpretations Journal Article In: Second Language Research, vol. 40, no. 4, pp. 803–831, 2024. @article{Yao2024c, It remains unclear whether late second language learners (L2ers) can acquire sufficient knowledge about unique-to-L2 constructions through implicit learning to build anticipations during real-time processing. To tackle this question, we conducted a visual world paradigm experiment to investigate high-proficiency late first-language Dutch second-language Mandarin Chinese learners' online processing of syntactic cues to count vs. mass interpretations in Chinese, which are unique to the L2 and never explicitly taught. The results showed that late Dutch–Mandarin learners were sensitive to a mass-biased syntactic cue in real-time processing, and exhibited some native-like anticipatory behaviour. These findings indicate that late L2ers can acquire unique-to-L2 constructions through implicit learning, and can automatically use this knowledge to make predictions. |
Michael C. W. Yip Tracking the time-course of spoken word recognition of Cantonese Chinese in sentence context: Evidence from eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 31, no. 3, pp. 1–11, 2024. @article{Yip2024, In this study, we conducted an eye-tracking experiment to investigate the effects of sentence context and tonal information on spoken word recognition processes in Cantonese Chinese. We recruited 60 native Cantonese listeners to participate in the eye-tracking experiment. The target words (phonologically similar words) were manipulated to either (1) a congruent context or (2) an incongruent context in the experiment. The resulting eye-movement patterns in the incongruent context condition clearly revealed that (1) sentence context produced a garden-path effect in the initial stage of the spoken word recognition processes and then (2) the lexical tone of the word (bottom-up information) overrode the contextual effects to help listeners to discriminate between different similar-sounding words during lexical access. In conclusion, the patterns of eye-tracking data show the interactive processes between the lexical tone (an acoustic cue within a Cantonese word) and sentence context played in different phases to the spoken word recognition of Cantonese Chinese. |
Artturi Ylinen; Minna Hannula-Sormunen; Jake McMullen; Erno Lehtinen; Patrik Wikman; Kimmo Alho Attenuated processing of task-irrelevant speech and other auditory stimuli: fMRI evidence from arithmetic tasks Journal Article In: European Journal of Neuroscience, vol. 60, pp. 7124–7147, 2024. @article{Ylinen2024, When performing cognitive tasks in noisy conditions, the brain needs to maintain task performance while additionally controlling the processing of task-irrelevant and potentially distracting auditory stimuli. Previous research indicates that a fundamental mechanism by which this control is achieved is the attenuation of task-irrelevant processing, especially in conditions with high task demands. However, it remains unclear whether the processing of complex naturalistic sounds can be modulated as easily as that of simpler ones. To address this issue, the present fMRI study examined whether activity related to task-irrelevant meaningful speech is attenuated similarly as that related to meaningless control sounds (nonsense speech and noise-vocoded, unintelligible sounds). The sounds were presented concurrently with three numerical tasks varying in difficulty: an easy control task requiring no calculation, a ‘routine' arithmetic calculation task and a more demanding ‘creative' arithmetic task, where solutions are generated to reach a given answer. Consistent with their differing difficulty, the tasks activated fronto-parieto-temporal regions parametrically (creative > routine > control). In bilateral auditory regions, activity related to the speech stimuli decreased as task demands increased. Importantly, however, the attenuation was more pronounced for meaningful than nonsense speech, demonstrating that distractor type can strongly modulate the extent of the attenuation. This also suggests that semantic processing may be especially susceptible to attenuation under conditions with increased task demands. Finally, as this is the first study to utilize the ‘creative' arithmetic task, we conducted exploratory analyses to examine its potential in assessing neural processes involved in mathematical problem-solving beyond routine arithmetic. |
Si On Yoon; Sarah Brown-Schmidt Partner-specific adaptation in disfluency processing Journal Article In: Cognitive Science, vol. 48, no. 8, pp. 1–16, 2024. @article{Yoon2024, Speakers tend to produce disfluencies when naming unexpected or complex items; in turn, when perceiving disfluency, listeners tend to expect upcoming reference to items that are unexpected or complex to name. In two experiments, we examined if these disfluency-based expectations are routine, or instead, if they adapt to the way the speaker uses disfluency in the current context in a talker-specific manner. Participants listened to instructions to look at objects in contexts with several images, some of which lacked conventional names. We manipulated the co-occurrence of disfluency and reference to novel versus familiar objects in a single talker situation (Experiment 1) and in a multi-talker situation (Experiment 2). In the predictive condition, disfluent expressions referred to novel objects, and fluent expressions referred to familiar objects. In the nonpredictive condition, fluent and disfluent trials referred to either familiar or novel objects. Participants' gaze revealed that listeners more readily predicted familiar images for fluent trials and novel images for disfluent trials in the predictive condition than in the nonpredictive condition. In sum, listeners adapted their expectations about upcoming words based on recent experience with disfluency. Disfluency is not processed invariably; rather, it is a cue that is flexibly interpreted depending on the local context, even in a multi-talker setting. |
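Visual-world data of this kind are usually summarised as the proportion of looks to each image type within successive time bins per condition. A small pandas sketch with assumed column names and bin size follows; it illustrates the general computation, not the authors' analysis.

import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    # samples needs columns: condition (predictive / nonpredictive), time_ms
    # (relative to disfluency onset) and roi (novel / familiar / other).
    # Column names and the 50 ms bin size are illustrative assumptions.
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    counts = (samples.groupby(["condition", "bin", "roi"]).size()
                     .rename("n").reset_index())
    totals = counts.groupby(["condition", "bin"])["n"].transform("sum")
    counts["proportion"] = counts["n"] / totals
    return counts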
Lei Yuan; Miriam Novack; David Uttal; Steven Franconeri Language systematizes attention: How relational language enhances relational representation by guiding attention Journal Article In: Cognition, vol. 243, pp. 1–14, 2024. @article{Yuan2024, Language can affect cognition, but through what mechanism? Substantial past research has focused on how labeling can elicit categorical representation during online processing. We focus here on a particularly powerful type of language-relational language-and show that relational language can enhance relational representation in children through an embodied attention mechanism. Four-year-old children were given a color-location conjunction task, in which they were asked to encode a two-color square, split either vertically or horizontally (e.g., red on the left, blue on the right), and later recall the same configuration from its mirror reflection. During the encoding phase, children in the experimental condition heard relational language (e.g., "Red is on the left of blue"), while those in the control condition heard generic non-relational language (e.g., "Look at this one, look at it closely"). At recall, children in the experimental condition were more successful at choosing the correct relational representation between the two colors compared to the control group. Moreover, they exhibited different attention patterns as predicted by the attention shift account of relational representation (Franconeri et al., 2012). To test the sustained effect of language and the role of attention, during the second half of the study, the experimental condition was given generic non-relational language. There was a sustained advantage in the experimental condition for both behavioral accuracies and signature attention patterns. Overall, our findings suggest that relational language enhances relational representation by guiding learners' attention, and this facilitative effect persists over time even in the absence of language. Implications for the mechanism of how relational language can enhance the learning of relational systems (e.g., mathematics, spatial cognition) by guiding attention will be discussed. |
Chuanli Zang; Ying Fu; Hong Du; Xuejun Bai; Guoli Yan; Simon P. Liversedge Processing multiconstituent units: Preview effects during reading of Chinese words, idioms, and phrases Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 1, pp. 169–188, 2024. @article{Zang2024b, Arguably, the most contentious debate in the field of eye movement control in reading has centered on whether words are lexically processed serially or in parallel during reading. Chinese is character-based and unspaced, meaning the issue of how lexical processing is operationalized across potentially ambiguous, multicharacter strings is not straightforward. We investigated Chinese readers' processing of frequently occurring multiconstituent units (MCUs), that is, linguistic units composed of more than a single word, that might be represented lexically as a single representation. In Experiment 1, we manipulated the linguistic category of a two-constituent Chinese string (word, MCU, or phrase) and the preview of its second constituent (identical or pseudocharacter) using the boundary paradigm with the boundary located before the two-constituent string. A robust preview effect was obtained when the second constituent, alongside the first, formed a word or MCU, but not a phrase, suggesting that frequently occurring MCUs are lexicalized and processed parafoveally as single units during reading. In Experiment 2, we further manipulated the phrase type of a two-constituent but three-character Chinese string (idiom with a one-character modifier and a two-character noun, or matched phrase) and the preview of the second constituent noun (identity or pseudocharacter). A greater preview effect was obtained for idioms than phrases, indicating that idioms are processed to a greater extent in the parafovea than matched phrases. Together, the results of these two experiments suggest that lexical identification processes in Chinese can be operationalized over linguistic units that are larger than an individual word. |
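Preview effects such as these are conventionally quantified as the difference in first-pass reading times between pseudocharacter and identical previews for each string type. A toy pandas sketch with hypothetical column names is shown below; it illustrates the measure, not the authors' code.

import pandas as pd

def preview_benefit(trials: pd.DataFrame) -> pd.DataFrame:
    # trials needs columns: string_type (e.g., word / MCU / phrase or idiom /
    # phrase), preview (identical / pseudo) and gaze_dur (ms). All column
    # names and level labels are illustrative assumptions.
    means = (trials.groupby(["string_type", "preview"])["gaze_dur"]
                   .mean().unstack("preview"))
    means["preview_benefit"] = means["pseudo"] - means["identical"]
    return means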