All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up until 2024 (with some early 2025s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2025 |
Jing Zhu; Yuanlong Li; Changlin Yang; Hanshu Cai; Xiaowei Li; Bin Hu Transformer-based fusion model for mild depression recognition with EEG and pupil area signals Journal Article In: Medical and Biological Engineering & Computing, pp. 1–17, 2025. @article{Zhu2025, Early detection is crucial for the prevention and treatment of depression; compared with major depression, current research pays less attention to mild depression. Meanwhile, analysis of multimodal biosignals such as EEG, eye movement data, and magnetic resonance imaging provides reliable technical means for the quantitative analysis of depression. However, how to effectively capture relevant and complementary information across multimodal data, so as to achieve efficient and accurate depression recognition, remains a challenge. This paper proposes a novel Transformer-based fusion model using EEG and pupil area signals for mild depression recognition. We first introduce CSP (common spatial pattern) into the Transformer to construct single-modal models of EEG and pupil data, and then utilize an attention bottleneck to construct a mid-fusion model that facilitates information exchange between the two modalities; this strategy enables the model to learn the most relevant and complementary information for each modality and to share only the necessary information, which improves model accuracy while reducing computational cost. Experimental results show that for the single-modal EEG and pupil area models, accuracy is 89.75% and 84.17%, precision is 92.04% and 95.21%, recall is 89.5% and 71%, specificity is 90% and 97.33%, and the F1 score is 89.41% and 78.44%, respectively; the accuracy of the mid-fusion model reaches 93.25%. 
Our study demonstrates that the Transformer model can learn the long-term time-dependent relationship between EEG and pupil area signals, providing an idea for designing a reliable multimodal fusion model for mild depression recognition based on EEG and pupil area signals. |
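For readers comparing such figures across papers: accuracy, precision, recall, specificity, and F1 as quoted above follow the standard confusion-matrix definitions. A minimal sketch with illustrative counts (not data from the study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)               # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Illustrative counts only, not the paper's data.
acc, prec, rec, spec, f1 = classification_metrics(tp=90, fp=8, tn=92, fn=10)
print(f"acc={acc:.3f} prec={prec:.3f} rec={rec:.3f} spec={spec:.3f} f1={f1:.3f}")
```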
Xiaomei Zhao; Yabo Wang; Keke Wang; Luyao Wang Effects of sequential and non-sequential presentation conditions of multiple-stem facts on memory integration and cognitive resource allocation Journal Article In: Psychological Research, vol. 89, no. 10, pp. 1–14, 2025. @article{Zhao2025, What limits the self-generation of new knowledge in the memory integration process? One striking contender is the dispersal of the necessary pieces of information. Specifically, when essential information is scattered across multiple sources or places, it becomes challenging to integrate it effectively and generate new knowledge. Most studies on memory integration have focused on paired stem facts, neglecting multiple-stem facts. The present study examined college students' performance on memory integration under different conditions of three stem facts. In Experiment 1, participants were exposed to a series of novel, authentic stem facts in which every three relevant ones could be integrated to generate new knowledge. Experiment 1 found that college students could spontaneously generate a new piece of information by integrating two or three separate but related facts. The integration can occur in at least two distinct types, depending on the presentation order of the learning materials: sequential recursive integration and non-sequential recursive integration. College students performed better in sequential recursive integration than in non-sequential recursive integration, and this difference in integration performance was not caused by differences in memory for the stem facts. Building on Experiment 1, Experiment 2 used eye-tracking technology to explore the allocation of internal cognitive resources across the different conditions of three stem facts. We found that in non-sequential recursive integration, college students had the longest visual duration and the highest number of fixations on the second stem fact. 
In sequential recursive integration, there were no significant differences in the number and duration of visual fixations across the three stem facts. College students fixated longer on the second and third stem facts in the non-sequential recursive condition than in the sequential recursive condition. Our study suggests that when information is related but cannot be integrated, longer fixations indicate stumbling over an unresolvable difficulty. When knowledge is presented in a stepwise manner (as in the sequential recursive integration condition), it results in better semantic memory extension. |
Zhenghua Zhang; Qingfang Zhang Accentuation affects the planning scope and focus–accentuation consistency modulates sentence production: Evidence from eye movements Journal Article In: Journal of Speech, Language, and Hearing Research, pp. 1–27, 2025. @article{Zhanga2025, Purpose: Previous studies have shown that the planning scope of sentence production is flexible and influenced by a range of linguistic and extralinguistic factors. However, one important aspect that remains underexplored is the role of prosody, a key component of language, in shaping the planning scope. While it has been established that both conceptual and grammatical information influence sentence production, and conceptual information is closely linked with prosodic cues, it remains unclear whether and how prosody, particularly accentuation, affects the planning process. Additionally, there is limited understanding of how conceptual (focus) and prosodic (accentuation) information interact to influence sentence production. Therefore, this study aims to investigate whether prosody (specifically, sentence accentuation) influences the planning scope and how the interaction between conceptual focus and prosodic accentuation jointly shapes sentence production. Method: Question-answer pairs were used to create focus, and a red dot was added to the scenarios as a cue for accentuation. Participants were asked to complete a picture description task and accent the entity with a red dot. We manipulated the accentuation position (initial vs. medial) and focus-accentuation consistency (consistent vs. inconsistent). Results: Speech latencies with initial accentuation were shorter than with medial accentuation. Eye-tracking data indicated that speakers preferred to fixate on accented pictures before articulation in initial accentuation, whereas in medial accentuation, speakers first preferred to fixate on deaccented pictures before shifting to accented ones. 
Both speech and first fixation latencies on accented pictures were shorter in the consistent condition. In the initial accentuation, accented-deaccented advantage scores were higher in the consistent condition from scenario onset to speech onset, while in the medial accentuation, this difference emerged after 220 ms. In addition, a focus inconsistent with the accentuation position slightly increases the acoustic prominence of deaccented information. Conclusions: Accentuation positions affect planning scope, with a larger scope for medial accentuation. Additionally, the consistency between focus and accentuation influences sentence production, broadly affecting the processing of accented information and impacting external acoustic prominence. This influence on accented information processing occurs during the conceptualization and linguistic encoding phases, with processing starting more quickly and taking priority when focus and accentuation are consistent. This study provides a more comprehensive understanding of how various linguistic components interact to shape sentence production. |
Zhenghua Zhang; Qingfang Zhang Linear incrementality in focus and accentuation processing during sentence production: Evidence from eye movements Journal Article In: Frontiers in Human Neuroscience, pp. 1–19, 2025. @article{Zhang2025d, Introduction: While considerable research in language production has focused on incremental processing during conceptual and grammatical encoding, prosodic encoding remains less investigated. This study examines whether focus and accentuation processing in speech production follows linear or hierarchical incrementality. Methods: We employed visual world eye-tracking to investigate how focus and accentuation are processed during sentence production. Participants were asked to complete a scenario description task in which they were prompted to use a predetermined sentence structure to accurately convey the scenario, thereby spontaneously accentuating the corresponding entity. We manipulated the position of focus with accentuation (initial vs. medial) by changing the scenarios. The initial and medial positions correspond to the first and second nouns in sentences like "N1 is above N2, not N3." Results: Our findings revealed that speech latencies were significantly shorter in sentences with initial focus accentuation than in those with medial focus accentuation. Furthermore, eye-tracking data demonstrated that speakers quickly displayed a preference for fixating on initial information after scenario onset. Crucially, the time-course analysis revealed that the onset of the initial focus accentuation effect (around 460 ms) preceded that of the medial focus accentuation effect (around 920 ms). Discussion: These results support the view that focus and accentuation processing during speech production prior to articulation follows linear incrementality rather than hierarchical incrementality. |
Yuan Zhang; Giulia Agosti; Shuchen Guan; Doris I. Braun; Karl R. Gegenfurtner Dynamics of S-cone contributions to the initiation of saccadic and smooth pursuit eye movements Journal Article In: Journal of the Optical Society of America A, vol. 42, no. 5, pp. 256–265, 2025. @article{Zhang2025c, We investigated the interplay between luminance and heterochromatic brightness in guiding oculomotor behavior, particularly in saccades and smooth pursuit eye movements. We were particularly interested in testing whether mechanisms for eye target selection incorporate contributions from S-cones. Luminance, typically measured using the CIE's luminous efficiency function V(λ), has limitations in representing the perceived brightness of heterochromatic stimuli, especially with bluish and yellowish lights. S-cones do not contribute significantly to luminance but do influence brightness perception. To examine the S-cone contributions to oculomotor behavior, we measured the target choices of saccades and smooth pursuit between equi-luminant bluish and yellowish stimuli, with paradigms producing a wide range of latencies. Our results show that at shorter latencies, luminance primarily drives both eye movements, with equi-luminant bluish and yellowish stimuli being chosen equally often. However, as latency increases, participants tend to choose bluish stimuli more frequently, suggesting that heterochromatic brightness plays a major role in longer-latency eye movements. This indicates that S-cone input may influence target selection as latency increases, highlighting a dynamic interaction between luminance and brightness in oculomotor decisions. |
Yang Zhang; Yangping Li; Weiping Hu; Huizhi Bai; Yuanjing Lyu Applying machine learning to intelligent assessment of scientific creativity based on scientific knowledge structure and eye-tracking data Journal Article In: Journal of Science Education and Technology, pp. 1–19, 2025. @article{Zhang2025b, Scientific creativity plays an essential role in science education as an advanced cognitive ability that inspires students to solve scientific problems inventively. The cultivation of scientific creativity relies heavily on effective assessment. Typically, human raters manually score scientific creativity using the Consensual Assessment Technique (CAT), which is a labor-intensive, time-consuming, and error-prone process. The assessment procedure is susceptible to subjective biases stemming from cognitive prejudice, distractions, fatigue, and fondness, potentially compromising its reliability, consistency, and efficiency. Previous research has sought to mitigate these risks by automating the assessment through latent semantic analysis and artificial intelligence. In this study, we developed machine learning (ML) models based on a training dataset that included output labels from the Scientific Creativity Test (SCT) evaluated by human experts, along with input features derived from objectively measurable semantic network parameters (representing the scientific knowledge structure) and eye-tracking blink duration (indicating attention patterns during the SCT). Most models achieve over 90% accuracy in predicting the scientific creativity levels of new individuals outside the training set, with some models achieving perfect accuracy. The results indicate that the ML models effectively capture the underlying relationship between scientific knowledge, eye movements, and scientific creativity. 
These models enable the fairly objective prediction of scientific creativity levels based on semantic network parameters and blink durations during the SCT, eliminating the need for ongoing human scoring. Therefore, laborious and complex manual assessment methods typically used for SCT can be avoided. This new method improves the efficiency of scientific creativity assessment by automating processes, minimizing subjectivity, providing rapid feedback, and enabling large-scale evaluations, all while reducing evaluators' workloads. |
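The pipeline this abstract describes takes objective features (semantic-network parameters, blink durations) as input and expert creativity ratings as output labels. A toy nearest-neighbor sketch on invented feature values, not the authors' models or data (their models reportedly exceeded 90% accuracy):

```python
import math

def knn_predict(train, query, k=3):
    """Tiny k-nearest-neighbors classifier: majority vote among the k training
    points closest to the query by Euclidean distance.
    train: list of (feature_tuple, label) pairs."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = [label for _, label in by_distance[:k]]
    return max(set(votes), key=votes.count)

# Invented feature vectors: (semantic-network clustering coefficient, mean blink duration in ms).
train = [
    ((0.62, 180.0), "high"),
    ((0.58, 190.0), "high"),
    ((0.31, 320.0), "low"),
    ((0.28, 300.0), "low"),
]
print(knn_predict(train, (0.60, 185.0)))
```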
Hao Zhang; Yiqing Hu; Yang Li; Shuangyu Zhang; Xiao Li Li; Chenguang Zhao Simultaneous dataset of brain, eye and hand during visuomotor tasks Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–15, 2025. @article{Zhang2025a, Visuomotor integration is a complex skill set encompassing many fundamental abilities, such as visual search, attention monitoring, and motor control. To explore the dynamic interplay between visual inputs and motor outputs, it is necessary to simultaneously record multiple brain activities with high temporal and spatial resolution, as well as implicit and explicit behaviors. However, there is a lack of public datasets that provide simultaneous multiple modalities during a visuomotor task. Recording brain activity simultaneously with functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) facilitates more precise capture of the brain mechanisms underlying complex visuomotor behavior. Additionally, by employing combined eye movement and manual responses, it is possible to fully evaluate visuomotor outputs along implicit and explicit dimensions. We recorded whole-brain EEG (34 electrodes) and fNIRS (44 channels) covering the frontal and parietal cortex, along with eye movements, behavior sampling, and operant behavior. The dataset underwent rigorous synchronization and quality control to highlight the effectiveness of our experiments and to demonstrate the high quality of our multimodal data framework. |
Han Zhang; Jacob Sellers; Taraz G. Lee; John Jonides The temporal dynamics of visual attention Journal Article In: Journal of Experimental Psychology: General, vol. 154, no. 2, pp. 435–456, 2025. @article{Zhang2025, Researchers have long debated how humans select relevant objects amid physically salient distractions. An increasingly popular view holds that the key to avoiding distractions lies in suppressing the attentional priority of a salient distractor. However, the precise mechanisms of distractor suppression remain elusive. Because the computation of attentional priority is a time-dependent process, distractor suppression must be understood within these temporal dynamics. In four experiments, we tracked the temporal dynamics of visual attention using a novel forced-response method, by which participants were required to express their latent attentional priority at varying processing times via saccades. We show that attention could be biased either toward or away from a salient distractor depending on the timing of observation, with these temporal dynamics varying substantially across experiments. These dynamics were explained by a computational model assuming the distractor and target priority signals arrive asynchronously in time and with different influences on saccadic behavior. The model suggests that distractor signal suppression can be achieved via a "slow" mechanism in which the distractor priority signal dictates saccadic behavior until a late-arriving priority signal overrides it, or a "fast" mechanism which directly suppresses the distractor priority signal's behavioral expression. The two mechanisms are temporally dissociable and can work collaboratively, resulting in time-dependent patterns of attentional allocation. The current work underscores the importance of considering the temporal dynamics of visual attention and provides a computational architecture for understanding the mechanisms of distractor suppression. |
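The model's two ingredients, asynchronously arriving priority signals and suppression of the distractor signal's behavioral expression, can be caricatured in a few lines. This is a toy sketch with made-up onset latencies and growth rates, not the authors' computational model:

```python
def priority(t, onset, rise):
    """Toy priority signal: zero before its onset latency (ms), then rising linearly."""
    return max(0.0, (t - onset) * rise)

def saccade_target(t, distractor_onset=80.0, target_onset=140.0, suppression=0.0):
    """At forced response time t, the saccade goes to whichever signal is stronger.
    'suppression' scales down the distractor signal's behavioral expression
    (the 'fast' mechanism); with suppression=0, the late-arriving target signal
    must override the distractor (the 'slow' mechanism)."""
    d = priority(t, distractor_onset, rise=0.01) * (1.0 - suppression)
    g = priority(t, target_onset, rise=0.025)  # arrives later but grows faster
    return "distractor" if d > g else "target"

# Early forced responses are captured by the salient distractor; later ones go to the target.
print([saccade_target(t) for t in (100, 150, 200, 300)])
# With full "fast" suppression, even early responses go to the target.
print([saccade_target(t, suppression=1.0) for t in (100, 150)])
```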
Taishen Zeng; Longxia Lou; Zhifang Liu; Zhijun Zhang Age-related depreciation in predictive processing during Chinese reading: Insights from fixation-related potentials Journal Article In: Current Psychology, no. 2004, pp. 1–11, 2025. @article{Zeng2025a, To overcome methodological deficiencies in previous eye-tracking and event-related potential (ERP) studies, the fixation-related potential (FRP) approach was used to investigate how aging affects predictive processing in silent Chinese free-view reading. Forty older and 42 young adults participated in the experiment. All of them reported good reading abilities and none suffered from physical, mental, or cognitive diseases. The older participants were over 60 years of age (62.670 ± 3.018), and they did not differ from the younger group in the schooling years (11.43 vs. 12.10 |
Taishen Zeng; Longxia Lou; Zhi-Fang Liu; Chaoyang Chen; Zhijun Zhang Coregistration of eye movements and EEG reveals frequency effects of words and their constituent characters in natural silent Chinese reading Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Zeng2025, We conducted two experiments to examine the lexical and sub-lexical processing of Chinese two-character words in reading, co-registering eye movements and electroencephalography (EEG) time-locked to the first fixation on target words. In Experiment 1, whole-word occurrence frequency and initial constituent character frequency were orthogonally manipulated, while in Experiment 2, whole-word occurrence frequency and end constituent character frequency were orthogonally manipulated. Results showed that word frequency facilitated eye-tracking measures, while initial and end character frequencies inhibited them. Classical word frequency effects on the N170 and N400 in the posterior region and reversed word frequency effects over the anterior region were consistently observed in both experiments. Experiment 1 revealed an inhibiting effect of initial character frequency on the anterior N170. In Experiment 2, the interaction between end-character frequency and word frequency showed reliable effects on the anterior N170 and N400. These results demonstrate both facilitating and inhibiting word frequency effects, along with inhibiting effects of character frequency, and show that word frequency moderates the inhibiting effect of end constituent character frequency during natural silent Chinese reading. |
Mengying Yuan; Min Gao; Xinzhong Cui; Xin Yue; Jing Xia; Xiaoyu Tang The power of sound: Exploring the auditory influence on visual search efficiency Journal Article In: Cognition, vol. 256, pp. 1–13, 2025. @article{Yuan2025, In a dynamic visual search environment, a synchronous and meaningless auditory signal (pip) that corresponds with a change in a visual target promotes the efficiency of visual search (pop out), which is known as the pip-and-pop effect. We conducted three experiments to investigate the mechanism of the pip-and-pop effect. Using the eye movement technique, we manipulated the interval rhythm (Exp. 1) and interval duration (Exp. 2) of dynamic color changes of visual stimuli in the dynamic visual search paradigm to ensure that there was a significant pip-and-pop effect. In Exp. 3, we modulated the appearance of the sound by employing a visual-only condition, an auditory target condition (synchronized sounds), an auditory oddball condition (a high-frequency sound in a series of low-frequency sounds), an omitted oddball condition (an omitted sound in a series of sounds) and an auditory non-oddball condition (the last of the four sounds). We aimed to clarify the role of audiovisual cross-modal information in the pip-and-pop effect by comparing the different conditions. The search time results showed that a significant pip-and-pop effect was found for the auditory target, auditory oddball and auditory non-oddball conditions. The eye movement results revealed an increase in fixation duration and a decrease in the number of fixations for the auditory target and auditory oddball conditions. Our findings suggest that the pip-and-pop effect is indeed a cross-modal effect. Furthermore, the interaction between auditory and visual information is necessary for the pip-and-pop effect, whereas auditory oddball stimuli attract attention and thereby moderate this effect. 
Our study provides an account of the mechanism underlying the pip-and-pop effect in a dynamic visual search paradigm. |
Soon Young Park; Diederick C. Niehorster; Ludwig Huber Examining holistic processing strategies in dogs and humans through gaze behavior Journal Article In: PLoS ONE, vol. 20, no. 2, pp. 1–27, 2025. @article{Young2025, Extensive studies have shown that humans process faces holistically, considering not only individual features but also the relationships among them. Knowing where humans and dogs fixate first and the longest when they view faces is highly informative, because the locations can be used to evaluate whether they use a holistic face processing strategy or not. However, the conclusions reported by previous eye-tracking studies appear inconclusive. To address this, we conducted an experiment with humans and dogs, employing experimental settings and analysis methods that enable direct cross-species comparisons. Our findings reveal that humans, unlike dogs, preferentially fixated on the central region, surrounded by the inner facial features, for both human and dog faces. This pattern was consistent for initial and sustained fixations over seven seconds, indicating a clear tendency towards holistic processing. Although dogs did not show an initial preference for what to look at, their later fixations may suggest holistic processing when viewing faces of their own species. We discuss various potential factors influencing species differences in our results, as well as differences compared to the results of previous studies. |
Panpan Yao; Xin Jiang; Xinwei Chen; Xingshan Li Explore the processing unit of L2 Chinese learners in on-line Chinese reading Journal Article In: Second Language Research, vol. 41, no. 1, pp. 3–19, 2025. @article{Yao2025, The present study explored the processing units of high-proficiency second language (L2) Chinese learners in on-line reading in an eye-tracking experiment. The critical aim was to investigate how learners segment continuous characters into words without the aid of word boundary demarcations. Based on previous studies, the embedded words of 2- and 3-character incremental words were manipulated to be either plausible or implausible with the preceding verbs, while the incremental words themselves were always plausible. The results revealed an effect of the plausibility manipulation, which suggested that L2 Chinese learners activated embedded words first and integrated embedded words with the previous sentence context as soon as they read them. |
Masataka Yano; Keiyu Niikuni; Ruri Shimura; Natsumi Funasaki; Masatoshi Koizumi Producing non-basic word orders in (in)felicitous contexts: Evidence from pupillometry and functional near-infrared spectroscopy (fNIRS) Journal Article In: Language, Cognition and Neuroscience, vol. 40, no. 1, pp. 1–22, 2025. @article{Yano2025, The present study examined why speakers of languages with flexible word orders are more likely to use syntactically complex non-basic word orders when they provide discourse-given information earlier in sentences. This may be because they are more efficient for speakers to produce (the Speaker Economy Hypothesis). Alternatively, speakers may produce them to help listeners understand sentences more efficiently (the Listener Economy Hypothesis), given that previous studies showed that the processing of non-basic word orders was facilitated when the felicitous context was provided (i.e. a displaced object refers to discourse-given information). We addressed this issue by conducting a picture-description experiment, in which participants uttered sentences with syntactically basic Subject-Object-Verb (SOV) or non-basic Object-Subject-Verb (OSV) in felicitous or infelicitous contexts while cognitive load was tracked using pupillometry and functional near-infrared spectroscopy. The results showed that the felicitous context facilitated the filler-gap dependency formation of OSVs in production, supporting the Speaker Economy Hypothesis. |
Zheng Yang; Bing Han; Xinbo Gao; Zhi Hui Zhan Eye-movement-prompted large image captioning model Journal Article In: Pattern Recognition, vol. 159, pp. 1–13, 2025. @article{Yang2025a, Pretrained large vision-language models have shown outstanding performance on the task of image captioning. However, owing to the insufficient decoding of image features, existing large models sometimes lose important information, such as objects, scenes, and their relationships. In addition, the complex “black-box” nature of these models makes their mechanisms difficult to explain. Research shows that humans learn richer representations than machines do, which inspires us to improve the accuracy and interpretability of large image captioning models by combining human observation patterns. We built a new dataset, called saliency in image captioning (SIC), to explore relationships between human vision and language representation. One thousand images with rich context information were selected as image data of SIC. Each image was annotated with five caption labels and five eye-movement labels. Through analysis of the eye-movement data, we found that humans efficiently captured comprehensive information for image captioning during their observations. Therefore, we propose an eye-movement-prompted large image captioning model, which is embedded with two carefully designed modules: the eye-movement simulation module (EMS) and the eye-movement analyzing module (EMA). EMS combines the human observation pattern to simulate eye-movement features, including the positions and scan paths of eye fixations. EMA is a graph neural network (GNN) based module, which decodes graphical eye-movement data and abstracts image features as a directed graph. More accurate descriptions can be predicted by decoding the generated graph. Extensive experiments were conducted on the MS-COCO and NoCaps datasets to validate our model. 
The experimental results showed that our network was interpretable, and could achieve superior results compared with state-of-the-art methods, i.e., 84.2% BLEU-4 and 145.1% CIDEr-D on MS-COCO Karpathy test split, indicating its strong potential for use in image captioning. |
Liu Yang; Wenmao Zhang; Peitao Li; Hongjie Tang; Shuying Chen; Xinhong Jin The aiming advantages in experienced first-person shooter gamers: Evidence from eye movement patterns Journal Article In: Computers in Human Behavior, vol. 165, pp. 1–12, 2025. @article{Yang2025, The esports industry is expanding rapidly, with First-Person Shooter (FPS) games gaining unprecedented popularity, attracting millions of players and viewers worldwide. Proficiency in aiming is crucial in FPS games, serving as a critical factor for performance and victory. The present study explores the aiming advantages of experienced FPS players by analyzing their eye movement patterns under varying spatial and temporal conditions. Utilizing eye-tracking technology, data were collected from 63 participants, including 28 experienced FPS players and 35 non-FPS players. Task performance and eye movement indices such as accuracy, execution time, fixation count, and saccade count were analyzed. Results indicated that experienced FPS players exhibit faster execution times and more efficient eye movement patterns. Specifically, they more frequently exhibited the 0-fixation-1-saccade pattern, characterized by a single saccade without fixation, while showing fewer patterns requiring multiple corrective adjustments. This enhanced efficiency in visual search and eye-hand coordination likely contributes to their superior performance. Moreover, the study found that target distance and appearance latency significantly affect task performance and eye movement behavior. Greater distances and higher temporal uncertainty negatively impact performance, while spatiotemporal interactions are most influential near the fovea. These findings highlight the critical role of efficient eye movement patterns in enhancing aiming performance and suggest that FPS players could benefit from targeted eye-hand coordination training. |
Yao Yan; Yilin Wu; Hoi Ming Ken Yip; Nicholas Seow Chiang Price Metrics of two-dimensional smooth pursuit are diverse across participants and stable across days Journal Article In: Journal of Vision, vol. 25, no. 2, pp. 1–18, 2025. @article{Yan2025b, Smooth pursuit eye movements are used to volitionally track moving objects, keeping their image near the fovea. Pursuit gain, the ratio of eye to stimulus speed, is used to quantify tracking accuracy and is usually close to 1 for healthy observers. Although previous studies have shown directional asymmetries such as horizontal gain exceeding vertical gain, the temporal stability of these biases and the correlation between oculomotor metrics for tracking in different directions and speeds have not been investigated. Here, in testing sessions 4 to 10 days apart, 45 human observers tracked targets moving along two-dimensional trajectories. Horizontal, vertical, and radial pursuit gain had high test–retest reliability (mean intraclass correlation 0.84). The frequency of all saccades and anticipatory saccades during pursuit also had high test–retest reliability (intraclass correlation coefficients = 0.66 and 0.73, respectively). In addition, gain metrics showed strong intermetric correlation, and saccade metrics separately showed strong intercorrelation; however, gain and saccade metrics showed only weak intercorrelation. These correlations are likely to originate from a mixture of sensory, motor, and integrative mechanisms. The test–retest reliability of multiple distinct pursuit metrics represents a “pursuit identity” for individuals, but we argue against this ultimately contributing to an oculomotor biomarker. |
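Pursuit gain as defined in the abstract above (the ratio of eye speed to stimulus speed) reduces to a one-liner once saccades have been removed from the velocity trace. A minimal sketch with an invented velocity trace:

```python
from statistics import fmean

def pursuit_gain(eye_speed, target_speed):
    """Pursuit gain: mean eye speed divided by target speed (close to 1 for
    accurate tracking). eye_speed: desaccaded instantaneous speeds in deg/s."""
    return fmean(eye_speed) / target_speed

# Invented trace of an eye tracking a 10 deg/s target slightly below target speed.
eye = [9.5, 9.8, 10.1, 9.7, 9.9]
print(pursuit_gain(eye, 10.0))
```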
Ming Yan; Jinger Pan; Reinhold Kliegl The Beijing Sentence Corpus II: A cross-script comparison between traditional and simplified Chinese sentence reading Journal Article In: Behavior Research Methods, vol. 57, no. 2, pp. 1–16, 2025. @article{Yan2025a, We introduce a sentence corpus with eye-movement data in traditional Chinese (TC), based on the original Beijing Sentence Corpus (BSC) in simplified Chinese (SC). The most noticeable difference between the TC and SC character sets is their visual complexity. There are reaction time corpora for isolated TC character/word lexical decision and naming tasks; however, until now no natural TC sentence-reading corpus with recorded eye movements has been available to the general public. We report effects of word frequency, visual complexity, and predictability on fixation location and duration, based on eye movements from 60 native TC readers. In addition, because the current BSC-II sentences are nearly identical to the original BSC sentences, we report similarities and differences in the linguistic influences on eye movements for the two varieties of written Chinese. The results shed light on how visual complexity affects eye movements. Together, the two sentence corpora comprise a useful tool for establishing cross-script similarities and differences between TC and SC. |
Chuyao Yan; Hao Wang; Xueyan Jiang; Zhiguo Wang Attention modulates subjective time perception across eye movements Journal Article In: Vision Research, vol. 227, pp. 1–9, 2025. @article{Yan2025, Prior research has established that actions, such as eye movements, influence time perception. However, the relationship between pre-saccadic attention, which is often associated with eye movement, and subjective time perception has not been explored. Our study examines the impact of pre-saccadic attention on the subjective experience of time during eye movements, particularly focusing on its influence on subjective time perception at the saccade target. Participants were presented with two clocks featuring spinning hands, positioned at distinct locations corresponding to fixation and the saccade target. They were required to report the perceived time of these clocks across the eye movements, enabling us to measure and compare both the perceived and actual timing at these specific clock locations. In Experiment 1, we observed that participants tended to report the timing of their eyes' arrival at the target location as occurring slightly ahead of the actual time. In contrast, in Experiment 2, when participants diverted their attention to the fixation clock prior to the imperative saccade, this perceptual bias diminished. These results indicate that subjective time perception is strongly impacted by attentional conditions across the two experiments. Together, these findings offer further evidence for the notion that stable time perception during eye movements is not solely an inherent property of the eye movement system but also encompasses other cognitive mechanisms, such as attention. Statement of relevance: While we often remain unaware of the frequent saccades (rapid eye movements) we make, they have a profound impact on our perception of the world and the flow of time.
Nevertheless, the connection between pre-saccadic attention, often associated with eye movements, and our subjective perception of time remains largely unexplored. In our research, we investigated the relationship between attention and our subjective experience of time. Our findings revealed the crucial role of attention, serving as a bridge between the physical movements of our eyes and our internal sense of temporal continuity. In essence, although previous studies have demonstrated the impact of eye movements on time perception, our current study emphasizes the critical influence of attention during the preparatory phase of saccades on the subjective experience of time during eye movements. |
Xiaodong Xu; Cailing Ji; Taohui Li; Martin J. Pickering The prediction of segmental and tonal information in Mandarin Chinese: An eye-tracking investigation Journal Article In: Language, Cognition and Neuroscience, vol. 40, no. 1, pp. 56–70, 2025. @article{Xu2025a, There is controversy about the extent to which people predict phonology during comprehension. In three visual-world experiments, we ask whether it occurs in Mandarin, a tonal language. Participants heard sentences containing a target word that was highly predictable (Cloze 80.2%, Experiment 1) or very highly predictable (Cloze 93.9%, Experiments 2–3) and saw an array of objects containing one whose name matched the target word (Experiments 1–2), was unrelated to the target word (Experiments 1–3), or matched the target word in segment and tone (Experiments 1–3), in segment only (Experiments 1–3), or tone only (Experiment 3). In comparison to the unrelated object, participants looked more at the segment + tone object (Experiments 1–3), and sometimes at the segment object (Experiments 1 and 3), but not at the tone object. We conclude that participants predict segmental information independently of tone. |
Jingjing Xu; Zhongling Pi; Meng Liu; Chaoqun Ye; Weiping Hu Effective learning through task motivation and learning scaffolding: Analyzing online collaborative interaction with eye tracking technology Journal Article In: Instructional Science, pp. 1–28, 2025. @article{Xu2025, Discussion has become a crucial method of interactive learning in online collaborative environments. This study aims to identify the impact of different task motivation compositions and learning scaffolding on attention, learning performance, and behavioral patterns. The 90 Chinese undergraduate and graduate students (Mage=20.38 |
Pei Xie; Han Bin Sang; Chao Zheng Huang; Ai Bao Zhou The effect of body-related information on food attentional bias in women with body weight dissatisfaction Journal Article In: Appetite, vol. 208, pp. 1–9, 2025. @article{Xie2025a, The eating behavior of individuals is susceptible to various factors. Emotion is an important factor that influences eating behaviors, especially in women who care about their body weight and are dissatisfied with their bodies. This study explored the effect of emotional cues on attentional bias toward food in women with body weight dissatisfaction (BWD). Based on Negative Physical Self Scale-Fatness scores, a total of 60 females were recruited: twenty-nine were assigned to the BWD group, and thirty-one were assigned to the no body weight dissatisfaction (NBWD) group. All participants completed the food dot-probe task after exposure to emotional cues, and their eye-tracking data were recorded. The results showed greater duration bias and first-fixation direction bias for high-calorie food in the BWD group than in the NBWD group after exposure to negative emotional cues. After exposure to positive emotional cues, the BWD group showed greater first-fixation duration bias and duration bias for high-calorie food than for low-calorie food. The present study found an effect of emotion on the attention bias toward food in women with BWD, and it provided insight into the psychological mechanism of the relationship between emotion and eating behaviors in women with BWD. Our study suggests that both negative and positive emotional cues may lead women with BWD to focus on high-calorie foods. |
Fang Xie; Wanying Chen; Lei Zhang; Xiaohua Cao; Kayleigh L. Warrington Exploring the role of word segmentation on parafoveal processing during Chinese reading Journal Article In: Journal of Cognitive Psychology, vol. 37, no. 1, pp. 1–14, 2025. @article{Xie2025, The importance of the word as a unit of meaning is well-established for readers of both alphabetic languages and Chinese. However, the unspaced nature of written Chinese raises questions about how readers use upcoming information to guide word segmentation and to adjust the parafoveal processing of subsequent characters. Using an eye-tracking experiment, we investigated whether Chinese readers pre-process character C2 more when it forms a word with C1 than when they belong to separate words. The boundary paradigm was used to manipulate the preview of C2, such that readers saw either an identity (normal) or pseudo-character preview. Linear mixed-effects models revealed reduced preview benefit when C1 and C2 were separate words. These results suggest that despite the absence of visual segmentation cues, Chinese readers are able to utilise the parafoveal preview to support the identification of word boundaries and modulate the extent of their parafoveal processing to prioritise the processing of word units. |
Iris Wiegand; Mariska Van Pouderoijen; Joukje M. Oosterman; Kay Deckers; Gernot Horstmann Contributions of distractor dwelling, skipping, and revisiting to age differences in visual search Journal Article In: Scientific Reports, vol. 15, pp. 1–28, 2025. @article{Wiegand2025, Visual search becomes slower with aging, particularly when targets are difficult to discriminate from distractors. Multiple distractor rejection processes may contribute independently to slower search times: dwelling on, skipping of, and revisiting of distractors, measurable by eye-tracking. The present study investigated how age affects each of the distractor rejection processes, and how these contribute to the final search times in difficult (inefficient) visual search. In a sample of Dutch healthy adults (19–85 years), we measured reaction times and eye-movements during a target present/absent visual search task, with varying target-distractor similarity and visual set size. We found that older age was associated with longer dwelling and more revisiting of distractors, while skipping was unaffected by age. This suggests that increased processing time and reduced visuo-spatial memory for visited distractor locations contribute to age-related decline in visual search. Furthermore, independently of age, dwelling and revisiting contributed more strongly to search times than skipping of distractors. In conclusion, under conditions of poor guidance, dwelling and revisiting make a major contribution to search times and age-related slowing in difficult visual search, while skipping is largely negligible. |
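The three distractor-rejection processes named in the entry above can be operationalized from an ordered fixation sequence roughly as follows. This is a hypothetical sketch; the input format and function name are assumptions, not the authors' analysis pipeline.

```python
def distractor_visit_stats(fixation_sequence, distractor_ids):
    """Summarize distractor rejection processes from an ordered fixation sequence.

    fixation_sequence: list of item IDs fixated, in temporal order.
    distractor_ids: set of all distractor IDs present in the display.
    Returns counts of distractors dwelled on (fixated at least once),
    skipped (never fixated), and revisited (fixated in two or more
    separate visits).
    """
    # Collapse consecutive fixations on the same item into single visits,
    # so that a revisit requires leaving the item and coming back.
    visits = []
    for item in fixation_sequence:
        if not visits or visits[-1] != item:
            visits.append(item)
    visit_counts = {d: visits.count(d) for d in distractor_ids}
    return {
        "dwelled": sum(1 for c in visit_counts.values() if c >= 1),
        "skipped": sum(1 for c in visit_counts.values() if c == 0),
        "revisited": sum(1 for c in visit_counts.values() if c >= 2),
    }
```

Dwell duration (not shown) would additionally require per-fixation timestamps, summing fixation durations within each collapsed visit.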
Xin Wang; Lizhou Fan; Haiyun Li; Xiaochan Bi; Wenjing Jiang; Xin Ma Skip-AttSeqNet: Leveraging skip connection and attention-driven Seq2seq model to enhance eye movement event detection in Parkinson's disease Journal Article In: Biomedical Signal Processing and Control, vol. 99, pp. 1–17, 2025. @article{Wang2025a, To address the limitations of traditional algorithms in detecting eye movement events, particularly in Parkinson's disease (PD) patients, this study introduces Skip-AttSeqNet. It presents an innovative approach combining skip-connected, one-dimensional convolutional neural networks with an attention-enhanced, bidirectional long short-term memory network. This hybrid architecture significantly advances smooth pursuit (SP) event detection, as evidenced by its performance on both the GazeCom dataset and a unique dataset of PD patient eye movements. Key innovations in this work include the utilization of skip connections and attention mechanisms, along with optimized training–validation set division, collectively enhancing the model's accuracy while mitigating overfitting. Skip-AttSeqNet outperforms existing algorithms, achieving a 3.2% higher sample-level F1 score and a notable 6.2% increase in event-level F1 scores for SP detection. Furthermore, we established a smooth-pursuit experimental paradigm and identified significant differences in saccade and SP features between PD patients and healthy older adults through statistical analysis using the Mann–Whitney test. These findings underscore the potential of eye movement metrics as biomarkers for PD, thereby not only strengthening PD diagnosis but also enriching the intersection of computer vision and biomedical research domains. |
Rongwei Wang; Jianrong Jia Aperiodic pupil fluctuations at rest predict orienting of visual attention Journal Article In: Psychophysiology, vol. 62, no. 1, pp. 1–10, 2025. @article{Wang2025, The aperiodic exponent of the power spectrum of signals in several neuroimaging modalities has been found to be related to the excitation/inhibition balance of the neural system. Leveraging the rich temporal dynamics of resting-state pupil fluctuations, the present study investigated the association between the aperiodic exponent of pupil fluctuations and the neural excitation/inhibition balance in attentional processing. In separate phases, we recorded participants' pupil size during resting state and assessed their attentional orienting using the Posner cueing tasks with different cue validities (i.e., 100% and 50%). We found significant correlations between the aperiodic exponent of resting pupil fluctuations and both the microsaccadic and behavioral cueing effects. Critically, this relationship was particularly evident in the 50% cue-validity condition rather than in the 100% cue-validity condition. The microsaccadic responses mediated the association between the aperiodic exponent and the behavioral response. Further analysis showed that the aperiodic exponent of pupil fluctuations predicted the self-rated hyperactivity/impulsivity trait across individuals, suggesting its potential as a marker of attentional deficits. These findings highlight the rich information contained in pupil fluctuations and provide a new approach to assessing the neural excitation/inhibition balance in attentional processing. |
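The aperiodic exponent discussed in the entry above is conventionally estimated as the negative slope of the signal's power spectrum in log-log space. The sketch below uses a plain periodogram and an illustrative frequency band; the paper's exact procedure may differ, and the function name is an assumption.

```python
import numpy as np

def aperiodic_exponent(signal, fs, fmin=0.01, fmax=1.0):
    """Estimate the aperiodic (1/f) exponent of a pupil-size time series.

    Fits a straight line to the power spectrum in log-log coordinates;
    the negative of the slope is the aperiodic exponent. The band
    [fmin, fmax] in Hz is an illustrative choice for slow pupil dynamics.
    """
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()          # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2  # raw periodogram
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(power[band]), 1)
    return -slope
```

A larger exponent means a steeper spectrum (power concentrated at slow fluctuations); white noise yields an exponent near 0 and a random walk near 2.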
Carla A. Wall; Frederick Shic; Elizabeth A. Will; Quan Wang; Jane E. Roberts Similar gap-overlap profiles in children with fragile X syndrome and IQ-matched autism Journal Article In: Journal of Autism and Developmental Disorders, vol. 55, pp. 891–903, 2025. @article{Wall2025a, Purpose: Fragile X syndrome (FXS) is a single-gene disorder characterized by moderate to severe cognitive impairment and a high association with autism spectrum disorder (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD). Atypical visual attention is a feature of FXS, ASD, and ADHD. Thus, studying early attentional patterns in young children with FXS can offer insight into early emerging neurocognitive processes underlying challenges and contribute to our understanding of common and unique features of ASD and ADHD in FXS. Methods: The present study examined visual attention indexed by the gap-overlap paradigm in children with FXS (n = 39) compared to children with ASD matched on intellectual ability and age (n = 40) and age-matched neurotypical controls (n = 34). The relationship between gap-overlap performance and intellectual ability, ASD, and ADHD across groups was characterized. Saccadic reaction times (RT) were collected across baseline, gap, and overlap conditions. Results: Results indicate no group differences in RT for any conditions. However, RT of the ASD and NT groups became slower throughout the experiment whereas RT of the FXS group did not change, suggesting difficulties in habituation for the FXS group. There was no relationship between RT and intellectual ability, ADHD, or ASD symptoms in the FXS and ASD groups. In the NT group, slower RT was related to elevated ADHD symptoms only. Conclusion: Taken together, findings suggest that the social attention differences documented in FXS and ASD may be due to other cognitive factors, such as reward or motivation, rather than oculomotor control of visual attention. |
Carla A. Wall; Caitlin Hudac; Kelsey Dommer; Beibin Li; Adham Atyabi; Claire Foster; Quan Wang; Erin Barney; Yeojin Amy Ahn; Minah Kim; Monique Mahony; Raphael Bernier; Pamela Ventola; Frederick Shic Preserved but un-sustained responses to bids for dyadic engagement in school-age children with Autism Journal Article In: Journal of Autism and Developmental Disorders, pp. 1–9, 2025. @article{Wall2025, Purpose: Dynamic eye-tracking paradigms are an engaging and increasingly used method to study social attention in autism. While prior research has focused primarily on younger populations, there is a need for developmentally appropriate tasks for older children. Methods: This study introduces a novel eye-tracking task designed to assess school-aged children's attention to speakers involved in conversation. We focused on a primary outcome of attention to speakers' faces during conversation between three actors and during emulated bids for dyadic engagement (dyadic bids). Results: In a sample of 161 children (78 autistic, 83 neurotypical), autistic children displayed significantly lower overall attention to faces than their neurotypical peers (p <.0001). Contrary to expectations, both groups demonstrated preserved attentional responses to dyadic bids, with no significant group differences. However, a divergence was observed following the dyadic bid: neurotypical children showed more attention to other conversational agents' faces than autistic children (p =.017). Exploratory analyses in the autism group showed that reduced attention to faces was associated with greater autism features during most experimental conditions. Conclusion: These findings highlight key differences in how autistic and neurotypical children engage with social cues, particularly in dynamic and interactive contexts.
The preserved response to dyadic bids in autism, alongside the absence of post-bid attentional shifts, suggests nuanced and context-dependent social attention mechanisms that should be considered in future research and intervention strategies. |
Anne C. L. Vrijling; Minke J. Boer; Remco J. Renken; Jan-Bernard C. Marsman; Joost Heutink; Frans W. Cornelissen; Nomdo M. Jansonius Detecting and quantifying glaucomatous visual function loss with continuous visual stimulus tracking: A case-control study Journal Article In: Translational Vision Science & Technology, vol. 14, no. 2, pp. 1–14, 2025. @article{Vrijling2025, Purpose: Continuous visual stimulus tracking could be used as an easy alternative to standard automated perimetry (SAP) for visual function screening. With continuous visual stimulus tracking, we simplified the perimetric task to following a moving dot on a screen with the eyes. Here, we determined whether tracking performance (the agreement between gaze and stimulus position) enables the detection and quantification of glaucomatous visual function loss (in terms of SAP), and whether it shows a learning effect. Methods: We evaluated the tracking performance of 36 cases with early, moderate, or severe glaucoma (median with interquartile range [IQR] age = 70 [67-74] years) and 36 controls (median = 70 |
Naomi Vingron; Lea Alexandra Müller Karoza; Nancy Azevedo; Aaron Johnson; Evdokimos Konstantinidis; Panagiotis Bamidis; Melissa Võ; Eva Kehayia How words can guide our eyes: Increasing engagement with art through audio-guided visual search in young and older adults Journal Article In: The Mental Lexicon, pp. 1–12, 2025. @article{Vingron2025, Pursuing cognitively stimulating activities, such as engaging with art, is crucial to a healthy lifestyle. The current work simulates visits to an art museum in a laboratory setting. Using eye tracking, we explored how linguistically guided visual search may increase attention, enjoyment, and retention of information when viewing art. Two groups of adults, young (under 35 years) and older (over 65 years), viewed ten paintings on a computer screen presented either with or without an accompanying audio-guide, while having their eye movements recorded. Audio-guides referred to specific areas of the painting, marked as Interest Areas (IA). Across age groups, as attested by gaze fixations, the audio-guides increased attention to these areas compared to free-viewing. Audio-guided viewing did not lead to a significant increase over free-viewing in information recall accuracy or in feelings of enjoyment and engagement. Overall, older adults did report feeling more positively about both audio-guided and free viewing than young adults. Thus, the use of audio-guides, specifically the gamification through linguistically guided visual search, may be a useful tool to promote meaningful attentional interactions with art. |
Ana Vilotijević; Sebastiaan Mathôt The effect of covert visual attention on pupil size during perceptual fading Journal Article In: Cortex, vol. 182, pp. 112–123, 2025. @article{Vilotijevic2025, Pupil size is modulated by various cognitive factors such as attention, working memory, mental imagery, and subjective perception. Previous studies examining cognitive effects on pupil size mainly focused on inducing or enhancing a subjective experience of brightness or darkness (for example by asking participants to attend to/memorize a bright or dark stimulus), and then showing that this affects pupil size. Surprisingly, the inverse has never been done; that is, it is still unknown what happens when a subjective experience of brightness or darkness is eliminated or strongly reduced even though bright or dark stimuli are physically present. Here, we aim to answer this question by using perceptual fading, a phenomenon where a visual stimulus gradually fades from visual awareness despite its continuous presentation. The study comprised two blocks: Fading and Non-Fading. In the Fading block, participants were presented with black and white patches with a fuzzy outline that were presented at the same location throughout the block, thus inducing strong perceptual fading. In contrast, in the Non-Fading block, the patches switched sides on each trial, thus preventing perceptual fading. Participants covertly attended to one of the two patches, indicated by a cue, and reported the offset of one of a set of circles displayed on top. We hypothesized that pupil size would be modulated by covert visual attention in the Non-Fading block, but that this effect would be absent (or reduced) in the Fading block. We found that covert visual attention to bright/dark does modulate pupil size even during perceptual fading (Fading block), but to a lesser extent than when the perceptual experience of brightness/darkness is preserved (Non-Fading block).
This implies that pupil size is always modulated by covert attention, but that the effect decreases as subjective experience of brightness or darkness decreases. In broader terms, this suggests that cognitive modulations of pupil size reflect a mixture of high-level and lower-level visual processing. |
Martin R. Vasilev; Zeynep Ozkan; Julie A. Kirkby; Antje Nuthmann; Fabrice B. R. Parmentier Unexpected sounds induce a rapid inhibition of eye-movement responses Journal Article In: Psychophysiology, vol. 62, pp. 1–19, 2025. @article{Vasilev2025, Unexpected sounds have been shown to trigger a global and transient inhibition of motor responses. Recent evidence suggests that eye movements may also be inhibited in a similar way, but it is not clear how quickly unexpected sounds can affect eye-movement responses. Additionally, little is known about whether they affect only voluntary saccades or also reflexive saccades. In this study, participants performed a pro-saccade and an anti-saccade task while the timing of sounds relative to stimulus onset was manipulated. Pro-saccades are generally reflexive and stimulus-driven, whereas anti-saccades require the generation of a voluntary saccade in the opposite direction of a peripheral stimulus. Unexpected novel sounds inhibited the execution of both pro- and anti-saccades compared to standard sounds, but the inhibition was stronger for anti-saccades. Novel sounds affected response latencies as early as 150 ms before the peripheral cue to make a saccade, all the way to 25 ms after the cue to make a saccade. Interestingly, unexpected sounds also reduced anti-saccade task errors, indicating that they aided inhibitory control. Overall, these results suggest that unexpected sounds yield a global and rapid inhibition of eye-movement responses. This inhibition also helps suppress reflexive eye-movement responses in favor of more voluntarily generated |
Timo van Kerkoerle; Louise Pape; Milad Ekramnia; Xiaoxia Feng; Jordy Tasserie; Morgan Dupont; Xiaolian Li; Bechir Jarraya; Wim Vanduffel; Stanislas Dehaene; Ghislaine Dehaene-Lambertz Brain mechanisms of reversible symbolic reference: A potential singularity of the human brain Journal Article In: eLife, vol. 12, pp. 1–28, 2025. @article{Kerkoerle2025, The emergence of symbolic thinking has been proposed as a dominant cognitive criterion to distinguish humans from other primates during hominization. Although the proper definition of a symbol has been the subject of much debate, one of its simplest features is bidirectional attachment: the content is accessible from the symbol, and vice versa. Behavioral observations scattered over the past four decades suggest that this criterion might not be met in non-human primates, as they fail to generalize an association learned in one temporal order (A to B) to the reverse order (B to A). Here, we designed an implicit fMRI test to investigate the neural mechanisms of arbitrary audio-visual and visual-visual pairing in monkeys and humans and probe their spontaneous reversibility. After learning a unidirectional association, humans showed surprise signals when this learned association was violated. Crucially, this effect occurred spontaneously in both learned and reversed directions, within an extended network of high-level brain areas, including, but also going beyond, the language network. In monkeys, by contrast, violations of association effects occurred solely in the learned direction and were largely confined to sensory areas. We propose that a human-specific brain network may have evolved the capacity for reversible symbolic reference. |
Ekin Tünçok; Lynne Kiorpes; Marisa Carrasco Opposite asymmetry in visual perception of humans and macaques Journal Article In: Current Biology, vol. 35, pp. 681–687, 2025. @article{Tuencok2025, In human adults, visual perception varies throughout the visual field. Performance decreases with eccentricity and varies around polar angle. At isoeccentric locations, performance is typically higher along the horizontal than vertical meridian (horizontal-vertical asymmetry [HVA]) and along the lower than the upper vertical meridian (vertical meridian asymmetry [VMA]). It is unknown whether the macaque visual system, the leading animal model for understanding human vision, also exhibits these performance asymmetries. Here, we investigated whether and how visual field asymmetries differ between these two groups. Human adults and adult macaque monkeys (Macaca nemestrina) performed a two-alternative forced choice (2AFC) motion direction discrimination task for a target presented among distractors at isoeccentric locations. Both groups showed heterogeneous visual sensitivity around the visual field, but there were striking differences between them. Human observers showed a large VMA—their sensitivity was poorest at the upper vertical meridian—a weak horizontal-vertical asymmetry, and lower sensitivity at intercardinal locations. Macaque performance revealed an inverted VMA—their sensitivity was poorest in the lower vertical meridian. The opposite pattern of VMA in macaques and humans revealed in this study may reflect adaptive behavior by increasing discriminability at locations with greater relevance for visuomotor integration. This study reveals that performance also varies as a function of polar angle for monkeys, but in a different manner than in humans, and highlights the need to investigate species-specific similarities and differences in brain and behavior to constrain models of vision and brain function. |
Duncan T. Tulimieri; Amelia Decarie; Tarkeshwar Singh; Jennifer A. Semrau Impairments in proprioceptively-referenced limb and eye movements in chronic stroke Journal Article In: Neurorehabilitation and Neural Repair, vol. 39, no. 1, pp. 47–57, 2025. @article{Tulimieri2025, Background: Upper limb proprioceptive impairments are common after stroke and affect daily function. Recent work has shown that stroke survivors have difficulty using visual information to improve proprioception. It is unclear how eye movements are impacted to guide action of the arm after stroke. Here, we aimed to understand how upper limb proprioceptive impairments impact eye movements in individuals with stroke. Methods: Control (N = 20) and stroke participants (N = 20) performed a proprioceptive matching task with upper limb and eye movements. A KINARM exoskeleton with eye tracking was used to assess limb and eye kinematics. The upper limb was passively moved by the robot and participants matched the location with either an arm or eye movement. Accuracy was measured as the difference between passive robot movement location and active limb matching (Hand-End Point Error) or active eye movement matching (Eye-End Point Error). Results: We found that individuals with stroke had significantly larger Hand (2.1×) and Eye-End Point (1.5×) Errors compared to controls. Further, we found that proprioceptive errors of the hand and eye were highly correlated in stroke participants (r =.67 |
Kathryn A. Tremblay; Katja Mcbane; Katherine S. Binder The role of morphology and sentence context in word processing for adults with low literacy Journal Article In: Journal of Learning Disabilities, pp. 1–15, 2025. @article{Tremblay2025, Both vocabulary skill and morphological complexity, or whether words can be broken down into root words and affixes, have a significant impact on word processing for adults with low literacy. We investigated the influence of the word-level variables of morphological complexity and root word frequency, and the sentence-level variable of context strength, on word processing in adults with low literacy, who differed on levels of vocabulary depth skills, which was a participant-level variable. Our findings demonstrate that morphological complexity, root word frequency, and context strength are all related to how adult learners process words while reading, but their effects are dependent on participants' vocabulary depth. Participants with higher levels of vocabulary depth were able to more quickly process morphologically complex words and make better use of supportive sentence context as compared to individuals with lower levels of vocabulary depth. These findings suggest that both morphological complexity and vocabulary depth are important for word processing and reading comprehension in adults with low literacy. |
Tommaso Tosato; Guillaume Dumas; Gustavo Rohenkohl; Pascal Fries Performance modulations phase-locked to action depend on internal state Journal Article In: iScience, vol. 28, no. 1, pp. 1–13, 2025. @article{Tosato2025, Previous studies have shown that perceptual performance can be modulated at specific frequencies phase-locked to self-paced motor actions, but findings have been inconsistent. To investigate this effect at the population level, we tested 50 participants who performed a self-paced button press followed by a threshold-level detection task, using both fixed- and random-effects analyses. Contrary to expectations, the aggregated data showed no significant action-related modulation. However, when accounting for internal states, we found that trials during periods of low performance or following a missed detection exhibited significant modulation at approximately 17 Hz. Additionally, participants with no false alarms showed similar modulation. These effects were significant in random effects tests, suggesting that they generalize to the population. Our findings indicate that action-related perceptual modulations are not always detectable but may emerge under specific internal conditions, such as lower attentional engagement or higher decision criteria, particularly in the beta-frequency range. |
Andrés Torres Sánchez; Marie Dawant; Venethia Danthine; Inci Cakiroglu; Roberto Santalucia; Enrique Ignacio Germany Morrison; Antoine Nonclercq; Riëm El Tahry VNS-induced dose-dependent pupillary response in refractory epilepsy Journal Article In: Clinical Neurophysiology, vol. 171, pp. 67–75, 2025. @article{TorresSanchez2025, Purpose: The Locus Coeruleus (LC) plays a vital role by releasing norepinephrine, which contributes to the antiepileptic effects of Vagus Nerve Stimulation (VNS). LC activity also influences pupil dilation. Investigating VNS dose-dependent Pupillary Dilation Response (PDR) may provide novel neurophysiological insights into therapeutic response and allow for an objective and personalized optimization of stimulation parameters. Methods: Fourteen VNS-implanted patients (9 responders, 5 non-responders) treated for at least 6 months were retrospectively recruited. VNS intensities were adjusted from 0.25 mA to 2.25 mA, or to the highest tolerable level. Concurrently, we tracked pupil size in the left eye and gathered patients' subjective perception scores. Individual curve fitting was used to explore the relationship between VNS intensity and PDR. Results: PDR increased with stimulation intensity, particularly in responders. In 6 patients, an inverted U-shaped relationship between intensity and PDR was observed 2–3 s after stimulation onset. A significant interaction was found between VNS intensity and responder status, independent of subjective perception. Conclusions: VNS induces a dose-dependent PDR, which differs between responders and non-responders. In nearly half the patients, the dose–response relationship was characterized by an inverted U-shape with a maximal VNS effect. Significance: We propose VNS-induced PDR as a novel biomarker of VNS response. |
Jan Theeuwes; Jonna Van Doorn; Dirk Van Moorselaar Suppression of fear-conditioned stimuli Journal Article In: Emotion, pp. 1–6, 2025. @article{Theeuwes2025, This study demonstrates that even objects generating acute fear through shock conditioning can be attentionally suppressed. Participants searched for shapes while a color singleton distractor was presented. In a preconditioning phase, participants learned to suppress a color singleton distractor frequently appearing in a specific location. Following fear conditioning, suppression remained in place even for those color distractors that were now associated with receiving an electric shock. This finding provides evidence that people can learn to suppress stimuli they fear. The current results are important as they challenge prevailing theories that suggest attentional capture by fearful stimuli is inflexible and driven by innate, bottom-up processes. Moreover, the finding that fearful stimuli can be suppressed opens up potential avenues for developing behavior modification techniques aimed at counteracting attentional biases toward fearful stimuli. |
Claudio Terravecchia; Giovanni Mostile; Clara Grazia Chisari; Federico Contrafatto; Andrea Salerno; Giulia Donzuso; Calogero Edoardo Cicero; Giorgia Sciacca; Alessandra Nicoletti; Mario Zappia Different patterns of acute saccadic response to levodopa in de novo Parkinson's disease Journal Article In: Journal of Neurology, vol. 272, no. 1, pp. 1–10, 2025. @article{Terravecchia2025, Background: L-dopa (LD) effects on visually guided saccades (VGS) have been poorly investigated in de novo Parkinson's disease (PD) patients through a standardized acute challenge test. Objectives: To assess the acute saccadic effects of LD as well as possible different patterns of VGS response to LD in a consistent population of de novo PD. Methods: VGS were assessed among de novo PD at baseline and 2 h after the administration of LD/carbidopa 250/25 mg. Baseline instrumental assessments were compared with healthy controls (HCs). Results: Thirty-two de novo PD and 17 HCs were enrolled. PD patients showed lower upward velocities and amplitude than HCs, improving after LD administration. Two subgroups were identified among PD patients based on percent improvement or worsening of the most significant changing VGS parameter after LD administration: Group A (19 patients, showing improvement) and B (13 patients, showing worsening). Group A had at baseline reduced vertical, especially downward, velocities, gain and amplitude compared to Group B, with a significant improvement after LD. Conversely, in Group B, an LD-induced worsening effect on both horizontal and vertical VGS parameters was found. Comparing the two identified groups based on clinical–demographic characteristics, higher prevalence of female sex was found in Group B. Conclusions: De novo PD patients presented prominent vertical VGS impairment which improved acutely after LD administration. Different patterns of acute saccadic responses to LD were also shown, suggesting a possible role of VGS in PD phenotyping. |
Zohre Soleymani Tekbudak; Mehdi Purmohammad; Ayşegül Özkan; Cengiz Acartürk The PSR corpus: A Persian sentence reading corpus of eye movements Journal Article In: Behavior Research Methods, vol. 57, no. 1, pp. 1–16, 2025. @article{Tekbudak2025, The present study introduces the Persian Sentence Reading (PSR) Corpus, aiming to expand empirical data for Persian, an under-investigated language in research on oculomotor control in reading. Reading research has largely focused on Latin script languages with a left-to-right reading direction. However, languages with different reading directions, such as right-to-left and top-to-bottom, and particularly Persian script-based languages like Farsi and Dari, have remained understudied. This study pioneers in providing an eye movement dataset for reading Persian sentences, enabling further exploration of the influences of unique Persian characteristics on eye movement patterns during sentence reading. The core objective of the study is to provide data about how word characteristics impact eye movement patterns. The research also investigates the characteristics of the interplay between neighboring words and eye movements on them. By broadening the scope of reading research beyond commonly studied languages, the study aims to contribute to an interdisciplinary approach to reading research, exemplifying investigations through various theoretical and methodological perspectives. |
Lukas Suveg; Tanvi Thakkar; Emily Burg; Shelly P. Godar; Daniel Lee; Ruth Y. Litovsky The relationship between spatial release from masking and listening effort among cochlear implant users with single-sided deafness Journal Article In: Ear & Hearing, pp. 1–16, 2025. @article{Suveg2025, Objectives: To examine speech intelligibility and listening effort in a group of patients with single-sided deafness (SSD) who received a cochlear implant (CI). There is limited knowledge on how effectively SSD-CI users can integrate electric and acoustic inputs to obtain spatial hearing benefits that are important for navigating everyday noisy environments. The present study examined speech intelligibility in quiet and noise simultaneously with measuring listening effort using pupillometry in individuals with SSD before, and 1 year after, CI activation. The study was designed to examine whether spatial separation between target and interfering speech leads to improved speech understanding (spatial release from masking [SRM]), and is associated with a decreased effort (spatial release from listening effort [SRE]) measured with pupil dilation (PPD). Design: Eight listeners with adult-onset SSD participated in two visits: (1) pre-CI and (2) post-CI (1 year after activation). Target speech consisted of Institute of Electrical and Electronics Engineers (IEEE) sentences and masker speech consisted of AzBio sentences. Outcomes were measured in three target-masker configurations with the target fixed at 0° azimuth: (1) quiet, (2) co-located target/maskers, and (3) spatially separated (±90° azimuth) target/maskers. Listening effort was quantified as change in peak proportional PPD on the task relative to baseline dilation. Participants were tested in three listening modes: acoustic-only, CI-only, and SSD-CI (both ears). At visit 1, the acoustic-only mode was tested in all three target-masker configurations.
At visit 2, the acoustic-only and CI-only modes were tested in quiet, and the SSD-CI listening mode was tested in all three target-masker configurations. Results: Speech intelligibility scores in quiet were at the ceiling for the acoustic-only mode at both visits, and in the SSD-CI listening mode at visit 2. In quiet, at visit 2, speech intelligibility scores were significantly worse in the CI-only listening modes than in all other listening modes. Comparing SSD-CI listening at visit 2 with pre-CI acoustic-only listening at visit 1, speech intelligibility scores for co-located and spatially separated configurations showed a trend toward improvement (higher scores) that was not significant. However, speech intelligibility was significantly higher in the separated compared with the co-located configuration in acoustic-only and SSD-CI listening modes, indicating SRM. PPD evoked by speech presented in quiet was significantly higher with CI-only listening at visit 2 compared with acoustic-only listening at visit 1. However, there were no significant differences between co-located and spatially separated configurations on PPD, likely due to the variability among this small group of participants. There was a negative correlation between SRM and SRE, indicating that improved speech intelligibility with spatial separation of target and masker is associated with a greater decrease in listening effort in those conditions. Conclusions: The small group of patients with SSD-CI in the present study demonstrated improved speech intelligibility from spatial separation of target and masking speech, but PPD measures did not reveal the effects of spatial separation on listening effort. However, there was an association between the improvement in speech intelligibility (SRM) and the reduction in listening effort (SRE) from spatial separation of target and masking speech. |
Patrick W. Stroman; Roland Staud; Caroline F. Pukall In: PLoS ONE, vol. 20, no. 1, pp. 1–25, 2025. @article{Stroman2025, Altered neural signaling in fibromyalgia syndrome (FM) was investigated with functional magnetic resonance imaging (fMRI). We employed a novel fMRI network analysis method, Structural and Physiological Modeling (SAPM), which provides more detailed information than previous methods. The study involved brain fMRI data from participants with FM (N = 22) and a control group (HC |
Caleb Stone; Jason B. Mattingley; Dragan Rangelov Neural mechanisms of metacognitive improvement under speed pressure Journal Article In: Communications Biology, vol. 8, pp. 1–12, 2025. @article{Stone2025, The ability to accurately monitor the quality of one's choices, or metacognition, improves under speed pressure, possibly due to changes in post-decisional evidence processing. Here, we investigate the neural processes that regulate decision-making and metacognition under speed pressure using time-resolved analyses of brain activity recorded using electroencephalography. Participants performed a motion discrimination task under short and long response deadlines and provided a metacognitive rating following each response. Behaviourally, participants were faster, less accurate, and showed superior metacognition with short deadlines. These effects were accompanied by a larger centro-parietal positivity (CPP), a neural correlate of evidence accumulation. Crucially, post-decisional CPP amplitude was more strongly associated with participants' metacognitive ratings following errors under short relative to long response deadlines. Our results suggest that superior metacognition under speed pressure may stem from enhanced metacognitive readout of post-decisional evidence. |
Sophie Marie Stasch; Wolfgang Mack When automation fails - Investigating cognitive stability and flexibility in a multitasking scenario Journal Article In: Applied Ergonomics, vol. 125, pp. 1–12, 2025. @article{Stasch2025, Managing multiple tasks simultaneously often results in performance decrements due to limited cognitive resources. Task prioritization, requiring effective cognitive control, is a strategy to mitigate these effects and is influenced by the stability-flexibility dilemma. While previous studies have investigated the stability-flexibility dilemma in fully manual multitasking environments, this study explores how cognitive control modes interact with automation reliability. While no significant interaction between control mode and automation reliability was observed in single multitasking performance, our findings demonstrate that overall task performance benefits from a flexible cognitive control mode when automation is reliable. However, when automation is unreliable, a stable cognitive control mode improves manual takeover performance, though this comes at the expense of secondary task performance. Furthermore, cognitive control modes and automation reliability independently affect various eye-tracking metrics and mental workload. These findings underscore the need to integrate cognitive control and automation reliability into adaptive assistance systems, particularly during the perceive stage, to enhance safety in human-machine systems. |
Ramanujan Srinath; Amy M. Ni; Claire Marucci; Marlene R. Cohen; David H. Brainard Orthogonal neural representations support perceptual judgements of natural stimuli Journal Article In: Scientific Reports, vol. 15, pp. 1–17, 2025. @article{Srinath2025, In natural behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on simple backgrounds. Natural viewing, however, carries a set of challenges that are inaccessible using artificial stimuli, including neural responses to background objects that are task-irrelevant. An emerging body of evidence suggests that the visual abilities of humans and animals can be modeled through the linear decoding of task-relevant information from visual cortex. This idea suggests the hypothesis that irrelevant features of a natural scene should impair performance on a visual task only if their neural representations intrude on the linear readout of the task-relevant feature, as would occur if the representations of task-relevant and irrelevant features are not orthogonal in the underlying neural population. We tested this hypothesis using human psychophysics and monkey neurophysiology, in response to parametrically variable naturalistic stimuli. We demonstrate that 1) the neural representation of one feature (the position of a central object) in visual area V4 is orthogonal to those of several background features, 2) the ability of human observers to precisely judge object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background are orthogonally represented by V4 neural responses. Our observations are consistent with the hypothesis that orthogonal neural representations support stable perception of objects and features despite the tremendous richness of natural visual scenes. |
Connor Spiech; Mikael Hope; Valentin Bégel Evoked and entrained pupillary activity while moving to preferred tempo and beyond Journal Article In: iScience, vol. 28, no. 1, pp. 1–10, 2025. @article{Spiech2025, People synchronize their movements more easily to rhythms with tempi closer to their preferred motor rates than with faster or slower ones. More efficient coupling at one's preferred rate, compared to faster or slower rates, should be associated with lower cognitive demands and better attentional entrainment, as predicted by dynamical system theories of perception and action. We show that synchronizing one's finger taps to metronomes at tempi outside of their preferred rate evokes larger pupil sizes, a proxy for noradrenergic attention, relative to passively listening. This demonstrates that synchronizing is more cognitively demanding than listening only at tempi outside of one's preferred rate. Furthermore, pupillary phase coherence increased for all tempi while synchronizing compared to listening, indicating that synchronous movements resulted in more efficiently allocated attention. Beyond their theoretical implications, our findings suggest that rehabilitation for movement disorders should be tailored to patients' preferred rates to reduce cognitive demands. |
Lauren N. Slivka; Kenna R. H. Clayton; Greg D. Reynolds Mask-wearing affects infants' selective attention to familiar and unfamiliar audiovisual speech Journal Article In: Frontiers in Developmental Psychology, vol. 3, pp. 1–8, 2025. @article{Slivka2025, This study examined the immediate effects of mask-wearing on infant selective visual attention to audiovisual speech in familiar and unfamiliar languages. Infants distribute their selective attention to regions of a speaker's face differentially based on their age and language experience. However, the potential impact wearing a face mask may have on infants' selective attention to audiovisual speech has not been systematically studied. We utilized eye tracking to examine the proportion of infant looking time to the eyes and mouth of a masked or unmasked actress speaking in a familiar or unfamiliar language. Six-month-old and 12-month-old infants (n = 42, 55% female, 91% White Non-Hispanic/Latino) were shown videos of an actress speaking in a familiar language (English) with and without a mask on, as well as videos of the same actress speaking in an unfamiliar language (German) with and without a mask. Overall, infants spent more time looking at the unmasked presentations compared to the masked presentations. Regardless of language familiarity or age, infants spent more time looking at the mouth area of an unmasked speaker and they spent more time looking at the eyes of a masked speaker. These findings indicate mask-wearing has immediate effects on the distribution of infant selective attention to different areas of the face of a speaker during audiovisual speech. |
Yiming Shi; Jiaming Zhang; Xingyi Li; Yuchong Han; Jiangheng Guan; Yilin Li; Jiawei Shen; Tzvetomir Tzvetanov; Dongyu Yang; Xinyi Luo; Yichuan Yao; Zhikun Chu; Tianyi Wu; Zhiping Chen; Ying Miao; Yufei Li; Qian Wang; Jiaxi Hu; Jianjun Meng; Xiang Liao; Yifeng Zhou; Louis Tao; Yuqian Ma; Jutao Chen; Mei Zhang; Rong Liu; Yuanyuan Mi; Jin Bao; Zhong Li; Xiaowei Chen; Tian Xue Non-image-forming photoreceptors improve visual orientation selectivity and image perception. Journal Article In: Neuron, vol. 113, pp. 486–500, 2025. @article{Shi2025, It has long been a decades-old dogma that image perception is mediated solely by rods and cones, while intrinsically photosensitive retinal ganglion cells (ipRGCs) are responsible only for non-image-forming vision, such as circadian photoentrainment and pupillary light reflexes. Surprisingly, we discovered that ipRGC activation enhances the orientation selectivity of layer 2/3 neurons in the primary visual cortex (V1) of mice by both increasing preferred-orientation responses and narrowing tuning bandwidth. Mechanistically, we found that the tuning properties of V1 excitatory and inhibitory neurons are differentially influenced by ipRGC activation, leading to a reshaping of the excitatory/inhibitory balance that enhances visual cortical orientation selectivity. Furthermore, light activation of ipRGCs improves behavioral orientation discrimination in mice. Importantly, we found that specific activation of ipRGCs in human participants through visual spectrum manipulation significantly enhances visual orientation discriminability. Our study reveals a visual channel originating from "non-image-forming photoreceptors" that facilitates visual orientation feature perception. |
Kaiyuan Sheng; Lian Liu; Feng Wang; Songnian Li; Xu Zhou An eye-tracking study on exploring children's visual attention to streetscape elements Journal Article In: Buildings, vol. 15, pp. 1–25, 2025. @article{Sheng2025, Urban street spaces play a crucial role in children's daily commuting and social activities. Therefore, the design of these spaces must give more consideration to children's perceptual preferences. Traditional street landscape perception studies often rely on subjective analysis, which lacks objective, data-driven insights. This study overcomes this limitation by using eye-tracking technology to evaluate children's preferences more scientifically. We collected eye-tracking data from 57 children aged 6–12 as they naturally viewed 30 images depicting school commuting environments. Data analysis revealed that the proportions of landscape elements in different street types influenced the visual perception characteristics of children in this age group. On well-maintained main and secondary roads, elements such as minibikes, people, plants, and grass attracted significant visual attention from children. In contrast, commercial streets and residential streets, characterized by greater diversity in landscape elements, elicited more frequent gazes. Children's eye-tracking behaviors were particularly influenced by vibrant elements like walls, plants, cars, signboards, minibikes, and trade. Furthermore, due to the developmental immaturity of children's visual systems, no significant gender differences were observed in visual perception. Understanding children's visual landscape preferences provides a new perspective for researching the sustainable development of child-friendly cities at the community level. These findings offer valuable insights for optimizing the design of child-friendly streets. |
Alexander J. Shackman; Jason F. Smith; Ryan D. Orth; Christina L. G Savage; Paige R. Didier; Julie M. Mccarthy; Melanie E. Bennett; Jack J. Blanchard Blunted ventral striatal reactivity to social reward is associated with more severe motivation and pleasure deficits in psychosis Journal Article In: Schizophrenia Bulletin, pp. 1–36, 2025. @article{Shackman2025, Background and Hypothesis: Among individuals living with psychotic disorders, social impairment is common, debilitating, and challenging to treat. While the roots of this impairment are undoubtedly complex, converging lines of evidence suggest that social motivation and pleasure (MAP) deficits play a central role. Yet most neuroimaging studies have focused on monetary rewards, precluding decisive inferences. Study Design: Here we leveraged parallel social and monetary incentive delay functional magnetic resonance imaging paradigms to test whether blunted reactivity to social incentives in the ventral striatum—a key component of the distributed neural circuit mediating appetitive motivation and hedonic pleasure—is associated with more severe MAP symptoms in a transdiagnostic adult sample enriched for psychosis. To maximize ecological validity and translational relevance, we capitalized on naturalistic audiovisual clips of an established social partner expressing positive feedback. Study Results: Although both paradigms robustly engaged the ventral striatum, only reactivity to social incentives was associated with clinician-rated MAP deficits. This association remained significant when controlling for other symptoms, binary diagnostic status, or striatal reactivity to monetary incentives. Follow-up analyses suggested that this association predominantly reflects diminished activation during the presentation of social reward.
Conclusions: These observations provide a neurobiologically grounded framework for conceptualizing the social-anhedonia symptoms and social impairments that characterize many individuals living with psychotic disorders and underscore the need to develop targeted intervention strategies. |
Irina A. Sekerina; Olga Parshina; Vladislava Staroverova; Natalia Gagarina Attention–language interface in Multilingual Assessment Instrument for Narratives Journal Article In: Journal of Experimental Child Psychology, vol. 249, pp. 1–19, 2025. @article{Sekerina2025, The current study employed the Multilingual Assessment Instrument for Narratives (MAIN) to test comprehension of narrative macrostructure in Russian in a visual world eye-tracking paradigm. The four MAIN visual narratives are structurally similar and question referents' goals and internal states (IS). Previous research revealed that children's MAIN comprehension differed among the four narratives in German, Swedish, Russian, and Turkish, but it is not clear why. We tested whether the difference in comprehension was (a) present, (b) caused by complicated inferences in understanding IS compared with goals, and (c) ameliorated by orienting visual attention to the referents whose IS was critical for accurate comprehension. Our findings confirmed (a) and (b) but found no effect of attentional cues on accuracy for (c). The multidimensional theory of narrative organization of children's knowledge of macrostructure needs to consider the type of inferences necessary for IS that are influenced by subjective interpretation and reasoning. |
Sarah Schuster; Kim Lara Weiss; Florian Hutzler; Martin Kronbichler; Stefan Hawelka Interactive and additive effects of word frequency and predictability: A fixation-related fMRI study Journal Article In: Brain and Language, vol. 260, pp. 1–7, 2025. @article{Schuster2025, The effects of word frequency and predictability are informative with respect to bottom-up and top-down mechanisms during reading. Word frequency is assumed to index bottom-up, whereas word predictability top-down information. Findings regarding potential interactive effects, however, are inconclusive. An interactive effect would suggest an early lexical impact of contextual top-down mechanisms where both variables are processed concurrently in early stages of word recognition. An additive effect, to the contrary, would suggest that contextual top-down processing only occurs post-lexically. We evaluated potential interactions between word frequency and predictability during silent reading by means of functional magnetic resonance imaging and simultaneous eye-tracking (i.e., fixation-related fMRI). Our data revealed exclusively additive effects. Specifically, we observed effects of word frequency and word predictability in left inferior frontal regions, whereas word frequency additionally exhibited an effect in the left occipito-temporal cortex. We interpret our findings in terms of contextual top-down processing facilitation. |
Marie Schroth; Wim Fias; Muhammet Ikbal Sahan Eye movements follow the dynamic shifts of attention through serial order in verbal working memory Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Schroth2025, How are arbitrary sequences of verbal information retained and manipulated in working memory? Increasing evidence suggests that serial order in verbal WM is spatially coded and that spatial attention is involved in access and retrieval. Based on the idea that brain areas controlling spatial attention are also involved in oculomotor control, we used eye tracking to reveal how the spatial structure of serial order information is accessed in verbal working memory. In two experiments, participants memorized a sequence of auditory words in the correct order. While their eye movements were being measured, they named the memorized items in a self-determined order in Experiment 1 and in a cued order in Experiment 2. We tested the hypothesis that serial order in verbal working memory interacts with the spatial attention system whereby gaze patterns in visual space closely follow attentional shifts in the internal space of working memory. In both experiments, we found that the gaze shifts in visual space correlated with the spatial shifts of attention along the left-to-right one-dimensional mapping of serial order positions in verbal WM. These findings suggest that spatial attention is employed for dynamically searching through verbal WM and that eye movements reflect the spontaneous association of order and space even in the absence of visuospatial input. |
Jens Schmidtke; Dana Bsharat-Maalouf; Tamar Degani; Hanin Karawani How lexical frequency, language dominance and noise affect listening effort – insights from pupillometry Journal Article In: Language, Cognition and Neuroscience, vol. 40, no. 2, pp. 195–208, 2025. @article{Schmidtke2025a, Acoustic, listener, and stimulus-related factors modulate speech-in-noise processes. This study examined how noise, listening experience, manipulated at two levels, native [L1] vs. second language [L2], and lexical frequency impact listening effort. Forty-seven participants, tested in their L1 Hebrew and L2 English, completed a word recognition test in quiet and noisy conditions while pupil size was recorded to assess listening effort. Results showed that listening in L2 was overall more effortful than in L1, with frequency effects modulated by language and noise. In L1, pupil responses to high and low frequency words were similar in both conditions. In L2, low frequency words elicited a larger pupil response, indicating greater effort, but this effect vanished in noise. A time-course analysis of the pupil response suggests that L1–L2 processing differences occur during lexical selection, indicating that L2 listeners may struggle to match acoustic-phonetic signals to long-term memory representations. |
Daniel Schmidtke; Julie A. Van Dyke; Victor Kuperman DerLex: An eye‑movement database of derived word reading in English Journal Article In: Behavior Research Methods, vol. 57, no. 1, pp. 1–15, 2025. @article{Schmidtke2025, This paper introduces a new database of eye-tracking data on English derived words, DerLex. A total of 598 unique derived suffixed words were embedded in sentences and read by 357 participants representing both university convenience pools and community pools of non-college-bound adults. Besides the eye-movement record of reading derived suffixed words, the DerLex database provides the author recognition test (ART) scores for each participant, tapping into their reading proficiency, as well as multiple lexical variables reflecting distributional, orthographic, phonological, and semantic features of the words, their constituent morphemes, and morphological families. The paper additionally reports the main effects of select lexical variables and their interactions with the ART scores. It also produces estimates of statistical power and sample sizes required to reliably detect those lexical effects. While some effects are robust and can be readily detected even in a small-scale typical experiment, the over-powered DerLex database does not offer sufficient power to detect many other effects—including those of theoretical importance for existing accounts of morphological processing. We believe that both the availability of the new data resource and the limitations it provides for the planning and design of upcoming experiments are useful for future research on morphological complexity. |
Ayushi Sangoi; Farzin Hajebrahimi; Suril Gohel; Mitchell Scheiman; Tara L. Alvarez Efferent compared to afferent neural substrates of the vergence eye movement system evoked via fMRI Journal Article In: Frontiers in Neuroscience, vol. 18, pp. 1–13, 2025. @article{Sangoi2025, Introduction: The vergence neural system was stimulated to dissect the afferent and efferent components of symmetrical vergence eye movement step responses. The hypothesis tested was whether the afferent regions of interest would differ from the efferent regions to serve as comparative data for future clinical patient population studies. Methods: Thirty binocularly normal participants participated in an oculomotor symmetrical vergence step block task within a functional MRI experiment compared to a similar sensory task where the participants did not elicit vergence eye movements. Results: For the oculomotor vergence task, functional activation was observed within the parietal eye field, supplemental eye field, frontal eye field, and cerebellar vermis, and activation in these regions was significantly diminished during the sensory task. Differences between the afferent sensory and efferent oculomotor experiments were also observed within the visual cortex. Discussion: Differences between the vergence oculomotor and sensory tasks provide a protocol to delineate the afferent and efferent portion of the vergence neural circuit. Implications with clinical populations and future therapeutic intervention studies are discussed. |
Rosa Salmela; Minna Lehtonen; Seppo Vainio; Raymond Bertram Challenges in inflected word processing for L2 speakers: The role of stem allomorphy Journal Article In: Studies in Second Language Acquisition, pp. 1–28, 2025. @article{Salmela2025, Morphological knowledge refers to the ability to recognize and use morphemes correctly in syntactic contexts and word formation. This is crucial for learning a morphologically rich language like Finnish, which features both agglutinative and fusional morphology. In Finnish, agglutination occurs in forms like aamu: aamu+lla (‘morning: in the morning'), where a suffix is transparently added. Fusional features, as seen in ilta: illa+lla (‘evening: in the evening'), involve allomorphic stem changes that reduce transparency. We investigated the challenges posed by stem allomorphy for word recognition in isolation and in context for L2 learners and L1 speakers of Finnish. In a lexical decision task, L2 speakers had longer response times and higher error rates for semitransparent inflections, while L1 speakers showed longer response times for both transparent and semitransparent inflection types. In sentence reading, L2 speakers exhibited longer fixation times for semitransparent forms, whereas L1 speakers showed no significant effects. The results suggest that the challenges in L2 inflectional processing are more related to fusional than agglutinative features of the Finnish language. |
Duygu F. Şafak; Holger Hopp Learning L2 grammar from prediction errors? Verb biases in structural priming in comprehension and production Journal Article In: Bilingualism: Language and Cognition, pp. 1–17, 2025. @article{Safak2025, This study tests whether prediction error underlies structural priming in a later-learnt L2 across two visual world eye-tracking priming experiments. Experiment 1 investigates priming when learners encounter verbs biased to double-object-datives (DO, “pay”) or prepositional-object-datives (PO, “send”) in the other structure in prime sentences. L1-German–L2-English learners read prime sentences crossing verb bias and structure (DO/PO). Subsequently, they heard target sentences – with unbiased verbs (“show”) – while viewing visual scenes. In line with implicit learning models, gaze data revealed priming and prediction-error effects, namely, more predictive looks consistent with PO following PO primes with DO-bias verbs. Priming in comprehension persisted into (unprimed) production, indicating that priming by prediction error leads to longer-term learning. Experiment 2 investigates the effects of target verb bias on error-based priming. Priming and prediction-error effects were reduced for targets with non-alternating verbs (“donate”) that only allow PO structures, suggesting learners' knowledge of the L2 grammar modulates prediction-error-based priming. |
Jason F. Rubinstein; Noelia Gabriela Alcalde; Adrien Chopin; Preeti Verghese Oculomotor challenges in macular degeneration impact motion extrapolation Journal Article In: Journal of Vision, vol. 25, no. 1, pp. 1–19, 2025. @article{Rubinstein2025, Macular degeneration (MD), which affects the central visual field including the fovea, has a profound impact on acuity and oculomotor control. We used a motion extrapolation task to investigate the contribution of various factors that potentially impact motion estimation, including the transient disappearance of the target into the scotoma, increased position uncertainty associated with eccentric target positions, and increased oculomotor noise due to the use of a non-foveal locus for fixation and for eye movements. Observers performed a perceptual baseball task where they judged whether the target would intersect or miss a rectangular region (the plate). The target was extinguished before reaching the plate and participants were instructed either to fixate a marker or smoothly track the target before making the judgment. We tested nine eyes of six participants with MD and four control observers with simulated scotomata that matched those of individual participants with MD. Both groups used their habitual oculomotor locus—eccentric preferred retinal locus (PRL) for MD and fovea for controls. In the fixation condition, motion extrapolation was less accurate for controls with simulated scotomata than without, indicating that occlusion by the scotoma impacted the task. In both the fixation and pursuit conditions, MD participants with eccentric preferred retinal loci typically had worse motion extrapolation than controls with a matched artificial scotoma and foveal preferred retinal loci. Statistical analysis revealed occlusion and target eccentricity significantly impacted motion extrapolation in the pursuit condition, indicating that these factors make it challenging to estimate and track the path of a moving target in MD. |
Lilly Roth; Hans Christoph Nuerk; Felix Cramer; Gabriella Daroczy In: Psychological Research, vol. 89, no. 1, pp. 1–24, 2025. @article{Roth2025, Solving arithmetic word problems requires individuals to create a correct mental representation, and this involves both text processing and number processing. The latter comprises understanding the semantic meaning of numbers (i.e., their magnitudes) and potentially executing the appropriate mathematical operation. However, it is not yet clear whether number processing occurs after text processing or both take place simultaneously. We hypothesize that number processing occurs early and simultaneously with other problem-solving processes such as text processing. To test this hypothesis, we created non-solvable word problems that do not require any number processing and we manipulated the calculation difficulty using carry/borrow vs. non-carry/non-borrow within addition and subtraction problems. According to a strictly sequential model, this manipulation should not matter, because when problems are non-solvable, no calculation is required. In contrast, according to an interactive model, attention to numbers would be higher when word problems require a carry/borrow compared to a non-carry/non-borrow operation. Eye-tracking was used to measure attention to numbers and text in 63 adults, operationalized by static (duration and count of fixations and regressions) and dynamic measures (count of transitions). An interaction between difficulty and operation was found for all static and dynamic eye-tracking variables as well as for response times and error rates. The observed number processing in non-solvable word problems, which indicates that it occurs simultaneously with text processing, is inconsistent with strictly sequential models. |
Tracy E Reuter; Lauren L Emberson Relative contributions of predictive vs associative processes to infant looking behavior during language comprehension Journal Article In: Journal of Child Language, pp. 1–24, 2025. @article{Reuter2025, Numerous developmental findings suggest that infants and toddlers engage predictive processing during language comprehension. However, a significant limitation of this research is that associative (bottom-up) and predictive (top-down) explanations are not readily differentiated. Following adult studies that varied predictiveness relative to semantic-relatedness to differentiate associative vs. predictive processes, the present study used eye-tracking to begin to disentangle the contributions of bottom-up and top-down mechanisms to infants' real-time language processing. Replicating prior results, infants (14-19 months old) use successive semantically-related words across sentences (e.g., eat, yum, mouth) to predict upcoming nouns (e.g., cookie). However, we also provide evidence that using successive semantically-related words to predict is distinct from the bottom-up activation of the word itself. In a second experiment, we investigate the potential effects of repetition on the findings. This work is the first to reveal that infant language comprehension is affected by both associative and predictive processes. |
Michela Redolfi; Chiara Melloni Processing adjectives in development: Evidence from eye-tracking Journal Article In: Journal of Child Language, vol. 52, pp. 270–293, 2025. @article{Redolfi2025, Combining adjective meaning with the modified noun is particularly challenging for children under three years. Previous research suggests that in processing noun-adjective phrases children may over-rely on noun information, delaying or omitting adjective interpretation. However, the question of whether this difficulty is modulated by semantic differences among (subsective) adjectives is underinvestigated. A visual-world experiment explores how Italian-learning children (N=38, 2;4-5;3) process noun-adjective phrases and whether their processing strategies adapt based on the adjective class. Our investigation substantiates the proficient integration of noun and adjective semantics by children. Nevertheless, aligning with previous research, a notable asymmetry is evident in the interpretation of nouns and adjectives, the latter being integrated more slowly. Remarkably, by testing toddlers across a wide age range, we observe a developmental trajectory in processing, supporting a continuity approach to children's development. Moreover, we reveal that children exhibit sensitivity to the distinct interpretations associated with each subsective adjective. |
Rishi Rajalingham; Hansem Sohn; Mehrdad Jazayeri Dynamic tracking of objects in the macaque dorsomedial frontal cortex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–16, 2025. @article{Rajalingham2025, A central tenet of cognitive neuroscience is that humans build an internal model of the external world and use mental simulation of the model to perform physical inferences. Decades of human experiments have shown that behaviors in many physical reasoning tasks are consistent with predictions from the mental simulation theory. However, evidence for the defining feature of mental simulation – that neural population dynamics reflect simulations of physical states in the environment – is limited. We test the mental simulation hypothesis by combining a naturalistic ball-interception task, large-scale electrophysiology in non-human primates, and recurrent neural network modeling. We find that neurons in the monkeys' dorsomedial frontal cortex (DMFC) represent task-relevant information about the ball position in a multiplexed fashion. At a population level, the activity pattern in DMFC comprises a low-dimensional neural embedding that tracks the ball both when it is visible and invisible, serving as a neural substrate for mental simulation. A systematic comparison of different classes of task-optimized RNN models with the DMFC data provides further evidence supporting the mental simulation hypothesis. Our findings provide evidence that neural dynamics in the frontal cortex are consistent with internal simulation of external states in the environment. |
Alma Rahimi; Azar Ayaz; Chloe Edgar; Gianna Jeyarajan; Darryl Putzer; Michael Robinson; Matthew Heath Sub-symptom threshold aerobic exercise improves executive function during the early stage of sport-related concussion recovery Journal Article In: Journal of Sports Sciences, pp. 1–14, 2025. @article{Rahimi2025, We examined whether persons with a sport-related concussion (SRC) derive a postexercise executive function (EF) benefit, and whether a putative benefit is related to an exercise-mediated increase in cerebral blood flow (CBF). Participants with an SRC completed the Buffalo Concussion Bike Test to determine the heart rate threshold (HRt) associated with symptom exacerbation and/or voluntary exhaustion. On a separate day, SRC participants – and healthy controls (HC group) – completed 20-min of aerobic exercise at 80% HRt while middle cerebral artery velocity (MCAv) was measured to estimate CBF. The antisaccade task (i.e. saccade mirror-symmetrical to target) was completed pre- and postexercise to evaluate EF. SRC and HC groups showed a comparable exercise-mediated increase in CBF (ps < .001), and both groups elicited a postexercise EF benefit (ps < .001); however, the benefit was unrelated to the magnitude of the MCAv change. Moreover, SRC symptomology was not increased when assessed immediately postexercise and showed a 24 h follow-up benefit. Accordingly, persons with an SRC demonstrated an EF benefit following a single bout of sub-symptom threshold aerobic exercise. Moreover, the exercise intervention did not result in symptom exacerbation and thus demonstrates that a tailored aerobic exercise program may support cognitive and symptom recovery following an SRC. |
Meizhen Qian; Jianbao Wang; Yang Gao; Ming Chen; Yin Liu; Dengfeng Zhou; Haidong D Lu; Xiaotong Zhang; Jia Ming Hu; Anna Wang Roe Multiple loci for foveolar vision in macaque monkey visual cortex Journal Article In: Nature Neuroscience, vol. 28, no. 1, pp. 137–149, 2025. @article{Qian2025, In humans and nonhuman primates, the central 1° of vision is processed by the foveola, a retinal structure that comprises a high density of photoreceptors and is crucial for primate-specific high-acuity vision, color vision and gaze-directed visual attention. Here, we developed high-spatial-resolution ultrahigh-field 7T functional magnetic resonance imaging methods for functional mapping of the foveolar visual cortex in awake monkeys. In the ventral pathway (visual areas V1–V4 and the posterior inferior temporal cortex), viewing of a small foveolar spot elicits a ring of multiple (eight) foveolar representations per hemisphere. This ring surrounds an area called the ‘foveolar core', which is populated by millimeter-scale functional domains sensitive to fine stimuli and high spatial frequencies, consistent with foveolar visual acuity, color and achromatic information and motion. Thus, this elaborate rerepresentation of central vision coupled with a previously unknown foveolar core area signifies a cortical specialization for primate foveation behaviors. |
Manuel F. Pulido; Marijana Macis; Suhad Sonbul The effects of adjacent and nonadjacent collocations on processing: Eye-tracking evidence from “nested” collocations Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–19, 2025. @article{Pulido2025, There is now robust evidence of priming effects during sentence processing for adjacent words that form collocations (statistically associated combinations). However, there is very limited evidence regarding how nonadjacent collocations might facilitate processing. Furthermore, no previous research has examined how nonadjacent collocations interplay with other (non)collocational material in the surrounding context. We employed “nested” collocations for the first time, in which more than one contextual element (verb, adjective) is a potential collocate for a noun. For example, in a verb–adjective–noun (V-A-N) phrase, two collocations may be “nested” (“express concerns” + “valid concerns” = “express valid concerns”) or only the verb (nonadjacent) or adjective (adjacent) might be collocational. In an eye-tracking experiment with L1 English speakers, we manipulated the collocational status of adjectives adjacent to the noun, (V)-A-N, and verbs nonadjacent to the noun, V-(A)-N. Our results replicated the basic adjacent effect and produced evidence of facilitation for nonadjacent collocations. Additionally, we find preliminary evidence for a syntactic primacy effect, whereby collocational links involving the verb prove more impactful than adjective–noun collocations, despite nonadjacency. Importantly, the results reveal cumulative facilitation in “nested collocations,” with a boost resulting from the simultaneous effects observed in adjacent and nonadjacent collocations. Altogether, the results extend our understanding of collocational priming effects beyond single collocations. |
Seema Prasad; Shivam Puri; Keerthana Kapiley; Riya Rafeekh Looking without knowing: Evidence for language-mediated eye movements to masked words in Hindi-English bilinguals Journal Article In: Languages, vol. 10, no. 2, pp. 1–15, 2025. @article{Prasad2025, Cross-linguistic activation has been frequently demonstrated in bilinguals through eye movements using the visual world paradigm. In this study, we explored if such activations could operate below thresholds of awareness, at least in the visual modality. Participants listened to a spoken word in Hindi or English and viewed a display containing masked printed words. One of the printed words was a phonological cohort of the translation equivalent of the spoken word (TE cohort). Previous studies using this paradigm with clearly visible words on a similar sample have demonstrated robust activation of TE cohorts. We tracked eye movements to a blank screen where the masked written words had appeared accompanied by spoken words. Analyses of fixation proportions and dwell times revealed that participants looked more often and for longer duration at quadrants that contained the TE cohorts compared to distractors. This is one of the few studies to show that cross-linguistic activation occurs even with masked visual information. We discuss the implications for bilingual parallel activation and unconscious processing of habitual visual information. |
Pierre Pouget; Pierre Daye; Martin Paré Cognitive and kinematic markers of ketamine effects in behaving non-human primates Journal Article In: European Journal of Pharmacology, vol. 987, pp. 1–7, 2025. @article{Pouget2025, Ketamine is widely used to probe cognitive functions that rely on the properties of N-methyl-D-aspartate receptor (NMDAR) synaptic transmission. Numerous studies have shown that cognitive performance and adjustments in the decision or perceptual domains are affected after ketamine injection into the general circulation of primates. Here, we take advantage of the fact that, in the brainstem, horizontal saccade deceleration is controlled by a glycine-NMDAR-gated current, while a gamma-aminobutyric acid (GABA) current controls vertical deceleration, to demonstrate that despite manipulation of NMDAR synaptic transmission at the general circulation level, saccade kinematics appeared to be differentially maintained in the brainstem motor generator circuit. The results show that the deceleration of saccades elicited toward a horizontal target was substantially decreased, while the deceleration of vertical saccades remained largely unaffected. These results provide functionally distinct markers for estimating the cognitive and kinematic specificity of NMDAR-gated currents acting in the prefrontal cortex, as distinct from the GABA circuit, for drugs in general circulation. |
Christian H. Poth; Werner X. Schneider Vision of objects happens faster and earlier for location than for identity Journal Article In: iScience, vol. 28, no. 2, pp. 1–11, 2025. @article{Poth2025, Visual perception of objects requires the integration of separate independent stimulus features, such as object identity and location. We ask whether the location and the identity of an object are processed with different efficiency for being consciously recognized and reported. Participants viewed a target letter at one out of several locations that were terminated by pattern masks at all possible locations. Participants reported the location of the target and/or its letter identity. Report performance as a function of the target duration before the mask enabled us to estimate the speed of visual processing and the minimum duration for processing to start. Visual processing was faster and started earlier for spatial location than for object identity, even though the processing of the features was (stochastically) independent. Together, these findings reveal an intrinsic preference of the human visual system for the perceptual processing of space as opposed to visual features such as categorical identity. |
Marlene Poncet; Sara Spotorno; Margaret C. Jackson Competition between emotional faces in visuospatial working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 51, no. 1, pp. 68–81, 2025. @article{Poncet2025, Visuospatial working memory (VSWM) helps track the identity and location of people during social interactions. Previous work showed better VSWM when all faces at encoding displayed a happy compared to an angry expression, reflecting a prosocial preference for monitoring who was where. However, social environments are not typically uniform, and certain expressions may more strongly compete for and bias face monitoring according to valence and/or arousal properties. Here, we used heterogeneous encoding displays in which two faces shared one emotion and two shared another, and asked participants to relocate a central neutral probe face after a blank delay. When considering the emotion of the probed face independently of the cooccurring emotion at encoding, an overall happy benefit was replicated. However, accuracy was modulated by the nonprobed emotion, with a relocation benefit for angry over sad, happy over fearful, and sad over happy faces. These effects did not depend on encoding fixation time, stimulus arousal, perceptual similarity, or response bias. Thus, emotional competition for faces in VSWM is complex and appears to rely on more than simple arousal- or valence-biased mechanisms. We propose a “social value (SV)” account to better explain when and why certain emotions may be prioritized in VSWM. |
Vincent Plikata; Pablo R. Grassia; Julius Frackd; Andreas Bartels Hierarchical surprise signals in naturalistic violation of expectations Journal Article In: Imaging Neuroscience, vol. 3, pp. 1–23, 2025. @article{Plikata2025, Surprise responses signal both high-level cognitive alerts that information is missing, and increasingly specific back-propagating error signals that allow updates in processing nodes. Studying surprise is, hence, central for cognitive neuroscience to understand internal world representations and learning. Yet, only few prior studies used naturalistic stimuli targeting our high-level understanding of the world. Here, we use magic tricks in an fMRI experiment to investigate neural responses to violations of core assumptions held by humans about the world. We showed participants naturalistic videos of three types of magic tricks, involving objects appearing, changing color, or disappearing, along with control videos without any violation of expectation. Importantly, the same videos were presented with and without prior knowledge about the tricks' explanation. Results revealed generic responses in frontal and parietal areas, together with responses specific to each of the three trick types in posterior sensory areas. A subset of these regions, the midline areas of the default mode network (DMN), showed surprise activity that depended on prior knowledge. Equally, sensory regions showed sensitivity to prior knowledge, reflected in differing decoding accuracies. These results suggest a hierarchy of surprise signals involving generic processing of violation of expectations in frontal and parietal areas with concurrent surprise signals in sensory regions that are specific to the processed features. |
Alessandro Piras The role of the peripheral target in stimulating eye movements Journal Article In: Psychology of Sport & Exercise, vol. 76, pp. 1–10, 2025. @article{Piras2025, The present study investigated the role of top-down and bottom-up processes during a deceptive sports strategy called “no-look passes” and how microsaccades and small saccades modulate these processes. The first experiment examined the role of expertise in modulating the shift of covert attention with the bottom-up procedure. Results showed more saccades of greater amplitude and faster peak velocity in amateur than in expert groups. In the second experiment, the shift of covert attention between top-down and bottom-up conditions was investigated in a group of expert basketball players. Analysis showed that athletes make more microsaccades during the bottom-up condition; meanwhile, during the top-down condition, they were pushed to make more small saccades to decide where to send the ball. The findings suggested that the top-down process stimulates the eyes to move more than the bottom-up condition does. This could be explained by the fact that during the top-down condition, athletes do not have an “eyehold” that stimulates their attention. During the top-down condition, athletes had to shift their attention to both sides before making the pass, resulting in their eyes being more “hesitant” compared with the situation in which they are peripherally stimulated. |
Zhongling Pi; Xuemei Huang; Yun Wen; Qin Wang; Xin Zhao; Xiying Li Happy facial expressions and mouse pointing enhance EFL vocabulary learning from instructional videos Journal Article In: British Journal of Educational Technology, vol. 56, pp. 388–409, 2025. @article{Pi2025, Given their easy accessibility and dual-channel model of content presentation, instructional videos have become a favoured tool for EFL vocabulary learning among many students. Teachers often use various nonverbal behaviours to elicit social reactions and guide learners' attention in instructional videos. The current study conducted three eye-tracking experiments to examine the circumstances under which a teacher's happy facial expressions are beneficial in instructional videos, with or without pointing gestures and mouse pointing. Experiments 1 and 2 demonstrated that the combination of happy facial expressions and pointing gestures attracted learners' attention to the teacher and hindered students' learning performance, regardless of the complexity of slides. Experiment 3 showed that in instructional videos with complex slides, using happy facial expressions along with mouse pointing can enhance students' learning performance. Teachers are advised to show happy facial expressions and avoid using pointing gestures when designing instructional videos. |
Olga Parshina; Anna Smirnova; Sofya Goldina; Emily Bainbridge The effect of the global language context on bilingual language control during L1 reading Journal Article In: Bilingualism: Language and Cognition, pp. 1–11, 2025. @article{Parshina2025, The proactive gain control hypothesis suggests that the global language context regulates lexical access to the bilinguals' languages during reading. Specifically, with increasing exposure to non-target language cues, bilinguals adjust the lexical activation to allow non-target language access from the earliest word recognition stages. Using the invisible boundary paradigm, we examined the flow of lexical activation in 50 proficient Russian-English bilinguals reading in their native Russian while the language context shifted from a monolingual to a bilingual environment. We gradually introduced non-target language cues (the language of the experimenter and fillers) while also manipulating the type of word previews (identical, code-switches, unrelated code-switches, pseudowords). The results revealed the facilitatory reading effects of code-switches, but only in the later lexical processing stages, and these effects were independent of the global language context manipulation. The results are discussed from the perspective of limitations imposed by script differences on bilingual language control flexibility. |
Jessica L. Parker; A. Caglar Tas The saccade target is prioritized for visual stability in naturalistic scenes Journal Article In: Vision Research, vol. 227, pp. 1–12, 2025. @article{Parker2025a, The present study investigated the mechanisms of visual stability using naturalistic scene images. In two experiments, we asked whether the visual system relies on spatial location of the saccade target, as previously found with simple dot stimuli, or relational positions of the objects in the scene during visual stability decisions. Using a modified version of the saccadic suppression of displacement task, we manipulated the information that is displaced in the scene as well as visual stability using intrasaccadic target blanking paradigm. There were four displacement conditions: saccade target, saccade source (Experiment 2 only), whole scene, and background. We also included a no-displacement control condition where everything remained stationary. Participants reported whether they detected any movement. The results showed that spatial displacements that occur in the saccade target object were more easily detected than any other displacements in the scene. Further, disrupting visual stability with blanking only improved displacement detection for the saccade target and saccade source objects, suggesting that saccade target and saccade source objects are both consulted in the establishment of visual stability, most likely due to both receiving selective attention before saccade execution. The present study is the first to show that the visual system uses similar visual stability mechanisms for simple dot stimuli and more naturalistic stimuli. |
Adam J. Parker; Timothy J. Slattery Frequency and predictability effects for line final words Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 51, no. 1, pp. 92–112, 2025. @article{Parker2025, Computational models of eye movement control during reading have revolutionized the study of visual, perceptual, and linguistic processes underlying reading. However, these models can only simulate and test predictions about the reading of single lines of text. Here we report two studies that examined how input variables for lexical processing (frequency and predictability) in these models influence the processing of line-final words. The first study was a linear mixed-effects analysis of the Provo Corpus, which included data from 84 readers reading 55 multiline texts. The second study was a preregistered eye movement experiment, where 32 participants read 128 items where frequency, predictability, and position (intraline vs. line-final) were orthogonally manipulated. Both studies were consistent in showing that reading times were shorter on line-final words. While there was mixed evidence for frequency and predictability effects in the Provo Corpus, our experimental data confirmed additive effects of frequency and predictability for line-final words, which did not differ from those for intraline words. We conclude that while models that make additive assumptions about the role of frequency and predictability may be better suited to modeling the current findings, additional assumptions are required if models are to be capable of modeling shorter reading times on line-final words. |
Natalie A. Paquette; Joseph Schmidt How expectations alter search performance Journal Article In: Attention, Perception, & Psychophysics, pp. 1–20, 2025. @article{Paquette2025, We assessed how expected search difficulty impacts search performance when expectations match and do not match reality. Expectations were manipulated using a blocked design (75% of trials presented at the expected difficulty; target–distractor similarity increased with difficulty). Expectancy was assessed by examining the change in search performance between trials with accurate expectations and easier-than-expected or harder-than-expected trials, matched for search difficulty. Observers searched for Landolt-C targets (Exp-1) or real-world objects (Exp-2). Increased difficulty resulted in reduced accuracy, increased RT and object dwell times (targets and distractors; both experiments), and reduced guidance (Exp-2). Relative to the same level of search difficulty and when expectations were accurate, harder-than-expected search reduced accuracy, RT, and target object dwell times (Exp-1). Whereas easier-than-expected search increased RT and target dwell times (Exp-1). While Experiment 2 showed somewhat muted expectancy effects, easier-than-expected search replicated the increased RT observed in Exp-1, with an additional guidance decrement and increased distractor dwell time. These results demonstrate that expectations shift search performance toward the expected difficulty level. Additionally, post hoc analyses revealed that observers who experience larger difficulty effects also experience larger expectancy effects in RT, guidance, and target dwell time. |
Yunxian Pan; Jie Xu Human-machine plan conflict and conflict resolution in a visual search task Journal Article In: International Journal of Human-Computer Studies, vol. 193, pp. 1–12, 2025. @article{Pan2025, With rapid technological development, humans are more likely to cooperatively work with intelligent systems in everyday life and work. Similar to interpersonal teamwork, the effectiveness of human-machine teams is affected by conflicts. Some human-machine conflict scenarios occur when neither the human nor the system is at fault, for example, when the human and the system formulated different but equally effective plans to achieve the same goal. In this study, we conducted two experiments to explore the effects of human-machine plan conflict and the different conflict resolution approaches (human adapting to the system, system adapting to the human, and transparency design) in a computer-aided visual search task. The results of the first experiment showed that when conflicts occurred, the participants reported higher mental load during the task, performed worse, and provided lower subjective evaluations towards the aid. The second experiment showed that all three conflict resolution approaches were effective in maintaining task performance; however, only the transparency design and the human adapting to the system approaches were effective in reducing mental load and improving subjective evaluations. The results highlighted the need to design appropriate human-machine conflict resolution strategies to optimize system performance and user experience. |
Ascensión Pagán; Federica Degno; Sara V. Milledge; Richard D. Kirkden; Sarah J. White; Simon P. Liversedge; Kevin B. Paterson Aging and word predictability during reading: Evidence from eye movements and fixation-related potentials Journal Article In: Attention, Perception, & Psychophysics, pp. 1–26, 2025. @article{Pagan2025, The use of context to facilitate the processing of words is recognized as a hallmark of skilled reading. This capability is also hypothesized to change with older age because of cognitive changes across the lifespan. However, research investigating this issue using eye movements or event-related potentials (ERPs) has produced conflicting findings. Specifically, whereas eye-movement studies report larger context effects for older than younger adults, ERP findings suggest that context effects are diminished or delayed for older readers. Crucially, these contrary findings may reflect methodological differences, including use of unnatural sentence displays in ERP research. To address these limitations, we used a coregistration technique to record eye movements (EMs) and fixation-related potentials (FRPs) simultaneously while 44 young adults (18–30 years) and 30 older adults (65+ years) read sentences containing a target word that was strongly or weakly predicted by prior context. Eye-movement analyses were conducted over all data (full EM dataset) and only data matching FRPs. FRPs were analysed to capture early and later components 70–900 ms following fixation-onset on target words. Both eye-movement datasets and early FRPs showed main effects of age group and context, while the full EM dataset and later FRPs revealed larger context effects for older adults. We argue that, by using coregistration methods to address limitations of earlier ERP research, our experiment provides compelling complementary evidence from eye movements and FRPs that older adults rely more on context to integrate words during reading. |
Chih-chung Ting; Sebastian Gluth High overall values mitigate gaze-related effects in perceptual and preferential choices Journal Article In: Journal of Experimental Psychology: General, pp. 1–14, 2025. @article{Overall2025, A growing literature has shown that people tend to make faster decisions when choosing between two high-intensity or high-utility options than when choosing between two low-intensity or low-utility options. However, the underlying cognitive mechanisms of this effect of overall value (OV) on response times (RT) remain controversial, partially due to inconsistent findings of OV effects on accuracy but also due to the lack of process-tracing studies testing this effect. Here, we set out to fill this gap by testing and modeling the influence of OV on choices, RT, and eye movements in both perceptual and preferential decisions in a preregistered eye-tracking experiment (N = 61). Across perceptual and preferential tasks, we observed significant and consistently negative correlations between OV and RT, replicating previous work. Accuracy tended to increase with OV, reaching significance in preferential choices only. Eye-tracking analyses revealed a reduction of different gaze-related effects under high OV: a reduced tendency to choose the longer fixated option in perceptual choice and a reduced tendency to choose the last fixated option in preferential choice. Modeling these data with the attentional drift-diffusion model showed that the nonfixated option value was discounted least in the high-OV condition, confirming that higher OV might mitigate the impact of gaze on choices. Our results suggest that OV jointly affects behavior and gaze influences and offer a mechanistic account for the puzzling phenomenon that decisions between options of higher OV tend to be faster, but not less accurate. |
Wesley Orth; Shayne Sloggett; Masaya Yoshida Positive polarity items: An illusion of ungrammaticality Journal Article In: Language, Cognition and Neuroscience, pp. 1–25, 2025. @article{Orth2025, Negative Polarity Items (NPIs) produce an illusion of grammaticality in some contexts with negation. Many approaches to modelling the NPI illusion propose that it is driven by the processor's attempt to link an NPI to a negative element. We investigate an illusion effect observed with Positive Polarity Items (PPIs), another class of polarity-sensitive element. While NPIs must be licensed by a negative element, PPIs are anti-licensed by negative elements. We find an illusion of ungrammaticality for PPIs in environments where an illusion of grammaticality is observed for NPIs. Thus, we argue there is a general polarity illusion. We find that several accounts of the NPI illusion either predict this PPI illusion or can capture this effect with a straightforward extension. The approaches which are able to predict this effect share a reliance on structural representation, highlighting the importance of both the licensing features of polarity items and the structural detail in sentence processing representations. |
Ryan M. O'Leary; Nicole M. Amichetti; Zoe Brown; Alexander J. Kinney; Arthur Wingfield Congruent prosody reduces cognitive effort in memory for spoken sentences: A pupillometric study with young and older adults Journal Article In: Experimental Aging Research, vol. 51, no. 1, pp. 35–58, 2025. @article{OLeary2025, Background: In spite of declines in working memory and other processes, older adults generally maintain good ability to understand and remember spoken sentences. In part this is due to preserved knowledge of linguistic rules and their implementation. Largely overlooked, however, is the support older adults may gain from the presence of sentence prosody (pitch contour, lexical stress, intra- and inter-word timing) as an aid to detecting the structure of a heard sentence. Methods: Twenty-four young and 24 older adults recalled recorded sentences in which the sentence prosody corresponded to the clausal structure of the sentence, when the prosody was in conflict with this structure, or when there was reduced prosody uninformative with regard to the clausal structure. Pupil size was concurrently recorded as a measure of processing effort. Results: Both young and older adults' recall accuracy was superior for sentences heard with supportive prosody than for sentences with uninformative prosody or for sentences in which the prosodic marking and clausal structure were in conflict. The measurement of pupil dilation suggested that the task was generally more effortful for the older adults, but with both groups showing a similar pattern of effort-reducing effects of supportive prosody. Conclusions: Results demonstrate the influence of prosody on young and older adults' ability to accurately recall multi-clause sentences, and the significant role effective prosody may play in reducing processing effort. |
Salar Nouri; Amirali Soltani Tehrani; Niloufar Faridani; Ramin Toosi; Jalaledin Noroozi; Mohammad Reza A. Dehaqani Microsaccade selectivity as discriminative feature for object decoding Journal Article In: iScience, vol. 28, no. 1, pp. 1–19, 2025. @article{Nouri2025, Microsaccades, a form of fixational eye movements, help maintain visual stability during stationary observations. This study examines the modulation of microsaccadic rates by various stimulus categories in monkeys and humans during a passive viewing task. Stimulus sets were grouped into four primary categories: human, animal, natural, and man-made. Distinct post-stimulus microsaccade patterns were identified across these categories, enabling successful decoding of the stimulus category with accuracy and recall of up to 85%. We observed that microsaccade rates are independent of pupil size changes. Neural data showed that category classification in the inferior temporal (IT) cortex peaks earlier than changes in microsaccade rates, suggesting feedback from the IT cortex influences eye movements after stimulus discrimination. These results contribute to neurobiological models, enhance human-machine interfaces, optimize experimental visual stimuli, and deepen understanding of microsaccades' role in object decoding. |
Elle Minh Ngoc Le Nguyen; Meaghan J. Clough; Joanne Fielding; Owen B. White A video-oculography study of fixation instability in myasthenia gravis Journal Article In: Frontiers in Neurology, vol. 16, pp. 1–9, 2025. @article{Nguyen2025, Introduction: Myasthenia gravis (MG) is an autoimmune disease that causes extraocular muscle weakness in up to 70–85% of patients, which can impact quality of life. Current diagnostic measures are not very sensitive for ocular MG. This study aimed to compare fixation instability (inability to maintain gaze on a target) in patients with MG with control participants using video-oculography. Methods: A prospective study of 20 age-and sex-matched MG and control participants was performed using a novel protocol with the EyeLink 1000 plus ©. Bivariate contour ellipse area (BCEA) analysis, number of fixations on a target, and percentage of dwell time of fixations in the target interest area (IA) were calculated. Inter-eye (right vs. left) comparisons were performed using paired t-tests, and inter-group (MG vs. control) comparisons were performed using independent samples t-tests. Results: There were no inter-eye differences in the BCEAs between control eyes and MG eyes. However, the BCEAs were larger in both the right (RE) and left (LE) eyes of MG patients in the right (RE p = 0.029, LE p = 0.033), left (RE p = 0.006, LE p = 0.004), upward (RE p = 0.009, LE p = 0.018), and downward (RE p = 0.006, LE p = 0.006) gaze holds of the controls. The total mean sum of gaze hold fixations in all directions was greater in MG patients than in control participants (354 ± 139 vs. 249 ± 135 |
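The bivariate contour ellipse area (BCEA) used in the fixation-instability study above has a standard closed-form definition. The sketch below is an illustrative implementation of that textbook formula, not the study's actual analysis code; the contour coefficient `k = 2.291` (a common choice corresponding to the 68.2% contour) is an assumption, since the abstract does not state which coverage level was used.

```python
import math

def bcea(x, y, k=2.291):
    """Bivariate contour ellipse area for a set of gaze samples.

    Standard formula: BCEA = k * pi * sigma_x * sigma_y * sqrt(1 - rho^2),
    where sigma_x, sigma_y are the sample SDs of horizontal/vertical gaze
    position, rho is their correlation, and k sets the contour coverage
    (2.291 ~ 68.2%). Larger BCEA means less stable fixation.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    rho = cov / (sx * sy) if sx > 0 and sy > 0 else 0.0
    return k * math.pi * sx * sy * math.sqrt(1 - rho ** 2)
```

With uncorrelated samples (`rho = 0`) the ellipse reduces to `k * pi * sx * sy`, so wider scatter along either axis directly inflates the area.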
Sergio Navas‑León; Milagrosa Sánchez‑Martín; Ana Tajadura‑Jiménez; Lize De Coster; Mercedes Borda‑Mas; Luis Morales Exploring eye‑movement changes as digital biomarkers and endophenotypes in subclinical eating disorders: An eye tracking study Journal Article In: BMC Psychiatry, vol. 133, pp. 1–12, 2025. @article{Navas‑Leon2025, Objective: Previous research has indicated that patients with Anorexia Nervosa (AN) exhibit specific eye movement changes, identified through eye tracking sensor technology. These changes have been proposed as potential digital biomarkers and endophenotypes for early diagnosis and preventive clinical interventions. This study aims to explore whether these eye movement changes are also present in individuals with subclinical eating disorder (ED) symptomatology compared to control participants. Method: The study recruited participants using convenience sampling and employed the Eating Disorder Examination Questionnaire for initial screening. The sample was subsequently divided into two groups: individuals exhibiting subclinical ED symptomatology and control participants. Both groups performed various tasks, including a fixation task, prosaccade/antisaccade task, and memory‑guided task. Alongside these tasks, anxiety and premorbid intelligence were measured as potential confounding variables. The data were analyzed through means comparison and exploratory Pearson's correlations. Results: No significant differences were found between the two groups in the three eye tracking tasks. Discussion: The findings suggest that the observed changes in previous research might be more related to the clinical state of the illness rather than a putative trait. Implications for the applicability of eye movement changes as early biomarkers and endophenotypes for EDs in subclinical populations are discussed. Further research is needed to validate these findings and understand their implications for preventive diagnostics. |
William Narhi-Martinez; Yong Min Choi; Blaire Dube; Julie D. Golomb Allocation of spatial attention in human visual cortex as a function of endogenous cue validity Journal Article In: Cortex, vol. 185, pp. 4–19, 2025. @article{NarhiMartinez2025, Several areas of visual cortex contain retinotopic maps of the visual field, and neuroimaging studies have shown that covert attentional guidance will result in increases of activity within the regions representing attended locations. However, little research has been done to directly compare neural activity for different types of attentional cues. Here, we used fMRI to investigate how retinotopically-specific cortical activity would be modulated depending on whether we provided deterministic or probabilistic spatial information. On each trial, a four-item memory array was presented and participants' memory for one of the items would later be probed. Critically, trials began with a foveally-presented endogenous cue that was either 100% valid (deterministic runs), 70% valid (probabilistic runs), or neutral. By dividing visual cortex into quadrant-specific regions of interest (qROIs), we could examine how attention was spatially distributed across the visual field within each trial, depending on cue type and delay. During the anticipatory period prior to the memory array, we found increased activation at the cued location compared to noncued locations, with surprisingly comparable levels of facilitation for both deterministic and probabilistic cues. However, we found significantly greater facilitation on deterministic relative to probabilistic trials following the onset of the memory array, with only deterministic cue-related facilitation persisting through the presentation of the probe. 
These findings reveal how cue validity can drive differential allocations of neural resources over time across cued and noncued locations, and that the allocation of attention should not be assumed to invariably scale alongside the validity of a cue. |
Erin Morrow; David Clewett Distortion of overlapping memories relates to arousal and anxiety Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 25, pp. 154–172, 2025. @article{Morrow2025, Everyday experiences often overlap, challenging our ability to maintain distinct episodic memories. One way to resolve such interference is by exaggerating subtle differences between remembered events, a phenomenon known as memory repulsion. Here, we tested whether repulsion is influenced by emotional arousal, when resolving memory interference is perhaps most needed. We adapted an existing paradigm in which participants repeatedly studied object–face associations. Participants studied two different-colored versions of each object: a to-be-tested “target” and its not-to-be-tested “competitor” pair mate. The level of interference between target and competitor pair mates was manipulated by making the object colors either highly similar or less similar, depending on the participant group. To manipulate arousal, the competitor object–face associations were preceded by either a neutral tone or an aversive and arousing burst of white noise. Memory distortion for the color of the target objects was tested after each study round to examine whether memory distortions emerge after learning. We found that participants with greater sound-induced pupil dilations, an index of physiological arousal, showed greater memory attraction of target colors towards highly similar competitor colors. Greater memory attraction was also correlated with greater memory interference in the last round of learning. Additionally, individuals who self-reported higher trait anxiety showed greater memory attraction when one of the overlapping memories was associated with something aversive. Our findings suggest that memories of similar neutral and arousing events may blur together after repeated exposures, especially in individuals who show higher arousal responses and symptoms of anxiety. |
Vanessa Carneiro Morita; David Souto; Guillaume S. Masson; Anna Montagnini Anticipatory smooth eye movements scale with the probability of visual motion: Role of target speed and acceleration Journal Article In: Journal of Vision, vol. 25, no. 1, pp. 1–22, 2025. @article{Morita2025, Sensory-motor systems are able to extract statistical regularities in dynamic environments, allowing them to generate quicker responses and anticipatory behavior oriented towards expected events. Anticipatory smooth eye movements (aSEM) have been observed in primates when the temporal and kinematic properties of a forthcoming visual moving target are fully or partially predictable. However, the precise nature of the internal model of target kinematics which drives aSEM remains largely unknown, as does its interaction with environmental predictability. In this study we investigated whether and how the probability of target speed or acceleration is taken into account for driving aSEM. We recorded eye movements in healthy human volunteers while they tracked a small visual target with either constant, accelerating or decelerating speed, keeping the direction fixed. Across experimental blocks, we manipulated the probability of the presented target motion properties, with either 100% probability of occurrence of one kinematic condition (fully-predictable sessions), or a mixture with different proportions of two conditions (mixture sessions). We show that aSEM are robustly modulated by the target kinematic properties. With constant-velocity targets, aSEM velocity scales linearly with target velocity across the blocked sessions, and it follows overall a probability-weighted average in the mixture sessions. Predictable target acceleration/deceleration also has an influence on aSEM, but with more variability across participants.
Finally, we show that the latency and eye acceleration at the initiation of visually-guided pursuit also scale, overall, with the probability of target motion. This scaling is consistent with Bayesian integration of sensory and predictive information. |
Maria Eleonora Minissi; Alexia Antzaka; Simona Mancini; Marie Lallier Can playing video games enhance reading skills through more efficient serial visual search mechanisms? Insights from an eye tracking study Journal Article In: Language, Cognition and Neuroscience, vol. 40, no. 2, pp. 209–230, 2025. @article{Minissi2025, Reading disorders are associated with atypical top-down visual attention (VA) processes like reduced VA span and slower serial visual search (SVS). In contrast, expert action video game (AVG) players, known for their efficient top-down VA, exhibit improved reading abilities. It is unclear whether these benefits stem solely from AVGs or apply to other gaming experiences. To explore this, AVG players (AVGPs), players of genres excluding AVGs (VGPs), and non-players were evaluated on their VA span, and behavioural and oculomotor performance in SVS. VGPs, but not AVGPs, demonstrated enhanced performance and oculomotor behaviour in SVS compared to non-players, while both player groups showed a trend towards better VA span skills. Notably, reading-related skills were enhanced in the two player groups, but particularly more so in VGPs. These findings support the existence of potential benefits of playing video games different from classical AVGs for the development of top-down VA and reading-related abilities. |
Mylène Michaud; Annie Roy-Charland; Mélanie Perron Effects of explicit knowledge and attentional-perceptual processing on the ability to recognize fear and surprise Journal Article In: Behavioral Sciences, vol. 15, pp. 1–11, 2025. @article{Michaud2025, When participants are asked to identify expressed emotions from pictures, fear is often confused with surprise. The present study explored this confusion by utilizing one prototype of surprise and three prototypes of fear varying as a function of distinctive cues in the fear prototype (cue in the eyebrows, in the mouth or both zones). Participants were presented with equal numbers of pictures expressing surprise and fear. Eye movements were monitored when they were deciding if the picture was fear or surprise. Following each trial, explicit knowledge was assessed by asking the importance (yes vs. no) of five regions (mouth, nose, eyebrows, eyes, cheeks) in recognizing the expression. Results revealed that fear with both distinctive cues was recognized more accurately, followed by the prototype of surprise and fear with a distinctive cue in the mouth at a similar level. Finally, fear with a distinctive cue in the eyebrows was the least accurately recognized. Explicit knowledge discriminability results revealed that participants were aware of the relevant areas for each prototype but not equally so for all prototypes. Specifically, participants judged the eyebrow area as more important when the distinctive cue was in the eyebrows (fear–eyebrow) than when the cue was in the mouth (fear–mouth) or when both cues were present (fear–both). Results are discussed considering the attentional-perceptual and explicit knowledge limitation hypothesis. |
Zhu Meng; Guoli Yan; John E. Marsh; Simon P. Liversedge Primary task demands modulate background speech disruption during reading of Chinese tongue twisters: An eye-tracking study Journal Article In: Journal of Cognitive Psychology, pp. 1–18, 2025. @article{Meng2025, This study investigated how the semantic and phonological properties of background speech affect reading, depending on primary task processing. Chinese participants were randomly assigned to two groups and read Chinese tongue twisters while exposed to meaningful, meaningless, spectrally-rotated speech (acoustically similar to normal speech but without linguistic information), or silence. One group engaged in a semantic task, comprehending sentences and responding to “yes-no” questions, while the other performed a phonological task, identifying the most frequent initial phoneme in sentences and selecting a corresponding character. Although background speech did not significantly influence accuracy for either task, it differentially impacted eye movements and reading rates. Semantic properties disrupted the semantic task without significantly affecting the phonological task, while phonological properties influenced both tasks, particularly the phonological one. These findings indicate that the nature of the reading task modulates the disruptive effects of background speech, supporting the interference-by-process account. |
Elif Memis; Gizem Y. Yildiz; Gereon R. Fink; Ralph Weidner Hidden size: Size representations in implicitly coded objects Journal Article In: Cognition, vol. 256, pp. 1–15, 2025. @article{Memis2025, Its angular representation on the retina does not solely determine the perceived size of an object. Instead, contextual information is interpreted. We investigated the levels of processing at which this interpretation occurs. Combining three experimental paradigms, we explored whether masked and more implicitly coded objects are already size-rescaled. We induced object size rescaling using a modified variant of the Ebbinghaus illusion. In this variant, six dots altered the size of a central stimulus and served as inducers generating Object-Substitution Masking (OSM). Participants reported the average size of multiple circles using the size-averaging paradigm, allowing us to test the contribution of masked and non-masked central target circles. Our Ebbinghaus illusion variant altered perceived stimulus size and showed robust masking via OSM. Furthermore, size-averaging was sensitive enough to detect perceived size changes in the magnitude of the ones induced by the Ebbinghaus illusion. Finally, combining all three paradigms, we observed that masked and non-masked stimuli contributed to size averaging in a size-rescaled manner. In a control experiment testing the general effects of Ebbinghaus inducers, we observed a contrast-like effect on size averaging. Large inducers decreased the perceived average size, while small inducers increased it. In summary, our experiments indicate that context integration, induced by the Ebbinghaus illusion, alters size representations at an early stage. These modified size representations are independent of whether a target is recognisable. Moreover, perceived average size appears to be coded relative to surrounding perceptual groups. |
Kimberly Meier; Simon Warner; Miriam Spering; Deborah Giaschi Poor fixation stability does not account for motion perception deficits in amblyopia Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Meier2025, People with amblyopia show deficits in global motion perception, especially at slow speeds. These observers are also known to have unstable fixation when viewing stationary fixation targets, relative to healthy controls. It is possible that poor fixation stability during motion viewing interferes with the fidelity of the input to motion-sensitive neurons in visual cortex. To probe these mechanisms at a behavioral level, we assessed motion coherence thresholds in adults with amblyopia while measuring fixation stability. Consistent with prior work, participants with amblyopia had elevated coherence thresholds for the slow speed stimuli, but not the fast speed stimuli, using either the amblyopic or the fellow eye. Fixation stability was elevated in the amblyopic eye relative to controls across all motion stimuli, and not selective for conditions on which perceptual deficits were observed. Fixation stability was not related to visual acuity, nor did it predict coherence thresholds. These results suggest that motion perception deficits might not be a result of poor input to the motion processing system due to unstable fixation, but rather due to processing deficits in motion-sensitive visual areas. |
Júlio Medeiros; André Bernardes; Ricardo Couceiro; Paulo Oliveira; Henrique Madeira; César Teixeira; Paulo Carvalho Optimal frequency bands for pupillography for maximal correlation with HRV Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–17, 2025. @article{Medeiros2025, Assessing cognitive load using pupillography frequency features presents a persistent challenge due to the lack of consensus on optimal frequency limits. This study aims to address this challenge by exploring pupillography frequency bands and seeking clarity in defining the most effective ranges for cognitive load assessment. From a controlled experiment involving 21 programmers performing software bug inspection, our study pinpoints the optimal low-frequency (0.06-0.29 Hz) and high-frequency (0.29-0.49 Hz) bands. Correlation analysis yielded a geometric mean of 0.238 compared to Heart Rate Variability features, with individual correlations for low-frequency, high-frequency, and their ratio at 0.279, 0.168, and 0.286, respectively. Extending the study to 51 participants, including a different experiment focusing on mental arithmetic tasks, validated the previous findings and further refined bands, maintaining effectiveness with a geometric mean correlation of 0.236 and surpassing common frequency bands reported in the existing literature. This study represents a pivotal step toward converging and establishing a coherent framework for frequency band definition to be used in pupillography analysis. Furthermore, based on this, it also contributes insights into the importance of more integration and adoption of eye-tracking with pupillography technology into authentic software development contexts for cognitive load assessment at a very fine level of granularity. |
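The low- and high-frequency band limits reported above (0.06–0.29 Hz and 0.29–0.49 Hz) lend themselves to a simple band-power computation. The sketch below is a minimal illustration using a periodogram; the actual spectral-estimation method, sampling rate, and the synthetic pupil trace are all assumptions, not details from the paper.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Integrated periodogram power of `signal` in the band [f_lo, f_hi) Hz."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                          # remove the DC component
    psd = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)  # one-sided periodogram
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

# Hypothetical pupil-diameter trace: 10 min sampled at 2 Hz with a
# slow 0.20 Hz oscillation, which falls inside the paper's LF band.
fs = 2.0
t = np.arange(0, 600, 1 / fs)
pupil = 4.0 + 0.3 * np.sin(2 * np.pi * 0.20 * t)

lf = band_power(pupil, fs, 0.06, 0.29)  # low-frequency band from the paper
hf = band_power(pupil, fs, 0.29, 0.49)  # high-frequency band from the paper
lf_hf_ratio = lf / hf if hf > 0 else float("inf")
```

For a real analysis one would likely prefer Welch's method over a raw periodogram to reduce variance, but the band boundaries are the substantive choice the paper optimizes.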
Margarethe Mcdonald; Tania S. Zamuner The relationship between language experience variables and the time course of spoken word recognition Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–21, 2025. @article{Mcdonald2025, During spoken word recognition, words that are related phonologically (e.g., dog and dot) and words that are related semantically (e.g., dog and bear) are known to become active within the first second of word recognition. The time course of activation and resolution of these competing words changes as a function of linguistic knowledge. This preregistered study aimed to examine how a less commonly used linguistic predictor, percent lifetime language exposure, affects the time course of target and competitor activation in an eye-tracking visual world paradigm. Lifetime exposure was expected to capture variability in the representations and processes that contribute to individual differences in spoken word recognition. Results show that when putting lifetime exposure to French on a scale, more lifetime exposure was related to target fixations and slightly related to early phonological coactivation, but not related to semantic coactivation. These analyses demonstrate how generalized additive mixed models might help examine time course data with more continuous linguistic variables. Exploratory analyses looked at the amount of variance captured by three linguistic experience predictors (lifetime French exposure, recent French exposure, French vocabulary) on indices of target, phonological, and semantic fixations and identified vocabulary size as most frequently explaining significant variance, but the pattern of results did not differ from those of lifetime language exposure. 
These findings suggest that lifetime language exposure may not fully capture subtle differences in linguistic experience that affect lexical coactivation such as those brought upon by differences in exposure trajectories across the lifetime or differences in the setting of language exposure. |
Stella Mayer; Pankhuri Saxena; Max Arwed Crayen; Stefan Treue In: Journal of Neuroscience Methods, vol. 415, pp. 1–11, 2025. @article{Mayer2025, Background: Neuronal activity is modulated by behavior and cognitive processes. The combination of several neurotransmitter systems, acting directly or indirectly on specific populations of neurons, underlie such modulations. Most studies with non-human primates (NHPs) fail to capture this complexity, partly due to the lack of adequate methods for reliably and simultaneously measuring a broad spectrum of neurotransmitters while the animal engages in behavioral tasks. New Method: To address this gap, we introduce a novel implementation of brain microdialysis (MD), employing semi-chronically implanted guides and probes in awake, behaving NHPs facilitated by removable insets within a standard recording chamber over extrastriate visual cortex (here, the visual middle temporal area (MT)). This approach allows flexible access to diverse brain regions, including areas deep within the sulcus. Results: Reliable concentration measurements of GABA, glutamate, norepinephrine, epinephrine, dopamine, serotonin, and choline were achieved from small sample volumes (<20 µl) using ultra-performance liquid chromatography with electrospray ionization-mass spectrometry (UPLC-ESI-MS). Comparing two behavioral states – ‘active' and ‘inactive', we observe subtle concentration variations between the two behavioral states and a greater variability of concentrations in the active state. Additionally, we find positively and negatively correlated concentration changes for neurotransmitter pairs between the behavioral states. Conclusions: Therefore, this MD setup allows insights into the neurochemical dynamics in awake primates, facilitating comprehensive investigations into the roles and the complex interplay of neurotransmitters in cognitive and behavioral functions. |
Kate Matsunaga; Kleanthis Avramidis; Mark S. Borchert; Shrikanth Narayanan; Melinda Y. Chang Method for assessing visual saliency in children with cerebral/cortical visual impairment using generative artificial intelligence Journal Article In: Frontiers in Human Neuroscience, vol. 18, pp. 1–9, 2025. @article{Matsunaga2025, Cerebral/cortical visual impairment (CVI) is a leading cause of pediatric visual impairment in the United States and other developed countries, and is increasingly diagnosed in developing nations due to improved care and survival of children who are born premature or have other risk factors for CVI. Despite this, there is currently no objective, standardized method to quantify the diverse visual impairments seen in children with CVI who are young and developmentally delayed. We propose a method that combines eye tracking and an image-based generative artificial intelligence (AI) model (SegCLIP) to assess higher- and lower-level visual characteristics in children with CVI. We will recruit 40 CVI participants (aged 12 months to 12 years) and 40 age-matched controls, who will watch a series of images on a monitor while eye gaze position is recorded using eye tracking. SegCLIP will be prompted to generate saliency maps for each of the images in the experimental protocol. The saliency maps (12 total) will highlight areas of interest that pertain to specific visual features, allowing for analysis of a range of individual visual characteristics. Eye tracking fixation maps will then be compared to the saliency maps to calculate fixation saliency values, which will be assigned based on the intensity of the pixel corresponding to the location of the fixation in the saliency map. Fixation saliency values will be compared between CVI and control participants. Fixation saliency values will also be correlated to corresponding scores on a functional vision assessment, the CVI Range-CR. 
We expect that fixation saliency values on visual characteristics that require higher-level processing will be significantly lower in CVI participants compared to controls, whereas fixation saliency values on lower-level visual characteristics will be similar or higher in CVI participants. Furthermore, we anticipate that fixation saliency values will be significantly correlated to scores on corresponding items on the CVI Range-CR. Together, these findings would suggest that AI-enabled saliency analysis using eye tracking can objectively quantify abnormalities of lower- and higher-order visual processing in children with CVI. This novel technique has the potential to guide individualized interventions and serve as an outcome measure in future clinical trials. |
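The fixation-saliency metric described above reduces to a per-pixel lookup: each fixation is scored by the intensity of the saliency-map pixel at the gaze location, and scores are aggregated across fixations. A minimal sketch of that idea (function name, clamping of off-screen gaze, and mean aggregation are illustrative assumptions, not details from the paper):

```python
import numpy as np

def fixation_saliency(saliency_map, fixations):
    """Mean saliency-map intensity at each fixation location.

    saliency_map: 2D array (H x W) of per-pixel saliency values.
    fixations: iterable of (x, y) gaze positions in pixel coordinates.
    """
    h, w = saliency_map.shape
    values = []
    for x, y in fixations:
        # Clamp to image bounds in case gaze falls slightly off-screen.
        col = min(max(int(round(x)), 0), w - 1)
        row = min(max(int(round(y)), 0), h - 1)
        values.append(saliency_map[row, col])
    return float(np.mean(values))
```

Group comparisons (CVI vs. control) would then operate on these per-image scores.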
Stanford Martinez; Carolina Ramirez-Tamayo; Syed Hasib Akhter Faruqui; Kal Clark; Adel Alaeddini; Nicholas Czarnek; Aarushi Aggarwal; Sahra Emamzadeh; Jeffrey R. Mock; Edward J. Golob Discrimination of radiologists' experience level using eye-tracking technology and machine learning: Case study Journal Article In: JMIR Formative Research, vol. 9, pp. 1–16, 2025. @article{Martinez2025, Background: Perception-related errors comprise most diagnostic mistakes in radiology. To mitigate this problem, radiologists use personalized and high-dimensional visual search strategies, otherwise known as search patterns. Qualitative descriptions of these search patterns, which involve the physician verbalizing or annotating the order in which he or she analyzes the image, can be unreliable due to discrepancies between what is reported and the actual visual patterns. This discrepancy can interfere with quality improvement interventions and negatively impact patient care. Objective: The objective of this study is to provide an alternative method for distinguishing between radiologists by means of captured eye-tracking data such that the raw gaze (or processed fixation data) can be used to discriminate users based on subconscious behavior in visual inspection. Methods: We present a novel discretized feature encoding based on spatiotemporal binning of fixation data for efficient geometric alignment and temporal ordering of eye movement when reading chest x-rays. The encoded features of the eye-fixation data are used by machine learning classifiers to discriminate between faculty and trainee radiologists. A clinical trial case study was conducted using metrics such as the area under the curve, accuracy, F1-score, sensitivity, and specificity to evaluate the discriminability between the 2 groups regarding their level of experience. The classification performance was then compared with state-of-the-art methodologies.
In addition, a repeatability experiment using a separate dataset, experimental protocol, and eye tracker was performed with 8 participants to evaluate the robustness of the proposed approach. Results: The numerical results from both experiments demonstrate that classifiers using the proposed feature encoding methods outperform the current state-of-the-art in differentiating between radiologists in terms of experience level. An average performance gain of 6.9% is observed compared with traditional features while classifying experience levels of radiologists. This gain in accuracy is also substantial across different eye tracker–collected datasets, with improvements of 6.41% using the Tobii eye tracker and 7.29% using the EyeLink eye tracker. These results signify the potential impact of the proposed method for identifying radiologists' level of expertise and those who would benefit from additional training. Conclusions: The effectiveness of the proposed spatiotemporal discretization approach, validated across diverse datasets and various classification metrics, underscores its potential for objective evaluation, informing targeted interventions and training strategies in radiology. This research advances reliable assessment tools, addressing challenges in perception-related errors to enhance patient care outcomes. |
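The spatiotemporal binning the authors describe discretizes gaze position into grid cells and time into ordered bins before classification. One hedged sketch of such an encoding (the grid size, time-bin count, screen dimensions, and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def encode_fixations(fixations, screen=(1920, 1080), grid=(4, 4), n_time_bins=5):
    """Histogram of fixations over a spatial grid crossed with time bins.

    fixations: list of (x, y, t) tuples in pixel coordinates; t is the
    fixation onset time. Returns a flat feature vector of length
    n_time_bins * grid[1] * grid[0], suitable as classifier input.
    """
    feats = np.zeros((n_time_bins, grid[1], grid[0]))
    if not fixations:
        return feats.ravel()
    t0, t1 = fixations[0][2], fixations[-1][2]
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single instant
    for x, y, t in fixations:
        gx = min(int(x / screen[0] * grid[0]), grid[0] - 1)
        gy = min(int(y / screen[1] * grid[1]), grid[1] - 1)
        tb = min(int((t - t0) / span * n_time_bins), n_time_bins - 1)
        feats[tb, gy, gx] += 1
    return feats.ravel()
```

Crossing space with time preserves the scan order that a purely spatial histogram would discard, which is the property the paper's encoding exploits.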
Soodeh Majidpour; Mehdi Sanayei; Reza Ebrahimpour; Sajjad Zabbah Better than expected performance effect depends on the spatial location of visual stimulus Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Majidpour2025, The process of perceptual decision-making in the real world involves the aggregation of pieces of evidence into a final choice. Visual evidence is usually presented in different pieces, distributed across time and space. We wondered whether varying the location of the received information would change how subjects integrated it. Seven participants viewed two pulses of random dot motion stimulus, separated by time gaps and presented at different locations within the visual field. Our findings suggest that subjects accumulate discontinuous information (over space or time) differently than information presented continuously, in the same location and with no gaps between pulses. These findings indicate that the discontinuity of evidence impacts the process of evidence integration in a manner more nuanced than that presumed by the theory positing perfect integration of evidence. |