All EyeLink Eye Tracker Publications
Listed below, by year, are all 14,000 peer-reviewed EyeLink research publications through 2025 (including early 2026). You can search the publication library using keywords such as visual search, smooth pursuit, or Parkinson's. You can also search for individual author names. Eye-tracking research grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2026 |
Zebo Lan; Meihua Guo; Nina Liu; Guoli Yan; Valerie Benson Language experience and reading ability modulate word recognition in deaf readers Journal Article In: Journal of Deaf Studies and Deaf Education, vol. 31, pp. 41–57, 2026. For most deaf readers, learning to read is a challenging task. Visual word recognition is crucial during reading; however, little is known about the cognitive mechanism of Chinese deaf readers during visual word recognition. In the present study, two experiments explored the activation of orthographic, phonological, and sign language representations during Chinese word recognition. Eye movements were recorded as participants read sentences containing orthographically similar words, homophones, sign language–related words, or unrelated words. All deaf readers showed shorter reading times for orthographically similar words compared to unrelated words. However, when reading ability was controlled, the homophone advantage was observed only for deaf readers with more oral language experience, whereas the sign language advantage was observed only for deaf readers with more sign language experience. When language experience was controlled, in comparison to deaf readers with lower reading fluency levels, those with higher reading fluency levels had more stable orthographic and sign language representations. Deaf college readers with more oral language experience activate word meanings through orthographic and phonological representations, whereas deaf college readers with more sign language experience activate word meanings through orthographic and sign language representations, reflecting a unique cognitive mechanism; reading ability moderates this process. |
Xuran Cao; Yaxin Du; Yuhan Jiang; Run Zhang; Jingxin Wang Aging and semantic transparency effects in Chinese reading: Evidence from eye movements Journal Article In: BMC Psychology, vol. 14, no. 1, pp. 1–14, 2026. Background: Semantic transparency is typically defined in terms of compositionality, the extent to which the meaning of a word can be predicted from the meaning of each of its constituents, which is crucial for the processing of compound words. Studies employing behavioral, eye-tracking, and neuroimaging techniques have identified common effects of semantic transparency. Transparent words, defined as those in which the word itself and its morphemes exhibit a high degree of semantic relatedness, facilitate word recognition. Semantic transparency effects have been well observed for alphabetic languages. However, the effects of semantic transparency on Chinese readers are largely unknown, as is whether healthy aging modulates this effect. The present study investigated these questions by analyzing semantic transparency effects in both young and older adults under conditions of normal reading and preview. Methods: The eye movements of young (18–25 years) and older (60+ years) Chinese readers were recorded under conditions of normal reading and preview. Results: (1) Transparent words facilitated word recognition, and valid preview cues did not benefit readers recognizing transparent words. (2) Age groups showed no differences in the processing of compound words. However, older adults had greater difficulty recognizing opaque words under preview conditions than younger adults. Conclusions: Compound words are stored in the mental lexicon as mixed representations, in which transparent words are represented by morphemes and opaque words are represented by whole words. Semantic transparency effects exhibit cross-age consistency; they do not rely on valid preview information but instead stem from in-depth foveal processing.
However, age differences in semantic integration become apparent in parafoveal preview processing, which demands greater cognitive resources, with older adults experiencing greater difficulty recognizing opaque words than younger adults. |
Tal Ravid-Roth; Romi Livne; Ariel Berlinger; Wilfried Kunde; Baruch Eitam; Sagi Jaffe-Dax The effect of gaze contingencies on infants' looking preference Journal Article In: Cognition, vol. 270, pp. 1–18, 2026. Infants exhibit robust predictive capacities from birth; most research has focused on how they process externally generated events, leaving unexplored how predictions rooted in their own actions influence attention. We asked whether the source of predictability (self-generated vs. externally structured) affects infants' looking preferences beyond overall predictability. Across two gaze-contingent eye-tracking experiments, we investigated whether infants prefer to look at stimuli whose movements are triggered by their own gaze, or at stimuli that move independently. In Experiment 1 (n = 21 |
Yamei Zhang; Xiaojun Sun; Jing Ma Cognitive aspects of video-based learning with instructor presence depend on pedagogical approaches: A perspective from motivating styles Journal Article In: Learning and Instruction, vol. 102, pp. 1–11, 2026. Background: Instructor presence is a critical feature that should be considered when designing video lectures. However, its influence on cognitive aspects of learning is mixed. Such inconsistencies imply the likelihood of moderators shaping the influence. Motivating styles (i.e., autonomy-supportive, controlling, and neutral teaching), among the most closely studied pedagogical approaches, may be such a moderator. Aim: This study examines how the influence of instructor presence on the cognitive aspects of learning, including learning outcomes, attention (i.e., visual attention allocation and concentration), and extraneous cognitive load, varies with motivating styles. Sample and methods: A three (motivating styles: autonomy-supportive vs. controlling vs. neutral teaching) × two (instructor presence: present vs. absent) between-subjects eye-tracking experiment was conducted among 181 university students. Results: While instructor presence reduced visual attention to the knowledge area regardless of the instructor's motivating style, its effects on learning outcomes (albeit only in terms of retention), concentration, and extraneous load were conditional on motivating style. Specifically, compared with the instructor-absent condition, under autonomy-supportive teaching, instructor presence decreased retention and concentration, but did not affect extraneous load; under controlling teaching, instructor presence did not impact retention, but damaged concentration and boosted extraneous load; under neutral teaching, instructor presence promoted retention without affecting concentration or extraneous load.
Conclusions: The findings imply that the facilitating effect of instructor presence as a social cue and its detrimental effect as a seductive detail can dominate one another or cancel each other out under specific motivating styles. Hence, pedagogical approaches can shape the effects of instructor presence. |
Kaitlyn N. Drennan; Nicholas Gaspelin What can a half-million saccades tell us about distractor suppression? Journal Article In: Cognition, vol. 269, pp. 1–14, 2026. Salient distractions in the environment compete for attention and have the potential to interfere with our goals. An abundance of research has therefore examined how we learn to prevent distraction by salient stimuli. There is growing consensus that salient stimuli can be suppressed to mitigate distraction. However, many questions about distractor suppression have been difficult to resolve in typical studies that use small sample sizes. The current study is a pooled analysis of several previous eye-tracking studies (N = 354) which resulted in a large data set of more than a half-million eye movements. This large data set was used to uncover new findings that improve our understanding of the attentional processes involved in distractor suppression. We also evaluated several new findings related to how attentional suppression is learned and is influenced by selection history. Altogether, these findings highlight the need for a hybrid model of attention that includes both bottom-up and top-down components. Moreover, this large publicly available dataset can be used by future research to investigate other questions related to attentional capture and distractor suppression. |
Ting Zhang; Shujia Zhang; Yi Jiang Automatic pupillary responses to pain perception in adults and children: The influence of race and autistic traits Journal Article In: Cognition, vol. 268, pp. 1–9, 2026. The ability to understand and share others' emotional states (e.g., feeling of pain) plays a fundamental role in survival and prosocial behavior. The current study utilized pupillometry to assess automatic psychophysiological responses to others' painful facial expressions in both adults and children (N = 72). Results revealed that pupil size significantly increased when perceiving painful versus neutral expressions, independent of low-level visual features. Notably, both adults and children exhibited a racial in-group bias, with pupil dilation effects observed only for same-race painful faces. Furthermore, individuals' Autism Spectrum Quotient scores were negatively correlated with pupil dilation effects toward painful expressions of same-race faces. These findings suggest that pupillary responses might reflect automatic empathic arousal to others' pain and are modulated by racial group membership and autistic traits, providing a potential physiological indicator, at least at the group level, for probing affective resonance in children or individuals with socio-cognitive disorders (e.g., autism spectrum disorder). |
Xiaozhi Yang; Elizabeth E. Riggs; Jason C. Coronel; Ian Krajbich Issue importance amplifies the effect of gaze on voting decisions Journal Article In: Cognition, vol. 268, pp. 1–12, 2026. There are many factors that can influence a voter's decision in the ballot booth but not all of them are policy related. One non-policy factor that may influence voters is the tendency to choose options that attract attention. Here, we investigate this possibility in two proof-of-concept laboratory studies with people choosing between proposed laws. We find that people are slower to vote when their party is split over an issue, and that they tend to vote for laws that they look at more. Moreover, this gaze effect is stronger for more important issues. We also find that we can increase the probability that someone will vote for one of two laws by getting them to look at that option first. Our work harnesses the power of sequential sampling models to explain the relationship between gaze and vote choice. We find support for a goal-based model where overt attention amplifies information supporting a particular law. This model explains why gaze has a stronger effect on choice for more important issues. Our findings indicate that some voting decisions are not predetermined and instead rely on an on-the-spot evaluation. As a result, these decisions can be swayed by attentional manipulations. Thus, visual attention may serve as a unifying framework for understanding different biases that occur in the voting booth, such as ballot-order and candidate-name-familiarity effects. |
Yunfei Shang; Ke Liu; Qing Feng The influences of security and context on attentional bias toward emotional faces: Evidence from eye movements Journal Article In: Acta Psychologica, vol. 263, pp. 1–8, 2026. This study employed a dot-probe paradigm to investigate attentional biases toward emotional faces in individuals with high versus low levels of security across general and threat contexts, using eye-tracking technology. Participants were screened into high- and low-security groups based on validated security scales. Threat contexts were established using images from the International Affective Picture System (IAPS). Results revealed that: (1) Both high- and low-security individuals exhibited attentional biases toward emotional faces compared to neutral faces. (2) Security levels modulated attention to emotional faces: high-security individuals displayed greater bias toward happy faces, while low-security individuals showed enhanced bias toward angry faces, consistent with the schema-congruence hypothesis. (3) Reaction times accelerated under threat conditions for all participants, and threat contexts amplified attentional bias toward angry faces in high-security individuals. These findings highlight the interplay between intrinsic security and external contexts in shaping attentional processing of emotional stimuli. |
Ilanit Hochmitz; Yaffa Yeshurun; Amit Yashar Temporal dynamics of integration and individuation: Insights from temporal averaging and crowding Journal Article In: Cognition, vol. 268, pp. 1–12, 2026. Individuating a single item presented within a continuous sequence of items requires segregating its signal from that of the other items. In contrast, representing a global aspect of the sequence, such as its average orientation, involves integration of information across time. Individuation and integration allow us to focus on individual events while maintaining an overall perception of our environment. To examine the relations between temporal averaging and individuation, we measured orientation averaging over short and long timescales using the same stimuli and orientation-estimation procedure previously used to measure individuation. Participants reported the average orientation of a sequence of three oriented items separated by either short (SOAs < 150 ms) or long intervals (SOAs > 150 ms). Analysis of the error distribution and mixture-modeling revealed distinct patterns of results for the different tasks and timescales, but also some similarities, particularly for the short timescale. In this timescale, the relative contribution of each individual item to the final response was similar across tasks, indicating the involvement of low-level factors operating regardless of the task. With the long timescale, the two tasks showed dissociable patterns across all performance aspects, except guessing rate, indicating that long-scale individuation and averaging engage mainly higher-level, task-related processes. Importantly, regardless of timescale, estimation errors in these tasks were best described by different models: in integration they primarily reflected unequal weighting of the averaged items, whereas in individuation they reflected imprecise target encoding with occasional misreports of distractors.
Together, the findings reveal dissociable dynamics for integration and individuation. |
Ângela Gomes Tomaz; Adrien Chopin; Noelia Gabriela Alcalde; Dennis M. Levi; Preeti Verghese The best stereoacuity is rarely at the fovea Journal Article In: Vision Research, vol. 240, pp. 1–13, 2026. Stereoacuity, the ability to perceive depth from binocular disparity, is traditionally considered to be best at the fovea in typical human vision, and to decline with eccentricity. Previous studies have shown that when stereopsis is present in amblyopia, it is often coarse and comparable to stereoacuity associated with the peripheral retina in neurotypical controls, suggesting that it might be mediated by a non-foveal locus. Here we measured stereoacuity as a function of eccentricity in participants with amblyopia as well as controls with no history of abnormal visual development. We measured stereoacuity using random dot stereograms and targets that scaled with eccentricity, testing the fovea, and eccentricities of 2.5°, 5°, and 10° along the horizontal and vertical meridians. For 87.5% (7/8) of amblyopic participants, the locus of best stereoacuity was non-foveal. Surprisingly, 75% of control participants (15/20) also exhibited their best stereoacuity at non-foveal locations, with only 5 controls showing foveal superiority. Using stimulus parameters modified to improve foveal performance, we repeated measurements on a subset of controls whose best stereoacuity was non-foveal, but the best locus only shifted to the fovea in one participant. Stereoacuity measured at the experimentally determined “best locus” correlated well with standard clinical stereoacuity tests. These findings challenge the conventional view of universal foveal dominance for stereopsis, suggesting that the fovea is not invariably the site of best stereoscopic sensitivity, even in many normally sighted individuals. This has implications for understanding binocular vision in amblyopic and normal vision, and for interpreting clinical stereo tests. |
Ryan M. Barker; Michael J. Armson; Nicholas B. Diamond; Zhong Xu Liu; Yushu Wang; Jennifer D. Ryan; Brian Levine Remembrance with gazes passed: Eye movements precede continuous recall of episodic details of real-life events Journal Article In: Cognition, vol. 268, pp. 1–6, 2026. Autobiographical memory entails reconstructing the visual features of past events. While eye movements are associated with vivid autobiographical recollection, this research has yet to capitalize on the high temporal resolution of eye-tracking data. We aligned eye movement data with participants' extemporaneous free recall of a verified real-life event, allowing us to assess the temporal correspondence of saccades to production of episodic and non-episodic narrative content at the millisecond level. Episodic autobiographical details were preceded by an increase in saccade frequency and followed by a reduction in saccades prior to the next detail. There was no such effect observed for non-episodic details. Oculomotor responses in the temporal window preceding freely-recalled details may facilitate recollection by reinstating spatiotemporal context, or they may reflect post-retrieval processes—or a combination of both—in cyclical sensory-motor-mnemonic interactions that promote vivid recall. |
Fangfang Zhu; Yifen Liu; Mengyuan Wang; Jiumin Yang; Zhongling Pi; Zhiqiang Ma When do teachers' pleasant expressions in video lectures facilitate learning? The role of emotional learning materials and auditory emotions Journal Article In: Journal of Computer Assisted Learning, vol. 42, no. 1, pp. 1–16, 2026. Background: Emotional cues in video lectures have demonstrated complex effects on learning, particularly regarding teachers' facial expressions. However, these effects remain inconclusive, necessitating further exploration of potential factors to enhance learning. Objectives: This study examined how three forms of emotional design—learning materials, teachers' facial expressions, and teachers' auditory emotions—individually and jointly influence learners' emotional responses, cognitive processing, and learning outcomes in video-based instruction. Methods: Across two experiments, we investigated the independent and interactive effects of teachers' facial expressions, the emotional design of learning materials and teachers' auditory emotion on students' emotions, motivation, attention, cognitive load and learning outcomes. Experiment 1 examined the interaction between teachers' facial expressions and emotionally designed learning materials, while Experiment 2 built on these findings to test whether congruent positive facial and auditory cues further enhance students' emotional, motivational and cognitive engagement. Results: In Experiment 1, when learning materials were neutrally designed, teachers' pleasant facial expressions reduced extraneous cognitive load and improved learning outcomes. Experiment 2 showed that pairing pleasant facial expressions with pleasant auditory emotion elicited more positive emotions, higher motivation, increased germane load and better learning outcomes.
Eye-tracking analyses indicated that this emotional congruence decreased attentional distraction, highlighting the synergistic benefits of combining visual and auditory emotional cues. Conclusions: The study identifies the synergistic effects of various emotional design elements in video lectures on students' learning and contributes to theories of emotional design and cognitive processing in multimedia learning contexts. It also offers practical insights for educators on optimising emotional cues in video-based learning environments. |
Tianyu Zhang; Yongchun Cai Shared mechanisms of presaccadic and exogenous attention in modulating visual perception of contrast Journal Article In: Cognition, vol. 267, pp. 1–13, 2026. Different types of attention alter subjective visual perception in fundamentally distinct ways. Previous studies have focused on covert attention without concurrent eye movements, revealing that covert exogenous (involuntary) attention enhances contrast appearance of low-contrast stimuli while diminishing that of high-contrast stimuli, whereas covert endogenous (voluntary) attention uniformly enhances contrast appearance. However, the attentional effect preceding saccadic eye movements, a critical component of natural vision, remains understudied. Here, we found that when participants voluntarily initiated saccades, presaccadic attention enhanced the appearance of low-contrast stimuli while attenuating the appearance of high-contrast stimuli (Experiment 1 |
Manman Zhang; Zhichao Zhang; Fang Li; Xuejun Bai; Chuanli Zang; Simon P. Liversedge Exploring effects of foveal load and preview restrictions for single and multiple parafoveal words in Chinese reading Journal Article In: Journal of Memory and Language, vol. 146, pp. 1–14, 2026. Two experiments are reported that used the boundary paradigm to investigate how foveal lexical processing load (high/low frequency) of a pre-target word influences parafoveal processing of upcoming target word(s) with either zero-, one-, two- or three-character, or full preview in Chinese reading. In Experiment 1, the three characters comprised a single word as the target while in Experiment 2 they formed multiple words (two or three words). Pre-target word analyses showed an effective foveal load manipulation with low frequency pre-targets being fixated for longer than high frequency pre-targets in both experiments. Both experiments showed robust preview extent effects at the target words, such that fixation times increased, and landing positions shortened dramatically with reduced preview extent. Modulatory influences of foveal load effects were obtained on both fixation times and landing positions at the target region. These effects themselves were consistent, but reduced, for parafoveal character strings comprised of multiple words relative to a single word, consistent with the MCU hypothesis (Zang, 2019). Our findings demonstrate that increased foveal load reduces the disruptive influence of restrictive parafoveal windows and reduces preview extent in relation to saccadic targeting. The current findings align at a very basic level with the Foveal Load Hypothesis (Henderson & Ferreira, 1990), though the results indicate that a more nuanced theoretical account is necessary to capture all aspects of the results in respect of Chinese reading. |
Huanhuan Yin; Martin J. Pickering Predicting words across languages depends on language context: Evidence from visual world eye-tracking Journal Article In: Journal of Memory and Language, vol. 146, pp. 1–12, 2026. There is good evidence that monolingual comprehenders can predict the form of upcoming words, and also that bilinguals often activate words from both languages in parallel during bottom-up language comprehension. But it is unclear whether bilinguals predict the form of upcoming words in the language that they are not hearing, and whether such predictions depend on whether or not they have recently encountered that language. We investigated these questions in two visual-world eye-tracking experiments by asking whether Mandarin Chinese (L1)-English (L2) bilinguals pre-activate Mandarin phonological representations of predictable words during English comprehension. Participants heard English sentences containing a highly predictable word while viewing a display. They fixated more on a competitor object whose Mandarin name was a homophone of the Mandarin translation of the predictable word than an unrelated object when both languages were used (Experiment 2) but not when just English was used (Experiment 1). Our findings suggest that bilinguals predict across languages when both languages are contextually relevant but not otherwise. |
Benjamin G. Lowe; Alexandra Woolgar; Sophie Smit; Anina N. Rich Using EEG to detect lapses in sustained attention to moving stimuli Journal Article In: Cortex, vol. 195, pp. 1–14, 2026. Sustaining attention is effortful but crucial for daily life. Despite this, attentional lapses are common and can have fatal consequences (e.g., when driving). The spontaneous nature of these lapses makes studying their underlying phenomena elusive. As such, methods capable of determining when lapses have occurred may be fruitful research tools, with the potential to save lives if implemented within real-world settings. Here, we capitalised on a recent hierarchical classification method, which uses multivariate decoding to index how well human observers sustain their attention within a dynamic visual environment. We asked whether this method could be used to anticipate behavioural errors based on neural activity measured with electroencephalography (EEG |
Anisha Khosla; R. Shayna Rosenbaum; Morris Moscovitch; Jennifer D. Ryan Spatial updating in amnesia using an eye movement analogue of a path integration task Journal Article In: Neuropsychologia, vol. 222, pp. 1–17, 2026. Path integration (PI) allows organisms to navigate home by updating their location in reference to the route's starting point. We previously demonstrated a PI-like process in eye movements using an eyetracking version of commonly used PI tasks. As the hippocampus/medial temporal lobes (MTL) have been implicated in updating self-position via whole-body PI, we investigated whether the hippocampus/MTL is necessary for the spatial updating of gaze position. Using our eyetracking PI-analog task, we tested two individuals with hippocampal lesions, DA and BL; BL's hippocampal damage is relatively circumscribed to his dentate gyrus, but he has additional volume loss in the right precuneus and left superior-posterior parietal cortex. Participants followed routes with their eyes guided by visual onsets and, when subsequently cued, returned to the starting point or mid-route location. Surprisingly, despite DA's extensive MTL damage, his accuracy was comparable to that of control participants, but unlike the control participants, he showed increased saccade latency and little to no gaze revisits to enroute locations when returning to the start location. BL's accuracy was reduced relative to that of the control participants. Additionally, in contrast to DA, BL demonstrated an increased reliance on overt, enroute revisits. The behavior of the two amnesic cases, who each differ from the control participants and show distinct patterns from one another, suggests that spatial updating of gaze position reflects interactive processes supported by the hippocampus/MTL and posterior parietal cortex. |
Güven Kandemir; Christian N. L. Olivers Serial dependence is stronger for peripheral than for central vision Journal Article In: Attention, Perception, & Psychophysics, vol. 88, no. 2, pp. 1–18, 2026. Serial dependence in vision refers to the fact that perceptual judgements are biased by earlier experiences, and has been thought to reduce sensory uncertainty and sustain perceptual continuity over time and space. While vision changes with eccentricity, little is known about whether and how serial dependence differs in the periphery relative to the fovea. Here we aimed to reduce this gap by comparing serial dependence for centrally and peripherally presented stimuli. Experiment 1 presents a reanalysis of an existing dataset from an earlier working memory task requiring the memorization of differently oriented gratings, presented either centrally or at 15° eccentricity. Experiment 2 also varied pre-knowledge of the item's location through spatial cueing. Experiment 3 replicated Experiment 1 but with lower contrast levels and equating the probabilities of central and peripheral stimuli. Across all experiments we observed an attractive bias towards the orientation of the preceding trial at all locations. Crucially, this bias was always larger in the periphery relative to the central position, and it was mainly the current item's location that drove this effect, rather than the previous item's location. Pre-knowledge of item location failed to influence the eccentricity effect on serial dependence, nor did reduced contrast or differential probabilities change the conclusions. Our results thus demonstrate that serial dependence is not equal across eccentricity. The data and the scripts are available at: https://osf.io/v56hn/?view_only=6d4d5bba493b4bc788c3eed8decd8370 |
Alexia Galati; Rick Dale; Camila Alviar; Moreno I. Coco Task goals constrain the alignment in eye-movements and speech during interpersonal coordination Journal Article In: Journal of Memory and Language, vol. 146, pp. 1–18, 2026. Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; Pickering & Garrod, 2004), support this view by building on tasks that require monitoring a partner's perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a “divide and conquer” strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners' eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals.
Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions. |
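For readers unfamiliar with the CRQA metrics this entry mentions, the core computation for categorical sequences (such as fixated areas of interest) can be sketched in a few lines. This is a minimal illustration on invented toy gaze sequences, not the authors' pipeline; all function names and data below are our own.

```python
# Minimal sketch of categorical cross-recurrence quantification (CRQA).
# The AOI sequences are invented toy data, not values from the study.

def cross_recurrence_matrix(x, y):
    """Binary matrix: R[i][j] = 1 when partner A's state at time i
    matches partner B's state at time j (categorical recurrence)."""
    return [[1 if xi == yj else 0 for yj in y] for xi in x]

def recurrence_rate(R):
    """Proportion of recurrent points: overall gaze similarity."""
    total = sum(sum(row) for row in R)
    return total / (len(R) * len(R[0]))

def diagonal_profile(R, max_lag):
    """Recurrence rate at each lag; a peak away from lag 0 suggests
    one partner leading and the other following."""
    n = len(R)
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        pts = [R[i][i + lag] for i in range(n) if 0 <= i + lag < n]
        profile[lag] = sum(pts) / len(pts)
    return profile

# Toy fixation sequences over map regions (AOIs) for two partners,
# constructed so that A revisits B's regions one step later:
a = ["station", "line1", "line1", "transfer", "line2", "line2"]
b = ["line1", "line1", "transfer", "line2", "line2", "station"]
R = cross_recurrence_matrix(a, b)
print(round(recurrence_rate(R), 3))  # 0.278
print(diagonal_profile(R, 2)[-1])    # 1.0: A trails B by one step
```

Full CRQA toolkits (e.g., the R package the literature commonly uses) add metrics such as determinism and line-length statistics on top of this matrix; the sketch shows only the shared starting point.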
Zhanna Chuikova; Anna Izmalkova; Andriy Myachykov; Anastasiia Liashenko; Yury Shtyrov; Marie Arsalidou Interplay between switching, inhibition, and mental attention: An exploratory eye-tracking study Journal Article In: Psychological Research, vol. 90, no. 1, pp. 1–19, 2026. Cognitive flexibility (CF) allows individuals to adapt their behavior to changing environmental demands. As task complexity increases, CF may substantially impact performance by facilitating a shift towards more efficient information processing strategies. However, its role in tasks with high cognitive demands remains largely unexplored. Furthermore, while CF is associated with inhibitory control and working memory functions, their precise relationship under task demands is not yet fully understood. To address this gap, we investigated how CF and inhibition metrics are associated with different levels of mental attentional demand (Md). Additionally, we explored differences in eye-movement indices associated with high and low CF in tasks with varied levels of Md. Analyzing data from 42 young participants performing CF, inhibition, and mental attention tasks with eye movement recording for the last task, we found that multidimensional switching (i.e., switching between three rules) correlated with mental attentional capacity, whereas two-dimensional switching (i.e., switching between two rules) correlated with inhibitory control. Individuals with low and high switching scores differed in task performance and eye-movement patterns of mental attentional demand (i.e., difficulty). Specifically, those with high efficiency in multidimensional switching exhibited superior performance across all levels of mental attentional demand.
Further, high-efficiency performers employed eye-movement patterns characterized by an increased number of fixations, shorter fixation durations, and decreased blink rates, with significant differences observed at higher levels of mental-attention demand. Our findings offer new insights into psychophysiological metrics related to higher-order cognitive processes, discussed in terms of cognitive theory and practical significance. |
Hongda Zhao; Wei Du; Chao Wang Cognitive visual strategies are associated with delivery accuracy in elite wheelchair curling: Insights from eye-tracking and machine learning Journal Article In: Frontiers in Psychology, vol. 16, pp. 1–10, 2026. @article{Zhao2026,Visual search is pivotal for athletic performance, yet its role in adaptive sports like wheelchair curling remains understudied. This study investigated how eye-movement features predict delivery accuracy and distinguish elite from novice athletes. Thirty wheelchair curling athletes (15 experts, 15 novices) performed standardized delivery accuracy and visual search tasks, with eye movements recorded using the EyeLink Portable Duo system. We employed multiple regression to identify predictors of accuracy and a support vector machine (SVM) to classify athletes based on expertise. Experts demonstrated superior delivery accuracy and significantly more efficient visual search patterns, characterized by shorter dwell times, faster reaction times, and fewer fixations. The SVM model successfully classified athletes with 90% accuracy (AUC = 0.93), while regression analysis confirmed that specific gaze metrics were robust factors associated with performance. These findings establish a strong quantitative link between efficient gaze strategies and expert motor performance in a constrained-mobility setting. This integrated eye-tracking and machine learning approach offers a powerful framework for objectively evaluating performance and developing data-driven, personalized training interventions in wheelchair curling and other precision-focused adaptive sports. |
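The expertise-classification approach in the Zhao et al. entry can be sketched as follows: train a linear-kernel SVM on per-athlete gaze features and score it. The feature set (dwell time, reaction time, fixation count) mirrors the abstract, but all numbers below are synthetic illustrations, not the study's data or its exact model configuration.

```python
# Sketch of SVM-based expertise classification from gaze features.
# Synthetic, well-separated toy data stands in for the real measurements.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Experts (label 1): shorter dwell times, faster reactions, fewer fixations.
experts = rng.normal([300, 250, 8], [20, 15, 1], size=(15, 3))
# Novices (label 0): longer dwell times, slower reactions, more fixations.
novices = rng.normal([450, 380, 14], [20, 15, 1], size=(15, 3))
X = np.vstack([experts, novices])
y = np.array([1] * 15 + [0] * 15)

clf = SVC(kernel="linear").fit(X, y)
acc = clf.score(X, y)  # training accuracy on this toy, separable set
print(acc)
```

In practice the study's 90% accuracy figure would come from held-out evaluation (e.g., cross-validation), not training accuracy as in this toy sketch.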
Huan Zhang; Keyin Chen; Pengfei Xu; Xin Zhao Impact of emotional working memory training on threat-related attentional bias in social anxiety: Evidence from eye movements Journal Article In: Journal of Affective Disorders, vol. 393, pp. 1–11, 2026. @article{Zhang2026a,Threat-related attentional bias is a core characteristic of social anxiety and is closely associated with impaired attentional control. While traditional working memory training (WM-T) improves cognitive control and emotional regulation, it does not address emotional information processing. Emotional working memory training (EWM-T), which integrates negative emotional stimuli, may enhance control over negative information. This study hypothesizes that EWM-T can reduce threat-related attentional bias in socially anxious individuals and outperform WM-T in decreasing sustained attention to negative stimuli. Two experiments were conducted to investigate the effects of EWM-T. Experiment 1 employed a dot-probe task and eye-tracking to examine threat-related attentional bias in high and low social anxiety groups. Experiment 2 compared EWM-T with WM-T in a randomized controlled trial, in which participants with high social anxiety completed 20 training sessions over 30 days. Transfer effects were evaluated pre- and post-training using the Stroop task, number-switching task, digit-span task, and active memory task. In Experiment 1, individuals with high social anxiety exhibited greater attentional vigilance and faster detection of threat stimuli. In Experiment 2, both groups showed reductions in anxiety symptoms and practice-related improvements on several cognitive tasks, with no Group × Time interactions. Post-training eye-tracking data revealed a decrease in fixation bias toward threat stimuli, indicating improved attentional control. These findings suggest that EWM-T enhances attentional orientation and alleviates anxiety symptoms in social anxiety, with stronger transfer effects compared to WM-T. 
Incorporating emotional content into working memory training offers advantages for clinical interventions in social anxiety. |
Han Zhang; John Jonides PupEyes: An interactive Python library for eye movement data processing Journal Article In: Behavior Research Methods, vol. 58, no. 1, pp. 1–25, 2026. @article{Zhang2026,We present PupEyes, an open-source Python package for preprocessing and visualizing pupil size and fixation data. PupEyes supports data collected from EyeLink and Tobii eye-trackers as well as any generic dataset that conforms to minimal formatting standards. Developed with current best practices, PupEyes provides a comprehensive pupil preprocessing pipeline and interactive tools for data exploration and diagnosis. In addition to pupil size data, PupEyes provides interactive tools for visualizing fixation data, drawing areas of interest (AOIs), and computing AOI-based metrics. PupEyes uses the pandas data structure and can work seamlessly with other data analysis packages within the Python ecosystem. Overall, PupEyes (1) ensures that pupil size data are preprocessed in a principled, transparent, and reproducible manner, (2) helps researchers better understand their data through interactive visualizations, and (3) enables flexible extensions for further analysis tailored to specific research goals. To ensure computational reproducibility, we provide detailed, executable tutorials (https://pupeyes.readthedocs.io/) that allow users to reproduce and modify the code examples in a virtual environment. |
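To give a flavor of the preprocessing that a pipeline like PupEyes automates, the sketch below removes blink samples from a pupil trace and fills the gaps by linear interpolation. This is a generic illustration in plain Python; it is not PupEyes's actual API, which is documented in the package tutorials.

```python
# Generic pupil-trace cleaning sketch: blink removal + linear interpolation.
# Blinks are assumed to be recorded as a sentinel value (0.0 here).

def interpolate_blinks(trace, blink_value=0.0):
    """Replace blink samples (== blink_value) by linear interpolation
    between the nearest valid neighbours."""
    out = list(trace)
    n = len(out)
    i = 0
    while i < n:
        if out[i] == blink_value:
            start = i
            while i < n and out[i] == blink_value:
                i += 1
            left = out[start - 1] if start > 0 else (out[i] if i < n else blink_value)
            right = out[i] if i < n else left
            gap = i - start + 1
            for k in range(start, i):
                frac = (k - start + 1) / gap
                out[k] = left + (right - left) * frac
        else:
            i += 1
    return out

print(interpolate_blinks([4.0, 4.2, 0.0, 0.0, 4.8, 5.0]))
```

A principled pipeline would also pad around each blink and filter dilation-speed outliers before interpolating; this sketch shows only the gap-filling step.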
Amanda Rose Yuile; Justin B. Kueser; Claney Outzen; Sharon Christ; Risa Stiegler; Mary Carson Adams; Barbara Brown; Arielle Borovsky Lexical vocabulary acquisition through multimodal annotation: An eye-tracking study with Chinese learners' dictionaries Journal Article In: Developmental Science, vol. 29, no. 1, pp. 1–18, 2026. @article{Yuile2026,Toddlers better retain novel object-label mappings from taxonomic categories they have more knowledge of. Separately, words for concepts with more perceptual features are learned earlier than words for concepts with fewer perceptual features. Because these factors have only been examined separately, it is unclear whether the effects of taxonomic density stem from differences in structured taxonomic knowledge or simply reflect lower-level differences in perceptual similarity among concepts. We asked how taxonomic structure and perceptual information jointly contribute to word learning at 24 months old in an ostensive word learning task. We found that semantic category knowledge facilitated word learning. We also found that the availability of perceptual features served as additional supports for word learning by children with smaller expressive vocabularies. This indicates that structured taxonomic knowledge is a better predictor of word learning compared to lower-level perceptual features at 24 months old. However, perceptual cues may provide additional support for vocabulary growth at the start of development. Summary: We explore how semantic category knowledge and perceptual features jointly influence novel word learning at 24 months old in an ostensive word learning context. Novel word learning was facilitated within semantic categories the toddlers knew more about, when controlling for the availability of perceptual information. Toddlers with smaller productive vocabularies used perceptual features as additional supports for word learning, but those with larger vocabularies did not. 
These findings show that structured taxonomic knowledge is a better predictor of word learning at 24 months old compared to lower-level perceptual information. |
Xuefei Yu; Atul Gopal; Ken-ichi Inoue; Martin O. Bohlen; Genevieve M. Kuczewski; Marc A. Sommer; Hendrikje Nienborg; Masahiko Takada; Okihide Hikosaka Retrograde optogenetics reveals sensorimotor convergence within a corticotectal pathway of non-human primates Journal Article In: Current Biology, vol. 36, no. 1, pp. 236–242, 2026. @article{Yu2026,Understanding how the cerebral cortex communicates with subcortical areas to drive behavior remains a central question in systems neuroscience. One key unresolved issue is whether prefrontal cortical outputs to motor-related subcortical regions carry predominantly motor commands1 or mixed sensory-motor signals.2,3 Retrograde optogenetics offers a powerful way to interrogate such projection-defined circuits,4–7 but its use in non-human primates has been limited.8–11 Here, we applied retrograde optogenetics in awake macaques to directly test the functional organization of the corticotectal projection from the frontal eye field (FEF) to the superior colliculus (SC). We asked whether the FEF output signals to SC are motor-dominant or broadly sensory-motor. Optical activation of this pathway evoked robust, contralateral saccades and selectively modulated reaction times, demonstrating its causal role in saccade generation. Optogenetically tagging FEF neurons projecting to SC revealed a heterogeneous population of visual, visuomotor, and motor neurons. This diverse output converged predominantly onto motor-related neurons in the SC. These findings support a visuomotor convergence model, in which diverse FEF outputs drive motor-selective SC neurons with activity sufficient for saccade generation, and thus resolve long-standing questions over the composition of FEF outputs. Additionally, our results establish retrograde optogenetics as a tool for dissecting projection-defined circuits in primates and for precisely probing the neural pathways that link perception to action. |
Songqiao Xie; Chunyan He An empirical study on native Mandarin-speaking children's metonymy comprehension development Journal Article In: Journal of Child Language, vol. 53, pp. 80–107, 2026. @article{Xie2026,This study investigates Mandarin-speaking children's (age 3–7) comprehension development of novel and conventional metonymy, combining online and offline methods. Both online and offline data show significantly better performances from the oldest group (6-to-7-year-old) and a delayed acquisition of conventional metonymy compared with novel metonymy. However, part of the offline data shows no significant difference between adjacent age groups, while the eye-tracking data show a chronological development from age 3–7. Furthermore, in offline tasks, the three-year-old group features a high choice randomness and the four-to-five-year-olds show the longest reaction time. Therefore, we argue that, not only age but also metonymy type can influence metonymy acquisition, and that a lack of socio-cultural experience can be a source of acquisition difficulty for children under six. Methodologically speaking, we believe that online methods should not be considered superior to offline ones as they investigate different aspects of implicit and explicit language comprehension. |
Wiktor Wicecławski; Jakub Paszulewicz ERP evidence of attentional selection outside of effective oculomotor range Journal Article In: Experimental Brain Research, vol. 244, no. 1, pp. 1–9, 2026. @article{Wiȩclawski2026,The close link between visual attention and the oculomotor system is well documented. Within the selection-for-action framework, two perspectives exist. According to Visual Attention Model (VAM) attention is seen as a prerequisite for successful movement execution, though it is considered a distinct cognitive and neural process. By contrast, the premotor theory of attention (PMTA) argues that the beneficial effects of attention are fully accounted for by the system's preparation for saccadic eye movements. From this standpoint, a central prediction emerges: attentional advantages should be confined to regions within the oculomotor range, since saccadic planning is not feasible outside those limits. A common way to examine this prediction is to present cues and targets in a hemifield beyond the oculomotor range, typically achieved by occluding one eye while abducting the other. Using this method, Smith et al. showed that in a visual search task, exogenous orienting is reduced in the temporal hemifield when the eye is abducted. They concluded that exogenous attentional orienting is constrained by the range of potential saccadic movements. In our study, we sought to replicate Smith et al.'s findings while extending the paradigm with EEG recordings—an approach not yet applied in this context. PMTA predicts that, under eye abduction, stimuli appearing in the temporal hemifield would yield diminished N2pc amplitudes. An ANOVA revealed no reduction of N2pc amplitude in the temporal hemifield. Taken together, our results support the growing body of evidence suggesting that visual attention is not strictly bound to the oculomotor range. |
Yang Wang; Lei Zhang; Jon D. Elhai; Christian Montag; Haibo Yang The interacting role of fear of missing out in attentional bias dynamics during problematic social media use Journal Article In: Addictive Behaviors, vol. 173, no. 393, pp. 1–8, 2026. @article{Wang2026,Problematic social media use (PSMU) is increasingly conceptualized as a behavioral addiction involving attentional bias toward social media icons. Although fear of missing out (FoMO) contributes to PSMU maintenance, its dynamic interactive role in attentional bias dynamics remains unclear. Guided by the I-PACE model and attentional bias theory, this study examined whether and when FoMO modulates gaze-based attentional bias toward social media icons in PSMU. 912 university students completed online screening for PSMU and FoMO; 55 meeting PSMU criteria (Mage = 19.60) were categorized into high- or low-FoMO groups. Participants performed a visual dot-probe task with social/non-social app icons while eye-tracking recorded gaze behavior across four 500 ms time windows. Results revealed FoMO significantly interacted with attentional bias in two critical phases: During early processing (0–500 ms), the PSMU/high-FoMO group exhibited attentional orienting deceleration to social media icons, whereas PSMU/low-FoMO showed attentional maintenance. In later processing (1000–1500 ms), PSMU/high-FoMO demonstrated attentional vigilance-maintenance, while PSMU/low-FoMO displayed avoidance. These findings indicate FoMO exerts a temporally dynamic interaction effect on attentional bias in PSMU—characterized by initial orienting delays followed by sustained attentional engagement with social media icons. This supports reconceptualizing FoMO as a core psychological mechanism that reinforces PSMU through biased attentional dynamics, advancing theoretical alignment with the I-PACE framework. |
Mingze Sun; Zhe Qu; Yajie Wang; Jingwen Xiang; Yulong Ding A well-trained nonsalient shape captures attention with delayed inhibition of return Journal Article In: Psychonomic Bulletin & Review, vol. 33, no. 1, pp. 1–16, 2026. @article{Sun2026,Numerous studies adopting Posner peripheral cueing paradigms have shown that exogenous attentional orientation (EAO) to a salient-but-irrelevant stimulus involves two opposing attentional processes: early attentional capture and late attentional suppression. Recent evidence has indicated that long-term perceptual learning can induce involuntary attentional capture by nonsalient shapes. However, it remains unclear whether a well-trained nonsalient shape could exhibit a biphasic pattern of EAO similar to that observed with physically salient stimuli, including both an early exogenous attentional shift and a late inhibition of return (IOR). Through both a perceptual learning task and a classic peripheral cueing task, the current study showed that a well-trained nonsalient shape cue could exhibit a biphasic pattern of EAO. When compared with an untrained shape, a well-trained nonsalient shape facilitated subsequent target detection at short cue-target onset asynchronies (CTOAs, 200–300 ms) and deteriorated target detection at a relatively long CTOA (800 ms), but not at 400- to 600-ms CTOAs. As a comparison, a detectability-matched onset cue or luminance contrast cue elicited a facilitatory effect at 200- to 300-ms CTOAs and an inhibitory effect starting from 400-ms CTOA. A control eye-tracking experiment suggested that the absence of IOR effects at 400- to 600-ms CTOAs in the trained cue task was not due to fewer eye movements during the task. Our results indicated that, as opposed to physically salient stimuli, a well-trained nonsalient shape induced delayed IOR after an evident exogenous shift of visual attention. 
The different patterns of EAO processes support the notion that prior experience (such as perceptual learning) plays a unique role in modulating our exogenous attention. Possible underlying mechanisms are proposed. |
Waxun Su; Xiao Lin; Weijian Liu; Tak Kwan Lam; Peng Li; Qiandong Wang The impact of depression and social anxiety on eye orientation and disengagement in individuals with and without depression Journal Article In: Journal of Psychiatric Research, vol. 192, pp. 325–331, 2026. @article{Su2026,In individuals with depression, comorbid social anxiety disorder is prevalent and often exacerbates symptoms and social dysfunction, such as more severe social avoidance and interpersonal impairment. Our study used the eye-tracking technique to explore how depression and social anxiety, individually and in combination, influence orientation toward and disengagement from the eyes in individuals diagnosed with depression or not. Participants were 49 healthy individuals and 64 individuals with depression, whose gaze was initially guided to the eye or mouth region immediately before the onset of the face. Latency to disengage from the guided regions and latency to orient to the eyes following the onset of the face were measured. The findings revealed that, firstly, individuals showed delayed disengagement from the eyes compared to the mouth regardless of depression diagnosis or social anxiety level. Secondly, in healthy individuals, increased social anxiety was related to quick eye orientation. Thirdly, in individuals with depression, longer disengagement latencies from the eyes were associated with higher levels of depression or social anxiety, but only when one of the scores was high, not medium or low. These findings highlight the importance of understanding the distinct and combined impacts of depression and social anxiety on clinical and nonclinical individuals, informing more targeted clinical interventions and assessment strategies. |
Anjum Shaikh; Idah Mbithi; Maiko Okamura; Skylar Rice; Lily Rosan; Fabio Solorzano Quesada; Trafton Drew; Brennan Payne; Jeff Moher Distractor avoidance and early quitting in visual search Journal Article In: Attention, Perception & Psychophysics, vol. 88, no. 1, pp. 1–13, 2026. @article{Shaikh2026,In the current study, we examined the mechanisms underpinning how salient distractors produce early quitting in visual search. Participants completed a simple visual search task and indicated whether a target was present or absent. When salient distractors were present, fewer eye movements occurred before target-absent responses, and less of the display area was searched. Surprisingly, participants actively avoided directing eye movements towards the distractor. Still, salient distractors increased both search errors, which were committed when the target was never fixated, and decision errors, which were committed when the target was fixated but not detected. Our results demonstrate that salient distractors trigger early quitting by reducing the amount of information that observers extract from the search image and disrupting search guidance. |
Marina Serrano-Carot; Bernhard Angele Spanish readers skip articles regardless of gender and number agreement Journal Article In: Journal of Eye Movement Research, vol. 19, no. 1, pp. 1–30, 2026. @article{Serrano-Carot2026,Articles are among the most frequently encountered words during reading; however, it is not clear how deeply they are usually processed. This study examines whether native Spanish speakers use parafoveal article–noun agreement information to guide eye movements during reading. Using the gaze-contingent boundary paradigm, we manipulated the parafoveal preview of articles across two experiments. In Experiment 1, we manipulated gender agreement between the previews readers received of definite articles and the subsequent nouns (e.g., la mesa vs. el* mesa). In Experiment 2, we manipulated grammatical gender and number agreement between parafoveal article previews and the subsequent nouns jointly (e.g., los* mesa vs. una mesa). We found no evidence that parafoveal article–noun gender or number agreement affected article skipping probability, suggesting that initial parafoveal processing of articles does not extend to their grammatical properties. However, we observed increased total viewing time on the noun following mismatching previews, suggesting that, while the decision of whether to skip an article is taken largely without considering the grammatical properties of the upcoming words, readers do need more time to recover from the grammatical mismatch afterwards. We discuss the results in the context of current models of eye-movement control during reading. |
Thomas Seacrist; Elizabeth A. Walshe; Shukai Cheng; Emily Brown; Charlotte Birnbaum; Victoria Kaufman; Flaura K. Winston; William C. Gaetz A novel paradigm for identifying eye-tracking metrics associated with cognitive control during driving through MEG neuroimaging Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 116, pp. 1–13, 2026. @article{Seacrist2026,Understanding the neurocognitive underpinnings of driving behavior in adolescents is critical to improving road safety. To address this, we established a novel paradigm linking magnetoencephalography (MEG)-recorded frequency-specific brain activity to simulated driving performance, identifying periods of increased cognitive control. However, this initial paradigm did not incorporate eye-tracking – a potentially scalable proxy for cognitive control that could be leveraged by in-vehicle driver monitoring systems. This proof-of-concept study expands our paradigm by integrating eye-tracking to identify scanning behavior metrics associated with periods of increased cognitive control validated by MEG. Typically developing adolescents (n = 11; mean age = 15.1 ± 1.5 yrs) completed three driving tasks of varying cognitive demand, and MEG frequency specific analysis confirmed periods of high (Hi) and low (Lo) cognitive control via the established biomarker of frontal midline theta (FMT). Fixation count, fixation duration, horizontal/vertical mean gaze position, saccade amplitude, and horizontal/vertical spread of search were compared between Hi vs. Lo periods of cognitive control. Task-specific differences in fixation count (p < 0.05), mean gaze position (p < 0.01), saccade amplitude (p < 0.05), and spread of search (p < 0.01) were observed between Hi compared to Lo cognitive control periods. 
These differences corresponded to expected task-specific changes in scanning behavior that would accompany cognitive control over behavior, suggesting that eye-tracking may serve as a proxy for underlying neurocognitive processes. This integrated approach demonstrates methodological rigor and offers a promising framework for further research and for informing the development of in-vehicle driver monitoring systems for detecting cognitive deficits in real time, with implications for enhancing teen driver safety. |
Marzie Samimifar; Federica Bulgarelli Decoding child speech in silence and noise: The type of background noise shapes adults' processing Journal Article In: Attention, Perception & Psychophysics, vol. 88, no. 1, pp. 1–22, 2026. @article{Samimifar2026,Processing speech that is non-canonical (i.e., child-produced speech) and/or presented in background noise can pose challenges for listeners. We investigated how listening to child-produced speech affects young adults' word recognition under varying noise conditions. Participants (n = 121) completed a two-picture eye-tracking task in one of three conditions: no background noise, pink background noise, and real-world background noise from LENA recordings. Participants heard a child or adult (Speaker-Age) direct attention to a generic (e.g., keys) or child-specific (e.g., potty; Item-Type) item. We examined the effect of Speaker-Age and Item-Type on participants' looking time. In no background noise, increases in target looking were high, with greater increases when adults produced generic items. Both pink noise and real-world noise increased task difficulty, but patterns of results varied as a function of speaker gender. For female speech, background noise resulted in an effect of Speaker-Age, with participants increasing their looking time more for adult relative to child speech. The type of background noise did not influence this pattern. For male speech, there was an effect of Speaker-Age in the opposite direction, with participants increasing their looking time more for child relative to adult speech. For male speech, real-world background noise resulted in higher increases in target looking for child-specific items. Together, results suggest that child-produced speech may be more difficult to process than female-adult produced speech in noise, and that listeners can use background noise to predict who will speak and what they might speak about under more challenging conditions, such as processing male speech. |
Mohammadhossein Salari; Diederick C. Niehorster; Marcus Nyström; Roman Bednarik The effect of pupil size on data quality in head-mounted eye trackers Journal Article In: Behavior Research Methods, vol. 58, no. 1, pp. 1–16, 2026. @article{Salari2026,Changes in pupil size can lead to apparent gaze shifts in data recorded with video-based eye trackers in the absence of physical eye rotation. This is known as the pupil-size artifact (PSA). While the PSA is widely reported in desktop eye trackers, it is unknown whether and to what extent it occurs in head-mounted eye trackers. In this paper, we examined the effects of pupil size variations on eye-tracking data quality in four head-mounted eye trackers: the Pupil Core, the Pupil Neon, the SMI ETG 2w, and the Tobii Pro Glasses 2, in addition to a widely used desktop eye tracker, the SR Research EyeLink 1000 Plus. Participants viewed a central target on a monitor while we systematically varied the screen brightness to induce controlled pupil size changes. All head-mounted eye trackers exhibited PSA, with apparent gaze shifts ranging from 0.94° for the Pupil Neon to 3.46° for the Pupil Core. Except for the Pupil Neon, all eye trackers exhibited a significant change in accuracy due to pupil size variations. Precision measures showed device-specific effects of pupil size changes, with some eye trackers performing better in the bright condition and others in the dark condition. These findings demonstrated that, just like desktop eye trackers, head-mounted video-based eye trackers exhibited PSA. |
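The pupil-size artifact described in the Salari et al. entry can be quantified by regressing apparent gaze position on pupil size while a participant fixates a fixed target; a nonzero slope means pupil changes masquerade as gaze shifts. The sketch below does this with an ordinary least-squares slope in pure Python; all numbers are synthetic illustrations, not the study's measurements.

```python
# Sketch of quantifying the pupil-size artifact (PSA): during steady
# fixation, regress apparent gaze position on pupil diameter.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

pupil_mm = [2.0, 3.0, 4.0, 5.0, 6.0]   # pupil diameter while fixating
gaze_deg = [0.0, 0.5, 1.0, 1.5, 2.0]   # apparent gaze drift (degrees)
print(ols_slope(pupil_mm, gaze_deg))   # degrees of apparent shift per mm; → 0.5
```

With the slope in hand, the artifact can in principle be subtracted from the gaze signal, which is one common correction strategy for the PSA.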
Estelle Raffin; Roberto F. Salamanca-Giron; Krystel R. Huxlin; Olivier Reynaud; Loan Mattera; Roberto Martuzzi; Friedhelm C. Hummel Causal disconnectomics of motion perception networks: Insights from transcranial magnetic stimulation-induced BOLD responses Journal Article In: The Journal of Physiology, vol. 604, pp. 503–526, 2026. @article{Raffin2026,Understanding how focal perturbations trigger large-scale network reorganization is essential for uncovering the neural mechanisms that support perception and behaviour. Here we used a transcranial magnetic stimulation (TMS) perturbational approach by applying brief 10 Hz TMS to early visual areas (EVAs) or the medio-temporal (MT) area in healthy participants while recording concurrent functional magnetic resonance imaging (fMRI). TMS delivered during the early stages of motion processing specifically impaired direction discrimination at both sites, whereas disruption of the later processing phase impaired performance only in the MT condition. Despite a similar local increase in BOLD activity induced by EVA and MT stimulation, the broader network responses diverged significantly. Perturbation of EVA elicited a more robust and efficient pattern of functional reorganization, manifesting as more constrained BOLD changes, consistent with greater resilience to focal disruption. In contrast, behavioural impairments induced by MT stimulation were accompanied by a disorganized and less-efficient network configuration, characterized by weaker small-world properties and longer path lengths. The decrease in performance induced by MT stimulation scaled with lower clustering coefficients, implying a more random or decentralized network structure. These findings demonstrate that TMS-fMRI coupling provides a powerful framework for causally mapping the relationships between local neural perturbations, large-scale network dynamics and behavioural performance. |
Zhongling Pi; Xuemei Huang; Richard E. Mayer; Xin Zhao; Xiying Li Role of the instructor's social cues in instructional videos Journal Article In: Education Sciences, vol. 16, no. 1, pp. 1–15, 2026. @article{Pi2026,Little attention has been paid to whether an instructor's hand-pointing gestures or use of a mouse-guided arrow can mitigate the attentional loss caused by an instructor's happy facial expressions or can enhance the social benefits of these expressions in instructional videos. The goal of the present study is to determine whether social cues in an instructional video affect learning processes and outcomes. The participants were 57 female students from a university. We employed a 2 × 2 mixed experimental design. The instructor's facial expression was a within-subject variable, while the type of pointing cue was a between-subject variable. Students who had the smiling instructor rather than the bored instructor gave higher ratings of the perceived positive emotion of the instructor, felt more positive emotion, and had more motivation to learn. Eye-tracking technology showed that students who learned with the smiling instructor spent more time looking at the content on the slides than those who learned with a bored instructor. Students who learned with the smiling instructor scored higher on a learning outcome post-test than those who learned with the bored instructor. Among female Chinese students, this pattern is consistent with the five steps posited by the positivity principle, which concludes that people learn better from instructors who exhibit positive social cues. Pointing with a human hand was not superior to pointing with an arrow, suggesting that in this case hand-pointing was not a strong social cue and did not moderate the effects of facial expression. Given the exclusively female sample, future research should examine whether these effects generalize across genders. |
Effie J. Pereira; Jelena Ristic Beauty in the eye of the beholder: Attention to attractive faces dissociates across covert and overt measures Journal Article In: Attention, Perception, & Psychophysics, vol. 88, no. 1, pp. 1–17, 2026. @article{Pereira2026,Attractive faces attract attention. Here, we examined how facial attractiveness influenced covert and overt social attention. Participants discriminated targets occurring after one of 32 different face–object cue pairs, which varied in the degree of attractiveness. Experiment 1 measured covert social attention in manual responses while maintaining central fixation. No evidence of attentional preference for faces was found. Experiment 2 measured overt social attention in eye movements while maintaining natural viewing conditions. A reliable oculomotor preference for attractive faces was found. Thus, facial attractiveness affects covert and overt social attention differently, reflecting the diverging ways in which faces affect attention with respect to social functioning in daily life. The datasets for all experiments can be found on the Open Science Framework (https://osf.io/u54tp/). |
Christine Misketis; Hamed Tadayyoni; Paul C. Yielder; Bernadette Murphy Subclinical neck pain alters gaze stability during the vestibulo-ocular reflex Journal Article In: Applied Sciences, vol. 247, no. 16, pp. 1–21, 2026. @article{Misketis2026,(1) Background: Subclinical neck pain is mild-to-moderate neck pain that has not yet been treated, and where individuals experience pain-free days. Alterations in sensorimotor integration, motor control, proprioception, and cerebellar inhibition have been observed in individuals with subclinical neck pain. Upregulation of the cervico-ocular reflex is documented in subclinical neck pain, with no difference in the gain of the vestibulo-ocular reflex. Vestibulo-ocular reflex gain adaptation and associated differences in visuo-motor control have not been successfully measured in this population. This study aims to investigate the vestibulo-ocular reflex gain adaptation and visuo-motor control in individuals with subclinical neck pain. (2) Methods: 30 right-hand-dominant participants (19 healthy controls: 10 male and 9 female; 16 subclinical neck pain: 6 male and 10 female) aged 18 to 35 performed an eye tracking task. Participants were seated 90 cm away from a monitor and instructed to hold their gaze on a stationary or moving target projected onto a screen while performing active head rotations. Trials were divided into 12 blocks (pre-adaptation, 10 adaptation, and post-adaptation) for a total of 192 trials. During adaptation, the target would move at increasing speeds during each block, increasing by 10% of active head velocity up to a maximum of 100%. (3) Results: The subclinical neck pain group demonstrated significantly higher total saccades (p = 0.006, η2 = 0.240) and overt catch-up saccades (p = 0.041, η2 = 0.141) than the healthy control group. (4) Conclusion: Subclinical neck pain alters the visual–vestibular interaction. |
Mario Michiels; David Luque; Ignacio Obeso Implicit and explicit reversal of trained oculomotor movements Journal Article In: Neurobiology of Learning and Memory, vol. 223, pp. 1–7, 2026. @article{Michiels2026,Habitual behavior is thought to emerge with extended training and reduced sensitivity to outcome devaluation. However, little is known about how habit-like oculomotor responses adapt when devaluation is implicit or embedded within a previously learned context. We examined this in a novel oculomotor learning task involving visual shape-reward associations with both standard and overtrained stimuli. Twenty-six participants completed a shape-color learning task while their eye movements were recorded using an eye-tracker system (1000 Hz). The task involved 11 blocks, including training, intra-block reversal (implicit stimulus-reward changes), and classical devaluation phases (explicitly instructed reward changes). Statistical analyses were performed using linear mixed-effects models on accuracy and response time (RT) measures. As expected, higher accuracy and faster responses for overtrained versus standard-trained stimuli were observed during training, confirming stronger learning. In the classical devaluation phase, overtrained stimuli elicited significantly more errors compared to standard-trained stimuli, relative to the performance in the training phase. This indicates stronger resistance to goal-directed updating. The effect was more pronounced during intra-block reversal of associations, where reward contingencies changed without warning. While RTs were not affected by classical devaluation, intra-block reversal significantly increased RTs for overtrained stimuli, relative to RTs in the training phase. This suggests a higher cognitive cost for overriding well-learned habitual responses when changes are unpredictable. 
These findings provide new evidence for the behavioral rigidity associated with overtraining of oculomotor behavior and suggest that unexpected outcome changes impose an additional switch cost on habitual oculomotor behavior. |
Sara LoTemplio; Jack Silcox; David L. Strayer; Brennan R. Payne Single‐trial relationships between the error‐related negativity, Pe, error‐related pupillary dilation response, and post‐error behavior Journal Article In: Psychophysiology, vol. 63, no. 1, 2026. @article{LoTemplio2026,The amplitude of the error‐related negativity (ERN) is known to be correlated with attention to task and general cognitive control abilities. However, previous research has struggled to consistently link ERN amplitude with behavioral accuracy or reaction time in the task from which the ERN is being measured. This lack of relationship could be due to many factors that are difficult to control for, so explorations of other converging measures to understand error‐processing and subsequent behavior adjustment are warranted. The current study examines how two other physiological markers of error‐processing—the phasic pupillary dilation response (PDR) and the positivity following an error (Pe)—relate to post‐error behavior. Additionally, we also examine relationships between the three physiological indices of error‐processing. In the study, EEG and pupillometry were simultaneously recorded while participants completed 24 blocks (50 trials each) of an Eriksen flanker task. For post‐error accuracy, we found that on a single‐trial level, the amplitude of all three physiological error‐processing indices for error trials predicted post‐error accuracy. At the subject level, only the PDR predicted average post‐error accuracy. For post‐error slowing, at the single‐trial level, only the Pe predicted post‐error slowing, whereas only the ERN predicted post‐error slowing at the subject level. We also found that both the ERN and Pe correlated with PDR amplitude. 
This is consistent with our hypothesis that the Pe and PDR may share underlying neural mechanisms, but qualified by the fact that the ERN, which is not hypothesized to have shared neural mechanisms, also predicted unique variance in pupillary amplitude. Collectively, these results suggest that the PDR and Pe might represent promising indicators of post‐error behavior adjustment and highlight the need to examine relationships at multiple levels of analysis. |
Raymond M. Klein; Şimal Dölek; John Christie Does the output form of inhibition of return operate at or after the bottleneck? Journal Article In: Acta Psychologica, vol. 262, pp. 1–8, 2026. @article{Klein2026,Inhibition of return (IOR) refers to the longer reaction times (RTs) to targets presented at previously cued, fixated or attended locations. It has been suggested that there are two distinct forms of IOR. The input form, generated when the reflexive oculomotor system is suppressed, affects the sensory/perceptual processing. The output form, generated when the reflexive oculomotor system is not suppressed, biases responding. It has been demonstrated, using the locus of slack logic associated with the psychological refractory period (Pashler, 1998), that the input form of IOR operates on a pre-bottleneck stage of processing (Kavyani et al., 2017). Using the same logic, Klein et al. (2020) demonstrated that the output form of IOR operates at or after the bottleneck. Building on the methods of Klein et al., the present study used the PRP paradigm to determine whether the output form of IOR operates at or after the bottleneck. The output form of IOR was generated by an initial saccade from a peripheral location to a central fixation point. Task 1 consisted of a manual response indicating the location (right/left) of a subsequent visual stimulus. Task 2 required participants to discriminate the frequency (high/low) of an auditory stimulus and make a key-press response with their other hand. The targets (T1 and T2) for the two tasks were presented in close succession with 200, 400 and 800 ms target-target onset asynchronies (TTOAs). Responses to T1 were delayed by IOR and responses to T2 were substantially delayed when the TTOA was brief. Statistical analysis of the amount of carryover of the IOR effect experienced by Task 1 onto the RTs for Task 2 strongly suggests that the output form of IOR operates after the bottleneck. 
Nevertheless, aspects of the results could be interpreted to support a weaker influence of IOR operating also at the bottleneck stage of processing. |
Hyunwoo Kim; Kitaek Kim; Haerim Hwang Effects of goals and strategies on predictive processing: A visual world eye-tracking study on honorific agreement in Korean Journal Article In: Linguistics, pp. 1–35, 2026. @article{Kim2026,There is ongoing debate about whether prediction is driven solely by bottom-up associative links or is modulated by top-down goals and strategies. The current study attempts to address this issue by investigating the role of top-down factors in Korean speakers' predictive processing of honorific agreement. Two visual-world eye-tracking experiments were conducted, analyzing participants' anticipatory eye movements while manipulating two top-down factors. In Experiment 1, we assigned participants to two groups with different instructions, asking one group to listen to sentences and answer referent-selection questions, and the other group to actively predict the upcoming referent. Experiment 2 manipulated the validity of predictive cues by interspersing experimental items with fillers containing consistent or inconsistent continuations. Results from Experiment 1 showed that participants instructed to actively anticipate the referent used honorific information more quickly to make predictions than the comprehension-only group. In Experiment 2, the group exposed to predictive linguistic stimuli showed an earlier and stronger prediction effect compared to the group exposed to stimuli with no prediction validity. These results suggest that comprehenders engage in different degrees of prediction according to the current demands of task goals and strategies. We discuss these findings in light of recent theories of predictive language processing. |
Madeline Jarvis; Adam Vasarhelyi; Joe Anderson; Caitlyn Mulley; Ottmar V. Lipp; Luke J. Ney js-mEye: An extension and plugin for the measurement of pupil size in the online platform jsPsych Journal Article In: Behavior Research Methods, vol. 58, no. 1, pp. 1–18, 2026. @article{Jarvis2026,The measurement of pupil size has become a topic of interest in psychology research over the past two decades due to its sensitivity to psychological processes such as arousal or cognitive load. However, pupil measurements have been limited by the necessity to conduct experiments in laboratory settings using high-quality and costly equipment. The current article describes the development and use of a jsPsych plugin and extension that incorporates existing software that estimates pupil size using consumer-grade hardware, such as a webcam. We validated this new program (js-mEye) across two separate studies, which each manipulated screen luminance and color using a novel luminance task, as well as different levels of cognitive load using the N-back and the Stroop tasks. Changes in luminance and color produced significant changes in pupil size in the hypothesized direction. Changes in cognitive load induced in the N-back and Stroop tasks produced less clear findings; however, these findings were explained to some extent when participant engagement – indexed by task performance – was controlled for. Most importantly, all data were at least moderately correlated with data simultaneously recorded using an EyeLink 1000, suggesting that mEye was able to effectively substitute for a gold-standard eye-tracking device. This work presents an exciting future direction for pupillometry and, with further validation, may present a platform for measuring pupil size in online research studies, as well as in laboratory-based experiments that require minimal equipment. |
Xin Huang; Bikalpa Ghimire; Anjani Sreeprada Chakrala; Steven Wiesner Neural coding of multiple motion speeds in visual cortical area MT Journal Article In: eLife, vol. 13, pp. 1–43, 2026. @article{Huang2026,Motion speed is a salient cue for visual segmentation, yet how the visual system represents and differentiates multiple speeds remains unclear. Here, we investigated the encoding and decoding of multiple speeds. We first characterized the perceptual capacity of human and macaque subjects to segment overlapping stimuli moving at different speeds. We then determined how neurons in area MT of macaque monkeys represent multiple speeds. We found that the responses of MT neurons to two speeds showed a robust bias toward the faster speed component. This faster-speed bias occurred when both speeds were slow (≤20°/s) and diminished as stimulus speed increased. Our findings can be explained by a modified divisive normalization model, in which the weights for the speed components are proportional to the responses of a population of neurons (the weighting pool) with a broad range of speed preferences, elicited by the individual speeds. Regarding decoding, a classifier could distinguish MT responses to two speeds from those to a corresponding log-mean speed. We further found that it was possible to decode two speeds from the MT population response, supporting the theoretical framework of coding multiplicity in neuronal populations. The decoded speeds can account for perceptual performance in segmenting two speeds with a large (4x) but not a small (2x) separation. Our findings help define the neural coding rule of multiple speeds. The faster-speed bias in MT could benefit important behavioral tasks, such as figure-ground segregation, as figural objects tend to move faster than the background in the natural environment. |
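The modified divisive normalization rule sketched in the abstract above can be illustrated in a few lines of code. This is a toy under stated assumptions, not the authors' fitted model: the log-Gaussian tuning curve, the parameter values, and the fast-skewed weighting pool (standing in for the larger MT population response evoked by faster slow-range speeds) are all illustrative choices.

```python
import numpy as np

def tuning(pref, speed, sigma=1.2):
    """Illustrative log-Gaussian speed tuning curve (peak response = 1)."""
    return np.exp(-(np.log2(speed / pref) ** 2) / (2 * sigma ** 2))

def response_to_pair(pref, s1, s2, pool_prefs, sigma_norm=0.1):
    """Modified divisive normalization: each speed component is weighted in
    proportion to the summed response it alone evokes in a 'weighting pool'
    of neurons with a broad range of speed preferences."""
    w1 = tuning(pool_prefs, s1).sum()   # pooled response to component 1 alone
    w2 = tuning(pool_prefs, s2).sum()   # pooled response to component 2 alone
    return (w1 * tuning(pref, s1) + w2 * tuning(pref, s2)) / (w1 + w2 + sigma_norm)

# Hypothetical pool whose preferred speeds (deg/s) skew toward faster values.
pool = np.logspace(0.5, 1.8, 50)   # roughly 3.2 to 63 deg/s
```

Because the pooled response to the faster component is larger here, its weight dominates, and the response of a neuron to the speed pair is pulled toward its response to the faster speed alone, mimicking the faster-speed bias reported in the abstract.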
Zachary Hamblin-Frohman; Jay Pratt Rapid development of inhibitory effects in response to novel features: It's mostly target-feature enhancement Journal Article In: Psychonomic Bulletin & Review, vol. 33, no. 7, pp. 1–10, 2026. @article{HamblinFrohman2026,In some visual search scenarios, the presence of a singleton distractor leads to faster search performance. This has been coined as the inhibition effect and is believed to represent avoidance of the singleton distractor. Research has identified two contributing components: a bias towards target features (target-feature enhancement) and a bias away from distractor features (distractor-feature suppression). The current study examines how each of these effects independently develops in response to novel stimuli. In short blocks, participants searched for a pre-defined target shape. In each block, the colours of the target and the distractor were randomized so that the initial and subsequent attentional adaptations to these features could be assessed (via eye-tracking). These mini-blocks reveal substantial information about the development of the inhibition effect. Strikingly, we observed the classic inhibition effect (shorter RTs on distractor-present trials) as soon as the second trial of each block. Furthermore, the effect emerged even if it was the first presentation of the distractor feature. Gaze analysis concurred: eyes avoided the distractor when the target feature was known but the distractor feature was unknown. This shows compelling evidence for guidance from target-feature enhancement. However, some evidence for distractor-feature suppression was also observed: further oculomotor suppression of the distractor emerged after its initial presentation. 
Together, the current results show that the inhibition effect develops rapidly in visual search displays, and that while a large portion of the effect can be accounted for by target-enhancement, distractor-suppression may still have a role in influencing attentional allocations. |
Patrick Haller; Cui Ding; Maja Stegenwallner-Schütz; David R. Reich; Iva Koncic; Silvia Makowski; Lena A. Jäger Replicate me if you can: Assessing measurement reliability of individual differences in reading across measurement occasions and methods Journal Article In: Cognitive Science, vol. 50, no. 1, pp. 1–50, 2026. @article{Haller2026,Psycholinguistic theories traditionally assume similar cognitive mechanisms across different speakers. However, more recently, researchers have begun to recognize the need to consider individual differences when explaining human cognition. An increasing number of studies have investigated how individual differences influence human sentence processing. Implicitly, these studies assume that individual-level effects can be replicated across experimental sessions and different assessment methods such as eye-tracking and self-paced reading. However, this assumption is challenged by the Reliability Paradox. Thus, a crucial first step for a principled investigation of individual differences in sentence processing is to establish their measurement reliability, that is, the correlation of individual-level effects across multiple measurement occasions and methods. In this work, we present the first naturalistic eye movement corpus of reading data with four experimental sessions from each participant (two eye-tracking sessions and two self-paced reading sessions). We deploy a two-task Bayesian hierarchical model to assess the measurement reliability of individual differences in a range of psycholinguistic phenomena that are well-established at the population level, namely, effects of word length, lexical frequency, surprisal, dependency length, and number of to-be-integrated dependents. 
While our results indicate high reliability across measurement occasions for the word length effect, it is only moderate for higher-level psycholinguistic predictors such as lexical frequency, dependency distance, and the number of to-be-integrated dependencies, and even low for surprisal. Moreover, even after accounting for spillover effects, we observe only low to moderate reliability at the individual level across methods (eye-tracking and self-paced reading) for most predictors, and poor reliability for predictors of syntactic integration. These findings underscore the importance of establishing measurement reliability before drawing inferences about individual differences in sentence processing. |
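The notion of measurement reliability used in this entry, correlating individual-level effect estimates across sessions, can be illustrated with a simulation. This toy uses simulated reaction times and ordinary per-subject regressions rather than the authors' two-task Bayesian hierarchical model; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_trials = 100, 200

# Hypothetical true individual-level frequency effects (ms per unit log frequency).
true_slope = rng.normal(-20.0, 5.0, n_subj)

def session_estimates(noise_sd):
    """Per-subject slope estimates from one simulated experimental session."""
    est = np.empty(n_subj)
    for s in range(n_subj):
        freq = rng.normal(0.0, 1.0, n_trials)                # centered log frequency
        rt = 250.0 + true_slope[s] * freq + rng.normal(0.0, noise_sd, n_trials)
        est[s] = np.polyfit(freq, rt, 1)[0]                  # fitted slope
    return est

# Reliability: correlation of individual effect estimates across two sessions.
r_clean = np.corrcoef(session_estimates(30.0), session_estimates(30.0))[0, 1]
r_noisy = np.corrcoef(session_estimates(150.0), session_estimates(150.0))[0, 1]
```

Noisier per-session estimates attenuate the across-session correlation even though the true individual differences are unchanged, which is one driver of the Reliability Paradox the abstract mentions.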
Carie Guan; Naomi Geller; Maya Mammon; Naiqi G. Xiao Infants recognized other-race faces when learning them with incidental emotional sounds Journal Article In: Developmental Psychobiology, vol. 68, no. 1, pp. 1–13, 2026. @article{Guan2026,Infant face recognition shows plasticity, with recent evidence indicating enhancement by the presence of emotional facial expressions. The mechanisms and domain-generality of this effect remain largely unknown. This study tested whether auditory emotional cues (vocalizations) facilitated infants' recognition of other-race faces, a perceptual challenge during the first year of life. Infants (N = 89) were presented with emotionally neutral faces paired with happy, sad, or neutral vocal sounds in a within-subjects design. Experiment 1 assessed recognition using identical face images between the familiarization and test phases, while Experiment 2 examined face recognition across viewpoint changes. Across both experiments, infants exhibited successful face recognition only when the faces were learned with emotional sounds (happy and sad). This facilitative effect remained stable across the tested age range and did not differ between happy and sad vocalizations. Infants' eye movement data revealed comparable face-looking patterns across conditions, suggesting that the facilitation was not driven by changes in visual attention. Thus, incidental, cross-modal emotional signals significantly enhance infant face recognition. This underscores the early integrative nature of emotion processing and its catalytic role in cognitive development. |
Matthias Grabenhorst; David Poeppel; Georgios Michalareas The anticipation of imminent events is time-scale invariant Journal Article In: PNAS, vol. 123, no. 2, pp. 1–11, 2026. @article{Grabenhorst2026,Humans predict the timing of imminent events to generate fast and precise actions, decisions, and other behaviors. Such temporal anticipation is critical over wide timescales, and especially salient over the range from hundreds of milliseconds to a few seconds. Despite advances in our understanding of basic timing behavior and its underlying neural mechanisms, it remains an open question whether anticipation is stable across these short time scales. Recent work shows that the brain models the probability density function (PDF) of events across time, suggesting a canonical mechanism for temporal anticipation. Here, we investigate whether this computation holds when the event distribution covers different time spans. We show that, irrespective of the time span, anticipation, measured as reaction time, scales with the event distribution. This demonstrates that the key computation—the estimation of event probability density—is invariant across temporal scales. We further show that the precision of anticipation is also scale invariant which contradicts Weber's law. The results are established in vision and audition, suggesting that the core computations in temporal anticipation are independent of sensory modality. Perceptual systems exploit probability estimation over time independently of temporal scale to anticipate imminent events. |
Skadi Gerkensmeier; Christina Bolte; Jan‐Ole Radecke; Feline Hamami; Andreas Sprenger; Christoph Helmchen; Robert Chen; Marcus Callister; Talyta Cortez Grippe; Christine Klein; Norbert Brüggemann; Tobias Bäumer; Alexander Münchau; Anne Weissbach Convergence deficits in myoclonus‐dystonia point to cerebellar impairment Journal Article In: Movement Disorders Clinical Practice, pp. 1–8, 2026. @article{Gerkensmeier2026,Background: Myoclonus-dystonia (M-D) is a monogenic movement disorder with proposed cerebellar dysfunction. Vergence eye movement deficits, characteristic of degenerative cerebellar disease, have not been studied in M-D. Cerebellar transcranial alternating current stimulation (tACS) is considered a potential therapeutic approach. Objectives: To assess vergence and prosaccade performance as markers of cerebellar dysfunction in M-D and to evaluate the effects of cerebellar 50 Hz tACS on these eye movements. Methods: Vergence and prosaccade performance were examined in 14 M-D patients carrying pathogenic SGCE variants and 14 healthy controls. A subgroup (n = 7) received real and sham 50 Hz cerebellar tACS in a randomized, double-blind design. Results: M-D patients showed prolonged latency and reduced gain of convergence compared to controls. Divergence did not differ between groups. Prosaccade peak velocity was reduced in M-D patients. 50 Hz cerebellar tACS showed no effect on eye movements. Conclusion: Impaired convergence supports cerebellar involvement in M-D. Further studies should identify affected pathways. |
Zhushi Fu; Xiaotong Ding; Yutao Lu; Cai Xing Physiological evidence supporting the emotional motivation account of the ending effect: Pupil diameters increase toward the end Journal Article In: Journal of Gambling Studies, pp. 1–15, 2026. @article{Fu2026,The phenomenon of increased risk-taking in the last round of a set of risky decision-making tasks is called the ending effect. Recent empirical studies proposed an emotional motivation account to explain the ending effect. That is, the pursuit of an emotionally satisfying ending leads to increased risk-taking. However, previous studies have mostly examined the ending effect at the behavioral level; there is as yet no physiological evidence bearing on the emotional motivation account. To fill this gap, the current study examined the emotional motivation account at the physiological level by recording pupil diameters, which reflect the activation of emotional motivation. Participants were randomly assigned to complete eight rounds or ten rounds of risk decision tasks while having their eyes tracked. The results showed a significant interaction between round and group on pupil diameter. Specifically, there was no significant difference between the first six rounds and the 8th round in the experimental group. For the control group, the pupil diameter of the first six rounds was significantly larger than the 8th round. Perceiving the ending may have sustained emotional arousal. This finding provides qualified physiological support for the emotional motivation account of the ending effect. |
Anne Friede; Albrecht Inhoff; Christian Vorstius; Ralph Radach Word difficulty determines the accuracy of regressive saccades in reading Journal Article In: Psychonomic Bulletin & Review, vol. 33, no. 1, pp. 1–13, 2026. @article{Friede2026,The current experiment was conducted to study effects of lexical word difficulty on the control of long-range regressive saccades. Participants read single-line sentences in German for comprehension and checked for a spelling error that was inserted when the eyes had reached the end of the line. When words were more difficult in terms of orthographic irregularity and lower frequency, the accuracy of regressions back to these words dramatically increased. If the target was missed, fewer additional saccades and less time were needed until the eyes fixated the target word. The data suggest that more effortful word processing is related to a better representation in visual–spatial memory, enabling more effective programming of regressions. |
Gabrielle F. Freitag; Shannon Shaughnessy; Jennifer M. Meigs; Parmis Khosravi; Julia O. Linke; Spencer C. Evans; Ellen Leibenluft; Melissa A. Brotman; Daniel S. Pine; Katharina Kircanski; Elise M. Cardinale An investigation of inhibitory control as a mechanism differentiating tonic and phasic irritability Journal Article In: Child Psychiatry & Human Development, pp. 1–11, 2026. @article{Freitag2026,Phasic and tonic irritability are highly correlated clinical constructs yet differentially associated with developmental trajectories and treatment response. However, limited research has identified their shared and unique underlying behavioral mechanisms. In a sample of youths enriched for irritability (N = 141, age range 7–18, age M[SD] = 12.60[2.54], 48.23% female), we investigated whether inhibitory control is differentially associated with phasic versus tonic irritability. Replicating prior work, tonic and phasic irritability were estimated via independent confirmatory factor analyses (CFAs) using items and/or subscales from multi-informant questionnaires. A latent factor of inhibitory control was extracted from four behavioral tasks. Initial multiple linear regression analysis found that phasic, not tonic, irritability was significantly associated with impaired inhibitory control. However, results were no longer significant after accounting for shared associations with age. In addition, when adding commonly co-occurring symptoms such as attention-deficit/hyperactivity disorder (ADHD) symptoms and oppositionality, age and ADHD were significant predictors of inhibitory control, but phasic irritability was not. Results suggest that inhibitory control alone may not be a salient mechanism for disambiguating phasic and tonic irritability. Future work leveraging longitudinal methods and consideration of other potential contextual factors is needed. |
Wei Fang; Naiqi G. Xiao Emotional consistency guides social engagement in 18- to 24-month-old toddlers Journal Article In: Child Development, pp. 1–14, 2026. @article{Fang2026,This study investigated toddlers' sensitivity to emotional consistency and its influence on social engagement. Sixty-eight toddlers of diverse ethnic backgrounds (39 females; 338–908 days old; 79.4% White; data collected in 2024) watched videos depicting adults expressing emotions toward novel objects. The expression valence was either consistent (e.g., always positive toward Object A) or inconsistent (e.g., both positive and negative toward Object A). Eighteen- to 24-month-olds exhibited distinct looking when learning the consistent versus inconsistent informants (Cohen's d = 0.42) and showed greater sustained gaze following toward the emotionally consistent informants (Hedges' g = 0.45). Twelve- to 18-month-olds did not differentiate between conditions. These data suggest that detecting and utilizing emotional consistency as a cue for social engagement develops during the second year of life. |
Anne Françoise Chambrier; Philippe Terrier; Paolo Ruggeri; David Müller; Myrto Atzemian; Catherine Thevenot; Marco Pedrotti Eye movements when reading Arabic numbers in sentences Journal Article In: Acta Psychologica, vol. 262, pp. 1–11, 2026. @article{Chambrier2026,We examined eye movements in 49 adults as they read aloud or silently rounded and non-rounded Arabic numbers embedded in texts. We compared the patterns of eye movements to those obtained when participants read words and pseudowords matched in length to the numbers. The results revealed that non-rounded numbers elicited more fixations, longer fixation durations, and an increased number of saccades with shorter amplitudes compared to words, with pseudowords and rounded numbers falling in between. This reflects the cognitively demanding step-by-step processing required for number reading. However, this effect was moderated for non-rounded numbers in silent reading, suggesting that without an oralization requirement, participants engaged in more superficial reading. This interpretation was further supported by a higher error rate on a comprehension task administered after reading when the questions were related to the magnitude of the numbers read. Additionally, participants made more leftward saccades when reading numbers compared to words and pseudowords, indicating that despite numbers being oralized from left to right, they must be, to some extent, scanned from right to left to determine the value and therefore the denomination of the various digits. These findings shed light on the cognitive mechanisms underlying number reading. |
Frances G. Cooley; Karen Emmorey; Emily Saunders; Elizabeth R. Schotter In: Behavior Research Methods, vol. 58, no. 1, pp. 1–14, 2026. @article{Cooley2026,Eye-tracking corpora have advanced our understanding of reading processes by providing large-scale datasets of naturalistic reading behavior. However, existing corpora have almost exclusively sampled from typically hearing readers of spoken languages. Here, we present the Signers' Eye-movements in English Reading (SEER) Corpus, a dataset of eye-movement behaviors from 41 skilled deaf adult readers who are early signers of American Sign Language (ASL), as well as a comparative group of 101 typically hearing monolingual English readers. Participants read 200 English sentences presented one at a time. In addition to eye-tracking data, the corpus includes detailed participant information: a standardized measure of reading proficiency, spelling recognition, and nonverbal intelligence for all participants. Information for the deaf participants includes ASL comprehension scores, age of ASL acquisition, and phonological awareness scores (for a subset of participants). We report comparative analyses of reading behaviors at both the word level and sentence level. We also examine group differences in the effects of word length, frequency, and surprisal on local measures. The results indicate stronger effects of length and surprisal, but equivalent frequency effects (on content words) for deaf compared to hearing readers. The SEER Corpus offers researchers the opportunity to test hypotheses about reading development and efficiency in bimodal bilinguals who are first language users of ASL and skilled readers of English, supporting broader investigations of visual language processing. The corpus is preregistered and publicly available (https://doi.org/10.17605/OSF.IO/7P4F2) to facilitate replication, cross-study comparisons, and exploration of preliminary hypotheses in this understudied population. |
Olympia Colizoli; Tessa M. Leeuwen; Danaja Rutar; Harold Bekkering Pupil dilation offers a time-window on prediction error Journal Article In: eLife, vol. 14, pp. 1–44, 2026. @article{Colizoli2026,Task-evoked pupil dilation is notably linked to unexpected events. Building on Zénon's (2019) information-theory framework, we investigated whether the pupil's response to feedback on decision outcomes during associative learning reflects a prediction error signal. Operationally, we defined prediction errors as an interaction between stimulus-pair frequency and accuracy. We then tested if these signals correlated with information gain, formally defined as the Kullback-Leibler (KL) divergence between posterior and prior belief distributions of an ideal observer. We reasoned that information gain should be proportional to the precision-weighted prediction error signals potentially arising from neuromodulatory arousal networks. We analyzed two data sets in which participants performed perceptual decision-making tasks while pupil dilation was recorded. Our findings consistently showed that a significant proportion of variability in the post-feedback pupil response was explained by information gain shortly after feedback presentation. For the first time, we present evidence that whether the pupil dilates or constricts along with information gain was context dependent. This study offers empirical evidence that the pupil's response provides valuable insights into the process of model updating during learning, highlighting its utility as a physiological indicator of internal belief states. |
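A brief aside on the information-gain measure named in the entry above: the Kullback-Leibler divergence of a posterior belief distribution P from a prior Q over hypotheses h follows the standard definition below (general background, not a formula quoted from the paper itself):

```latex
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{h} P(h)\,\log\frac{P(h)}{Q(h)}
```

Information gain after feedback is this divergence between the ideal observer's post-feedback (posterior) and pre-feedback (prior) beliefs; it is zero when feedback leaves beliefs unchanged and grows with the size of the belief update.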
Yue Cheng; Weizhen Chen In: Buildings, vol. 16, no. 1, pp. 1–23, 2026. @article{Cheng2026,Sacred heritage landscapes face significant challenges in engaging Generation Z tourists. To understand their visual processing and emotional responses, this study, grounded in Cognitive Appraisal Theory (CAT), employed a mixed-methods approach with Chinese youth. Study 1 (N = 35) uses eye-tracking to examine the visual attention of Gen Z to different sacred heritage types, revealing that natural sacred sites yield the highest First Fixation Duration (FFD) and Average Fixation Duration (AFD), alongside stronger subjective preferences—highlighting the role of biophilia and perceptual fluency. Study 2 constructs a moderated mediation model with a questionnaire (N = 300), identifying a “Novelty → Awe → Place Attachment” pathway and the moderating role of mindfulness. The research identifies the specific visual processing patterns of Gen Z and provides a psychological model for place attachment, offering empirical insights for designing intergenerationally inclusive heritage landscapes. |
Jui-Tai Chen; Yi-Hsuan Chang; Cesar Barquero; Chin-An Wang Pupil dynamics reveal preparatory processes in the generation of pro-saccades and anti-saccades in open skill sports athletes Journal Article In: Biology of Sport, vol. 43, pp. 77–94, 2026. @article{Chen2026,This study investigated pupil dynamics to establish a physiological index of mental processes associated with executive functioning, enabling objective evaluation of cognitive load during training to improve understanding of cognitive control in sport-specific contexts. Using video-based eye-tracking, we examined pupil and saccade responses in athletes (N = 40) and non-athletes (N = 40) performing an interleaved pro-saccade and anti-saccade task. In this task, participants were instructed prior to target appearance to either make a reflexive saccade toward the target (pro-saccade) or inhibit that response and generate a voluntary saccade in the opposite direction (anti-saccade). Larger pupil dilation prior to target onset was observed during anti-saccade compared to pro-saccade preparation (p < 0.001, ηp² = 0.153). Athletes showed reduced pupil dilation compared to non-athletes (p < 0.05, ηp² = 0.049). In addition, trials with larger pupil dilation and smaller tonic pupil sizes were associated with faster saccade reaction times. Pupil dilation also positively correlated with saccade peak velocities but showed no association with saccade endpoint accuracy. These findings suggest that athletes may engage in more efficient motor preparation, as reflected by reduced pupil dilation. Moreover, phasic pupil dilation, indexing cognitive load, and tonic pupil size, associated with arousal level, both contributed to the control of saccade dynamics during goal-directed movements. Together, these results highlight the utility of pupil size as an objective and informative index for assessing both cognitive and arousal functions in sports science research. |
Francesca Carbone; Abigail Pitt; Angela Nyhout; Stacie Friend; Murray Smith; Heather J. Ferguson Art opening minds: An experimental study on the effects of temporal and perspectival complexity in film on open-mindedness Journal Article In: Quarterly Journal of Experimental Psychology, vol. 79, no. 1, pp. 102–123, 2026. @article{Carbone2026,Aesthetic Cognitivism posits that artworks have the potential to enhance open-mindedness. However, this claim has not yet been explored empirically. Here, we present two experiments that investigate the extent to which two formal features of the film – temporal and perspectival complexity – can ‘open our minds'. In Experiment 1, we manipulated the temporal complexity of the film. Participants (Ntotal = 100) watched a film (Memento) either in its original non-chronological order or the same film in chronological order. In Experiment 2, we manipulated perspectival complexity in film. Participants (Ntotal = 100) watched an excerpt from a film (Jackie Brown) that either included the perspectives of multiple characters on an event or a single character's perspective on the same event. Film conditions in both experiments were further compared with a control condition in which participants did not watch a film (N = 50). Participants' open-mindedness was assessed in both experiments through four empirical indicators (creativity, imaginability, cognitive flexibility, openness to new evidence) and in Experiment 2, participants' eye movements, heart rate and electrodermal activity were measured while watching the film. Results showed that watching films, regardless of their temporal or perspectival complexity, modulated only one facet of open-mindedness – cognitive flexibility – when compared to the no-film control condition, providing only limited support for the aesthetic cognitivist claim that artistic films can ‘open our minds'. 
Real-time measures in Experiment 2 revealed that pupil size and number of fixations were modulated by perspectival complexity: both were smaller when watching a film from multiple perspectives compared to a single perspective. Possible explanations for this difference are examined in relation to the viewers' cognitive processes involved in understanding and interpreting film content. |
Huarui Cao; Lin Mu; Xuejiao Mao; Tang Yao Effect of tourism architecture shape and self-construal Journal Article In: Annals of Tourism Research, vol. 116, pp. 1–17, 2026. @article{Cao2026,The issue of whether tourists with varying characteristics exhibit distinct preferences for diverse geometric shapes of architecture remains underexplored in tourism literature. To address this gap, we drew on aesthetic distance theory and conducted eye-tracking and scenario-based experiments to examine the effect of tourism architecture shape in alignment with tourist self-construal. Our findings indicated that independent self-construal tourists favor circular-shaped architecture, while interdependent self-construal tourists prefer angular-shaped architecture. Additionally, the results confirmed the mediating role of novelty and highlighted architectural authenticity as a moderator in this effect. These insights enhance our understanding of aesthetic preferences for tourism architecture among tourists with different self-construals and provide practical recommendations for the tourism industry to tailor specific architectural shapes to increase tourists' travel intentions. |
Mark W. Becker; Andrew Rodriguez; Derrek T. Montalvo; Chad Peltier Reducing the low-prevalence effect with probe trials Journal Article In: Cognitive Research: Principles and Implications, vol. 11, no. 1, pp. 1–19, 2026. @article{Becker2026,As targets become rare in visual search tasks, the likelihood of missing them increases—a phenomenon known as the low-prevalence effect (LPE). This has important implications for real-world searches, but reducing the LPE has proven challenging. In Experiment 1, we used a low-prevalence T-among-Ls task and found that distributing “probe” trials—trials with known targets and post-response feedback—reduced the LPE. In Experiment 2, participants searched for two low-prevalence targets (T and O among Ls and Qs), and we varied how often each appeared in probe trials. The probe benefit scaled with the frequency of the matching target, suggesting limited generalizability to non-probed targets. Experiment 3 used eye tracking to examine whether probes affected quitting thresholds, decision criteria, or guidance. Results showed that probes biased top-down guidance toward features of frequently probed targets, without affecting the number of items inspected or the decision criterion. In Experiment 4, we tested whether feedback was necessary for the probe benefit. Findings suggest that probes improve rare-target search by altering perceived prevalence, not through feedback alone. Overall, probes may reduce the LPE by increasing perceived prevalence and thereby increasing search guidance, but only when probe targets closely match actual search targets. |
Valentina Apresjan; Alexander V. Orlov; Kirill Koncha; Vladislava Staroverova; Anastasia Lopukhina Metaphor in the mental lexicon: Investigating different types of polysemy via eye-tracking and behavioral experiments Journal Article In: Metaphor and Symbol, vol. 41, no. 1, pp. 5–38, 2026. @article{Apresjan2026,This study investigates the mental representation and processing of the two types of metaphorical senses in Russian polysemous verbs and adjectives using eye-tracking, sensicality judgment, and semantic clustering tasks. The metaphorical senses under study differ in their semantic proximity to the literal sense, with “proximal” metaphors (e.g. “raise prices”) retaining more semantic components, and “distal” metaphors being semantically bleached (e.g. “raise alarm”). Metaphors differed in their mental representations and processing patterns based on semantic proximity and part of speech. In semantic clustering, proximal metaphors were miscategorized with literal senses more often than distal metaphors. Proximal metaphors in adjectives were more often miscategorized with literal senses, while in verbs they were miscategorized with distal metaphors. In sensicality judgment, verbs showed longer reaction times for proximal metaphors, while adjectives demonstrated higher accuracy for distal metaphors compared to literal senses. In eye-tracking, adjectival distal metaphors triggered more regressions on disambiguating nouns than literal senses. Our findings suggest that distal metaphors are stored and processed as distinct, non-compositional units, while proximal metaphors overlap with literal senses and are processed compositionally. Proximal metaphors in adjectives are closer to literal senses, while in verbs they are closer to distal metaphors, explained by different semantic derivation mechanisms. |
Matt D. Anderson; Emily A. Cooper; Jorge Otero-Millan A method for measuring closed-loop latency in gaze-contingent rendering without extra equipment Journal Article In: Behavior Research Methods, vol. 58, no. 1, pp. 1–12, 2026. @article{Anderson2026,In gaze-contingent rendering, the visual stimulus rendered on a display changes based on where the observer is looking. This technique allows researchers to achieve dynamic control over stimulus placement on the retina in the presence of eye movements and is often used to investigate how sensory processing and perception vary across the visual field. Precise stimulus placement using gaze-contingent rendering depends on minimizing the temporal latency between a change in the observer's gaze position, measured using an eye tracker, and the corresponding change to the stimulus. This latency, however, can be challenging to measure reliably. Here, we present a simple method for measuring system latency that requires no additional hardware beyond the eye tracker and display, which are already part of the gaze-contingent system. Two small circles are rendered on the display to simulate the appearance of two pupils. The eye tracker is pointed towards the display to record both pupils simultaneously. One pupil is drawn based on a pre-determined trajectory, for example, moving up and down at a constant speed. The second pupil is “gaze-contingent”: it is drawn based on the measured position of the first pupil. The time-lag at which the position of the second pupil matches the first pupil gives the closed-loop latency of the entire system. To validate this method, we added artificial rendering delays to our system and produced measured latencies that precisely corresponded to predictions, given the refresh rate of the display. This method provides a simple, low-cost way of precisely quantifying gaze-contingent rendering latencies, with no additional hardware required. |
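The latency-measurement procedure summarized in the entry above reduces to finding the time shift that best aligns a known trajectory with its gaze-contingent copy. A minimal illustrative sketch of that alignment step follows; all names, the sampling rate, and the injected delay are hypothetical stand-ins, not values from the paper:

```python
# Hypothetical sketch of the lag-estimation idea: one "pupil" follows a
# known trajectory, the second is a delayed copy (the gaze-contingent one),
# and the closed-loop latency is the shift that best aligns the two traces.
import math

def estimate_lag(reference, follower, max_lag):
    """Return the shift (in samples) minimizing the mean squared
    difference between the follower trace and the shifted reference."""
    best_lag, best_err = 0, float("inf")
    for lag in range(max_lag + 1):
        pairs = zip(reference[:len(reference) - lag], follower[lag:])
        err = sum((a - b) ** 2 for a, b in pairs) / (len(reference) - lag)
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag

# Simulate: reference pupil moves sinusoidally at 1000 samples/s; the
# gaze-contingent pupil reproduces it with a 27-sample rendering delay.
fs, true_delay = 1000, 27
reference = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(2000)]
follower = [0.0] * true_delay + reference[:-true_delay]

lag = estimate_lag(reference, follower, max_lag=100)
latency_ms = lag * 1000 / fs
print(lag, latency_ms)  # recovers the injected 27-sample (27 ms) delay
```

In a real setup, the reference trace would be the pre-determined pupil trajectory and the follower trace the eye tracker's measurement of the gaze-contingent pupil; the recovered lag multiplied by the sample period gives the closed-loop latency.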
2025 |
Luan Zimmermann Bortoluzzi; Estêvão Carlos-Lima; Gabriela Mueller de Melo; Melissa Hongjin Song Zhu; Gustavo Rohenkohl Presaccadic attentional shifts are not modulated by saccade amplitude Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–10, 2025. @article{ZimmermannBortoluzzi2025,Humans constantly explore the visual environment through saccades, bringing relevant visual stimuli to the center of the gaze. Before the eyes begin to move, visual attention is directed to the intended saccade target. As a consequence of this presaccadic shift of attention (PSA), visual perception is enhanced at the future gaze position. PSA has been investigated in a variety of saccade amplitudes, from microsaccades to locations that exceed the oculomotor range. Interestingly, recent studies have shown that PSA effects on visual perception are not equally distributed around the visual field. However, it remains unknown whether the magnitude of presaccadic perceptual enhancement varies with the amplitude of the saccades. Here, we measured contrast sensitivity thresholds during saccade planning in a two-alternative forced-choice (2AFC) discrimination task in human observers. Filtered pink noise (1/f) patches, presented at four eccentricities and scaled in size according to the cortical magnification factor, were used as visual targets. This method was adopted to mitigate well-known eccentricity effects on perception, thereby enabling us to explore the effects associated with saccade amplitudes. First, our results show that saccade preparation enhanced contrast sensitivity in all tested eccentricities. Importantly, we found that this presaccadic perceptual enhancement was not modulated by the amplitude of the saccades. These findings suggest that presaccadic attention operates consistently across different saccade amplitudes, enhancing visual processing at intended gaze positions regardless of saccade size. |
Cong Zheng; Qifan Wang; He Cui Continuous sensorimotor transformation enhances robustness of neural dynamics to perturbation in macaque motor cortex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–17, 2025. @article{Zheng2025a,Neural activity in the motor cortex evolves dynamically to prepare and generate movement. Here, we investigate how motor cortical dynamics adapt to dynamic environments and whether these adaptations influence robustness against disruptions. We apply intracortical microstimulation (ICMS) in the motor cortex of monkeys performing delayed center-out reaches to either a static target (static) or a rotating target (moving) that required interception. While ICMS prolongs reaction times (RTs) in the static condition, it does not increase RTs in the moving condition, correlating with faster recovery of neural population activity post-perturbation. Neural dynamics suggests that the moving condition involves ongoing sensorimotor transformations during the delay period, whereas motor planning in the static condition is completed shortly. A neural network model shows that continuous feedback input rapidly corrects perturbation-induced errors in the moving condition. We conclude that continuous sensorimotor transformations enhance the motor cortex's resilience to perturbations, facilitating timely movement execution. |
Tianze Zhang; Yue Xi The influences of image entropy and text direction on consumer attention: Insights from eye-tracking studies Journal Article In: Psychology & Marketing, vol. 42, no. 12, pp. 3266–3287, 2025. @article{Zhang2025m,As visual content is increasingly prioritized by social media platforms, the effective interplay between image and text is critical for capturing consumer attention. This research aims to investigate the effects of two novel visual cues—image entropy (disorder) and text direction—and presents the concept of an image–text fit effect. Through three eye-tracking experiments, we demonstrate that high-entropy (vs. low-entropy) images and vertical (vs. horizontal) text direction significantly increase consumer attention. We identify a “feeling right” sense as the underlying psychological mechanism, which can be explained via time perception association. Furthermore, we examine the moderating effect of emoji intensity in social media communications on capturing consumer attention. These findings increase the theoretical understanding of visual marketing and provide actionable strategies for practitioners. |
Hao Zhang; Yiqing Hu; Yang Li; Shuangyu Zhang; Xiao Li Li; Chenguang Zhao Simultaneous dataset of brain, eye and hand during visuomotor tasks Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–15, 2025. @article{Zhang2025f,Visuomotor integration is a complex skill set encompassing many fundamental abilities, such as visual search, attention monitoring, and motor control. To explore the dynamic interplay between visual inputs and motor outputs, it is necessary to simultaneously record multiple brain activities with high temporal and spatial resolution, as well as to record implicit and explicit behaviors. However, there is a lack of public datasets that provide simultaneous multiple modalities during a visual-motor task. Recording brain activity simultaneously with functional near-infrared spectroscopy and electroencephalography facilitates more precise capture of the complex brain mechanisms of visuomotor integration. Additionally, by employing a combined eye movement and manual response, it is possible to fully evaluate the effects of visuomotor outputs from implicit and explicit dimensions. We recorded whole-brain EEG (34 electrodes) and fNIRS (44 channels) covering the frontal and parietal cortex along with eye movements, behavior sampling, and operant behavior. The dataset underwent rigorous synchronization and quality control to highlight the effectiveness of our experiments and to demonstrate the high quality of our multimodal data framework. |
Zhao Zeng; Ce Zhang; Yue Xu; Hua He; Yong Gu Distinct neural population code and causal roles of primate caudate nucleus in multimodal decision-making Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–16, 2025. @article{Zeng2025b,Perceptual decision-making involves distributed networks spanning both association cortices and subcortical areas. A fundamental question is whether such a network is highly redundant, or each node is distinct with unique function. Using a visuo-vestibular decision-making task, here we show the subcortical caudate nucleus (CN) of male primates displays distinct population code compared to association cortices along the modality dimension. Specifically, in a low-dimensional state subspace, neural trajectory in the frontal and posterior-parietal association cortical activity during multimodal-stimulus condition evolves along the visual trajectory, whereas along the vestibular trajectory in the CN. We then show CN population activity is consistent with the animal's behavioral strategy employed within a generalized drift-diffusion framework. Importantly, causal-link experiments, including application of GABAa-receptor agonist, D1-receptor antagonist, and electrical microstimulation, further confirmed CN's critical contributions to perceptual behavior. Our results confirm CN's vital importance to decision making in complex environments with multimodal information. |
Zinong Yang; Stephanie D. Williams; Ewa Beldzik; Stephanie Anakwe; Emilia Schimmelpfennig; Laura D. Lewis Attentional failures after sleep deprivation are locked to joint neurovascular, pupil and cerebrospinal fluid flow dynamics Journal Article In: Nature Neuroscience, pp. 2526–2536, 2025. @article{Yang2025e,Sleep deprivation rapidly disrupts cognitive function and in the long term contributes to neurological disease. Why sleep deprivation has such profound effects on cognition is not well understood. Here we use simultaneous fast fMRI–EEG to test how sleep deprivation modulates cognitive, neural and fluid dynamics in the human brain. We demonstrate that attentional failures during wakefulness after sleep deprivation are tightly orchestrated in a series of brain–body changes, including neuronal shifts, pupil constriction and cerebrospinal fluid (CSF) flow pulsations, pointing to a coupled system of fluid dynamics and neuromodulatory state. CSF flow and hemodynamics are coupled to attentional function within the awake state, with CSF pulsations following attentional impairment. The timing of these dynamics is consistent with a vascular mechanism regulated by neuromodulatory state. The attentional costs of sleep deprivation may thus reflect an irrepressible need for rest periods driven by a central neuromodulatory system that regulates both neuronal and fluid physiology. |
Yu Fang Yang; Matthias Gamer Facial features associated with fear and happiness attract gaze during brief exposure without enhancing emotion recognition Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–15, 2025. @article{Yang2025d,Facial features transmit emotions but their effect on visual orienting and explicit emotion recognition is debated. Here we examined whether fixating on diagnostic features of emotional expressions—such as eye region for fear and the mouth for happiness—affects saccadic targeting and improves recognition accuracy. Across two pre-registered experiments, participants viewed fearful, happy, and neutral faces for short intervals (50 or 150 ms) while the initial fixation location was manipulated. Although such brief stimulation does not allow for visual exploration, the faces still elicited reflexive saccades that occurred after stimulus offset. These saccades were modulated by the emotional expressions indicating a consistent preferential saccadic orienting towards diagnostic features, even with limited exposure. As this effect disappeared for inverted faces, it can be attributed to an extrafoveal processing of facial features instead of an attentional orienting towards physically salient image regions. Participants' recognition accuracy was unaffected by the foveated facial feature, but this observation might also be due to ceiling effects in performance. Collectively, these findings contribute to understanding the attentional mechanisms of feature-based processing in the perception of emotional facial expressions. |
Xiaojuan Xue; Gilles Pourtois Neurophysiological evidence for emotional attention modulation depending on goal relevance Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–16, 2025. @article{Xue2025b,Although threat-related stimuli can capture attention automatically, recent findings have challenged this assumption by showing that goal rather than threat can be prioritized and eventually guide attentional control. In this study, we used high density electroencephalography (EEG) in 40 participants while peripheral emotional faces (either fear or happiness) were either goal-relevant or irrelevant during a dot-probe task (DPT). The use of peripheral vision was established by eye-tracking. Both the face specific N170 component and the subsequent Early Posterior Negativity (EPN) were enhanced by fear at the cue level, yet the latter one only when fear was goal relevant. Importantly, we found that early on following target onset at the P1 level, both value and goal relevance drove spatial attention and interacted with each other such that when they were goal-relevant, fearful faces captured attention less than when they were not. These results suggest that emotional attention is flexible and can be influenced by the goal relevance of emotion. Moreover, they shed light on the electrophysiological manifestations of this flexibility and dovetail with the assumption that sensory gain control effects, which occur in the visual cortex depending on attentional control, are multiplexed and determined by both value and goal. |
Jia-Jie Xu; Jun-Yi Chen; Hong-Zhou Xu; Zhiwei Zheng; Jing Yu The role of inhibitory function in associative memory among older adults and its plasticity Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–20, 2025. @article{Xu2025,Associative memory deteriorates with age. One possible reason for this associative memory deficit in older adults is a decline in inhibitory function. However, it remains unclear what role inhibitory function plays in age-related associative memory deficits, and whether and how acute training of inhibitory function could ameliorate the detrimental effects of inhibitory deficits on associative memory in older adults. In Experiment 1, 80 participants (40 younger and 40 older adults) studied scene-word pairs while attempting to inhibit interfering words during encoding, with two conditions: gist and non-gist interferences. In Experiment 2, 66 older adults were randomly assigned to either acute inhibitory training or a control group, and eye-tracking technology was used to capture the benefits of acute inhibitory training. Results showed that older adults were more disturbed by gist than non-gist interferences because of hyper-binding, and that inhibitory function mediated the relationship between age and associative memory accuracy. Notably, although acute inhibitory training did not significantly improve associative memory accuracy in the training group compared to the control group, structural equation modeling showed that older adults in the acute training group showed shorter fixation durations and lower frequencies in the interference region of interest, leading to better associative memory. These results indicate that inhibitory function plays a mediating role in age-related associative memory decline and that this mediating role shows plasticity. This provides a potential pathway to improve associative memory in older adults. |
Jackie Wai Yi Wo; Weiyan Liao; Janet Hui Hsiao Impact of mask use on facial emotion recognition in individuals with subclinical social anxiety: An eye-tracking study Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–18, 2025. @article{Wo2025,Previous studies suggested that social anxiety is associated with interpretation bias, theory of mind deficit, and eye gaze avoidance when identifying facial emotions. We tested the hypothesis that socially anxious individuals would be more affected by mask use during facial emotion recognition. 88 healthy undergraduates with various levels of social anxiety were invited. Participants judged the emotions of masked and unmasked facial expressions. Eye Movement Analysis with Hidden Markov Models was used to analyze participants' eye movement patterns during the task. Potential confounders including gender, depressive symptoms, stress, and executive planning ability were controlled for in the analyses. Results failed to support our hypothesis. Instead, higher social anxiety was associated with higher accuracy rates for angry and fearful faces and lower false alarm rates for sad faces. Eye movement patterns were similar across social anxiety levels. Interestingly, an exploratory moderation analysis revealed that an increase in using a more eye-centered strategy due to mask use was significantly associated with a larger drop in accuracy rate for fearful faces among individuals with higher social anxiety, while non-significantly associated with a smaller drop among individuals with lower social anxiety. Thus, our study indicates social anxiety, at least at subclinical levels, may be associated with a generally heightened sensitivity to negative emotions. However, such heightened sensitivity diminishes if they switch to a more eye-centered strategy when viewing masked facial emotions. Potential mechanisms and implications were discussed. |
Iris Wiegand; Mariska Pouderoijen; Joukje M. Oosterman; Kay Deckers; Gernot Horstmann Contributions of distractor dwelling, skipping, and revisiting to age differences in visual search Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–28, 2025. @article{Wiegand2025,Visual search becomes slower with aging, particularly when targets are difficult to discriminate from distractors. Multiple distractor rejection processes may contribute independently to slower search times: dwelling on, skipping of, and revisiting of distractors, measurable by eye-tracking. The present study investigated how age affects each of the distractor rejection processes, and how these contribute to the final search times in difficult (inefficient) visual search. In a sample of Dutch healthy adults (19–85 years), we measured reaction times and eye-movements during a target present/absent visual search task, with varying target-distractor similarity and visual set size. We found that older age was associated with longer dwelling and more revisiting of distractors, while skipping was unaffected by age. This suggests that increased processing time and reduced visuo-spatial memory for visited distractor locations contribute to age-related decline in visual search. Furthermore, independently of age, dwelling and revisiting contributed more strongly to search times than skipping of distractors. In conclusion, under conditions of poor guidance, dwelling and revisiting have a major contribution to search times and age-related slowing in difficult visual search, while skipping is largely negligible. |
Bayley M. Wellons; Christopher N. Wahlheim Misinformation reminders enhance belief updating and memory for corrections: The role of attention during encoding revealed by eye tracking Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–22, 2025. @article{Wellons2025,Misinformation exposure can cause inaccurate beliefs and memories. These unwanted outcomes can be mitigated when misinformation reminders—veracity-labeled statements that repeat earlier-read false information—appear before corrections with true information. The present experiment used eye tracking to examine the role of attention while encoding corrective details in the beneficial effects of reminder-based corrections. Participants read headlines in a belief-updating task that included a within-subjects manipulation of correction format. They first rated the familiarity and veracity of true and false headlines (Phase 1). Then, they read true headlines that corrected false headlines or affirmed true headlines (Phase 2). The true headlines appeared (1) without veracity labels, (2) with veracity labels, or (3) with misinformation reminders and veracity labels. Finally, participants re-rated the veracity of the Phase 1 headlines and rated their memory for whether those headlines were corrected in Phase 2 (Phase 3). Reminder-based corrections led to the greatest reduction in false beliefs, best high confidence recognition of corrections, and earliest eye fixations to the true details of corrections during encoding in Phase 2. Corrections remembered with the highest confidence rating were associated with more and earlier fixations to true details in correction statements in Phase 2. Collectively, these results suggest that misinformation reminders directed attention to corrective details, which improved encoding and subsequent memory for veracity information. 
These results have applied implications in suggesting that optimal correction formats should include features that direct attention to, and thus support encoding of, the contrast between false and true information. |
Ágnes Welker; Orsolya Pető-Plaszkó; Luca Verebélyi; Ferenc Gombos; István Winkler; Ilona Kovács Neurodiversity in mental simulation: Conceptual but not visual imagery priming modulates perception across the imagery vividness spectrum Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Welker2025,Mental simulation—the ability to internally model sensory, conceptual, or future events—may include mental imagery as a component, with considerable individual variability in its vividness and dependence on sensory detail. While self-reports have been widely used to assess imagery, they are subjective and prone to bias. Among more objective methods, imagery priming in binocular rivalry has been employed to investigate the influence of mental imagery on perception, but findings have been ambiguous. Here, we introduce a no-report version of the task, using eye-tracking-based optokinetic nystagmus assessment to provide a more reliable measure of perceptual shifts. In addition to visual imagery priming, we introduce conceptual priming, which does not rely on sensory imagery but engages abstract representations. In visual imagery priming, perceptual modulation correlated with self-reported vividness, and participants with low vividness did not show modulatory effects. However, in conceptual priming, effects were observed across the entire vividness spectrum, demonstrating that both concrete sensory-based and abstract conceptual representations can influence perception. These findings challenge purely sensory accounts of mental imagery. We propose avoiding deficit-based terms such as “aphantasia” and advocate for a neuroaffirmative perspective on mental simulation diversity. |
Béla Weiss; Annamária Manga; Ádám Nárai; Adél Bihari; Judit Zsuga; Zoltán Vidnyánszky Reward boosts cognitive control during working memory maintenance Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Weiss2025,Working memory (WM) involves short-term maintenance and manipulation of goal-relevant information, with cognitive control playing a crucial role in these processes due to WM's limited capacity. Pupillometry studies show distinct pupillary changes for WM stages, reflecting cognitive effort and load. Motivational incentives enhance WM performance by potentially improving encoding, maintenance, or retrieval, though the specific components influenced by reward remain unclear. This study specifically tested whether reward modulates cognitive control processes during WM maintenance using pupillometry. Participants performed a delayed-estimation orientation WM task with reward cues indicating reward levels at the beginning of trials. The results revealed that motivational incentives significantly improved WM performance and increased pupillary dilation during maintenance. These findings provide evidence for the modulation of WM maintenance by reward through enhanced top-down cognitive control processes. |
Hanliang Wei; Tak Lam; Weijian Liu; Waxun Su; Zheng Wang; Qiandong Wang; Xiao Lin; Peng Li Initial and sustained attentional bias toward emotional faces in patients with major depressive disorder Journal Article In: Journal of Eye Movement Research, vol. 18, no. 6, pp. 72, 2025. @article{Wei2025,Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (n = 61) versus healthy controls (HC). |
Sara Jane Webb; Brian Kwan; Raphael Bernier; Katarzyna Chawarska; Geraldine Dawson; James Dziura; Susan Faja; Gerhard Hellmann; Shafali Jeste; Natalia Kleinhans; April Levin; Adam Naples; Maura Sabatos-DeVito; Damla Şentürk; Frederick Shic; Catherine Sugar; James C. McPartland; Autism Biomarkers Consortium for Clinical Trials Face perception, attention, and memory as predictors of social change in autistic children Journal Article In: Journal of Neurodevelopmental Disorders, vol. 17, no. 1, pp. 1–9, 2025. @article{Webb2025,Objective: Social perception and attention markers have been identified that, on average, differentiate autistic from non-autistic children. However, little is known about how these markers predict behavior over time at both short and long time intervals. Methods: We conducted a large multisite, naturalistic study of 6- to 11-year-old children diagnosed with ASD (n = 214). We evaluated three markers of social processing: social perception via the ERP N170 Latency to Upright Faces; social attention via the Eye Tracking (ET) OMI (Oculomotor Index of Gaze to Human Faces) that captures percent looking to faces from three tasks; and social cognition via the NEPSY Face Memory task. Each was evaluated in predicting social ability and autistic social behaviors derived from parental interviews and questionnaires about child behavior at +6 months (T3) and +4 years (T4). Results: Adjusting for baseline performance, time between measurements, age, and sex, our results suggest differential prognostic relations for each of the markers. The ERP N170 Latency to Upright Faces showed limited prognostic relations, with a significant relation to short term changes in face memory. The ET OMI was related to face memory over both short and long term. Both the ET OMI and Face Memory predicted long-term autistic social behavior scores. 
Conclusions: In the context of a large-scale, rigorous evaluation of candidate markers for use in future clinical trials, our primary markers had significant but small-effect prognostic capability. The ET OMI and Face Memory showed significant long-term predictive relations, with increased visual attention to faces and better face memory at baseline related to increased social approach and decreased autistic social behaviors 4 years later. |
Xin Wang; Shitao Chen; Keyang Wang; Liyu Cao Predicted action-effects shape action representation through pre-activation of alpha oscillations Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–11, 2025. @article{Wang2025n,Actions are typically accompanied by sensory feedback (or action-effects). Action-effects, in turn, influence the action. Theoretical accounts of action control assume a pre-activation of action-effects prior to action execution. Here we show that when participants were asked to report the time of their voluntary keypress using the position of a fast-rotating clock hand, a predictable action-effect (i.e. a 250 ms delayed sound after keypress) led to a shift of visuospatial attention towards the clock hand position of action-effect onset, thus demonstrating an influence of action-effects on action representation. Importantly, the attention shift occurred about 1 second before the action execution, which was further preceded and predicted by a lateralisation of alpha oscillations in the visual cortex. Our results indicate that when the spatial location is the key feature of action-effects, the neural implementation of the action-effect pre-activation is achieved through alpha lateralisation. |
Tao Wang; Yue Wang; Haibo Hu; Xing Wang; Shengdong Chen; Yiming Yang An eye-movement database of bilingual language control for Chinese-English bilinguals Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–7, 2025. @article{Wang2025l,The current absence of an eye-tracking database that explores bilingual language control and how intra-sentence code-switching types influence the language control process limits our deeper understanding of bilingual control mechanisms. To address this issue, we present a database containing eye-movement recordings collected during a silent reading task combined with a language-switching paradigm. The database contains typical eye movement measures for 160 Chinese words and their English translation equivalents from 40 high-proficient and 40 low-proficient participants across 1280 Chinese, English and intra-sentential code-switching sentences. This database enables researchers to test the impacts of both intra-sentential code-switching and second language proficiency on bilingual language control and the underlying cognitive mechanisms. |
Lijuan Wang; Steven Frisson; Yali Pan; Ole Jensen Fast hierarchical processing of orthographic and semantic parafoveal information during natural reading Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–12, 2025. @article{Wang2025f,In reading, information from parafoveal words is extracted before direct fixation; however, it is debated whether this processing is restricted to orthographic features or also encompasses semantics. Moreover, the neuronal mechanisms supporting parafoveal processing remain poorly understood. We co-registered MEG and eye-tracking data in a natural reading paradigm to uncover the timing and brain regions involved in parafoveal processing. Representational similarity analysis revealed that parafoveal orthographic neighbours (e.g., “writer” vs. “waiter”) showed higher representational similarity than non-neighbours (e.g., “writer” vs. “police”), emerging ~68 ms after fixation onset on the preceding word (e.g., “clever”) in the visual word form area. Similarly, parafoveal semantic neighbours (e.g., “writer” vs. “author”) exhibited increased representational similarity at ~137 ms in the left inferior frontal gyrus. Importantly, the degree of orthographic and semantic parafoveal processing was correlated with individual reading speed. Our findings suggest fast hierarchical processing of parafoveal words across distinct brain regions, enhancing reading efficiency. |
Carla A. Wall; Kayla Smith; Frederick Shic; Bridgette Kelleher; Abigail Hogan; Elizabeth A. Will; Jane E. Roberts Heart rate defined sustained attention relates to visual attention in autism and fragile X syndrome Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–9, 2025. @article{Wall2025b,Social attention, including shared attention and social orienting, is essential for positive social interactions. Although early visual social attention is often quantified using eye tracking, these indices may not consistently reflect cognitive engagement. Heart rate defined sustained attention (HRDSA) is a physiological measure that can index cognitive engagement alongside visual attention, leading to more comprehensive assessments of attentional processes that are particularly important in young, neurodiverse children with high support needs, including those with autism and fragile X syndrome (FXS). The present study examined visual and heart-defined measures of social attention to the Selective Social Attention task, a video-based assay of social attention, in children with autism, FXS, and neurotypical development. Linear mixed models examined group and condition effects in multiple cardiac indices and overall looking at the scene. Findings suggest that, overall, children across all groups engaged similarly across the experiment in most dimensions of HRDSA, and consistent with previous work, autistic children spent less time visually attending to the scene than either other group. HRDSA was positively associated with visual social attention. Combining physiological and visual attention measures may elucidate the complex nature of social attention and be especially valuable for neurodiverse children when typical assessments are inaccessible. |
Preeti Verghese; Adrien Chopin; Ângela Gomes-Tomaz; Noelia G. Alcalde; Dennis M. Levi Vergence anomalies are associated with impaired stereopsis in amblyopia Journal Article In: Vision Research, vol. 237, pp. 1–16, 2025. @article{Verghese2025,We examined the relationship between stereopsis and fusional vergence in groups of amblyopic and stereo-normal control observers. As absolute disparity is thought to be the basis for relative disparity and for disparity-driven vergence, we hypothesized that vergence anomalies would be accompanied by impaired stereopsis. Specifically, we examined whether patterns of impaired stereopsis across the central 20° of the visual field were accompanied by impaired fusional vergence for stimuli confined to these regions. Stereopsis was measured locally across the visual field with disparity steps of 5 to 20 arcmin. Fusional vergence to large disparity steps (2 to 3°) was measured with binocular eye tracking. The vergence stimuli were random dot stereograms, in one of 3 spatial configurations: a large disc 16° in diameter, a small disc 4° in diameter, and an annulus with outer and inner diameters corresponding to the large and small discs. Of the controls (n = 25) with no history of abnormal visual development, 12 individuals exhibited normal stereopsis across the visual field and normal vergence gains for all configurations. Thirteen individuals with weak stereopsis in the central field tended to have anomalous vergence for small stimuli, but normal vergence for larger stimuli. Amblyopic/strabismic individuals (n = 12) had poor stereopsis and poor vergence for small stimuli. We report a strong correlation between vergence, coarse and fine stereopsis, with no double dissociation (no cases of impaired vergence with normal stereopsis). Taken together, the results suggest that compromised binocular interaction is the cause of both stereopsis and vergence deficits. |
Michaël Vanhoyland; Peter Janssen; Tom Theys Single-neuron correlates of visual consciousness in human lateral occipital complex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–17, 2025. @article{Vanhoyland2025,Conscious perception, a critical aspect of human cognition, is assumed to emerge from a complex network of interacting brain regions that transmit information via feedforward and recurrent pathways. This study presents single- and multiunit recordings from the human lateral occipital complex (LO), a key region for shape and object recognition, during three distinct perceptual paradigms: backward masking, flash suppression and binocular rivalry. Stimulus awareness increased decoding accuracy and decoders assigned higher probabilities to the consciously perceived stimulus during periods of dichoptic stimulus presentation. These findings highlight the intricate neural mechanisms underlying visual awareness and show that LO responses predominantly align with subjective phenomenology, offering new insights into the neural correlates of visual consciousness. |
Suraj Upadhyaya Oculomotor dynamics in emmetropes, myopes, and hyperopes: A behavioral perspective Journal Article In: PLoS ONE, vol. 20, no. 12, pp. 1–16, 2025. @article{Upadhyaya2025a,PURPOSE: The oculomotor system, which controls eye movements, is closely linked to visual processing. While refractive errors are common, their influence on oculomotor behavior remains underexplored. This study compared oculomotor performance among emmetropic, myopic, and hyperopic individuals. METHODS: In this cross-sectional, single-visit study, 67 participants (33 myopes, 10 hyperopes, 24 emmetropes; mean age 25.9 ± 3.0 years) completed fixation and visually guided saccade tasks at a viewing distance of 57 cm. A centrally positioned black, disc-shaped target (0.50° in diameter) was displayed on the screen for 45 seconds, after which it shifted to a predetermined location to elicit visually guided saccades. Clinical measures were included in the correlation analysis to ensure the findings were clinically relevant and to examine relationships between research variables and patient outcomes. Eye movements were recorded using the EyeLink 1000 Plus. Fixation stability was quantified using Bivariate Contour Ellipse Area (BCEA). Fixational saccade metrics, vergence stability, and saccadic behavior were analyzed. Axial length and corneal power were measured using a portable ultrasound scanner. RESULTS: Fixation stability differed significantly across groups, with myopes exhibiting larger BCEA values compared to emmetropes (H[2] = 10.6, p < 0.05). Analysis of fixational saccades revealed that myopes demonstrated significantly greater amplitude (H[2] = 507.4, p < 0.001), peak velocity (H[2] = 595.7, p < 0.001), and frequency (H[2] = 9.0, p < 0.05) relative to the other groups. Vergence stability was also poorer in myopes than in emmetropes (H[2] = 8.7, p < 0.05). In contrast, saccadic behavior during visually guided tasks showed no significant group differences in latency (H[2] = 1.9). |
Sandra Tyralla; Eckart Zimmermann Serial dependencies and overt attention shifts Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–16, 2025. @article{Tyralla2025,When visual input is uncertain, visual perception is biased toward the stimulation from the recent past. We can attend to stimuli either endogenously, based on an internal decision, or exogenously, triggered by an external event. Here, we wondered whether serial dependencies are selective for the attentional mode in which we attend to stimuli. We studied overt attention shifts (saccades) and recorded either motor error corrections or visual orientation judgments. In Experiment 1, we assessed sensorimotor serial dependencies, focusing on how the postsaccadic error influences subsequent saccade amplitudes. In Experiment 2, we evaluated visual serial dependencies by measuring orientation judgments, contingent on the type of saccade performed. In separate sessions, participants performed either only voluntary saccades or only delayed saccades, or both saccade types alternated within a session. Our results revealed that sensorimotor serial dependencies were selective for the saccade type performed. When voluntary saccades had been performed in the preceding trial, serial dependencies were much stronger in the current trial if voluntary instead of delayed saccades were executed. In contrast, visual serial dependencies were not influenced by the type of saccade performed. Our findings reveal that shifts in exogenous and endogenous attention differentially impact sensorimotor serial dependencies, but visual serial dependencies remain unaffected. |
Ekin Tünçok; Marisa Carrasco; Jonathan Winawer Spatial attention selectively alters visual cortical representation during target anticipation Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–19, 2025. @article{Tuencok2025,Attention enables us to efficiently and flexibly interact with the environment by prioritizing specific image locations and features in preparation for responding to stimuli. Using a concurrent psychophysics–fMRI experiment, we investigate how covert spatial attention modulates responses in human visual cortex before target onset and how it affects subsequent behavioral performance. Performance improves at cued locations and worsens at uncued locations compared to distributed attention, demonstrating a selective processing tradeoff. Pre-target BOLD responses in cortical visual field maps reveal two key changes: First, a stimulus-independent baseline shift, with increases near cued locations and decreases elsewhere, paralleling behavioral results. Second, a shift in population receptive field centers toward the attended location. Both effects increase in higher visual areas. Together, these findings reveal that spatial attention has large effects on visual cortex prior to target appearance, altering neural response properties across multiple visual field maps and enhancing performance through anticipatory mechanisms. |
Tobiasz Trawiński; Chuanli Zang; Letizia Palumbo; Nick Donnelly Individuating experience moderates the effect of implicit racial bias on eye movements to other race faces: A cross-cultural study Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Trawinski2025,The present cross-cultural study investigated gaze behaviour in the context of assessing the aesthetic value of figurative paintings depicting White and East Asian individuals in social scenes. Across three experiments, we examined how implicit racial attitudes and self-reported individuating experiences influenced gaze patterns when participants evaluated their liking of these paintings. Despite no requirement to inspect faces in the paintings, the results revealed that participants with negative implicit attitudes toward other-race individuals and limited individuating experience with those groups, spent more time fixating on other-race faces. This relationship between implicit attitudes and individuating experience in guiding gaze behaviour was consistent across both British and Chinese participants, despite differing definitions of same- and other-race faces between the groups. Our findings suggest that gaze behaviour during the aesthetic evaluation of figurative paintings is shaped by an interaction between attitudinal and experiential factors, which operates across cultural contexts. |
Catharina Tibken; Simon P. Tiffin-Richards Reading behavior as an indicator of comprehension monitoring when reading expository texts Journal Article In: Metacognition and Learning, vol. 20, no. 1, pp. 1–29, 2025. @article{Tibken2025,Comprehension of expository texts is an important prerequisite for self-regulated learning. Processes of passive validation and metacognitive monitoring are thought to be involved in building a coherent situation model of a text. Inconsistency tasks are often used to measure these processes. Several studies have shown longer reading times for inconsistent sentences than for consistent sentences. However, it remains unclear whether the additional time arises from passive disruptions of the reading process when encountering an inconsistency or from metacognitive processes of reanalysis of previous text. To address this issue, we recorded the reading behavior of 96 university students with an eye-tracker while they read inconsistent and consistent expository texts. We analyzed first-pass reading (first-pass reading time, lookbacks) and reanalysis (rereading time, revisits) at the level of the (in)consistent target word, at the sentence-final word of the target sentence, and in the pre-target text. Our results did not strongly support the hypothesis that immediate changes in reading behavior when inconsistencies are first encountered influence the detection and processing of inconsistencies. Our results partially supported the hypothesis that processes of text reanalysis, specifically of the source of inconsistency, increase the probability of identifying an inconsistency. The findings indicate that a purposeful reanalysis of passages that appear inconsistent to readers improves situation model construction for (short) expository texts about conceptually difficult topics. Learning from texts thus requires metacognitive comprehension monitoring beyond passive validation processes. |
Zhongbin Su; Xiaolin Zhou; Stefan Pollmann; Lihui Wang Dynamic face-related eye movement representations in the human ventral pathway Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–12, 2025. @article{Su2025c,Multiple brain areas along the ventral pathway have been known to represent face images. Here, in a magnetoencephalography (MEG) experiment, we show dynamic representations of face-related eye movements in the ventral pathway in the absence of image perception. Participants followed a dot presented on a uniform background, the movement of which represented gaze tracks acquired previously during their free-viewing of face and house pictures. We found a dominant role of the ventral stream in representing face-related gaze tracks, starting from the orbitofrontal cortex (OFC) and anterior temporal lobe (ATL), and extending to the medial temporal and ventral occipitotemporal cortex. Our findings show that the ventral pathway represents the gaze tracks used to explore faces, by which top-down prediction of face category in OFC and ATL may guide, via the medial temporal cortex or directly, face perception in the ventral occipitotemporal cortex. |
Renana Storm; Viktoria Wrobel; Antonia Frings; Andreas Sprenger; Christoph Helmchen In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Storm2025,Persistent postural-perceptual dizziness (PPPD) is often preceded by vestibular disorders. We applied galvanic vestibular stimulation (GVS) and related stimulus-evoked activity to individual ratings of perceived motion for each stimulus and to perceived egomotion thresholds by GVS and behavioural parameters outside the scanner: levels of functional disability by standardized questionnaires, visual motion coherence, passive egomotion perception by chair rotation and quantitative postural stability. We hypothesized that the preceding vestibular disorder predisposes to abnormal brain excitability by vestibular stimulation. All participants showed normal vestibular function tests on quantitative testing. GVS with different intensities was applied to 28 patients and 28 age- and gender-matched healthy participants (HC) in the scanner. After each stimulus, participants rated their perceived level of egomotion. GVS perception threshold was significantly lower in PPPD patients. Contrasting stimulus-identical GVS against a sham stimulus, group comparison revealed a stronger activation in the patients' supramarginal gyrus, insular cortex (operculum 3), and vermis. This stronger excitability was not related to the individual threshold of perceived egomotion by GVS. Patients rated GVS-evoked egomotion intensity at identical GVS intensities higher than HC, but neural activity did not correlate with individual ratings of perceived egomotion by GVS. As GVS evoked larger egomotion and larger brain activation in patients, the ratio of brain activity to egomotion perception was not different between groups. GVS-evoked insular activity increased with the level of PPPD-related disability and postural imbalance. 
The larger activation in the multisensory cortical vestibular network indicates a sensitization to vestibular stimuli eliciting egomotion perception, which increases with levels of PPPD disability. It seems to reflect a sensory-neural amplification rather than an abnormal sensory-perceptual scaling. |
Caleb Stone; Jason B. Mattingley; Dragan Rangelov Neural mechanisms of metacognitive improvement under speed pressure Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–12, 2025. @article{Stone2025,The ability to accurately monitor the quality of one's choices, or metacognition, improves under speed pressure, possibly due to changes in post-decisional evidence processing. Here, we investigate the neural processes that regulate decision-making and metacognition under speed pressure using time-resolved analyses of brain activity recorded using electroencephalography. Participants performed a motion discrimination task under short and long response deadlines and provided a metacognitive rating following each response. Behaviourally, participants were faster, less accurate, and showed superior metacognition with short deadlines. These effects were accompanied by a larger centro-parietal positivity (CPP), a neural correlate of evidence accumulation. Crucially, post-decisional CPP amplitude was more strongly associated with participants' metacognitive ratings following errors under short relative to long response deadlines. Our results suggest that superior metacognition under speed pressure may stem from enhanced metacognitive readout of post-decisional evidence. |
Ramanujan Srinath; Amy M. Ni; Claire Marucci; Marlene R. Cohen; David H. Brainard Orthogonal neural representations support perceptual judgments of natural stimuli Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–17, 2025. @article{Srinath2025a,In natural visually guided behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on blank backgrounds. Natural images, however, contain task-irrelevant background elements that might interfere with the perception of object features. Recent studies suggest that visual feature estimation can be modeled through the linear decoding of task-relevant information from visual cortex. So, if the representations of task-relevant and irrelevant features are not orthogonal in the neural population, then variation in the task-irrelevant features would impair task performance. We tested this hypothesis using human psychophysics and monkey neurophysiology combined with parametrically variable naturalistic stimuli. We demonstrate that (1) the neural representation of one feature (the position of an object) in visual area V4 is orthogonal to those of several background features, (2) the ability of human observers to precisely judge object position was largely unaffected by those background features, and (3) many features of the object and the background (and of objects from a separate stimulus set) are orthogonally represented in V4 neural population responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of object features despite the richness of natural visual scenes. |
Qiao Songlin; Xuemei Xia; Jing Chen; Matteo Valsecchi Attentional tracking reduces cortical alpha oscillations Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Songlin2025,The premotor theory of attention suggests that both overt and covert attentional orienting are governed by similar mechanisms and neural structures, a concept extensively investigated in paradigms involving shifts in attention and gaze towards peripheral targets. Previous studies have found a strong link between cortical alpha oscillations and overt smooth pursuit of a target. However, the relationship between alpha oscillations and covert tracking of peripheral moving stimuli remains unclear. To address this, we asked 16 observers to maintain fixation while covertly attending to a visual stimulus moving along the horizontal meridian at varying speeds (2, 6, or 12 °/s), within either the left or right hemifield. We simultaneously recorded both eye movements and EEG data. Our results revealed that alpha power was significantly reduced when observers tracked a target that moved further in the periphery, independent of its speed. These findings confirm that the distribution of alpha power is sensitive to the allocation of covert attention during tracking. This suggests a tight link between the attentional processes involved in covert tracking and overt pursuit of a moving target, supporting the premotor theory of attention. |
Noam Siegelman; Sascha Schroeder; Yaqian Borogjoon Bao; Cengiz Acartürk; Niket Agrawal; Lena S. Bolliger; Jan Brasser; César Campos-Rojas; Denis Drieghe; Dušica Filipović Đurđević; Sofya Goldina; Romualdo Ibáñez Orellana; Lena A. Jäger; Ómar I. Jóhannesson; Anurag Khare; Nik Kharlamov; Hanne B. S. Knudsen; Árni Kristjánsson; Charlotte E. Lee; Jun Ren Lee; Marina P. T. Leite; Simona Mancini; Nataša Mihajlović; Ksenija Mišić; Miloslava Orekhova; Olga Parshina; Milica Popović Stijačić; Athanassios Protopapas; David R. Reich; Anurag Rimzhim; Rui Rothe-Neves; Thais M. M. Sá; Andrea Santana-Covarrubias; Irina Sekerina; Heida M. Sigurdardottir; Anna Smirnova; Priyanka Srivastava; Elisangela N. Teixeira; Ivana Ugrinic; Kerem Alp Usal; Karolina Vakulya; Ark Verma; João M. M. Vieira; Denise H. Wu; Jin Xue; Sunčica Zdravković; Junjing Zhuo; Laoura Ziaka; Victor Kuperman Wave 2 of the Multilingual Eye-Movement Corpus (MECO): New text reading data across languages Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–14, 2025. @article{Siegelman2025,This paper reports the Wave 2 expansion of the Multilingual Eye-Movement Corpus (MECO), a collaborative multi-lab project collecting eye-tracking data on text reading in a variety of languages. The present expansion comes with new eye-tracking data of N = 654 from 13 languages, collected in 16 labs over 15 countries, including in several languages that have little to no representation in current eye-tracking studies on reading. MECO also contains demographic, language use, and other individual differences data. This paper makes available the first-language reading data of MECO Wave 2 and incorporates reliability estimates of all tests at the participant and item level, as well as other methods of data validation. It also reports the descriptive statistics on all languages, including comparisons with prior similar data, and outlines directions for potential reuse. |
Sabyasachi Shivkumar; Gregory C. DeAngelis; Ralf M. Haefner Hierarchical motion perception as causal inference Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–14, 2025. @article{Shivkumar2025,Motion can only be defined relative to a reference frame; yet it remains unclear which reference frame guides perception. A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure in the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups and groups into supergroups, progressively inferring structured reference frames, and perceives motion in the appropriate reference frame. Critical model predictions are supported by two experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles and offers inspiration for building better models of visual processing in general. |
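The abstract above describes a prior with a delta component at zero (objects in a reference frame are mostly stationary). A minimal one-dimensional sketch of such a spike-and-slab inference, assuming Gaussian measurement noise and a Gaussian slab for moving objects (all parameter values are illustrative, not the authors' fitted model):

```python
import math

def normal_pdf(x, var):
    """Density of a zero-mean Gaussian with variance var at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_stationary(x, p0=0.5, slab_var=25.0, noise_var=1.0):
    """Posterior probability that the true velocity is exactly zero
    (the delta 'spike') given a noisy velocity measurement x."""
    like_spike = normal_pdf(x, noise_var)            # stationary + noise
    like_slab = normal_pdf(x, slab_var + noise_var)  # moving + noise
    num = p0 * like_spike
    return num / (num + (1 - p0) * like_slab)

# A measurement near zero is attributed to a stationary frame,
# while a large measurement is attributed to genuine motion.
print(p_stationary(0.2))
print(p_stationary(8.0))
```

Inverting the full hierarchical model applies this kind of stationary-vs-moving decision recursively, so that groups of elements sharing a common velocity get their own inferred reference frame.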
Cal M. Shearer; Annalise B. Rawson; Helen C. Barron; Jill X. O'Reilly Memory reactivation during rest forms shortcuts in a cognitive map Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–16, 2025. @article{Shearer2025,Efficient and flexible cognition relies upon cognitive maps—representations of concepts and the relations between them. Cognitive maps integrate relations that were learned separately into a cohesive whole. Memory reactivation during rest and sleep may contribute to cognitive map formation in two ways: by simply strengthening memories for directly experienced relations, or by reorganising concepts and creating new relations that capture the underlying structure. We designed a multi-stage learning task to test whether reactivation during rest is involved in restructuring memories as opposed to simply consolidating what was experienced. We causally manipulated memory reactivation during rest using awake, contextual targeted memory reactivation. We found that promoting memory reactivation during rest qualitatively reorganises the cognitive map by forming 'shortcuts' between events which have not been experienced together. These shortcuts in memory extend beyond direct experience to facilitate our ability to make novel inferences. Using a series of control tests we show that inference performance cannot be explained by quantitative strengthening of the experienced component links. Interestingly, we show that representing a shortcut may come with limitations, as shortcuts cannot be readily updated in response to rapid changes in the environment. Together, these findings reveal how memories are reorganised during awake rest to construct a cognitive map of our environment, while highlighting the constraints set by a trade-off between efficient and flexible behaviour. |
Dixit Sharma; Bart Krekelberg Predicting spiking activity from scalp EEG Journal Article In: Journal of Neural Engineering, vol. 22, no. 6, pp. 1–16, 2025. @article{Sharma2025,Objective. Despite decades of electroencephalography (EEG) research, the relationship between EEG and underlying spiking dynamics remains unclear. This limits our ability to infer neural dynamics reflected in intracranial signals from EEG, a critical step to bridge electrophysiological findings across species and to develop non-invasive brain–machine interfaces (BMIs). In this study, we aimed to estimate spiking activity in the visual cortex using non-invasive scalp EEG. Approach. We recorded spiking activity from a 32-channel floating microarray permanently implanted in parafoveal V1 and scalp EEG in a male macaque monkey. While the animal fixated, the screen flickered at different temporal frequencies to induce steady-state visual evoked potentials. We analyzed the relationship between the V1 multi-unit spiking activity envelope (MUAe) and EEG frequency bands to predict MUAe at each time point from EEG. We extracted instantaneous spectrotemporal features of the EEG signal, including phase, amplitude, and phase-amplitude coupling of its frequency bands. Main results. Although the relationship between these spectrotemporal features and the V1 MUAe was complex and frequency-dependent, they were reliably predictive of the MUAe. Specifically, in a linear regression predicting MUAe from EEG, each EEG feature (phase, amplitude, coupling) contributed to model predictions. In addition, we found that MUAe predictions were better in shallow than deep cortical layers, and that the phase of stimulus frequency further improved MUAe predictions. Significance. Our study shows that a comprehensive account of spectrotemporal features of non-invasive EEG provides information on underlying spiking activity beyond what is available when only the amplitude or phase of the EEG signal is considered. This demonstrates the richness of the EEG signal and its complex relationship with neural spiking activity and suggests that using more comprehensive spectrotemporal signatures could improve BMI applications. |
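The entry above regresses a spiking envelope (MUAe) onto instantaneous EEG phase and amplitude features. The sketch below shows the general shape of such a regression on synthetic data; the signal parameters, feature set, and variable names are illustrative assumptions, not the authors' actual analysis.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs, dur = 200, 10
t = np.arange(0, dur, 1 / fs)

# Synthetic "EEG": a 10 Hz oscillation whose amplitude co-varies with
# a hypothetical spiking envelope (MUAe), plus broadband noise.
muae = 1.0 + 0.5 * np.sin(2 * np.pi * 0.3 * t)   # slow envelope
eeg = muae * np.cos(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

# Instantaneous amplitude and phase via the analytic signal.
analytic = hilbert(eeg)
amp = np.abs(analytic)
ph = np.angle(analytic)

# Design matrix: intercept, amplitude, and phase encoded as sin/cos,
# as linear regressors for the spiking envelope.
X = np.column_stack([np.ones_like(t), amp, np.sin(ph), np.cos(ph)])
beta, *_ = np.linalg.lstsq(X, muae, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, muae)[0, 1]
print(f"correlation between predicted and true MUAe: {r:.2f}")
```

Because the synthetic envelope modulates the oscillation's amplitude directly, the amplitude regressor dominates and the fit recovers the envelope closely; with real data, the abstract notes that phase and phase-amplitude coupling features carry additional predictive information.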
