Cognitive Eye-Tracking Publications
The following is a year-by-year list of all EyeLink cognitive and perceptual eye-tracking research publications up to 2025 (including early 2026). You can search the eye-tracking publications using keywords such as visual search, scene perception, and face processing. You can also search by individual author name. If we have missed any EyeLink cognitive or perception article, please email us!
2026 |
Tal Ravid-Roth; Romi Livne; Ariel Berlinger; Wilfried Kunde; Baruch Eitam; Sagi Jaffe-Dax The effect of gaze contingencies on infants' looking preference Journal Article In: Cognition, vol. 270, pp. 1–18, 2026. @article{Ravid-Roth2026,Infants exhibit robust predictive capacities from birth. Most research has focused on how they process externally generated events, leaving unexplored how predictions rooted in their own actions influence attention. We asked whether the source of predictability (self-generated vs. externally structured) affects infants' looking preferences beyond overall predictability. Across two gaze-contingent eye-tracking experiments, we investigated whether infants prefer to look at stimuli whose movements are triggered by their own gaze, or at stimuli that move independently. In Experiment 1 (n = 21 |
Yamei Zhang; Xiaojun Sun; Jing Ma Cognitive aspects of video-based learning with instructor presence depend on pedagogical approaches: A perspective from motivating styles Journal Article In: Learning and Instruction, vol. 102, pp. 1–11, 2026. @article{Zhang2026e,Background: Instructor presence is a critical feature that should be considered when designing video lectures. However, evidence for its influence on cognitive aspects of learning is mixed. Such inconsistencies suggest that moderators shape this influence. Motivating styles (i.e., autonomy-supportive, controlling, and neutral teaching), among the most widely studied pedagogical approaches, may be one such moderator. Aim: This study examines how the influence of instructor presence on the cognitive aspects of learning, including learning outcomes, attention (i.e., visual attention allocation and concentration), and extraneous cognitive load, varies with motivating styles. Sample and methods: A three (motivating styles: autonomy-supportive vs. controlling vs. neutral teaching) × two (instructor presence: present vs. absent) between-subjects eye-tracking experiment was conducted among 181 university students. Results: While instructor presence reduced visual attention to the knowledge area regardless of the instructor's motivating style, its effects on learning outcomes (albeit only in terms of retention), concentration, and extraneous load were conditional on motivating style. Specifically, compared with the instructor-absent condition, under autonomy-supportive teaching, instructor presence decreased retention and concentration, but did not affect extraneous load; under controlling teaching, instructor presence did not impact retention, but damaged concentration and boosted extraneous load; under neutral teaching, instructor presence promoted retention without affecting concentration or extraneous load. 
Conclusions: The findings imply that the facilitating effect of instructor presence as a social cue and its detrimental effect as a seductive detail can dominate one another or cancel each other out under specific motivating styles. Hence, pedagogical approaches can shape the effects of instructor presence. |
Kaitlyn N. Drennan; Nicholas Gaspelin What can a half-million saccades tell us about distractor suppression? Journal Article In: Cognition, vol. 269, pp. 1–14, 2026. @article{Drennan2026,Salient distractions in the environment compete for attention and have the potential to interfere with our goals. An abundance of research has therefore examined how we learn to prevent distraction by salient stimuli. There is growing consensus that salient stimuli can be suppressed to mitigate distraction. However, many questions about distractor suppression have been difficult to resolve in typical studies that use small sample sizes. The current study is a pooled analysis of several previous eye-tracking studies (N = 354) which resulted in a large data set of more than a half-million eye movements. This large data set was used to uncover new findings that improve our understanding of the attentional processes involved in distractor suppression. We also evaluated several new findings related to how attentional suppression is learned and is influenced by selection history. Altogether, these findings highlight the need for a hybrid model of attention that includes both bottom-up and top-down components. Moreover, this large publicly available dataset can be used by future research to investigate other questions related to attentional capture and distractor suppression. |
Ting Zhang; Shujia Zhang; Yi Jiang Automatic pupillary responses to pain perception in adults and children: The influence of race and autistic traits Journal Article In: Cognition, vol. 268, pp. 1–9, 2026. @article{Zhang2026d,The ability to understand and share others' emotional states (e.g., feeling of pain) plays a fundamental role in survival and prosocial behavior. The current study utilized pupillometry to assess automatic psychophysiological responses to others' painful facial expressions in both adults and children (N = 72). Results revealed that pupil size significantly increased when perceiving painful versus neutral expressions, independent of low-level visual features. Notably, both adults and children exhibited a racial in-group bias, with pupil dilation effects observed only for same-race painful faces. Furthermore, individuals' Autism Spectrum Quotient scores were negatively correlated with pupil dilation effects toward painful expressions of same-race faces. These findings suggest that pupillary responses might reflect automatic empathic arousal to others' pain and are modulated by racial group membership and autistic traits, providing a potential physiological indicator, at least at the group level, for probing affective resonance in children or individuals with socio-cognitive disorders (e.g., autism spectrum disorder). |
Xiaozhi Yang; Elizabeth E. Riggs; Jason C. Coronel; Ian Krajbich Issue importance amplifies the effect of gaze on voting decisions Journal Article In: Cognition, vol. 268, pp. 1–12, 2026. @article{Yang2026,There are many factors that can influence a voter's decision in the ballot booth but not all of them are policy related. One non-policy factor that may influence voters is the tendency to choose options that attract attention. Here, we investigate this possibility in two proof-of-concept laboratory studies with people choosing between proposed laws. We find that people are slower to vote when their party is split over an issue, and that they tend to vote for laws that they look at more. Moreover, this gaze effect is stronger for more important issues. We also find that we can increase the probability that someone will vote for one of two laws by getting them to look at that option first. Our work harnesses the power of sequential sampling models to explain the relationship between gaze and vote choice. We find support for a goal-based model where overt attention amplifies information supporting a particular law. This model explains why gaze has a stronger effect on choice for more important issues. Our findings indicate that some voting decisions are not predetermined and instead rely on an on-the-spot evaluation. As a result, these decisions can be swayed by attentional manipulations. Thus, visual attention may serve as a unifying framework for understanding different biases that occur in the voting booth, such as ballot-order and candidate-name-familiarity effects. |
Yunfei Shang; Ke Liu; Qing Feng The influences of security and context on attentional bias toward emotional faces: Evidence from eye movements Journal Article In: Acta Psychologica, vol. 263, pp. 1–8, 2026. @article{Shang2026,This study employed a dot-probe paradigm to investigate attentional biases toward emotional faces in individuals with high versus low levels of security across general and threat contexts, using eye-tracking technology. Participants were screened into high- and low-security groups based on validated security scales. Threat contexts were established using images from the International Affective Picture System (IAPS). Results revealed that: (1) Both high- and low-security individuals exhibited attentional biases toward emotional faces compared to neutral faces. (2) Security levels modulated attention to emotional faces: high-security individuals displayed greater bias toward happy faces, while low-security individuals showed enhanced bias toward angry faces, consistent with the schema-congruence hypothesis. (3) Reaction times accelerated under threat conditions for all participants, and threat contexts amplified attentional bias toward angry faces in high-security individuals. These findings highlight the interplay between intrinsic security and external contexts in shaping attentional processing of emotional stimuli. |
Ilanit Hochmitz; Yaffa Yeshurun; Amit Yashar Temporal dynamics of integration and individuation: Insights from temporal averaging and crowding Journal Article In: Cognition, vol. 268, pp. 1–12, 2026. @article{Hochmitz2026,Individuating a single item presented within a continuous sequence of items requires segregating its signal from that of the other items. In contrast, representing a global aspect of the sequence, such as its average orientation, involves integration of information across time. Individuation and integration allow us to focus on individual events while maintaining an overall perception of our environment. To examine the relations between temporal averaging and individuation, we measured orientation averaging over short and long timescales using the same stimuli and orientation-estimation procedure previously used to measure individuation. Participants reported the average orientation of a sequence of three oriented items separated by either short (SOAs<150 ms) or long intervals (SOAs>150 ms). Analysis of the error distribution and mixture-modeling revealed distinct patterns of results for the different tasks and timescales, but also some similarities, particularly for the short timescale. At this timescale, the relative contribution of each individual item to the final response was similar across tasks, indicating the involvement of low-level factors operating regardless of the task. With the long timescale, the two tasks showed dissociable patterns across all performance aspects, except guessing rate, indicating that long-scale individuation and averaging engage mainly higher-level, task-related processes. Importantly, regardless of timescale, estimation errors in these tasks were best described by different models: in integration they primarily reflected unequal weighting of the averaged items, whereas in individuation they reflected imprecise target encoding with occasional misreports of distractors. 
Together, the findings reveal dissociable dynamics for integration and individuation. |
Ângela Gomes Tomaz; Adrien Chopin; Noelia Gabriela Alcalde; Dennis M. Levi; Preeti Verghese The best stereoacuity is rarely at the fovea Journal Article In: Vision Research, vol. 240, pp. 1–13, 2026. @article{GomesTomaz2026,Stereoacuity, the ability to perceive depth from binocular disparity, is traditionally considered to be best at the fovea in typical human vision, and to decline with eccentricity. Previous studies have shown that when stereopsis is present in amblyopia, it is often coarse and comparable to stereoacuity associated with the peripheral retina in neurotypical controls, suggesting that it might be mediated by a non-foveal locus. Here we measured stereoacuity as a function of eccentricity in participants with amblyopia as well as controls with no history of abnormal visual development. We measured stereoacuity using random dot stereograms and targets that scaled with eccentricity, testing the fovea, and eccentricities of 2.5°, 5°, and 10° along the horizontal and vertical meridians. For 87.5% (7/8) of amblyopic participants, the locus of best stereoacuity was non-foveal. Surprisingly, 75% of control participants (15/20) also exhibited their best stereoacuity at non-foveal locations, with only 5 controls showing foveal superiority. Using stimulus parameters modified to improve foveal performance, we repeated measurements on a subset of controls whose best stereoacuity was non-foveal, but the best locus only shifted to the fovea in one participant. Stereoacuity measured at the experimentally determined “best locus” correlated well with standard clinical stereoacuity tests. These findings challenge the conventional view of universal foveal dominance for stereopsis, suggesting that the fovea is not invariably the site of best stereoscopic sensitivity, even in many normally sighted individuals. This has implications for understanding binocular vision in amblyopic and normal vision, and for interpreting clinical stereo tests. |
Ryan M. Barker; Michael J. Armson; Nicholas B. Diamond; Zhong Xu Liu; Yushu Wang; Jennifer D. Ryan; Brian Levine Remembrance with gazes passed: Eye movements precede continuous recall of episodic details of real-life events Journal Article In: Cognition, vol. 268, pp. 1–6, 2026. @article{Barker2026,Autobiographical memory entails the reconstructing of the visual features of past events. While eye movements are associated with vivid autobiographical recollection, this research has yet to capitalize on the high temporal resolution of eye-tracking data. We aligned eye movement data with participants' extemporaneous free recall of a verified real-life event, allowing us to assess the temporal correspondence of saccades to production of episodic and non-episodic narrative content at the millisecond level. Episodic autobiographical details were preceded by an increase in saccade frequency and followed by a reduction in saccades prior to the next detail. There was no such effect observed for non-episodic details. Oculomotor responses in the temporal window preceding freely-recalled details may facilitate recollection by reinstating spatiotemporal context, or they may reflect post-retrieval processes—or a combination of both—in cyclical sensory-motor-mnemonic interactions that promote vivid recall. |
Fangfang Zhu; Yifen Liu; Mengyuan Wang; Jiumin Yang; Zhongling Pi; Zhiqiang Ma When do teachers' pleasant expressions in video lectures facilitate learning? The role of emotional learning materials and auditory emotions Journal Article In: Journal of Computer Assisted Learning, vol. 42, no. 1, pp. 1–16, 2026. @article{Zhu2026,Background: Emotional cues in video lectures have demonstrated complex effects on learning, particularly regarding teachers' facial expressions. However, these effects remain inconclusive, necessitating further exploration of potential factors to enhance learning. Objectives: This study examined how three forms of emotional design—learning materials, teachers' facial expressions and teachers' auditory emotions, individually and jointly influence learners' emotional responses, cognitive processing and learning outcomes in video-based instruction. Methods: Across two experiments, we investigated the independent and interactive effects of teachers' facial expressions, the emotional design of learning materials and teachers' auditory emotion on students' emotions, motivation, attention, cognitive load and learning outcomes. Experiment 1 examined the interaction between teachers' facial expressions and emotionally designed learning materials, while Experiment 2 built on these findings to test whether congruent positive facial and auditory cues further enhance students' emotional, motivational and cognitive engagement. Results: In Experiment 1, when learning materials were neutrally designed, teachers' pleasant facial expressions reduced extraneous cognitive load and improved learning outcomes. Experiment 2 showed that pairing pleasant facial expressions with pleasant auditory emotion elicited more positive emotions, higher motivation, increased germane load and better learning outcomes. 
Eye-tracking analyses indicated that this emotional congruence decreased attentional distraction, highlighting the synergistic benefits of combining visual and auditory emotional cues. Conclusions: The study identifies the synergistic effects of various emotional design elements in video lectures on students' learning and contributes to theories of emotional design and cognitive processing in multimedia learning contexts. It also offers practical insights for educators on optimising emotional cues in video-based learning environments. |
Tianyu Zhang; Yongchun Cai Shared mechanisms of presaccadic and exogenous attention in modulating visual perception of contrast Journal Article In: Cognition, vol. 267, pp. 1–13, 2026. @article{Zhang2026c,Different types of attention alter subjective visual perception in fundamentally distinct ways. Previous studies have focused on covert attention without concurrent eye movements, revealing that covert exogenous (involuntary) attention enhances contrast appearance of low-contrast stimuli while diminishing that of high-contrast stimuli, whereas covert endogenous (voluntary) attention uniformly enhances contrast appearance. However, the attentional effect preceding saccadic eye movements, a critical component of natural vision, remains understudied. Here, we found that when participants voluntarily initiated saccades, presaccadic attention enhanced the appearance of low-contrast stimuli while attenuating the appearance of high-contrast stimuli (Experiment 1 |
Güven Kandemir; Christian N. L. Olivers Serial dependence is stronger for peripheral than for central vision Journal Article In: Attention, Perception, & Psychophysics, vol. 88, no. 2, pp. 1–18, 2026. @article{Kandemir2026,Serial dependence in vision refers to the fact that perceptual judgements are biased by earlier experiences, and has been thought to reduce sensory uncertainty and sustain perceptual continuity over time and space. While vision changes with eccentricity, little is known about if and how serial dependence differs in the periphery relative to the fovea. Here we aimed to reduce this gap by comparing serial dependence for centrally and peripherally presented stimuli. Experiment 1 presents a reanalysis of an existing dataset from an earlier working memory task requiring the memorization of differently oriented gratings, presented either centrally or at 15° eccentricity. Experiment 2 also varied pre-knowledge of the item's location through spatial cueing. Experiment 3 replicated Experiment 1 but with lower contrast levels and equating the probabilities of central and peripheral stimuli. Across all experiments we observed an attractive bias towards the orientation of the preceding trial at all locations. Crucially, this bias was always larger in the periphery relative to the central position, and it was mainly the current item's location that drove this effect, rather than the previous item's location. Pre-knowledge of item location failed to influence the eccentricity effect on serial dependence, nor did reduced contrast or differential probabilities change the conclusions. Our results thus demonstrate that serial dependence is not equal across eccentricity. The data and the scripts are available at: https://osf.io/v56hn/?view_only=6d4d5bba493b4bc788c3eed8decd8370 |
Alexia Galati; Rick Dale; Camila Alviar; Moreno I. Coco Task goals constrain the alignment in eye-movements and speech during interpersonal coordination Journal Article In: Journal of Memory and Language, vol. 146, pp. 1–18, 2026. @article{Galati2026,Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; Pickering & Garrod, 2004), support this view by building on tasks that require monitoring a partner's perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a “divide and conquer” strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners' eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. 
Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions. |
Zhanna Chuikova; Anna Izmalkova; Andriy Myachykov; Anastasiia Liashenko; Yury Shtyrov; Marie Arsalidou Interplay between switching, inhibition, and mental attention: An exploratory eye-tracking study Journal Article In: Psychological Research, vol. 90, no. 1, pp. 1–19, 2026. @article{Chuikova2026,Cognitive flexibility (CF) allows individuals to adapt their behavior to changing environmental demands. As task complexity increases, CF may substantially impact performance by facilitating a shift towards more efficient information processing strategies. However, its role in tasks with high cognitive demands remains largely unexplored. Furthermore, while CF is associated with inhibitory control and working memory functions, their precise relationship under task demands is not yet fully understood. To address this gap, we investigated how CF and inhibition metrics are associated with different levels of mental attentional demand (Md), as well as CF. Additionally, we explored differences in eye-movement indices associated with high and low CF in tasks with varied levels of Md. Analyzing data from 42 young participants performing CF, inhibition, and mental attention tasks with eye movement recording for the last task, we found that multidimensional switching (i.e., switching between three rules) correlated with mental attentional capacity, whereas two-dimensional switching (i.e., switching between two rules) correlated with inhibitory control. Individuals with low and high switching scores differed in task performance and eye-movement patterns of mental attentional demand (i.e., difficulty). Specifically, those with high efficiency in multidimensional switching exhibited superior performance across all levels of mental attentional demand. 
Further, high-efficiency performers employed eye-movement patterns characterized by an increased number of fixations, shorter fixation durations, and decreased blink rates, with significant differences observed at higher levels of mental-attention demand. Our findings offer new insights into psychophysiological metrics related to higher-order cognitive processes, discussed in terms of cognitive theory and practical significance. |
Hongda Zhao; Wei Du; Chao Wang Cognitive visual strategies are associated with delivery accuracy in elite wheelchair curling: Insights from eye-tracking and machine learning Journal Article In: Frontiers in Psychology, vol. 16, pp. 1–10, 2026. @article{Zhao2026,Visual search is pivotal for athletic performance, yet its role in adaptive sports like wheelchair curling remains understudied. This study investigated how eye-movement features predict delivery accuracy and distinguish elite from novice athletes. Thirty wheelchair curling athletes (15 experts, 15 novices) performed standardized delivery accuracy and visual search tasks, with eye movements recorded using the EyeLink Portable Duo system. We employed multiple regression to identify predictors of accuracy and a support vector machine (SVM) to classify athletes based on expertise. Experts demonstrated superior delivery accuracy and significantly more efficient visual search patterns, characterized by shorter dwell times, faster reaction times, and fewer fixations. The SVM model successfully classified athletes with 90% accuracy (AUC = 0.93), while regression analysis confirmed that specific gaze metrics were robust factors associated with performance. These findings establish a strong quantitative link between efficient gaze strategies and expert motor performance in a constrained-mobility setting. This integrated eye-tracking and machine learning approach offers a powerful framework for objectively evaluating performance and developing data-driven, personalized training interventions in wheelchair curling and other precision-focused adaptive sports. |
Huan Zhang; Keyin Chen; Pengfei Xu; Xin Zhao Impact of emotional working memory training on threat-related attentional bias in social anxiety: Evidence from eye movements Journal Article In: Journal of Affective Disorders, vol. 393, pp. 1–11, 2026. @article{Zhang2026a,Threat-related attentional bias is a core characteristic of social anxiety and is closely associated with impaired attentional control. While traditional working memory training (WM-T) improves cognitive control and emotional regulation, it does not address emotional information processing. Emotional working memory training (EWM-T), which integrates negative emotional stimuli, may enhance control over negative information. This study hypothesizes that EWM-T can reduce threat-related attentional bias in socially anxious individuals and outperform WM-T in decreasing sustained attention to negative stimuli. Two experiments were conducted to investigate the effects of EWM-T. Experiment 1 employed a dot-probe task and eye-tracking to examine threat-related attentional bias in high and low social anxiety groups. Experiment 2 compared EWM-T with WM-T in a randomized controlled trial, in which participants with high social anxiety completed 20 training sessions over 30 days. Transfer effects were evaluated pre- and post-training using the Stroop task, number-switching task, digit-span task, and active memory task. In Experiment 1, individuals with high social anxiety exhibited greater attentional vigilance and faster detection of threat stimuli. In Experiment 2, both groups showed reductions in anxiety symptoms and practice-related improvements on several cognitive tasks, with no Group × Time interactions. Post-training eye-tracking data revealed a decrease in fixation bias toward threat stimuli, indicating improved attentional control. These findings suggest that EWM-T enhances attentional orientation and alleviates anxiety symptoms in social anxiety, with stronger transfer effects compared to WM-T. 
Incorporating emotional content into working memory training offers advantages for clinical interventions in social anxiety. |
Xuefei Yu; Atul Gopal; Ken-ichi Inoue; Martin O. Bohlen; Genevieve M. Kuczewski; Marc A. Sommer; Hendrikje Nienborg; Masahiko Takada; Okihide Hikosaka Retrograde optogenetics reveals sensorimotor convergence within a corticotectal pathway of non-human primates Journal Article In: Current Biology, vol. 36, no. 1, pp. 236–242, 2026. @article{Yu2026,Understanding how the cerebral cortex communicates with subcortical areas to drive behavior remains a central question in systems neuroscience. One key unresolved issue is whether prefrontal cortical outputs to motor-related subcortical regions carry predominantly motor commands [1] or mixed sensory-motor signals [2,3]. Retrograde optogenetics offers a powerful way to interrogate such projection-defined circuits [4–7], but its use in non-human primates has been limited [8–11]. Here, we applied retrograde optogenetics in awake macaques to directly test the functional organization of the corticotectal projection from the frontal eye field (FEF) to the superior colliculus (SC). We asked whether the FEF output signals to SC are motor-dominant or broadly sensory-motor. Optical activation of this pathway evoked robust, contralateral saccades and selectively modulated reaction times, demonstrating its causal role in saccade generation. Optogenetically tagging FEF neurons projecting to SC revealed a heterogeneous population of visual, visuomotor, and motor neurons. This diverse output converged predominantly onto motor-related neurons in the SC. These findings support a visuomotor convergence model, in which diverse FEF outputs drive motor-selective SC neurons with activity sufficient for saccade generation, and thus resolve long-standing questions over the composition of FEF outputs. Additionally, our results establish retrograde optogenetics as a tool for dissecting projection-defined circuits in primates and for precisely probing the neural pathways that link perception to action. |
Songqiao Xie; Chunyan He An empirical study on native Mandarin-speaking children's metonymy comprehension development Journal Article In: Journal of Child Language, vol. 53, pp. 80–107, 2026. @article{Xie2026,This study investigates Mandarin-speaking children's (ages 3–7) comprehension development of novel and conventional metonymy, combining online and offline methods. Both online and offline data show significantly better performances from the oldest group (6- to 7-year-olds) and a delayed acquisition of conventional metonymy compared with novel metonymy. However, part of the offline data shows no significant difference between adjacent age groups, while the eye-tracking data show a chronological development from age 3 to 7. Furthermore, in offline tasks, the three-year-old group features a high choice randomness and the four- to five-year-olds show the longest reaction time. Therefore, we argue that not only age but also metonymy type can influence metonymy acquisition, and that a lack of socio-cultural experience can be a source of acquisition difficulty for children under six. Methodologically speaking, we believe that online methods should not be considered superior to offline ones, as they investigate different aspects of implicit and explicit language comprehension. |
Wiktor Więcławski; Jakub Paszulewicz ERP evidence of attentional selection outside of effective oculomotor range Journal Article In: Experimental Brain Research, vol. 244, no. 1, pp. 1–9, 2026. @article{Wiȩclawski2026,The close link between visual attention and the oculomotor system is well documented. Within the selection-for-action framework, two perspectives exist. According to the Visual Attention Model (VAM), attention is seen as a prerequisite for successful movement execution, though it is considered a distinct cognitive and neural process. By contrast, the premotor theory of attention (PMTA) argues that the beneficial effects of attention are fully accounted for by the system's preparation for saccadic eye movements. From this standpoint, a central prediction emerges: attentional advantages should be confined to regions within the oculomotor range, since saccadic planning is not feasible outside those limits. A common way to examine this prediction is to present cues and targets in a hemifield beyond the oculomotor range, typically achieved by occluding one eye while abducting the other. Using this method, Smith et al. showed that in a visual search task, exogenous orienting is reduced in the temporal hemifield when the eye is abducted. They concluded that exogenous attentional orienting is constrained by the range of potential saccadic movements. In our study, we sought to replicate Smith et al.'s findings while extending the paradigm with EEG recordings—an approach not yet applied in this context. PMTA predicts that, under eye abduction, stimuli appearing in the temporal hemifield would yield diminished N2pc amplitudes. An ANOVA revealed no reduction of N2pc amplitude in the temporal hemifield. Taken together, our results support the growing body of evidence suggesting that visual attention is not strictly bound to the oculomotor range. |
Yang Wang; Lei Zhang; Jon D. Elhai; Christian Montag; Haibo Yang The interacting role of fear of missing out in attentional bias dynamics during problematic social media use Journal Article In: Addictive Behaviors, vol. 173, no. 393, pp. 1–8, 2026. @article{Wang2026,Problematic social media use (PSMU) is increasingly conceptualized as a behavioral addiction involving attentional bias toward social media icons. Although fear of missing out (FoMO) contributes to PSMU maintenance, its dynamic interactive role in attentional bias dynamics remains unclear. Guided by the I-PACE model and attentional bias theory, this study examined whether and when FoMO modulates gaze-based attentional bias toward social media icons in PSMU. 912 university students completed online screening for PSMU and FoMO; 55 meeting PSMU criteria (Mage = 19.60) were categorized into high- or low-FoMO groups. Participants performed a visual dot-probe task with social/non-social app icons while eye-tracking recorded gaze behavior across four 500 ms time windows. Results revealed FoMO significantly interacted with attentional bias in two critical phases: During early processing (0–500 ms), the PSMU/high-FoMO group exhibited attentional orienting deceleration to social media icons, whereas PSMU/low-FoMO showed attentional maintenance. In later processing (1000–1500 ms), PSMU/high-FoMO demonstrated attentional vigilance-maintenance, while PSMU/low-FoMO displayed avoidance. These findings indicate FoMO exerts a temporally dynamic interaction effect on attentional bias in PSMU—characterized by initial orienting delays followed by sustained attentional engagement with social media icons. This supports reconceptualizing FoMO as a core psychological mechanism that reinforces PSMU through biased attentional dynamics, advancing theoretical alignment with the I-PACE framework. |
Mingze Sun; Zhe Qu; Yajie Wang; Jingwen Xiang; Yulong Ding A well-trained nonsalient shape captures attention with delayed inhibition of return Journal Article In: Psychonomic Bulletin & Review, vol. 33, no. 1, pp. 1–16, 2026. @article{Sun2026,Numerous studies adopting Posner peripheral cueing paradigms have shown that exogenous attentional orientation (EAO) to a salient-but-irrelevant stimulus involves two opposing attentional processes: early attentional capture and late attentional suppression. Recent evidence has indicated that long-term perceptual learning can induce involuntary attentional capture by nonsalient shapes. However, it remains unclear whether a well-trained nonsalient shape could exhibit a biphasic pattern of EAO similar to that observed with physically salient stimuli, including both an early exogenous attentional shift and a late inhibition of return (IOR). Through both a perceptual learning task and a classic peripheral cueing task, the current study showed that a well-trained nonsalient shape cue could exhibit a biphasic pattern of EAO. When compared with an untrained shape, a well-trained nonsalient shape facilitated subsequent target detection at short cue-target onset asynchronies (CTOAs, 200–300 ms) and deteriorated target detection at a relatively long CTOA (800 ms), but not at 400- to 600-ms CTOAs. As a comparison, a detectability-matched onset cue or luminance contrast cue elicited a facilitatory effect at 200- to 300-ms CTOAs and an inhibitory effect starting from 400-ms CTOA. A control eye-tracking experiment suggested that the absence of IOR effects at 400- to 600-ms CTOAs in the trained cue task was not due to fewer eye movements during the task. Our results indicated that, as opposed to physically salient stimuli, a well-trained nonsalient shape induced delayed IOR after an evident exogenous shift of visual attention. 
The different patterns of EAO processes support the notion that prior experience (such as perceptual learning) plays a unique role in modulating our exogenous attention. Possible underlying mechanisms are proposed. |
Waxun Su; Xiao Lin; Weijian Liu; Tak Kwan Lam; Peng Li; Qiandong Wang The impact of depression and social anxiety on eye orientation and disengagement in individuals with and without depression Journal Article In: Journal of Psychiatric Research, vol. 192, pp. 325–331, 2026. @article{Su2026,In individuals with depression, comorbid social anxiety disorder is prevalent and often exacerbates symptoms and social dysfunction, such as more severe social avoidance and interpersonal impairment. Our study used the eye-tracking technique to explore how depression and social anxiety, individually and in combination, influence orientation toward and disengagement from the eyes in individuals diagnosed with depression or not. Participants were 49 healthy individuals and 64 individuals with depression, whose gaze was initially guided to the eye or mouth region immediately before the onset of the face. Latency to disengage from the guided regions and latency to orient to the eyes following the onset of the face were measured. The findings revealed that, firstly, individuals showed delayed disengagement from the eyes compared to the mouth regardless of depression diagnosis or social anxiety level. Secondly, in healthy individuals, increased social anxiety was related to quicker eye orientation. Thirdly, in individuals with depression, longer disengagement latencies from the eyes were associated with higher levels of depression or social anxiety, but only when one of the scores was high, not medium or low. These findings highlight the importance of understanding the distinct and combined impacts of depression and social anxiety on clinical and nonclinical individuals, informing more targeted clinical interventions and assessment strategies. |
Anjum Shaikh; Idah Mbithi; Maiko Okamura; Skylar Rice; Lily Rosan; Fabio Solorzano Quesada; Trafton Drew; Brennan Payne; Jeff Moher Distractor avoidance and early quitting in visual search Journal Article In: Attention, Perception & Psychophysics, vol. 88, no. 1, pp. 1–13, 2026. @article{Shaikh2026,In the current study, we examined the mechanisms underpinning how salient distractors produce early quitting in visual search. Participants completed a simple visual search task and indicated whether a target was present or absent. When salient distractors were present, fewer eye movements occurred before target-absent responses, and less of the display area was searched. Surprisingly, participants actively avoided directing eye movements towards the distractor. Still, salient distractors increased both search errors, which were committed when the target was never fixated, and decision errors, which were committed when the target was fixated but not detected. Our results demonstrate that salient distractors trigger early quitting by reducing the amount of information that observers extract from the search image and disrupting search guidance. |
Thomas Seacrist; Elizabeth A. Walshe; Shukai Cheng; Emily Brown; Charlotte Birnbaum; Victoria Kaufman; Flaura K. Winston; William C. Gaetz A novel paradigm for identifying eye-tracking metrics associated with cognitive control during driving through MEG neuroimaging Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 116, pp. 1–13, 2026. @article{Seacrist2026,Understanding the neurocognitive underpinnings of driving behavior in adolescents is critical to improving road safety. To address this, we established a novel paradigm linking magnetoencephalography (MEG)-recorded frequency-specific brain activity to simulated driving performance, identifying periods of increased cognitive control. However, this initial paradigm did not incorporate eye-tracking – a potentially scalable proxy for cognitive control that could be leveraged by in-vehicle driver monitoring systems. This proof-of-concept study expands our paradigm by integrating eye-tracking to identify scanning behavior metrics associated with periods of increased cognitive control validated by MEG. Typically developing adolescents (n = 11; mean age = 15.1 ± 1.5 yrs) completed three driving tasks of varying cognitive demand, and MEG frequency specific analysis confirmed periods of high (Hi) and low (Lo) cognitive control via the established biomarker of frontal midline theta (FMT). Fixation count, fixation duration, horizontal/vertical mean gaze position, saccade amplitude, and horizontal/vertical spread of search were compared between Hi vs. Lo periods of cognitive control. Task-specific differences in fixation count (p < 0.05), mean gaze position (p < 0.01), saccade amplitude (p < 0.05), and spread of search (p < 0.01) were observed between Hi compared to Lo cognitive control periods. 
These differences corresponded to expected task-specific changes in scanning behavior that would accompany cognitive control over behavior, suggesting that eye-tracking may serve as a proxy for underlying neurocognitive processes. This integrated approach demonstrates methodological rigor and offers a promising framework for further research and for informing the development of in-vehicle driver monitoring systems that detect cognitive deficits in real time, with implications for enhancing teen driver safety. |
Mohammadhossein Salari; Diederick C. Niehorster; Marcus Nyström; Roman Bednarik The effect of pupil size on data quality in head-mounted eye trackers Journal Article In: Behavior Research Methods, vol. 58, no. 1, pp. 1–16, 2026. @article{Salari2026,Changes in pupil size can lead to apparent gaze shifts in data recorded with video-based eye trackers in the absence of physical eye rotation. This is known as the pupil-size artifact (PSA). While the PSA is widely reported in desktop eye trackers, it is unknown whether and to what extent it occurs in head-mounted eye trackers. In this paper, we examined the effects of pupil size variations on eye-tracking data quality in four head-mounted eye trackers: the Pupil Core, the Pupil Neon, the SMI ETG 2w, and the Tobii Pro Glasses 2, in addition to a widely used desktop eye tracker, the SR Research EyeLink 1000 Plus. Participants viewed a central target on a monitor while we systematically varied the screen brightness to induce controlled pupil size changes. All head-mounted eye trackers exhibited the PSA, with apparent gaze shifts ranging from 0.94° for the Pupil Neon to 3.46° for the Pupil Core. Except for the Pupil Neon, all eye trackers exhibited a significant change in accuracy due to pupil size variations. Precision measures showed device-specific effects of pupil size changes, with some eye trackers performing better in the bright condition and others in the dark condition. These findings demonstrated that, just like desktop eye trackers, head-mounted video-based eye trackers exhibit the PSA. |
Estelle Raffin; Roberto F. Salamanca-Giron; Krystel R. Huxlin; Olivier Reynaud; Loan Mattera; Roberto Martuzzi; Friedhelm C. Hummel Causal disconnectomics of motion perception networks: Insights from transcranial magnetic stimulation-induced BOLD responses Journal Article In: The Journal of Physiology, vol. 604, pp. 503–526, 2026. @article{Raffin2026,Understanding how focal perturbations trigger large-scale network reorganization is essential for uncovering the neural mechanisms that support perception and behaviour. Here we used a transcranial magnetic stimulation (TMS) perturbational approach by applying brief 10 Hz TMS to early visual areas (EVAs) or the medio-temporal (MT) area in healthy participants while recording concurrent functional magnetic resonance imaging (fMRI). TMS delivered during the early stages of motion processing specifically impaired direction discrimination at both sites, whereas disruption of the later processing phase impaired performance only for the MT condition. Despite a similar local increase in BOLD activity induced by EVA and MT stimulation, the broader network responses diverged significantly. Perturbation of EVA elicited a more robust and efficient pattern of functional reorganization, manifesting as more constrained BOLD changes, consistent with greater resilience to focal disruption. In contrast, behavioural impairments induced by MT stimulation were accompanied by a disorganized and less-efficient network configuration, characterized by reduced small-world properties and longer path lengths. The decrease in performance induced by MT stimulation scaled with lower clustering coefficients, implying a more random or decentralized network structure. These findings demonstrate that TMS-fMRI coupling provides a powerful framework for causally mapping the relationships between local neural perturbations, large-scale network dynamics and behavioural performance. |
Zhongling Pi; Xuemei Huang; Richard E. Mayer; Xin Zhao; Xiying Li Role of the instructor's social cues in instructional videos Journal Article In: Education Sciences, vol. 16, no. 1, pp. 1–15, 2026. @article{Pi2026,Little attention has been paid to whether an instructor's hand-pointing gestures or use of a mouse-guided arrow can mitigate the attentional loss caused by an instructor's happy facial expressions or can enhance the social benefits of these expressions in instructional videos. The goal of the present study is to determine whether social cues in an instructional video affect learning processes and outcomes. The participants were 57 female students from a university. We employed a 2 × 2 mixed experimental design. The instructor's facial expression was a within-subject variable, while the type of pointing cue was a between-subject variable. Students who had the smiling instructor rather than the bored instructor gave higher ratings of the perceived positive emotion of the instructor, felt more positive emotion, and had more motivation to learn. Eye-tracking technology showed that students who learned with the smiling instructor spent more time looking at the content on the slides than those who learned with a bored instructor. Students who learned with the smiling instructor scored higher on a learning outcome post-test than those who learned with the bored instructor. Among female Chinese students, this pattern is consistent with the five steps posited by the positivity principle, which concludes that people learn better from instructors who exhibit positive social cues. Pointing with a human hand was not superior to pointing with an arrow, suggesting that in this case hand-pointing was not a strong social cue and did not moderate the effects of facial expression. Given the exclusively female sample, future research should examine whether these effects generalize across genders. |
Effie J. Pereira; Jelena Ristic Beauty in the eye of the beholder: Attention to attractive faces dissociates across covert and overt measures Journal Article In: Attention, Perception, & Psychophysics, vol. 88, no. 1, pp. 1–17, 2026. @article{Pereira2026,Attractive faces attract attention. Here, we examined how facial attractiveness influenced covert and overt social attention. Participants discriminated targets occurring after one of 32 different face–object cue pairs, which varied in the degree of attractiveness. Experiment 1 measured covert social attention in manual responses while maintaining central fixation. No evidence of attentional preference for faces was found. Experiment 2 measured overt social attention in eye movements while maintaining natural viewing conditions. A reliable oculomotor preference for attractive faces was found. Thus, facial attractiveness affects covert and overt social attention differently, reflecting the diverging ways in which faces affect attention with respect to social functioning in daily life. The datasets for all experiments can be found on the Open Science Framework (https://osf.io/u54tp/). |
Mario Michiels; David Luque; Ignacio Obeso Implicit and explicit reversal of trained oculomotor movements Journal Article In: Neurobiology of Learning and Memory, vol. 223, pp. 1–7, 2026. @article{Michiels2026,Habitual behavior is thought to emerge with extended training and reduced sensitivity to outcome devaluation. However, little is known about how habit-like oculomotor responses adapt when devaluation is implicit or embedded within a previously learned context. We examined this in a novel oculomotor learning task involving visual shape-reward associations with both standard and overtrained stimuli. Twenty-six participants completed a shape-color learning task while their eye movements were recorded using an eye-tracker system (1000 Hz). The task involved 11 blocks, including training, intra-block reversal (implicit stimulus-reward changes), and classical devaluation phases (explicitly instructed reward changes). Statistical analyses were performed using linear mixed-effects models on accuracy and response time (RT) measures. As expected, higher accuracy and faster responses for overtrained versus standard-trained stimuli were observed during training, confirming stronger learning. In the classical devaluation phase, overtrained stimuli elicited significantly more errors compared to standard-trained stimuli, relative to the performance in the training phase. This indicates stronger resistance to goal-directed updating. The effect was more pronounced during intra-block reversal of associations, where reward contingencies changed without warning. While RTs were not affected by classical devaluation, intra-block reversal significantly increased RTs for overtrained stimuli, relative to RTs in the training phase. This suggests a higher cognitive cost for overriding well-learned habitual responses when changes are unpredictable. 
These findings provide new evidence for the behavioral rigidity associated with overtraining of oculomotor behavior and suggest that unexpected outcome changes impose an additional switch cost on habitual oculomotor behavior. |
Sara LoTemplio; Jack Silcox; David L. Strayer; Brennan R. Payne Single‐trial relationships between the error‐related negativity, Pe, error‐related pupillary dilation response, and post‐error behavior Journal Article In: Psychophysiology, vol. 63, no. 1, 2026. @article{LoTemplio2026,The amplitude of the error‐related negativity (ERN) is known to be correlated with attention to task and general cognitive control abilities. However, previous research has struggled to consistently link ERN amplitude with behavioral accuracy or reaction time in the task from which the ERN is being measured. This lack of relationship could be due to many factors that are difficult to control for, so explorations of other converging measures to understand error‐processing and subsequent behavior adjustment are warranted. The current study examines how two other physiological markers of error‐processing—the phasic pupillary dilation response (PDR) and the positivity following an error (Pe)—relate to post‐error behavior. Additionally, we examine relationships between the three physiological indices of error‐processing. In the study, EEG and pupillometry were simultaneously recorded while participants completed 24 blocks (50 trials each) of an Eriksen flanker task. For post‐error accuracy, we found that on a single‐trial level, the amplitude of all three physiological error‐processing indices for error trials predicted post‐error accuracy. At the subject level, only the PDR predicted average post‐error accuracy. For post‐error slowing, at the single‐trial level, only the Pe predicted post‐error slowing, whereas only the ERN predicted post‐error slowing at the subject level. We also found that both the ERN and Pe correlated with PDR amplitude. 
This is consistent with our hypothesis that the Pe and PDR may share underlying neural mechanisms, but qualified by the fact that the ERN, which is not hypothesized to have shared neural mechanisms, also predicted unique variance in pupillary amplitude. Collectively, these results suggest that the PDR and Pe might represent promising indicators of post‐error behavior adjustment and highlight the need to examine relationships at multiple levels of analysis. |
Raymond M. Klein; Şimal Dölek; John Christie Does the output form of inhibition of return operate at or after the bottleneck? Journal Article In: Acta Psychologica, vol. 262, pp. 1–8, 2026. @article{Klein2026,Inhibition of return (IOR) refers to the longer reaction times (RTs) to targets presented at previously cued, fixated or attended locations. It has been suggested that there are two distinct forms of IOR. The input form, generated when the reflexive oculomotor system is suppressed, affects sensory/perceptual processing. The output form, generated when the reflexive oculomotor system is not suppressed, biases responding. It has been demonstrated, using the locus of slack logic associated with the psychological refractory period (PRP; Pashler, 1998), that the input form of IOR operates on a pre-bottleneck stage of processing (Kavyani et al., 2017). Using the same logic, Klein et al. (2020) demonstrated that the output form of IOR operates at or after the bottleneck. Building on the methods of Klein et al., the present study used the PRP paradigm to determine whether the output form of IOR operates at or after the bottleneck. The output form of IOR was generated by an initial saccade from a peripheral location to a central fixation point. Task 1 consisted of a manual response indicating the location (right/left) of a subsequent visual stimulus. Task 2 required participants to discriminate the frequency (high/low) of an auditory stimulus and make a key-press response with their other hand. The targets (T1 and T2) for the two tasks were presented in close succession with 200, 400 and 800 ms target-target onset asynchronies (TTOAs). Responses to T1 were delayed by IOR and responses to T2 were substantially delayed when the TTOA was brief. Statistical analysis of the amount of carry-over of the IOR effect experienced by Task 1 onto the RTs for Task 2 strongly suggests that the output form of IOR operates after the bottleneck. 
Nevertheless, aspects of the results could be interpreted to support a weaker influence of IOR operating also at the bottleneck stage of processing. |
Hyunwoo Kim; Kitaek Kim; Haerim Hwang Effects of goals and strategies on predictive processing: A visual world eye-tracking study on honorific agreement in Korean Journal Article In: Linguistics, pp. 1–35, 2026. @article{Kim2026,There is ongoing debate about whether prediction is driven solely by bottom-up associative links or is modulated by top-down goals and strategies. The current study attempts to address this issue by investigating the role of top-down factors in Korean speakers' predictive processing of honorific agreement. Two visual-world eye-tracking experiments were conducted, analyzing participants' anticipatory eye movements while manipulating two top-down factors. In Experiment 1, we assigned participants to two groups with different instructions, asking one group to listen to sentences and answer referent-selection questions, and the other group to actively predict the upcoming referent. Experiment 2 manipulated the validity of predictive cues by interspersing experimental items with fillers containing consistent or inconsistent continuations. Results from Experiment 1 showed that participants instructed to actively anticipate the referent used honorific information more quickly to make predictions than the comprehension-only group. In Experiment 2, the group exposed to predictive linguistic stimuli showed an earlier and stronger prediction effect compared to the group exposed to stimuli with no prediction validity. These results suggest that comprehenders engage in different degrees of prediction according to the current demands of task goals and strategies. We discuss these findings in light of recent theories of predictive language processing. |
Xin Huang; Bikalpa Ghimire; Anjani Sreeprada Chakrala; Steven Wiesner Neural coding of multiple motion speeds in visual cortical area MT Journal Article In: eLife, vol. 13, pp. 1–43, 2026. @article{Huang2026,Motion speed is a salient cue for visual segmentation, yet how the visual system represents and differentiates multiple speeds remains unclear. Here, we investigated the encoding and decoding of multiple speeds. We first characterized the perceptual capacity of human and macaque subjects to segment overlapping stimuli moving at different speeds. We then determined how neurons in area MT of macaque monkeys represent multiple speeds. We found that the responses of MT neurons to two speeds showed a robust bias toward the faster speed component. This faster-speed bias occurred when both speeds were slow (≤20°/s) and diminished as stimulus speed increased. Our findings can be explained by a modified divisive normalization model, in which the weights for the speed components are proportional to the responses of a population of neurons (the weighting pool) with a broad range of speed preferences, elicited by the individual speeds. Regarding decoding, a classifier could distinguish MT responses to two speeds from those to a corresponding log-mean speed. We further found that it was possible to decode two speeds from the MT population response, supporting the theoretical framework of coding multiplicity in neuronal populations. The decoded speeds can account for perceptual performance in segmenting two speeds with a large (4x) but not a small (2x) separation. Our findings help define the neural coding rule of multiple speeds. The faster-speed bias in MT could benefit important behavioral tasks, such as figure-ground segregation, as figural objects tend to move faster than the background in the natural environment. |
Zachary Hamblin-Frohman; Jay Pratt Rapid development of inhibitory effects in response to novel features: It's mostly target-feature enhancement Journal Article In: Psychonomic Bulletin & Review, vol. 33, no. 7, pp. 1–10, 2026. @article{HamblinFrohman2026,In some visual search scenarios, the presence of a singleton distractor leads to faster search performance. This has been termed the inhibition effect and is believed to represent avoidance of the singleton distractor. Research has identified two contributing components: a bias towards target features (target-feature enhancement) and a bias away from distractor features (distractor-feature suppression). The current study examines how each of these effects independently develops in response to novel stimuli. In short blocks, participants completed a search for a pre-defined target shape. In each block, the colours of the target and the distractor were randomized so that the initial and subsequent attentional adaptations to these features could be assessed (via eye-tracking). These mini-blocks reveal substantial information about the development of the inhibition effect. Remarkably, we observe the classic inhibition effect (shorter RTs on distractor-present trials) as soon as the second trial of each block. Furthermore, the effect emerged even on the first presentation of the distractor feature. Gaze analysis concurs with this: eyes avoided the distractor when the target feature was known but the distractor feature unknown. This shows compelling evidence for guidance from target-feature enhancement. However, some evidence for distractor-feature suppression is also observed: further oculomotor suppression of the distractor is seen after its initial presentation. 
Together, the current results show that the inhibition effect develops rapidly in visual search displays, and that while a large portion of the effect can be accounted for by target-enhancement, distractor-suppression may still have a role in influencing attentional allocations. |
Carie Guan; Naomi Geller; Maya Mammon; Naiqi G. Xiao Infants recognized other-race faces when learning them with incidental emotional sounds Journal Article In: Developmental Psychobiology, vol. 68, no. 1, pp. 1–13, 2026. @article{Guan2026,Infant face recognition shows plasticity, with recent evidence indicating enhancement by the presence of emotional facial expressions. The mechanisms and domain-generality of this effect remain largely unknown. This study tested whether auditory emotional cues (vocalizations) facilitated infants' recognition of other-race faces, a perceptual challenge during the first year of life. Infants (N = 89) were presented with emotionally neutral faces paired with happy, sad, or neutral vocal sounds in a within-subjects design. Experiment 1 assessed recognition using identical face images between the familiarization and test phases, while Experiment 2 examined face recognition across viewpoint changes. Across both experiments, infants exhibited successful face recognition only when the faces were learned with emotional sounds (happy and sad). This facilitative effect remained stable across the tested age range and did not differ between happy and sad vocalizations. Infants' eye movement data revealed comparable face-looking patterns across conditions, suggesting that the facilitation was not driven by changes in visual attention. Thus, incidental, cross-modal emotional signals significantly enhance infant face recognition. This underscores the early integrative nature of emotion processing and its catalytic role in cognitive development. |
Matthias Grabenhorst; David Poeppel; Georgios Michalareas The anticipation of imminent events is time-scale invariant Journal Article In: PNAS, vol. 123, no. 2, pp. 1–11, 2026. @article{Grabenhorst2026,Humans predict the timing of imminent events to generate fast and precise actions, decisions, and other behaviors. Such temporal anticipation is critical over wide timescales, and especially salient over the range from hundreds of milliseconds to a few seconds. Despite advances in our understanding of basic timing behavior and its underlying neural mechanisms, it remains an open question whether anticipation is stable across these short time scales. Recent work shows that the brain models the probability density function (PDF) of events across time, suggesting a canonical mechanism for temporal anticipation. Here, we investigate whether this computation holds when the event distribution covers different time spans. We show that, irrespective of the time span, anticipation, measured as reaction time, scales with the event distribution. This demonstrates that the key computation—the estimation of event probability density—is invariant across temporal scales. We further show that the precision of anticipation is also scale invariant which contradicts Weber's law. The results are established in vision and audition, suggesting that the core computations in temporal anticipation are independent of sensory modality. Perceptual systems exploit probability estimation over time independently of temporal scale to anticipate imminent events. |
Zhushi Fu; Xiaotong Ding; Yutao Lu; Cai Xing Physiological evidence supporting the emotional motivation account of the ending effect: Pupil diameters increase toward the end Journal Article In: Journal of Gambling Studies, pp. 1–15, 2026. @article{Fu2026,The phenomenon of increased risk-taking in the last round of a set of risky decision-making tasks is called the ending effect. Recent empirical studies proposed an emotional motivation account to explain the ending effect: the pursuit of an emotionally satisfying ending leads to increased risk-taking. However, previous studies have mostly examined the ending effect at the behavioral level; there is as yet no physiological evidence bearing on the emotional motivation account. To fill this gap, the current study examined the emotional motivation account at the physiological level by recording pupil diameters, which reflect the activation of emotional motivation. Participants were randomly assigned to complete eight or ten rounds of risky decision tasks while having their eyes tracked. The results showed a significant interaction between round and group on pupil diameter. Specifically, there was no significant difference between the first six rounds and the 8th round in the experimental group. For the control group, the pupil diameter of the first six rounds was significantly larger than in the 8th round. The perceived ending may have sustained emotional arousal. This finding provides qualified physiological support for the emotional motivation account of the ending effect. |
Gabrielle F. Freitag; Shannon Shaughnessy; Jennifer M. Meigs; Parmis Khosravi; Julia O. Linke; Spencer C. Evans; Ellen Leibenluft; Melissa A. Brotman; Daniel S. Pine; Katharina Kircanski; Elise M. Cardinale An investigation of inhibitory control as a mechanism differentiating tonic and phasic irritability Journal Article In: Child Psychiatry & Human Development, pp. 1–11, 2026. @article{Freitag2026,Phasic and tonic irritability are highly correlated clinical constructs yet differentially associated with developmental trajectories and treatment response. However, limited research has identified their shared and unique underlying behavioral mechanisms. In a sample of youths enriched for irritability (N = 141, age range 7–18, age M[SD] = 12.60[2.54], 48.23% female), we investigated whether inhibitory control is differentially associated with phasic versus tonic irritability. Replicating prior work, tonic and phasic irritability were estimated via independent confirmatory factor analyses (CFAs) using items and/or subscales from multi-informant questionnaires. A latent factor of inhibitory control was extracted from four behavioral tasks. Initial multiple linear regression analysis found that phasic, not tonic, irritability was significantly associated with impaired inhibitory control. However, results were no longer significant after accounting for shared associations with age. In addition, when adding commonly co-occurring symptoms such as attention-deficit/hyperactivity disorder (ADHD) symptoms and oppositionality, age and ADHD were significant predictors of inhibitory control, but phasic irritability was not. Results suggest that inhibitory control alone may not be a salient mechanism for disambiguating phasic and tonic irritability. Future work leveraging longitudinal methods and consideration of other potential contextual factors is needed. |
Olympia Colizoli; Tessa M. Leeuwen; Danaja Rutar; Harold Bekkering Pupil dilation offers a time-window on prediction error Journal Article In: eLife, vol. 14, pp. 1–44, 2026. @article{Colizoli2026,Task-evoked pupil dilation is notably linked to unexpected events. Building on Zénon's (2019) information-theory framework, we investigated whether the pupil's response to feedback on decision outcomes during associative learning reflects a prediction error signal. Operationally, we defined prediction errors as an interaction between stimulus-pair frequency and accuracy. We then tested if these signals correlated with information gain, formally defined as the Kullback-Leibler (KL) divergence between posterior and prior belief distributions of an ideal observer. We reasoned that information gain should be proportional to the precision-weighted prediction error signals potentially arising from neuromodulatory arousal networks. We analyzed two data sets in which participants performed perceptual decision-making tasks while pupil dilation was recorded. Our findings consistently showed that a significant proportion of variability in the post-feedback pupil response was explained by information gain shortly after feedback presentation. For the first time, we present evidence that whether the pupil dilates or constricts along with information gain is context dependent. This study offers empirical evidence that the pupil's response provides valuable insights into the process of model updating during learning, highlighting its utility as a physiological indicator of internal belief states. |
Yue Cheng; Weizhen Chen In: Buildings, vol. 16, no. 1, pp. 1–23, 2026. @article{Cheng2026,Sacred heritage landscapes face significant challenges in engaging Generation Z tourists. To understand their visual processing and emotional responses, this study grounded in Cognitive Appraisal Theory (CAT), employed a mixed-methods approach with Chinese youth. Study 1 (N = 35) uses eye-tracking to examine the visual attention of Gen Z to different sacred heritage types, revealing that natural sacred sites yield the highest First Fixation Duration (FFD) and Average Fixation Duration (AFD), alongside stronger subjective preferences—highlighting the role of biophilia and perceptual fluency. Study 2 constructs a moderated mediation model with a questionnaire (N = 300), identifying a “Novelty → Awe → Place Attachment” pathway and the moderating role of mindfulness. The research identifies the specific visual processing patterns of Gen Z and provides a psychological model for place attachment, offering empirical insights for designing intergenerationally inclusive heritage landscapes. |
Jui-Tai Chen; Yi-Hsuan Chang; Cesar Barquero; Chin-An Wang Pupil dynamics reveal preparatory processes in the generation of pro-saccades and anti-saccades in open skill sports athletes Journal Article In: Biology of Sport, vol. 43, pp. 77–94, 2026. @article{Chen2026,This study investigated pupil dynamics to establish a physiological index of mental processes associated with executive functioning, enabling objective evaluation of cognitive load during training to improve understanding of cognitive control in sport-specific contexts. Using video-based eye-tracking, we examined pupil and saccade responses in athletes (N = 40) and non-athletes (N = 40) performing an interleaved pro-saccade and anti-saccade task. In this task, participants were instructed prior to target appearance to either make a reflexive saccade toward the target (pro-saccade) or inhibit that response and generate a voluntary saccade in the opposite direction (anti-saccade). Larger pupil dilation prior to target onset was observed during anti-saccade compared to pro-saccade preparation (p < 0.001, ηp² = 0.153). Athletes showed reduced pupil dilation compared to non-athletes (p < 0.05, ηp² = 0.049). In addition, trials with larger pupil dilation and smaller tonic pupil sizes were associated with faster saccade reaction times. Pupil dilation also positively correlated with saccade peak velocities but showed no association with saccade endpoint accuracy. These findings suggest that athletes may engage in more efficient motor preparation, as reflected by reduced pupil dilation. Moreover, phasic pupil dilation, indexing cognitive load, and tonic pupil size, associated with arousal level, both contributed to the control of saccade dynamics during goal-directed movements. Together, these results highlight the utility of pupil size as an objective and informative index for assessing both cognitive and arousal functions in sports science research. |
Francesca Carbone; Abigail Pitt; Angela Nyhout; Stacie Friend; Murray Smith; Heather J. Ferguson Art opening minds: An experimental study on the effects of temporal and perspectival complexity in film on open-mindedness Journal Article In: Quarterly Journal of Experimental Psychology, vol. 79, no. 1, pp. 102–123, 2026. @article{Carbone2026,Aesthetic Cognitivism posits that artworks have the potential to enhance open-mindedness. However, this claim has not yet been explored empirically. Here, we present two experiments that investigate the extent to which two formal features of the film – temporal and perspectival complexity – can ‘open our minds'. In Experiment 1, we manipulated the temporal complexity of the film. Participants (Ntotal = 100) watched a film (Memento) either in its original non-chronological order or the same film in chronological order. In Experiment 2, we manipulated perspectival complexity in film. Participants (Ntotal = 100) watched an excerpt from a film (Jackie Brown) that either included the perspectives of multiple characters on an event or a single character's perspective on the same event. Film conditions in both experiments were further compared with a control condition in which participants did not watch a film (N = 50). Participants' open-mindedness was assessed in both experiments through four empirical indicators (creativity, imaginability, cognitive flexibility, openness to new evidence) and in Experiment 2, participants' eye movements, heart rate and electrodermal activity were measured while watching the film. Results showed that watching films, regardless of their temporal or perspectival complexity, modulated only one facet of open-mindedness – cognitive flexibility – when compared to the no-film control condition, providing only limited support for the aesthetic cognitivist claim that artistic films can ‘open our minds'. 
Real-time measures in Experiment 2 revealed that pupil size and number of fixations were modulated by perspectival complexity: both were smaller when watching a film from multiple perspectives compared to a single perspective. Possible explanations for this difference are examined in relation to the viewers' cognitive processes involved in understanding and interpreting film content. |
Huarui Cao; Lin Mu; Xuejiao Mao; Tang Yao Effect of tourism architecture shape and self-construal Journal Article In: Annals of Tourism Research, vol. 116, pp. 1–17, 2026. @article{Cao2026,The issue of whether tourists with varying characteristics exhibit distinct preferences for diverse geometric shapes of architecture remains underexplored in tourism literature. To address this gap, we drew on aesthetic distance theory and conducted eye-tracking and scenario-based experiments to examine the effect of tourism architecture shape in alignment with tourist self-construal. Our findings indicated that independent self-construal tourists favor circular-shaped architecture, while interdependent self-construal tourists prefer angular-shaped architecture. Additionally, the results confirmed the mediating role of novelty and highlighted architectural authenticity as a moderator in this effect. These insights enhance our understanding of aesthetic preferences for tourism architecture among tourists with different self-construals and provide practical recommendations for the tourism industry to tailor specific architectural shapes to increase tourists' travel intentions. |
Mark W. Becker; Andrew Rodriguez; Derrek T. Montalvo; Chad Peltier Reducing the low-prevalence effect with probe trials Journal Article In: Cognitive Research: Principles and Implications, vol. 11, no. 1, pp. 1–19, 2026. @article{Becker2026,As targets become rare in visual search tasks, the likelihood of missing them increases—a phenomenon known as the low-prevalence effect (LPE). This has important implications for real-world searches, but reducing the LPE has proven challenging. In Experiment 1, we used a low-prevalence T-among-Ls task and found that distributing “probe” trials—trials with known targets and post-response feedback—reduced the LPE. In Experiment 2, participants searched for two low-prevalence targets (T and O among Ls and Qs), and we varied how often each appeared in probe trials. The probe benefit scaled with the frequency of the matching target, suggesting limited generalizability to non-probed targets. Experiment 3 used eye tracking to examine whether probes affected quitting thresholds, decision criteria, or guidance. Results showed that probes biased top-down guidance toward features of frequently probed targets, without affecting the number of items inspected or the decision criterion. In Experiment 4, we tested whether feedback was necessary for the probe benefit. Findings suggest that probes improve rare-target search by altering perceived prevalence, not through feedback alone. Overall, probes may reduce the LPE by increasing perceived prevalence and thereby increasing search guidance, but only when probe targets closely match actual search targets. |
2025 |
Luan Zimmermann Bortoluzzi; Estêvão Carlos-Lima; Gabriela Mueller de Melo; Melissa Hongjin Song Zhu; Gustavo Rohenkohl Presaccadic attentional shifts are not modulated by saccade amplitude Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–10, 2025. @article{ZimmermannBortoluzzi2025,Humans constantly explore the visual environment through saccades, bringing relevant visual stimuli to the center of the gaze. Before the eyes begin to move, visual attention is directed to the intended saccade target. As a consequence of this presaccadic shift of attention (PSA), visual perception is enhanced at the future gaze position. PSA has been investigated in a variety of saccade amplitudes, from microsaccades to locations that exceed the oculomotor range. Interestingly, recent studies have shown that PSA effects on visual perception are not equally distributed around the visual field. However, it remains unknown whether the magnitude of presaccadic perceptual enhancement varies with the amplitude of the saccades. Here, we measured contrast sensitivity thresholds during saccade planning in a two-alternative forced-choice (2AFC) discrimination task in human observers. Filtered pink noise (1/f) patches, presented at four eccentricities and scaled in size according to the cortical magnification factor, were used as visual targets. This method was adopted to mitigate well-known eccentricity effects on perception, thereby enabling us to explore the effects associated with saccade amplitude. First, our results show that saccade preparation enhanced contrast sensitivity in all tested eccentricities. Importantly, we found that this presaccadic perceptual enhancement was not modulated by the amplitude of the saccades. These findings suggest that presaccadic attention operates consistently across different saccade amplitudes, enhancing visual processing at intended gaze positions regardless of saccade size. |
Cong Zheng; Qifan Wang; He Cui Continuous sensorimotor transformation enhances robustness of neural dynamics to perturbation in macaque motor cortex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–17, 2025. @article{Zheng2025a,Neural activity in the motor cortex evolves dynamically to prepare and generate movement. Here, we investigate how motor cortical dynamics adapt to dynamic environments and whether these adaptations influence robustness against disruptions. We apply intracortical microstimulation (ICMS) in the motor cortex of monkeys performing delayed center-out reaches to either a static target (static) or a rotating target (moving) that required interception. While ICMS prolongs reaction times (RTs) in the static condition, it does not increase RTs in the moving condition, correlating with faster recovery of neural population activity post-perturbation. Neural dynamics suggests that the moving condition involves ongoing sensorimotor transformations during the delay period, whereas motor planning in the static condition is completed shortly. A neural network model shows that continuous feedback input rapidly corrects perturbation-induced errors in the moving condition. We conclude that continuous sensorimotor transformations enhance the motor cortex's resilience to perturbations, facilitating timely movement execution. |
Tianze Zhang; Yue Xi The influences of image entropy and text direction on consumer attention: Insights from eye-tracking studies Journal Article In: Psychology & Marketing, vol. 42, no. 12, pp. 3266–3287, 2025. @article{Zhang2025m,As visual content is increasingly prioritized by social media platforms, the effective interplay between image and text is critical for capturing consumer attention. This research aims to investigate the effects of two novel visual cues—image entropy (disorder) and text direction—and presents the concept of an image–text fit effect. Through three eye-tracking experiments, we demonstrate that high-entropy (vs. low-entropy) images and vertical (vs. horizontal) text direction significantly increase consumer attention. We identify a “feeling right” sense as the underlying psychological mechanism, which can be explained via time perception association. Furthermore, we examine the moderating effect of emoji intensity in social media communications on capturing consumer attention. These findings increase the theoretical understanding of visual marketing and provide actionable strategies for practitioners. |
Hao Zhang; Yiqing Hu; Yang Li; Shuangyu Zhang; Xiao Li Li; Chenguang Zhao Simultaneous dataset of brain, eye and hand during visuomotor tasks Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–15, 2025. @article{Zhang2025f,Visuomotor integration is a complex skill set encompassing many fundamental abilities, such as visual search, attention monitoring, and motor control. To explore the dynamic interplay between visual inputs and motor outputs, it is necessary to simultaneously record multiple brain activities with high temporal and spatial resolution, as well as to record implicit and explicit behaviors. However, there is a lack of public datasets that provide simultaneous multiple modalities during a visual-motor task. Recording brain activity simultaneously with functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) facilitates more precise capture of the complex brain mechanisms underlying visuomotor integration. Additionally, by employing a combined eye movement and manual response, it is possible to fully evaluate the effects of visuomotor outputs from implicit and explicit dimensions. We recorded whole-brain EEG (34 electrodes) and fNIRS (44 channels) covering the frontal and parietal cortex along with eye movements, behavior sampling, and operant behavior. The dataset underwent rigorous synchronization and quality control to highlight the effectiveness of our experiments and to demonstrate the high quality of our multimodal data framework. |
Zhao Zeng; Ce Zhang; Yue Xu; Hua He; Yong Gu Distinct neural population code and causal roles of primate caudate nucleus in multimodal decision-making Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–16, 2025. @article{Zeng2025b,Perceptual decision-making involves distributed networks spanning both association cortices and subcortical areas. A fundamental question is whether such a network is highly redundant, or each node is distinct with unique function. Using a visuo-vestibular decision-making task, here we show the subcortical caudate nucleus (CN) of male primates displays distinct population code compared to association cortices along the modality dimension. Specifically, in a low-dimensional state subspace, neural trajectory in the frontal and posterior-parietal association cortical activity during multimodal-stimulus condition evolves along the visual trajectory, whereas along the vestibular trajectory in the CN. We then show CN population activity is consistent with the animal's behavioral strategy employed within a generalized drift-diffusion framework. Importantly, causal-link experiments, including application of GABAa-receptor agonist, D1-receptor antagonist, and electrical microstimulation, further confirmed CN's critical contributions to perceptual behavior. Our results confirm CN's vital importance to decision making in complex environments with multimodal information. |
Zinong Yang; Stephanie D. Williams; Ewa Beldzik; Stephanie Anakwe; Emilia Schimmelpfennig; Laura D. Lewis Attentional failures after sleep deprivation are locked to joint neurovascular, pupil and cerebrospinal fluid flow dynamics Journal Article In: Nature Neuroscience, pp. 2526–2536, 2025. @article{Yang2025e,Sleep deprivation rapidly disrupts cognitive function and in the long term contributes to neurological disease. Why sleep deprivation has such profound effects on cognition is not well understood. Here we use simultaneous fast fMRI–EEG to test how sleep deprivation modulates cognitive, neural and fluid dynamics in the human brain. We demonstrate that attentional failures during wakefulness after sleep deprivation are tightly orchestrated in a series of brain–body changes, including neuronal shifts, pupil constriction and cerebrospinal fluid (CSF) flow pulsations, pointing to a coupled system of fluid dynamics and neuromodulatory state. CSF flow and hemodynamics are coupled to attentional function within the awake state, with CSF pulsations following attentional impairment. The timing of these dynamics is consistent with a vascular mechanism regulated by neuromodulatory state. The attentional costs of sleep deprivation may thus reflect an irrepressible need for rest periods driven by a central neuromodulatory system that regulates both neuronal and fluid physiology. |
Yu Fang Yang; Matthias Gamer Facial features associated with fear and happiness attract gaze during brief exposure without enhancing emotion recognition Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–15, 2025. @article{Yang2025d,Facial features transmit emotions but their effect on visual orienting and explicit emotion recognition is debated. Here we examined whether fixating on diagnostic features of emotional expressions—such as the eye region for fear and the mouth for happiness—affects saccadic targeting and improves recognition accuracy. Across two pre-registered experiments, participants viewed fearful, happy, and neutral faces for short intervals (50 or 150 ms) while the initial fixation location was manipulated. Although such brief stimulation does not allow for visual exploration, the faces still elicited reflexive saccades that occurred after stimulus offset. These saccades were modulated by the emotional expressions, indicating a consistent preferential saccadic orienting towards diagnostic features, even with limited exposure. As this effect disappeared for inverted faces, it can be attributed to an extrafoveal processing of facial features instead of an attentional orienting towards physically salient image regions. Participants' recognition accuracy was unaffected by the foveated facial feature, but this observation might also be due to ceiling effects in performance. Collectively, these findings contribute to understanding the attentional mechanisms of feature-based processing in the perception of emotional facial expressions. |
Xiaojuan Xue; Gilles Pourtois Neurophysiological evidence for emotional attention modulation depending on goal relevance Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–16, 2025. @article{Xue2025b,Although threat-related stimuli can capture attention automatically, recent findings have challenged this assumption by showing that goal rather than threat can be prioritized and eventually guide attentional control. In this study, we used high density electroencephalography (EEG) in 40 participants while peripheral emotional faces (either fear or happiness) were either goal-relevant or irrelevant during a dot-probe task (DPT). The use of peripheral vision was established by eye-tracking. Both the face-specific N170 component and the subsequent Early Posterior Negativity (EPN) were enhanced by fear at the cue level, though the latter only when fear was goal-relevant. Importantly, we found that early on following target onset at the P1 level, both value and goal relevance drove spatial attention and interacted with each other such that when they were goal-relevant, fearful faces captured attention less than when they were not. These results suggest that emotional attention is flexible and can be influenced by the goal relevance of emotion. Moreover, they shed light on the electrophysiological manifestations of this flexibility and dovetail with the assumption that sensory gain control effects occurring in the visual cortex depending on attentional control are multiplexed and determined by both value and goal. |
Jia-Jie Xu; Jun-Yi Chen; Hong-Zhou Xu; Zhiwei Zheng; Jing Yu The role of inhibitory function in associative memory among older adults and its plasticity Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–20, 2025. @article{Xu2025,Associative memory deteriorates with age. One possible reason for this associative memory deficit in older adults is a decline in inhibitory function. However, it remains unclear what role inhibitory function plays in age-related associative memory deficits, and whether and how acute training of inhibitory function could ameliorate the detrimental effects of inhibitory deficits on associative memory in older adults. In Experiment 1, 80 participants (40 younger and 40 older adults) studied scene-word pairs while attempting to inhibit interfering words during encoding, with two conditions: gist and non-gist interferences. In Experiment 2, 66 older adults were randomly assigned to either acute inhibitory training or a control group, and eye-tracking technology was used to capture the benefits of acute inhibitory training. Results showed that older adults were more disturbed by gist than non-gist interferences because of hyper-binding, and that inhibitory function mediated the relationship between age and associative memory accuracy. Notably, although acute inhibitory training did not significantly improve associative memory accuracy in the training group compared to the control group, structural equation modeling showed that older adults in the acute training group showed shorter fixation durations and lower fixation frequencies in the interference region of interest, leading to better associative memory. These results indicate that inhibitory function plays a mediating role in age-related associative memory decline and that this mediation is plastic, providing a potential pathway to improve associative memory in older adults. |
Jackie Wai Yi Wo; Weiyan Liao; Janet Hui Hsiao Impact of mask use on facial emotion recognition in individuals with subclinical social anxiety: An eye-tracking study Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–18, 2025. @article{Wo2025,Previous studies suggested that social anxiety is associated with interpretation bias, theory of mind deficit, and eye gaze avoidance when identifying facial emotions. We tested the hypothesis that socially anxious individuals would be more affected by mask use during facial emotion recognition. 88 healthy undergraduates with various levels of social anxiety were invited. Participants judged the emotions of masked and unmasked facial expressions. Eye Movement Analysis with Hidden Markov Models was used to analyze participants' eye movement patterns during the task. Potential confounders including gender, depressive symptoms, stress, and executive planning ability were controlled for in the analyses. Results failed to support our hypothesis. Instead, higher social anxiety was associated with higher accuracy rates for angry and fearful faces and lower false alarm rates for sad faces. Eye movement patterns were similar across social anxiety levels. Interestingly, an exploratory moderation analysis revealed that an increase in using a more eye-centered strategy due to mask use was significantly associated with a larger drop in accuracy rate for fearful faces among individuals with higher social anxiety, while it was non-significantly associated with a smaller drop among individuals with lower social anxiety. Thus, our study indicates social anxiety, at least at subclinical levels, may be associated with a generally heightened sensitivity to negative emotions. However, such heightened sensitivity diminishes if they switch to a more eye-centered strategy when viewing masked facial emotions. Potential mechanisms and implications were discussed. |
Iris Wiegand; Mariska Pouderoijen; Joukje M. Oosterman; Kay Deckers; Gernot Horstmann Contributions of distractor dwelling, skipping, and revisiting to age differences in visual search Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–28, 2025. @article{Wiegand2025,Visual search becomes slower with aging, particularly when targets are difficult to discriminate from distractors. Multiple distractor rejection processes may contribute independently to slower search times: dwelling on, skipping of, and revisiting of distractors, measurable by eye-tracking. The present study investigated how age affects each of the distractor rejection processes, and how these contribute to the final search times in difficult (inefficient) visual search. In a sample of Dutch healthy adults (19–85 years), we measured reaction times and eye-movements during a target present/absent visual search task, with varying target-distractor similarity and visual set size. We found that older age was associated with longer dwelling and more revisiting of distractors, while skipping was unaffected by age. This suggests that increased processing time and reduced visuo-spatial memory for visited distractor locations contribute to age-related decline in visual search. Furthermore, independently of age, dwelling and revisiting contributed more strongly to search times than skipping of distractors. In conclusion, under conditions of poor guidance, dwelling and revisiting have a major contribution to search times and age-related slowing in difficult visual search, while skipping is largely negligible. |
Bayley M. Wellons; Christopher N. Wahlheim Misinformation reminders enhance belief updating and memory for corrections: The role of attention during encoding revealed by eye tracking Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–22, 2025. @article{Wellons2025,Misinformation exposure can cause inaccurate beliefs and memories. These unwanted outcomes can be mitigated when misinformation reminders—veracity-labeled statements that repeat earlier-read false information—appear before corrections with true information. The present experiment used eye tracking to examine the role of attention while encoding corrective details in the beneficial effects of reminder-based corrections. Participants read headlines in a belief-updating task that included a within-subjects manipulation of correction format. They first rated the familiarity and veracity of true and false headlines (Phase 1). Then, they read true headlines that corrected false headlines or affirmed true headlines (Phase 2). The true headlines appeared (1) without veracity labels, (2) with veracity labels, or (3) with misinformation reminders and veracity labels. Finally, participants re-rated the veracity of the Phase 1 headlines and rated their memory for whether those headlines were corrected in Phase 2 (Phase 3). Reminder-based corrections led to the greatest reduction in false beliefs, best high confidence recognition of corrections, and earliest eye fixations to the true details of corrections during encoding in Phase 2. Corrections remembered with the highest confidence rating were associated with more and earlier fixations to true details in correction statements in Phase 2. Collectively, these results suggest that misinformation reminders directed attention to corrective details, which improved encoding and subsequent memory for veracity information. 
These results have applied implications in suggesting that optimal correction formats should include features that direct attention to, and thus support encoding of, the contrast between false and true information. |
Ágnes Welker; Orsolya Pető-Plaszkó; Luca Verebélyi; Ferenc Gombos; István Winkler; Ilona Kovács Neurodiversity in mental simulation: Conceptual but not visual imagery priming modulates perception across the imagery vividness spectrum Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Welker2025,Mental simulation—the ability to internally model sensory, conceptual, or future events—may include mental imagery as a component, with considerable individual variability in its vividness and dependence on sensory detail. While self-reports have been widely used to assess imagery, they are subjective and prone to bias. Among more objective methods, imagery priming in binocular rivalry has been employed to investigate the influence of mental imagery on perception, but findings have been ambiguous. Here, we introduce a no-report version of the task, using eye-tracking-based optokinetic nystagmus assessment to provide a more reliable measure of perceptual shifts. In addition to visual imagery priming, we introduce conceptual priming, which does not rely on sensory imagery but engages abstract representations. In visual imagery priming, perceptual modulation correlated with self-reported vividness, and participants with low vividness did not show modulatory effects. However, in conceptual priming, effects were observed across the entire vividness spectrum, demonstrating that both concrete sensory-based and abstract conceptual representations can influence perception. These findings challenge purely sensory accounts of mental imagery. We propose avoiding deficit-based terms such as “aphantasia” and advocate for a neuroaffirmative perspective on mental simulation diversity. |
Béla Weiss; Annamária Manga; Ádám Nárai; Adél Bihari; Judit Zsuga; Zoltán Vidnyánszky Reward boosts cognitive control during working memory maintenance Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Weiss2025,Working memory (WM) involves short-term maintenance and manipulation of goal-relevant information, with cognitive control playing a crucial role in these processes due to WM's limited capacity. Pupillometry studies show distinct pupillary changes for WM stages, reflecting cognitive effort and load. Motivational incentives enhance WM performance by potentially improving encoding, maintenance, or retrieval, though the specific components influenced by reward remain unclear. This study specifically tested whether reward modulates cognitive control processes during WM maintenance using pupillometry. Participants performed a delayed-estimation orientation WM task with reward cues indicating reward levels at the beginning of trials. The results revealed that motivational incentives significantly improved WM performance and increased pupillary dilation during maintenance. These findings provide evidence for the modulation of WM maintenance by reward through enhanced top-down cognitive control processes. |
Hanliang Wei; Tak Lam; Weijian Liu; Waxun Su; Zheng Wang; Qiandong Wang; Xiao Lin; Peng Li Initial and sustained attentional bias toward emotional faces in patients with major depressive disorder Journal Article In: Journal of Eye Movement Research, vol. 18, no. 6, pp. 72, 2025. @article{Wei2025,Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (n = 61) versus healthy controls (HC |
Sara Jane Webb; Brian Kwan; Raphael Bernier; Katarzyna Chawarska; Geraldine Dawson; James Dziura; Susan Faja; Gerhard Hellmann; Shafali Jeste; Natalia Kleinhans; April Levin; Adam Naples; Maura Sabatos-DeVito; Damla Şentürk; Frederick Shic; Catherine Sugar; James C. McPartland; Autism Biomarkers Consortium for Clinical Trials Face perception, attention, and memory as predictors of social change in autistic children Journal Article In: Journal of Neurodevelopmental Disorders, vol. 17, no. 1, pp. 1–9, 2025. @article{Webb2025,Objective: Social perception and attention markers have been identified that, on average, differentiate autistic from non-autistic children. However, little is known about how these markers predict behavior over time at both short and long time intervals. Methods: We conducted a large multisite, naturalistic study of 6- to 11-year-old children diagnosed with ASD (n = 214). We evaluated three markers of social processing: social perception via the ERP N170 Latency to Upright Faces; social attention via the Eye Tracking (ET) OMI (Oculomotor Index of Gaze to Human Faces) that captures percent looking to faces from three tasks; and social cognition via the NEPSY Face Memory task. Each was evaluated in predicting social ability and autistic social behaviors derived from parental interviews and questionnaires about child behavior at + 6 months (T3) and + 4 years (T4). Results: Adjusting for baseline performance, time between measurements, age, and sex, our results suggest differential prognostic relations for each of the markers. The ERP N170 Latency to Upright Faces showed limited prognostic relations, with a significant relation to short term changes in face memory. The ET OMI was related to face memory over both short and long term. Both the ET OMI and Face Memory predicted long-term autistic social behavior scores. 
Conclusions: In the context of a large-scale, rigorous evaluation of candidate markers for use in future clinical trials, our primary markers had significant but small-effect prognostic capability. The ET OMI and Face Memory showed significant long-term predictive relations, with increased visual attention to faces and better face memory at baseline related to increased social approach and decreased autistic social behaviors 4 years later. |
Xin Wang; Shitao Chen; Keyang Wang; Liyu Cao Predicted action-effects shape action representation through pre-activation of alpha oscillations Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–11, 2025. @article{Wang2025n,Actions are typically accompanied by sensory feedback (or action-effects). Action-effects, in turn, influence the action. Theoretical accounts of action control assume a pre-activation of action-effects prior to action execution. Here we show that when participants were asked to report the time of their voluntary keypress using the position of a fast-rotating clock hand, a predictable action-effect (i.e. a 250 ms delayed sound after keypress) led to a shift of visuospatial attention towards the clock hand position of action-effect onset, thus demonstrating an influence of action-effects on action representation. Importantly, the attention shift occurred about 1 second before the action execution, which was further preceded and predicted by a lateralisation of alpha oscillations in the visual cortex. Our results indicate that when the spatial location is the key feature of action-effects, the neural implementation of the action-effect pre-activation is achieved through alpha lateralisation. |
Carla A. Wall; Kayla Smith; Frederick Shic; Bridgette Kelleher; Abigail Hogan; Elizabeth A. Will; Jane E. Roberts Heart rate defined sustained attention relates to visual attention in autism and fragile X syndrome Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–9, 2025. @article{Wall2025b,Social attention, including shared attention and social orienting, is essential for positive social interactions. Although early visual social attention is often quantified using eye tracking, these indices may not consistently reflect cognitive engagement. Heart rate defined sustained attention (HRDSA) is a physiological measure that can index cognitive engagement alongside visual attention, leading to more comprehensive assessments of attentional processes that are particularly important in young, neurodiverse children with high support needs, including those with autism and fragile X syndrome (FXS). The present study examined visual and heart-defined measures of social attention to the Selective Social Attention task, a video-based assay of social attention, in children with autism, FXS, and neurotypical development. Linear mixed models examined group and condition effects in multiple cardiac indices and overall looking at the scene. Findings suggest that, overall, children across all groups engaged similarly across the experiment in most dimensions of HRDSA, and consistent with previous work, autistic children spent less time visually attending to the scene than either other group. HRDSA was positively associated with visual social attention. Combining physiological and visual attention measures may elucidate the complex nature of social attention and be especially valuable for neurodiverse children when typical assessments are inaccessible. |
Preeti Verghese; Adrien Chopin; Ângela Gomes-Tomaz; Noelia G. Alcalde; Dennis M. Levi Vergence anomalies are associated with impaired stereopsis in amblyopia Journal Article In: Vision Research, vol. 237, pp. 1–16, 2025. @article{Verghese2025,We examined the relationship between stereopsis and fusional vergence in groups of amblyopic and stereo-normal control observers. As absolute disparity is thought to be the basis for relative disparity and for disparity-driven vergence, we hypothesized that vergence anomalies would be accompanied by impaired stereopsis. Specifically, we examined whether patterns of impaired stereopsis across the central 20° of the visual field were accompanied by impaired fusional vergence for stimuli confined to these regions. Stereopsis was measured locally across the visual field with disparity steps of 5 to 20 arcmin. Fusional vergence to large disparity steps (2 to 3°) was measured with binocular eye tracking. The vergence stimuli were random dot stereograms, in one of 3 spatial configurations: a large disc 16° in diameter, a small disc 4° in diameter, and an annulus with outer and inner diameters corresponding to the large and small discs. Of the controls (n = 25) with no history of abnormal visual development, 12 individuals exhibited normal stereopsis across the visual field and normal vergence gains for all configurations. Thirteen individuals with weak stereopsis in the central field tended to have anomalous vergence for small stimuli, but normal vergence for larger stimuli. Amblyopic/strabismic individuals (n = 12) had poor stereopsis and poor vergence for small stimuli. We report a strong correlation between vergence and both coarse and fine stereopsis, with no double dissociation (no cases of impaired vergence with normal stereopsis). Taken together, the results suggest that compromised binocular interaction is the cause of both the stereopsis and vergence deficits. |
Michaël Vanhoyland; Peter Janssen; Tom Theys Single-neuron correlates of visual consciousness in human lateral occipital complex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–17, 2025. @article{Vanhoyland2025,Conscious perception, a critical aspect of human cognition, is assumed to emerge from a complex network of interacting brain regions that transmit information via feedforward and recurrent pathways. This study presents single- and multiunit recordings from the human lateral occipital complex (LO), a key region for shape and object recognition, during three distinct perceptual paradigms: backward masking, flash suppression and binocular rivalry. Stimulus awareness increased decoding accuracy and decoders assigned higher probabilities to the consciously perceived stimulus during periods of dichoptic stimulus presentation. These findings highlight the intricate neural mechanisms underlying visual awareness and show that LO responses predominantly align with subjective phenomenology, offering new insights into the neural correlates of visual consciousness. |
Sandra Tyralla; Eckart Zimmermann Serial dependencies and overt attention shifts Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–16, 2025. @article{Tyralla2025,When visual input is uncertain, visual perception is biased toward the stimulation from the recent past. We can attend to stimuli either endogenously, based on an internal decision, or exogenously, triggered by an external event. Here, we wondered whether serial dependencies are selective for the attentional mode with which we attend to stimuli. We studied overt attention shifts (saccades) and recorded either motor error corrections or visual orientation judgments. In Experiment 1, we assessed sensorimotor serial dependencies, focusing on how the postsaccadic error influences subsequent saccade amplitudes. In Experiment 2, we evaluated visual serial dependencies by measuring orientation judgments, contingent on the type of saccade performed. In separate sessions, participants performed either only voluntary saccades or only delayed saccades, or both saccade types alternated within a session. Our results revealed that sensorimotor serial dependencies were selective for the saccade type performed. When voluntary saccades had been performed in the preceding trial, serial dependencies were much stronger in the current trial if voluntary instead of delayed saccades were executed. In contrast, visual serial dependencies were not influenced by the type of saccade performed. Our findings reveal that shifts in exogenous and endogenous attention differentially impact sensorimotor serial dependencies, but visual serial dependencies remain unaffected. |
Ekin Tünçok; Marisa Carrasco; Jonathan Winawer Spatial attention selectively alters visual cortical representation during target anticipation Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–19, 2025. @article{Tuencok2025,Attention enables us to efficiently and flexibly interact with the environment by prioritizing specific image locations and features in preparation for responding to stimuli. Using a concurrent psychophysics–fMRI experiment, we investigate how covert spatial attention modulates responses in human visual cortex before target onset and how it affects subsequent behavioral performance. Performance improves at cued locations and worsens at uncued locations compared to distributed attention, demonstrating a selective processing tradeoff. Pre-target BOLD responses in cortical visual field maps reveal two key changes: First, a stimulus-independent baseline shift, with increases near cued locations and decreases elsewhere, paralleling behavioral results. Second, a shift in population receptive field centers toward the attended location. Both effects increase in higher visual areas. Together, these findings reveal that spatial attention has large effects on visual cortex prior to target appearance, altering neural response properties across multiple visual field maps and enhancing performance through anticipatory mechanisms. |
Tobiasz Trawiński; Chuanli Zang; Letizia Palumbo; Nick Donnelly Individuating experience moderates the effect of implicit racial bias on eye movements to other race faces: A cross-cultural study Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Trawinski2025,The present cross-cultural study investigated gaze behaviour in the context of assessing the aesthetic value of figurative paintings depicting White and East Asian individuals in social scenes. Across three experiments, we examined how implicit racial attitudes and self-reported individuating experiences influenced gaze patterns when participants evaluated their liking of these paintings. Despite no requirement to inspect faces in the paintings, the results revealed that participants with negative implicit attitudes toward other-race individuals and limited individuating experience with those groups spent more time fixating on other-race faces. This relationship between implicit attitudes and individuating experience in guiding gaze behaviour was consistent across both British and Chinese participants, despite differing definitions of same- and other-race faces between the groups. Our findings suggest that gaze behaviour during the aesthetic evaluation of figurative paintings is shaped by an interaction between attitudinal and experiential factors, which operates across cultural contexts. |
Catharina Tibken; Simon P. Tiffin-Richards Reading behavior as an indicator of comprehension monitoring when reading expository texts Journal Article In: Metacognition and Learning, vol. 20, no. 1, pp. 1–29, 2025. @article{Tibken2025,Comprehension of expository texts is an important prerequisite for self-regulated learning. Processes of passive validation and metacognitive monitoring are thought to be involved in building a coherent situation model of a text. Inconsistency tasks are often used to measure these processes. Several studies have shown longer reading times for inconsistent sentences than for consistent sentences. However, it remains unclear whether the additional time arises from passive disruptions of the reading process when encountering an inconsistency or from metacognitive processes of reanalysis of previous text. To address this issue, we recorded the reading behavior of 96 university students with an eye-tracker while they read inconsistent and consistent expository texts. We analyzed first-pass reading (first-pass reading time, lookbacks) and reanalysis (rereading time, revisits) at the level of the (in)consistent target word, at the sentence-final word of the target sentence, and in the pre-target text. Our results did not strongly support the hypothesis that immediate changes in reading behavior when inconsistencies are first encountered influence the detection and processing of inconsistencies. Our results partially supported the hypothesis that processes of text reanalysis, specifically of the source of inconsistency, increase the probability of identifying an inconsistency. The findings indicate that a purposeful reanalysis of passages that appear inconsistent to readers improves situation model construction for (short) expository texts about conceptually difficult topics. Learning from texts thus requires metacognitive comprehension monitoring beyond passive validation processes. |
Zhongbin Su; Xiaolin Zhou; Stefan Pollmann; Lihui Wang Dynamic face-related eye movement representations in the human ventral pathway Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–12, 2025. @article{Su2025c,Multiple brain areas along the ventral pathway have been known to represent face images. Here, in a magnetoencephalography (MEG) experiment, we show dynamic representations of face-related eye movements in the ventral pathway in the absence of image perception. Participants followed a dot presented on a uniform background, the movement of which represented gaze tracks acquired previously during their free-viewing of face and house pictures. We found a dominant role of the ventral stream in representing face-related gaze tracks, starting from the orbitofrontal cortex (OFC) and anterior temporal lobe (ATL), and extending to the medial temporal and ventral occipitotemporal cortex. Our findings show that the ventral pathway represents the gaze tracks used to explore faces, by which top-down prediction of face category in OFC and ATL may guide, via the medial temporal cortex or directly, face perception in the ventral occipitotemporal cortex. |
Renana Storm; Viktoria Wrobel; Antonia Frings; Andreas Sprenger; Christoph Helmchen In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Storm2025,Persistent postural-perceptual dizziness (PPPD) is often preceded by vestibular disorders. We applied galvanic vestibular stimulation (GVS) and related stimulus-evoked activity to individual ratings of perceived motion for each stimulus and to perceived egomotion thresholds by GVS and behavioural parameters outside the scanner: levels of functional disability by standardized questionnaires, visual motion coherence, passive egomotion perception by chair rotation and quantitative postural stability. We hypothesized that the preceding vestibular disorder predisposes to abnormal brain excitability by vestibular stimulation. All participants showed normal vestibular function tests on quantitative testing. GVS with different intensities was applied to 28 patients and 28 age- and gender-matched healthy participants (HC) in the scanner. After each stimulus, participants rated their perceived level of egomotion. GVS perception threshold was significantly lower in PPPD patients. Contrasting stimulus-identical GVS against a sham stimulus, group comparison revealed a stronger activation in the patient's supramarginal gyrus, insular cortex (operculum 3), and vermis. This stronger excitability was not related to the individual threshold of perceived egomotion by GVS. Patients rated GVS-evoked egomotion intensity by identical GVS intensities larger than HC but neural activity did not correlate with individual ratings of perceived egomotion by GVS. As GVS evoked larger egomotion and larger brain activation in patients, the ratio of brain activity to egomotion perception was not different between groups. GVS-evoked insular activity increased with the level of PPPD-related disability and postural imbalance. 
The larger activation in multisensory cortical vestibular network indicates a sensitization to vestibular stimuli eliciting egomotion perception which increases with levels of PPPD disability. It seems to reflect a sensory-neural amplification rather than an abnormal sensory-perceptual scaling. |
Caleb Stone; Jason B. Mattingley; Dragan Rangelov Neural mechanisms of metacognitive improvement under speed pressure Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–12, 2025. @article{Stone2025,The ability to accurately monitor the quality of one's choices, or metacognition, improves under speed pressure, possibly due to changes in post-decisional evidence processing. Here, we investigate the neural processes that regulate decision-making and metacognition under speed pressure using time-resolved analyses of brain activity recorded using electroencephalography. Participants performed a motion discrimination task under short and long response deadlines and provided a metacognitive rating following each response. Behaviourally, participants were faster, less accurate, and showed superior metacognition with short deadlines. These effects were accompanied by a larger centro-parietal positivity (CPP), a neural correlate of evidence accumulation. Crucially, post-decisional CPP amplitude was more strongly associated with participants' metacognitive ratings following errors under short relative to long response deadlines. Our results suggest that superior metacognition under speed pressure may stem from enhanced metacognitive readout of post-decisional evidence. |
Ramanujan Srinath; Amy M. Ni; Claire Marucci; Marlene R. Cohen; David H. Brainard Orthogonal neural representations support perceptual judgments of natural stimuli Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–17, 2025. @article{Srinath2025a,In natural visually guided behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on blank backgrounds. Natural images, however, contain task-irrelevant background elements that might interfere with the perception of object features. Recent studies suggest that visual feature estimation can be modeled through the linear decoding of task-relevant information from visual cortex. So, if the representations of task-relevant and irrelevant features are not orthogonal in the neural population, then variation in the task-irrelevant features would impair task performance. We tested this hypothesis using human psychophysics and monkey neurophysiology combined with parametrically variable naturalistic stimuli. We demonstrate that (1) the neural representation of one feature (the position of an object) in visual area V4 is orthogonal to those of several background features, (2) the ability of human observers to precisely judge object position was largely unaffected by those background features, and (3) many features of the object and the background (and of objects from a separate stimulus set) are orthogonally represented in V4 neural population responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of object features despite the richness of natural visual scenes. |
Qiao Songlin; Xuemei Xia; Jing Chen; Matteo Valsecchi Attentional tracking reduces cortical alpha oscillations Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Songlin2025,The premotor theory of attention suggests that both overt and covert attentional orienting are governed by similar mechanisms and neural structures, a concept extensively investigated in paradigms involving shifts in attention and gaze towards peripheral targets. Previous studies have found a strong link between cortical alpha oscillations and overt smooth pursuit of a target. However, the relationship between alpha oscillations and covert tracking of peripheral moving stimuli remains unclear. To address this, we asked 16 observers to maintain fixation while covertly attending to a visual stimulus moving along the horizontal meridian at varying speeds (2, 6, or 12 °/s), within either the left or right hemifield. We simultaneously recorded both eye movements and EEG data. Our results revealed that alpha power was significantly reduced when observers tracked a target that moved further in the periphery, independent of its speed. These findings confirm that the distribution of alpha power is sensitive to the allocation of covert attention during tracking. This suggests a tight link between the attentional processes involved in covert tracking and overt pursuit of a moving target, supporting the premotor theory of attention. |
Sabyasachi Shivkumar; Gregory C. DeAngelis; Ralf M. Haefner Hierarchical motion perception as causal inference Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–14, 2025. @article{Shivkumar2025,Motion can only be defined relative to a reference frame; yet it remains unclear which reference frame guides perception. A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure in the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups and groups into supergroups, progressively inferring structured reference frames, and “perceives” motion in the appropriate reference frame. Critical model predictions are supported by two experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles, providing inspiration for building better models of visual processing in general. |
Cal M. Shearer; Annalise B. Rawson; Helen C. Barron; Jill X. O'Reilly Memory reactivation during rest forms shortcuts in a cognitive map Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–16, 2025. @article{Shearer2025,Efficient and flexible cognition relies upon cognitive maps—representations of concepts and the relations between them. Cognitive maps integrate relations that were learned separately into a cohesive whole. Memory reactivation during rest and sleep may contribute to cognitive map formation in two ways: by simply strengthening memories for directly experienced relations, or by reorganising concepts and creating new relations that capture the underlying structure. We designed a multi-stage learning task to test whether reactivation during rest is involved in restructuring memories as opposed to simply consolidating what was experienced. We causally manipulated memory reactivation during rest using awake, contextual targeted memory reactivation. We found that promoting memory reactivation during rest qualitatively reorganises the cognitive map by forming ‘shortcuts' between events which have not been experienced together. These shortcuts in memory extend beyond direct experience to facilitate our ability to make novel inferences. Using a series of control tests we show that inference performance cannot be explained by quantitative strengthening of the experienced component links. Interestingly, we show that representing a shortcut may come with limitations, as shortcuts cannot be readily updated in response to rapid changes in the environment. Together, these findings reveal how memories are reorganised during awake rest to construct a cognitive map of our environment, while highlighting the constraints set by a trade-off between efficient and flexible behaviour. |
Dixit Sharma; Bart Krekelberg Predicting spiking activity from scalp EEG Journal Article In: Journal of Neural Engineering, vol. 22, no. 6, pp. 1–16, 2025. @article{Sharma2025,Objective. Despite decades of electroencephalography (EEG) research, the relationship between EEG and underlying spiking dynamics remains unclear. This limits our ability to infer neural dynamics reflected in intracranial signals from EEG, a critical step to bridge electrophysiological findings across species and to develop non-invasive brain–machine interfaces (BMIs). In this study, we aimed to estimate spiking activity in the visual cortex using non-invasive scalp EEG. Approach. We recorded spiking activity from a 32-channel floating microarray permanently implanted in parafoveal V1 and scalp EEG in a male macaque monkey. While the animal fixated, the screen flickered at different temporal frequencies to induce steady-state visual evoked potentials. We analyzed the relationship between the V1 multi-unit spiking activity envelope (MUAe) and EEG frequency bands to predict MUAe at each time point from EEG. We extracted instantaneous spectrotemporal features of the EEG signal, including phase, amplitude, and phase-amplitude coupling of its frequency bands. Main results. Although the relationship between these spectrotemporal features and the V1 MUAe was complex and frequency-dependent, they were reliably predictive of the MUAe. Specifically, in a linear regression predicting MUAe from EEG, each EEG feature (phase, amplitude, coupling) contributed to model predictions. In addition, we found that MUAe predictions were better in shallow than deep cortical layers, and that the phase of stimulus frequency further improved MUAe predictions. Significance. Our study shows that a comprehensive account of spectrotemporal features of non-invasive EEG provides information on underlying spiking activity beyond what is available when only the amplitude or phase of the EEG signal is considered. 
This demonstrates the richness of the EEG signal and its complex relationship with neural spiking activity and suggests that using more comprehensive spectrotemporal signatures could improve BMI applications. |
Samuel Shaki; Oria Pitem; Martin H. Fischer Lexical priming of space depends on how deeply you think about it Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Shaki2025,There is a long debate about how the meaning of words cues our spatial attention. For implicitly spatial words such as “ROOF” or “BASEMENT”, it was recently shown that processing both the cue word and a subsequent spatial target stimulus was necessary for spatial congruity effects to emerge. Here we challenge this work by documenting that word cues alone suffice to induce congruity effects if they are processed deeply. Sixty-three healthy adults detected vertically displaced targets after looking at centrally presented cue words under three counterbalanced instructions, imposing increasing processing depth: Lexical decision, non-spatial categorization, and spatial categorization. Target detection speed revealed spatial congruity effects for both spatial and non-spatial categorization but not for lexical decision. An interpretation in terms of covert attention deployment was corroborated by concomitant vertical displacements of eye gaze. Our results reveal minimal requirements for covert and overt semantic cueing of spatial attention. |
Fatemeh Shahnabati; Atefeh Sabourifard; S. Hamid Amiri; Alireza Bosaghzadeh; Reza Ebrahimpour Cognitive load and visual attention assessment using physiological eye tracking measures in multimedia learning Journal Article In: PLoS One, vol. 20, pp. 1–26, 2025. @article{Shahnabati2025,Effective multimedia content design can boost performance, capture visual attention, and optimize cognitive load. The current study employs eye-tracking technology to establish metrics to measure cognitive load, analyze visual attention allocation, and evaluate learners' performance in English language learning. The study focuses on creating and comparing two different multimedia presentations. The differentiation between them lies in their adherence to or deviation from Mayer's educational multimedia design principles: coherence, signaling, and spatial contiguity. Participants were randomly assigned to two groups. The first group viewed the with-principles version, while the second group viewed the without-principles version, during which their eye movement data were collected. Subsequently, both groups participated in a recall test and completed the NASA-TLX questionnaire. The research establishes connections between specific eye-tracking parameters, subjective cognitive load scores, and recall test results through regression models and analyzes fixation distributions. The study also delves into microsaccade rates and changes in pupil size, each analyzed within times of interest. The study's findings indicate that the examined metrics can significantly help distinguish between the two conditions: principles and no principles. These metrics are pertinent for assessing individuals' cognitive load and visual attention and serve as beneficial indicators for gauging the efficacy of the designed multimedia content. |
Yelda Semizer; Ruth Rosenholtz The effect of background clutter on visual search in video conferencing Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–16, 2025. @article{Semizer2025,The use of video conferencing tools has become increasingly common recently. The visual displays in these tools are highly complex, being composed of multiple faces with varying image quality and lighting conditions. On top of this, users have the ability to choose their own backgrounds. Some choose simple artificial backgrounds, some appear in front of a real or simulated room, and some use something more abstract. How do these choices affect the user's ability to use the tool, for example, finding the current speaker or a reaction symbol? Vision science can certainly provide answers to these questions; however, most search studies use simple displays with a uniform background, or more recently, real-world scenes. How does what we know about search generalize to these more complex displays? The current study sought to examine how our understanding of visual search applies to well-controlled video conferencing displays. Specifically, we investigated the effect of display clutter (i.e., background complexity and variability) on perceptual tasks relevant for video conferencing. In an eye-tracking set-up, participants searched either for the speaker whose image was highlighted (Experiment 1) or for a reaction symbol (raised hand) embedded in one of the attendees' backgrounds. Results showed a significant effect of background complexity and variability, suggesting that search performance declined as the display clutter increased. Image-based analysis showed that the choice of backgrounds mediated these effects, suggesting that some virtual backgrounds were not optimal for perceptual processes. |
Alia Seedat; Alex Lepauvre; Jay Jeschke; Urszula Gorska-Klimowska; Marcelo Armendariz; Katarina Bendtz; Simon Henin; Rony Hirschhorn; Tanya Brown; Erika Jensen; Csaba Kozma; David Mazumder; Stephanie Montenegro; Leyao Yu; Niccolò Bonacchi; Diptyajit Das; Kyle Kahraman; Praveen Sripad; Fatemeh Taheriyan; Orrin Devinsky; Patricia Dugan; Werner Doyle; Adeen Flinker; Daniel Friedman; Wendell Lake; Michael Pitts; Liad Mudrik; Melanie Boly; Sasha Devore; Gabriel Kreiman; Lucia Melloni Open multi-center intracranial electroencephalography dataset with task probing conscious visual perception Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–14, 2025. @article{Seedat2025,We introduce an intracranial EEG (iEEG) dataset collected as part of an adversarial collaboration between proponents of two theories of consciousness: Global Neuronal Workspace Theory and Integrated Information Theory. The data were recorded from 38 patients undergoing intracranial monitoring of epileptic seizures across three research centers using the same experimental protocol. Participants were presented with suprathreshold visual stimuli belonging to four different categories (faces, objects, letters, false fonts) in three orientations (front, left, right view), and for three durations (0.5, 1.0, 1.5 s). Participants engaged in a non-speeded Go/No-Go target detection task to identify infrequent targets, with some stimuli becoming task-relevant and others task-irrelevant. Participants also engaged in a motor localizer task. The data were checked for quality and converted to the Brain Imaging Data Structure (BIDS). The de-identified dataset contains demographics, clinical information, electrode reconstruction, behavioral performance, and eye-tracking data. We also provide code to preprocess and analyze the data. 
This dataset holds promise for reuse in consciousness science and vision neuroscience to answer questions related to stimulus processing, target detection, and task-relevance, among many others. |
Lara Stella Marie Schroth; Wim Fias; Muhammet Ikbal Sahan Eye movements follow the dynamic shifts of attention through serial order in verbal working memory Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Schroth2025,How are arbitrary sequences of verbal information retained and manipulated in working memory? Increasing evidence suggests that serial order in verbal WM is spatially coded and that spatial attention is involved in access and retrieval. Based on the idea that brain areas controlling spatial attention are also involved in oculomotor control, we used eye tracking to reveal how the spatial structure of serial order information is accessed in verbal working memory. In two experiments, participants memorized a sequence of auditory words in the correct order. While their eye movements were being measured, they named the memorized items in a self-determined order in Experiment 1 and in a cued order in Experiment 2. We tested the hypothesis that serial order in verbal working memory interacts with the spatial attention system whereby gaze patterns in visual space closely follow attentional shifts in the internal space of working memory. In both experiments, we found that the gaze shifts in visual space correlated with the spatial shifts of attention along the left-to-right one-dimensional mapping of serial order positions in verbal WM. These findings suggest that spatial attention is employed for dynamically searching through verbal WM and that eye movements reflect the spontaneous association of order and space even in the absence of visuospatial input. |
Eda Sarı; Furkan Dindaroğlu; Belkıs Durmuş; Sonia Amado Exploring mandibular asymmetry: Insights from visual perception using eye-tracking technology Journal Article In: BMC Oral Health, vol. 25, no. 1, pp. 1–10, 2025. @article{Sar2025,Background: Visual attention provides an objective perspective on how a stimulus captures attention. In dentistry, mandibular asymmetry is one of the important facial determinants of esthetic perception. The study aimed to evaluate the eye movements of orthodontists and non-professionals viewing images with different severities of mandibular asymmetry using eye-tracking technology. Methods: The eye movements of 26 orthodontists and 30 non-professionals were captured. Thirty images were visually evaluated for the presence of mandibular asymmetry by two orthodontists. Chin deviations of 2 mm, 4 mm, 6 mm, and 8 mm were simulated on the images, and the images without asymmetry were considered the control group. A total of 50 photographs from 10 individuals were included in the study. Participants' eye movements were recorded using an EyeLink 1000 Plus eye-tracking device (SR Research, Canada). Repeated-measures analysis of variance (ANOVA) was used for statistical comparisons. Results: The number of fixations on the lower lip-chin area in either the right or left direction did not show a statistically significant difference (F(1.000, 59.000) = 2.133, p > 0.05). Time to first fixation on the lower lip-chin area was faster in the 8 mm asymmetry condition compared to 2 mm (F(1,2) = 31.423, p < 0.05, ηp² = 0.940). Orthodontists made fewer fixations before reaching the lower lip-chin area in the 8 mm condition compared to 2 mm (F(1,2) = 20.758, p < 0.05, ηp² = 0.912). Conclusions: While the direction of mandibular asymmetry did not affect voluntary attention, an increase in asymmetry, regardless of profession, attracted more attention to the lower lip-chin area. 
While the 8 mm asymmetry caught the involuntary attention of orthodontists, the same did not occur in non-professionals. |
Anthony W. Sali; Madison P. Shaver; Anna B. Toledo; Austin L. Torain; Isabel N. Flicker Learned saccade readiness varies with fluctuations in sustained attention Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–15, 2025. @article{Sali2025,Both the focus of sustained attention and an individual's readiness to shift attention among spatial locations fluctuate over time. However, the interaction of these ongoing changes in attentional states remains unknown. In the current study, participants completed a modified gradual continuous performance task during which they monitored one of two lateralized streams of black and white images for the appearance of frequent target stimuli, withholding responses to foils. Periodically, a visual cue signaled participants to either maintain fixation at the current stream or to make a saccade to the opposing stream, and participants made a parity categorization for a digit appearing at the cued location. Trial-by-trial variation in pupil size, an indicator of arousal, accounted for both fluctuations in sustained attention and shift readiness but fluctuations in sustained attention were not associated with general modulations of shift readiness. Furthermore, we manipulated the frequency of gaze shift cues over time and observed that unexpected shift cues were most disruptive when participants lacked sustained focus, yielding a greater cost in saccade latencies than when the efficacy of sustained attention was high. Our results suggest that ongoing changes in sustained attention occur independently from gaze shifting readiness but carry consequences for learned saccade preparation. |
Cristina Rubino; Adam T. Harrison; Lara A. Boyd Oculomotor learning is evident during implicit motor sequence learning Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Rubino2025,Motor sequence learning involves both oculomotor and manual motor systems, yet the role of the oculomotor system in the learning and execution of skilled arm movements remains underexplored. In the current work, the influence of sequence learning on the oculomotor system was investigated by testing 20 healthy adults for 3 days as they practiced an implicit motor learning task, the serial targeting task (STT). The STT contained a repeated sequence, which was interleaved with random sequences. This task was practiced on a KINARM robot that tracked both saccades and reaches. A delayed, 24-h retention test assessed sequence-specific motor learning. Sequence-specific changes across practice and learning were observed for both saccades and reaches; this was demonstrated by faster saccade and arm motor reaction times for the repeated sequence compared to random sequences. Notably, change in the oculomotor system occurred earlier in practice as compared to the manual motor system. Reaches were executed more quickly when led by express saccades (rapid eye movements occurring within 90–120 ms) compared to when they were preceded by regular latency (> 120 ms) saccades early in practice. Our findings highlight distinct yet interconnected functions between oculomotor and manual motor systems associated with implicit motor sequence learning. |
Gonzalo Ruarte; Gaston Bujia; Damián Care; Matias Julian Ison; Juan Esteban Kamienkowski Integrating Bayesian and neural networks models for eye movement prediction in hybrid search Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–15, 2025. @article{Ruarte2025,Visual search is crucial in daily human interaction with the environment. Hybrid search extends this by requiring observers to find any item from a given set. Recently, a few models were proposed to simulate human eye movements in visual search tasks within natural scenes, but none were implemented for hybrid search under similar conditions. We present an enhanced neural network Entropy Limit Minimization (nnELM) model, grounded in a Bayesian framework and signal detection theory, and the Hybrid Search Eye Movements (HSEM) Dataset, containing thousands of human eye movements during hybrid tasks. A key challenge in hybrid search is that participants have to look for different objects at the same time. To address this, we developed several strategies involving the posterior probability distributions after each fixation. Adjusting peripheral visibility improved early-stage efficiency, aligning it with human behavior. Limiting the model's memory reduced success in longer searches, mirroring human performance. We validated these improvements by comparing our model with a held-out set within the HSEM and with other models in a separate visual search benchmark. Overall, the new nnELM model not only handles hybrid search in natural scenes but also closely replicates human behavior, advancing our understanding of search processes while maintaining interpretability. |
Martin Rolfs; Richard Schweitzer; Eric Castet; Tamara L. Watson; Sven Ohl Lawful kinematics link eye movements to the limits of high-speed perception Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–17, 2025. @article{Rolfs2025,Perception requires active sampling of the environment. What part of the physical world can be perceived is limited by the sensory system's biophysical setup, but might be further constrained by the kinematic bounds of the motor actions used to acquire sensory information. Here, we tested this fundamental idea for humans' fastest and most frequent behavior—saccadic eye movements—which entail incidental sensory consequences (i.e., swift retinal motion) that rarely reach awareness in natural vision. Using high-speed video projection, we display rapidly moving stimuli that faithfully reproduce, or deviate from, saccades' lawful relation of velocity, duration, and amplitude. For each stimulus, observers perform perceptual tasks for which performance is contingent on consciously seeing the stimulus' motion trajectory. We uncover that visibility of the stimulus' movement is well predicted by the specific kinematics of saccades and their sensorimotor contingencies, reflecting even variability between individual observers. Computational modeling shows that spatiotemporal integration during early visual processing predicts this lawful relation in a tight range of biologically plausible parameters. These results suggest that the visual system takes into account motor kinematics when omitting an action's incidental sensory consequences, thereby preserving visual sensitivity to high-speed object motion. |
Jonathan Edward Robinson; Andrew W. Corcoran; Christopher J. Whyte; András Sárközy; Anil K. Seth; Gyula Kovács; Karl J. Friston; Cyriel M. A. Pennartz; Giulio Tononi; Jakob Hohwy The role of active inference in conscious awareness Journal Article In: PLoS One, vol. 20, no. 12, pp. 1–20, 2025. @article{Robinson2025,Active inference, a first-principles framework for modelling the behaviour of sentient agents, is beginning to be applied in consciousness research. One hypothesis arising from the framework is that active inference is necessary for changes in conscious content. As one component of an extensive adversarial collaboration among competing theories of consciousness, active inference will be contrasted with two other theories of consciousness, neither of which posit that active inference is necessary for consciousness. Here, we thus present a Study Protocol designed to test the active inference hypothesis using a carefully controlled adaptation of the motion-induced blindness paradigm, where an 'active' condition with richer active inference is contrasted with a 'passive' condition. In the active condition, participants direct their gaze towards a target stimulus following its disappearance from consciousness, and report on its subsequent reappearance. In the passive condition, participants maintain central fixation, while the stimulus array is moved across the visual field (in a replay of the active condition based on eye-tracking data acquired during active trials). In two experiments, we plan to investigate target reappearance across active and passive conditions to evaluate the contribution of active inference to conscious awareness. Results will eventually be considered in the context of all the experiments conducted as part of the overall adversarial collaboration. |
Ping Ran; Meng Ying Sun; Qian Sun; Qi Sun Effects of local information and egocentric reference frames on estimation of biological motion direction Journal Article In: Psychological Research, vol. 89, no. 6, pp. 1–17, 2025. @article{Ran2025a,Previous studies have established that coarse discrimination (e.g., left/right, forward/backward) of point-light walker (PLW) direction is modulated by multiple factors including global/local motion information, biological/social factors, and egocentric reference frames. However, the specific contributions of local motion information and egocentric referencing to fine-grained PLW direction estimation remain unclear. Drawing upon principles of biomechanical asymmetry and right-lateralized motor dominance, we hypothesized a systematic overall rightward bias in PLW direction estimation. Through three carefully controlled experiments, we demonstrated that: (1) right-handed participants showed consistently overall rightward estimation bias; (2) this bias was selectively enhanced by right-sided body stimuli while remaining unaffected by left-sided stimuli; and (3) spatial decoupling of stimulus center from egocentric coordinates revealed persistent egocentric coding in the direction estimation. Moreover, prolonged stimulus exposure led to expanded gaze distribution alongside heightened local information processing, underscoring the pivotal role of local information. These findings suggest that biomechanical asymmetries may shape PLW direction perception and reveal the interplay between local information analysis and egocentric referencing in fine-grained biological motion estimation. |
Rajani Raman; Anna Bognár; Ghazaleh Ghamkhari Nejad; Albert Mukovskiy; Lucas Martini; Martin Giese; Rufin Vogels Keypoint-based modeling reveals fine-grained body pose tuning in superior temporal sulcus neurons Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–16, 2025. @article{Raman2025,Body pose and orientation serve as vital visual signals in primate non-verbal social communication. Leveraging deep learning algorithms that extract body poses from videos of behaving monkeys, applied to a monkey avatar, we investigated neural tuning for pose and viewpoint, targeting fMRI-defined mid and anterior Superior Temporal Sulcus (STS) body patches. We modeled the pose and viewpoint selectivity of the units with keypoint-based principal component regression with cross-validation and applied model inversion as a key approach to identify effective body parts and views. Mid STS units were effectively modeled using view-dependent 2D keypoint representations, revealing that their responses were driven by specific body parts that differed among neurons. Some anterior STS units exhibited better predictive performances with a view-dependent 3D model. On average, anterior STS units were better fitted by a keypoint-based model incorporating mirror-symmetric viewpoint tuning than by view-dependent 2D and 3D keypoint models. However, in both regions, a view-independent keypoint model resulted in worse predictive performance. This keypoint-based approach provides insights into how the primate visual system encodes socially relevant body cues, deepening our understanding of body pose representation in the STS. |
Estelle Raffin; Michele Bevilacqua; Fabienne Windel; Pauline Menoud; Roberto F. Salamanca-Giron; Sarah Feroldi; Sarah B. Zandvliet; Nicola Ramdass; Laurijn Draaisma; Patrik Vuilleumier; Adrian G Guggisberg; Christophe Bonvin; Lisa Fleury; Krystel R. Huxlin; Elena Beanato; Friedhelm C. Hummel Boosting hemianopia recovery: The power of interareal cross-frequency brain stimulation Journal Article In: Brain, vol. 148, pp. 4548–4561, 2025. @article{Raffin2025,Visual field loss is a common consequence of stroke and manifests in approximately one-third of patients in the chronic stage. Such loss can significantly impact daily life activities, compromising tasks such as reading, navigating or driving. Although slow and labour-intensive, evidence suggests that early interventions with tailored rehabilitation programmes might stimulate visual recovery and improve quality of life in stroke survivors. To enhance the effects of such rehabilitation programmes, we designed a novel, non-invasive, pathway-specific, physiology-inspired cross-frequency brain stimulation protocol, where complex oscillatory signal integration was inferred from phase–amplitude coupling of oscillatory signals between the primary visual cortex and the motion-sensitive medio-temporal area. Sixteen stroke patients were enrolled in a double-blind, randomized, cross-over trial, during which they performed two blocks of 10 daily training sessions of a direction discrimination task, combined with one of the two cross-frequency transcranial alternating current stimulation (cf-tACS versus control cf-tACS) conditions. We found that the cf-tACS condition promoting feedforward visual inputs to the medio-temporal area significantly enhanced motion discrimination performance and shifted visual field borders (i.e. through localized enlargement of isopters). 
Behavioural improvements associated with a change in oscillatory activity within motion processing pathways were proportional to the amount of residual structural fibres along these pathways and perilesional primary visual cortex activity. In sum, we report, for the first time, that cf-tACS, a novel, pathway-specific, physiology-inspired brain stimulation approach, is able to boost the efficacy of perceptual training, restoring visual motion processing and reducing the severity of visual impairments in adult stroke patients. |
Vanessa C. Radtke; Wanja Wolff; Corinna S. Martarelli How effortful is boredom? Studying self-control demands through pupillometry Journal Article In: Collabra: Psychology, vol. 11, no. 1, pp. 1–24, 2025. @article{Radtke2025,Self-control is essential for managing actions, yet its exertion is perceived as effortful. Performing a task may require effort not only because of its inherent difficulty but also due to its potential for inducing boredom, as boredom has been shown to be self-control demanding itself. So far, the extent of self-control demands during boredom and its temporal dynamics remain elusive. We employed a multimethod approach to address this knowledge gap. Ninety-five participants took part in an easy and hard version of the Stroop task. During both tasks, they indicated several times their perceived task difficulty, boredom, boredom-related effort, difficulty-related effort, overall effort, and fatigue. We tested whether pupil size, as a physiological indicator of cognitive effort, was predicted more accurately by difficulty- and boredom-related effort together than by task-difficulty-related effort alone. The best-fitting model included boredom-related effort, difficulty-related effort, and their interactions with task type (easy vs. hard Stroop). Tonic pupil size increased during the easy Stroop, while phasic pupil size decreased with greater boredom-related effort in both tasks. Greater difficulty-related effort was linked to increases in tonic and phasic pupil size in the easy, but not in the hard Stroop. Finally, boredom-related effort in the Stroop predicted performance in a subsequent flanker task. Our results provide preliminary support that enduring boredom may not only be perceived as effortful but also be reflected in psychophysiological changes. Moreover, it may influence subsequent behavior. This underscores the importance of considering boredom as a potential confound in self-control research and broader study designs. |
Katrina R. Quinn; Florian Sandhaeger; Nima Noury; Ema Zezelic; Markus Siegel Abstract choice representations during stable choice-response associations Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–8, 2025. @article{Quinn2025,An increasing body of evidence has demonstrated neural representations of choices independent of the motor actions used to report them – so-called abstract choices. However, it remains unclear whether such representations arise due to dynamic changes in choice-response associations or reflect a general property of decision-making. Here, we show that in the human brain, choices are represented abstractly even when choice-response associations remain stable over time. We recorded neural activity using magnetoencephalography while participants performed a motion discrimination task, with choice-response mappings held constant within blocks. We found neural information about participants' perceptual choices independent of both motor response and visual stimulus. Choice information increased during the stimulus and peaked after the response. Moreover, choice and response information showed distinct cortical distributions, with choice-related signals strongest in frontoparietal regions. Thus, abstract choice representations are not limited to dynamic or action-independent contexts and may be a general feature of decision-making. |
Ying Que; Yueyuan Zheng; Janet H. Hsiao; Xiao Hu Using eye movements, electrodermal activities, and heart rates to predict different types of cognitive load during reading with background music Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Que2025a,The triarchic model of cognitive load postulates three types of cognitive load—extraneous, intrinsic, and germane load. While various approaches have been proposed to measure the three types of cognitive load, most measurements are intrusive. To address this issue, we leveraged multimodal learning analytics to collect eye movement (EM), electrodermal activity (EDA), heart rate (HR), and heart rate variability (HRV) from non-intrusive sensors and investigate whether they could predict the three types of cognitive load. We examined extraneous load (created by adding background music (BGM)), intrinsic load (created by text complexity), and germane load (reflected by comprehension accuracy) in a novel reading context with self-selected preferred BGM. One hundred and two (102) non-native English speakers were recruited. Half of them read English passages with BGM, while the other half read in silence. Results of logistic regression indicated that EM measures were predictive of the three load types, while HR/HRV measures predicted extraneous and germane load. Our findings provide evidence supporting the triarchic structure of cognitive load theory and implications for the design of non-intrusive measurement of cognitive load. |
Sorin Pojoga; Ariana Andrei; Valentin Dragoi Unsupervised learning of temporal regularities in visual cortical populations Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–12, 2025. @article{Pojoga2025,The brain's ability to extract temporal information from dynamic stimuli in the environment is essential for everyday behavior. To extract temporal statistical regularities, neural circuits must possess the ability to measure, produce, and anticipate sensory events. Here we report that when neural populations in macaque primary visual cortex are triggered to exhibit a periodic response to a repetitive sequence of optogenetic laser flashes, they learn to accurately reproduce the temporal sequence even when light stimulation is turned off. Despite the fact that individual cells had a poor capacity to extract temporal information, the population of neurons reproduced the periodic sequence in a temporally precise manner. The same neural population could learn different frequencies of external stimulation, and the ability to extract temporal information was found in all cortical layers. These results demonstrate a remarkable ability of sensory cortical populations to extract and reproduce complex temporal structure from unsupervised external stimulation even when stimuli are perceptually irrelevant. |
Marek Placiński; Theresa Matzinger Structural alignment leads to lower cognitive load in a collaborative task Journal Article In: Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, vol. 26, no. 1, pp. 102–129, 2025. @article{Placinski2025,One of the characteristics of dialogue is that interlocutors tend to converge on the same linguistic choices, called alignment. In this paper, we aim to investigate whether structural alignment — the tendency to use the same syntactic structures — has a positive effect on cognitive load and task completion in a task-based conversation. To do so, we engage participants in a collaborative task where they have to interact with another interlocutor (actually a bot) and inform each other about the location of landmarks on a map. In one condition the bot aligns with the participant and in the other it does not. Participants are recorded with an eye tracker during the experiment so that we can evaluate cognitive load and performance in the task. We found that when participants interact with an aligning bot, their cognitive load decreases and task completion is facilitated, but only to a certain degree. The results of the study suggest that alignment is a strategy that can be used in order to facilitate task performance. |
Oria Pitem; Yaniv Mama Predicting long-term memory via pupillometry Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Pitem2025,Pupillometry research has established that pupil size reflects cognitive processes through autonomic nervous system activity, with high arousal triggering pupil dilation. Studies examining pupil size during encoding have yielded conflicting results regarding its relationship with subsequent memory performance, and few have investigated baseline pupil size. This study examined whether pupil diameter before and during stimulus presentation predicts memory performance. We hypothesized that successfully recalled words would be associated with larger pupils than forgotten words, based on the role of arousal and attention in memory formation. To test these hypotheses, we conducted two experiments in which we tracked ninety-five psychology students' eyes while they performed a long-term memory test. The results show larger pupil size during the study of words that were later successfully retrieved. Interestingly, this phenomenon also occurs before word presentation (during baseline), which supports the “readiness to remember” (R2R) framework. This implies that pupillary changes while preparing to encode information can indicate later memory performance. |
Zhongling Pi; Jingjing Dong; Jiayu Wang; Xiying Li; Xin Zhao Modality matters: How combining oral and written instructional explanations improves STEM learning from video lectures Journal Article In: International Journal of STEM Education, vol. 12, no. 1, pp. 1–19, 2025. @article{Pi2025a,Background and purpose of the study: STEM learning often involves a multitude of complex and abstract concepts and ideas that can be challenging for students to comprehend. Research suggests that the oral and visual representations in video lectures can maximize students' cognitive infrastructure, helping them to organize knowledge more effectively. However, compared to traditional learning methods, video lectures may lack interaction and feedback, which can lead to ineffective learning strategies (e.g., passive viewing) and reduced learning engagement. Instructional explanations serve as a generative strategy, enabling students to create oral and written pieces based on the knowledge gained from video lectures and their prior knowledge. This study recruited a total of 87 undergraduate students and explored how the modality of instructional explanations generated by these students for a fictitious student influenced their learning. Specifically, the study explored the effects on students' learning performance, attention, behavioral patterns of preparing-to-explain, the quality of notes, and the quality of instructional explanations in video lectures on a STEM subject. Results: The results revealed that students who adopted a combination of oral and written instructional explanations showed better immediate retention and transfer than those who adopted just one type of explanation. In addition, both oral-only and combined oral-and-written explanations promoted more self-regulated learning behaviors during the phase of preparing-to-explain. The study also found that the quality of instructional explanations played a mediating role in the effects of modality. 
Conclusions and potential implications: Our findings suggest that combining oral and written instructional explanations is more effective in supporting students' STEM learning from video lectures compared to using a single form of explanation. These findings have significant implications for teaching and learning STEM subjects through video lectures. Students and educators should recognize the complementary roles of oral and written instructional explanations and opt for a combined oral-and-written approach during STEM learning activities. |
Joris Perra; Bénédicte Poulin-Charronnat; Thierry Baccino; Patrick Bard; Philippe Pfister; Philippe Lalitte; Melissa Zerbib; Véronique Drai-Zerbib In: Quarterly Journal of Experimental Psychology, vol. 78, no. 12, pp. 2660–2680, 2025. @article{Perra2025,Expertise is associated with a knowledge-driven information-processing approach. Experts benefit from long-term knowledge structures—chunks and retrieval structures/templates—leading them to formulate expectations about local stimulus characteristics and to extract information projected onto distant areas from the fixation location. In an attempt to shed light on the way knowledge-driven processing impacts eye movements during music reading, this study aimed to determine how expert musicians deal with local complexity in a sight-reading task. Thirty musicians from two expertise levels had to sight-read 4-bar score excerpts. Local analyses were conducted to investigate how the gaze behaves prior to and during the sight reading of different score characteristics, such as alteration, location of the notes on the staff, note count, and heterogeneity of notes. The more expert musicians (1) were less affected by the foveal load induced by local complexity, showing a smaller increase in fixation durations between noncomplex features and local complexity compared to the less expert musicians; (2) presented a saccadic flexibility towards the local complexity projected onto the parafoveal area, being the only group to exhibit shorter progressive incoming saccade sizes on accidentals and larger progressive incoming saccade sizes on new notes compared to noncomplex features; and (3) presented a visuo-motor flexibility depending on the played complexity, being the only group to exhibit a shorter eye-hand span when playing accidentals or distant notes compared to noncomplex features. Overall, this study highlights the usefulness of local analyses as a relevant tool to investigate foveal and parafoveal processing skills during music reading. |
Hame Park; Ayelet Arazi; Bharath Chandra Talluri; Marco Celotto; Stefano Panzeri; Alan A. Stocker; Tobias H. Donner Confirmation bias through selective readout of information encoded in human parietal cortex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–15, 2025. @article{Park2025,Decision-makers often process new evidence selectively, depending on their current beliefs about the world. We asked whether such confirmation biases result from biases in the encoding of sensory evidence in the brain, or alternatively in the utilization of encoded evidence for behavior. Human participants estimated the source of a sequence of visual-spatial evidence samples while we measured cortical population activity with magnetoencephalography. Halfway through the sequence, participants were prompted to judge the more likely source category. We find that processing of subsequent evidence depends on its consistency with the previously chosen category. Evidence encoded in parietal cortex contributes more to the estimation report when that evidence is consistent with the previous choice compared to when it contradicts that choice. Our results indicate that information contradicting pre-existing beliefs has little impact on subsequent behavior, despite being precisely encoded in the brain. This provides room for deliberative control to counteract confirmation biases. |
Elisabet Parés-Pujolràs; Simon P. Kelly; Peter R. Murphy Dissociable encoding of evolving beliefs and momentary belief updates in distinct neural decision signals Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–14, 2025. @article{ParesPujolras2025,Making accurate decisions in noisy environments requires integrating evidence over time. Studies of simple perceptual decisions in static environments have identified two human neurophysiological signals that evolve with similar integration dynamics, with one - the centroparietal positivity - appearing to compute the running integral and continuously feed it to the other - motor beta lateralisation. However, it remains unknown whether and how these signals serve more distinct functional roles in more complex scenarios. Here, we use a volatile expanded judgement task that dissociates raw sensory information, belief updates, and the evolving belief itself. We find that motor beta lateralisation traces the evolving belief across stimuli, while the centroparietal positivity locally encodes the belief updates associated with each individual stimulus. These results suggest a flexible computational hierarchy where context-dependent belief updates can be computed sample-by-sample at an intermediate processing level to modify downstream belief representations for protracted decisions about discrete stimuli. |
