EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2024 (with some early 2025s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2025 |
Xiaomei Zhao; Yabo Wang; Keke Wang; Luyao Wang Effects of sequential and non-sequential presentation conditions of multiple-stem facts on memory integration and cognitive resource allocation Journal Article In: Psychological Research, vol. 89, no. 10, pp. 1–14, 2025. @article{Zhao2025, What limits the self-generation of new knowledge in the memory integration process? One striking contender is the number of necessary pieces of information that are dispersed. Specifically, when essential information is scattered across multiple sources/places, it becomes challenging to effectively integrate and generate new knowledge. Most of the studies on memory integration have focused on paired stem facts, but have neglected the exploration of multiple-stem facts. The present study examined college students' performance on memory integration under different conditions of three stem facts. In Experiment 1, participants were exposed to a series of novel, authentic stem facts in which every three relevant ones could be integrated to generate new knowledge. The results of Experiment 1 showed that college students could spontaneously generate a new piece of information by integrating two or three separate but related facts. The integration can occur in at least two distinct types, depending on the presentation order of the learning materials: sequential recursive integration and non-sequential recursive integration. College students performed better in sequential recursive integration than in non-sequential recursive integration, and this difference in integration performance was not caused by differences in memory for the stem facts. Based on Experiment 1, Experiment 2 used eye-tracking technology to explore the allocation of internal cognitive resources across different conditions of three stem facts. We found that in non-sequential recursive integration, college students had the longest visual duration and the highest number of fixations on the second stem fact. In sequential recursive integration, there were no other significant differences in the number and duration of visual fixations for the three stem facts. College students paid longer fixations to the second stem fact and the third stem fact in the non-sequential recursive condition than in the sequential recursive condition. Our study suggests that when information is related but cannot be integrated, longer fixation indicates stumbling when dealing with an unresolvable difficulty. When knowledge is presented in a stepwise manner (such as in the sequential recursive integration condition), it results in better semantic memory extension. |
Yang Zhang; Yangping Li; Weiping Hu; Huizhi Bai; Yuanjing Lyu Applying machine learning to intelligent assessment of scientific creativity based on scientific knowledge structure and eye-tracking data Journal Article In: Journal of Science Education and Technology, pp. 1–19, 2025. @article{Zhang2025b, Scientific creativity plays an essential role in science education as an advanced cognitive ability that inspires students to solve scientific problems inventively. The cultivation of scientific creativity relies heavily on effective assessment. Typically, human raters manually score scientific creativity using the Consensual Assessment Technique (CAT), which is a labor-intensive, time-consuming, and error-prone process. The assessment procedure is susceptible to subjective biases stemming from cognitive prejudice, distractions, fatigue, and fondness, potentially compromising its reliability, consistency, and efficiency. Previous research has sought to mitigate these risks by automating the assessment through latent semantic analysis and artificial intelligence. In this study, we developed machine learning (ML) models based on a training dataset that included output labels from the Scientific Creativity Test (SCT) evaluated by human experts, along with input features derived from objectively measurable semantic network parameters (representing the scientific knowledge structure) and eye-tracking blink duration (indicating attention patterns during the SCT). Most models achieve over 90% accuracy in predicting the scientific creativity levels of new individuals outside the training set, with some models achieving perfect accuracy. The results indicate that the ML models effectively capture the underlying relationship between scientific knowledge, eye movements, and scientific creativity. These models enable the fairly objective prediction of scientific creativity levels based on semantic network parameters and blink durations during the SCT, eliminating the need for ongoing human scoring. Therefore, laborious and complex manual assessment methods typically used for SCT can be avoided. This new method improves the efficiency of scientific creativity assessment by automating processes, minimizing subjectivity, providing rapid feedback, and enabling large-scale evaluations, all while reducing evaluators' workloads. |
Hao Zhang; Yiqing Hu; Yang Li; Shuangyu Zhang; Xiao Li Li; Chenguang Zhao Simultaneous dataset of brain, eye and hand during visuomotor tasks Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–15, 2025. @article{Zhang2025a, Visuomotor integration is a complex skill set encompassing many fundamental abilities, such as visual search, attention monitoring, and motor control. To explore the dynamic interplay between visual inputs and motor outputs, it is necessary to simultaneously record multiple brain activities with high temporal and spatial resolution, as well as to record implicit and explicit behaviors. However, there is a lack of public datasets that provide simultaneous multiple modalities during a visual-motor task. Using functional near-infrared spectroscopy and electroencephalography to record brain activity simultaneously facilitates more precise capture of the complex brain mechanisms underlying visuomotor integration. Additionally, by employing a combined eye movement and manual response, it is possible to fully evaluate the effects of visuomotor outputs from implicit and explicit dimensions. We recorded whole-brain EEG (34 electrodes) and fNIRS (44 channels) covering the frontal and parietal cortex along with eye movements, behavior sampling, and operant behavior. The dataset underwent rigorous synchronization and quality control to highlight the effectiveness of our experiments and to demonstrate the high quality of our multimodal data framework. |
Han Zhang; Jacob Sellers; Taraz G. Lee; John Jonides The temporal dynamics of visual attention Journal Article In: Journal of Experimental Psychology: General, vol. 154, no. 2, pp. 435–456, 2025. @article{Zhang2025, Researchers have long debated how humans select relevant objects amid physically salient distractions. An increasingly popular view holds that the key to avoiding distractions lies in suppressing the attentional priority of a salient distractor. However, the precise mechanisms of distractor suppression remain elusive. Because the computation of attentional priority is a time-dependent process, distractor suppression must be understood within these temporal dynamics. In four experiments, we tracked the temporal dynamics of visual attention using a novel forced-response method, by which participants were required to express their latent attentional priority at varying processing times via saccades. We show that attention could be biased either toward or away from a salient distractor depending on the timing of observation, with these temporal dynamics varying substantially across experiments. These dynamics were explained by a computational model assuming the distractor and target priority signals arrive asynchronously in time and with different influences on saccadic behavior. The model suggests that distractor signal suppression can be achieved via a "slow" mechanism in which the distractor priority signal dictates saccadic behavior until a late-arriving priority signal overrides it, or a "fast" mechanism which directly suppresses the distractor priority signal's behavioral expression. The two mechanisms are temporally dissociable and can work collaboratively, resulting in time-dependent patterns of attentional allocation. The current work underscores the importance of considering the temporal dynamics of visual attention and provides a computational architecture for understanding the mechanisms of distractor suppression. |
Mengying Yuan; Min Gao; Xinzhong Cui; Xin Yue; Jing Xia; Xiaoyu Tang The power of sound: Exploring the auditory influence on visual search efficiency Journal Article In: Cognition, vol. 256, pp. 1–13, 2025. @article{Yuan2025, In a dynamic visual search environment, a synchronous and meaningless auditory signal (pip) that corresponds with a change in a visual target promotes the efficiency of visual search (pop out), which is known as the pip-and-pop effect. We conducted three experiments to investigate the mechanism of the pip-and-pop effect. Using the eye movement technique, we manipulated the interval rhythm (Exp. 1) and interval duration time (Exp. 2) of dynamic color changes in visual stimuli in the dynamic visual search paradigm to ensure that there was a significant pip-and-pop effect. In Exp. 3, we modulated the appearance of the sound by employing a visual-only condition, an auditory target condition (synchronized sounds), an auditory oddball condition (a high-frequency sound in a series of low-frequency sounds), an omitted oddball condition (an omitted sound in a series of sounds) and an auditory non-oddball condition (the last of the four sounds). We aim to clarify the role of audiovisual cross-modal information in the pip-and-pop effect by comparing different conditions. The search time results showed that a significant pip-and-pop effect was found for the auditory target, auditory oddball and auditory non-oddball conditions. The eye movement results revealed an increase in the fixation duration and a decrease in the number of fixations for the auditory target and auditory oddball conditions. Our findings suggest that the pip-and-pop effect is indeed a cross-modal effect. Furthermore, the interaction between auditory and visual information is necessary for the pip-and-pop effect, whereas auditory oddball stimuli attract attention and therefore moderate this effect. Our study provides a solution for the pip-and-pop effect mechanism in a dynamic visual search paradigm. |
Soon Young Park; Diederick C. Niehorster; Ludwig Huber Examining holistic processing strategies in dogs and humans through gaze behavior Journal Article In: PLoS ONE, vol. 20, no. 2, pp. 1–27, 2025. @article{Young2025, Extensive studies have shown that humans process faces holistically, considering not only individual features but also the relationships among them. Knowing where humans and dogs fixate first and the longest when they view faces is highly informative, because the locations can be used to evaluate whether they use a holistic face processing strategy or not. However, the conclusions reported by previous eye-tracking studies appear inconclusive. To address this, we conducted an experiment with humans and dogs, employing experimental settings and analysis methods that can enable direct cross-species comparisons. Our findings reveal that humans, unlike dogs, preferentially fixated on the central region, surrounded by the inner facial features, for both human and dog faces. This pattern was consistent for initial and sustained fixations over seven seconds, indicating a clear tendency towards holistic processing. Although dogs did not show an initial preference for what to look at, their later fixations may suggest holistic processing when viewing faces of their own species. We discuss various potential factors influencing species differences in our results, as well as differences compared to the results of previous studies. |
Zheng Yang; Bing Han; Xinbo Gao; Zhi Hui Zhan Eye-movement-prompted large image captioning model Journal Article In: Pattern Recognition, vol. 159, pp. 1–13, 2025. @article{Yang2025a, Pretrained large vision-language models have shown outstanding performance on the task of image captioning. However, owing to the insufficient decoding of image features, existing large models sometimes lose important information, such as objects, scenes, and their relationships. In addition, the complex “black-box” nature of these models makes their mechanisms difficult to explain. Research shows that humans learn richer representations than machines do, which inspires us to improve the accuracy and interpretability of large image captioning models by combining human observation patterns. We built a new dataset, called saliency in image captioning (SIC), to explore relationships between human vision and language representation. One thousand images with rich context information were selected as image data of SIC. Each image was annotated with five caption labels and five eye-movement labels. Through analysis of the eye-movement data, we found that humans efficiently captured comprehensive information for image captioning during their observations. Therefore, we propose an eye-movement-prompted large image captioning model, which is embedded with two carefully designed modules: the eye-movement simulation module (EMS) and the eye-movement analyzing module (EMA). EMS combines the human observation pattern to simulate eye-movement features, including the positions and scan paths of eye fixations. EMA is a graph neural network (GNN) based module, which decodes graphical eye-movement data and abstracts image features as a directed graph. More accurate descriptions can be predicted by decoding the generated graph. Extensive experiments were conducted on the MS-COCO and NoCaps datasets to validate our model. The experimental results showed that our network was interpretable, and could achieve superior results compared with state-of-the-art methods, i.e., 84.2% BLEU-4 and 145.1% CIDEr-D on MS-COCO Karpathy test split, indicating its strong potential for use in image captioning. |
Liu Yang; Wenmao Zhang; Peitao Li; Hongjie Tang; Shuying Chen; Xinhong Jin The aiming advantages in experienced first-person shooter gamers: Evidence from eye movement patterns Journal Article In: Computers in Human Behavior, vol. 165, pp. 1–12, 2025. @article{Yang2025, The esports industry is expanding rapidly, with First-Person Shooter (FPS) games gaining unprecedented popularity, attracting millions of players and viewers worldwide. Proficiency in aiming is crucial in FPS games, serving as a critical factor for performance and victory. The present study explores the aiming advantages of experienced FPS players by analyzing their eye movement patterns under varying spatial and temporal conditions. Utilizing eye-tracking technology, data were collected from 63 participants, including 28 experienced FPS players and 35 non-FPS players. Task performance and eye movement indices such as accuracy, execution time, fixation count, and saccade count were analyzed. Results indicated that experienced FPS players exhibit faster execution times and more efficient eye movement patterns. Specifically, they more frequently exhibited the 0-fixation-1-saccade pattern, characterized by a single saccade without fixation, while showing fewer patterns requiring multiple corrective adjustments. This enhanced efficiency in visual search and eye-hand coordination likely contributes to their superior performance. Moreover, the study found that target distance and appearance latency significantly affect task performance and eye movement behavior. Greater distances and higher temporal uncertainty negatively impact performance, while spatiotemporal interactions are most influential near the fovea. These findings highlight the critical role of efficient eye movement patterns in enhancing aiming performance and suggest that FPS players could benefit from targeted eye-hand coordination training. |
Yao Yan; Yilin Wu; Hoi Ming Ken Yip; Nicholas Seow Chiang Price Metrics of two-dimensional smooth pursuit are diverse across participants and stable across days Journal Article In: Journal of Vision, vol. 25, no. 2, pp. 1–18, 2025. @article{Yan2025b, Smooth pursuit eye movements are used to volitionally track moving objects, keeping their image near the fovea. Pursuit gain, the ratio of eye to stimulus speed, is used to quantify tracking accuracy and is usually close to 1 for healthy observers. Although previous studies have shown directional asymmetries such as horizontal gain exceeding vertical gain, the temporal stability of these biases and the correlation between oculomotor metrics for tracking in different directions and speeds have not been investigated. Here, in testing sessions 4 to 10 days apart, 45 human observers tracked targets moving along two-dimensional trajectories. Horizontal, vertical, and radial pursuit gain had high test–retest reliability (mean intraclass correlation 0.84). The frequency of all saccades and anticipatory saccades during pursuit also had high test–retest reliability (intraclass correlation coefficients = 0.66 and 0.73, respectively). In addition, gain metrics showed strong intermetric correlation, and saccade metrics separately showed strong intercorrelation; however, gain and saccade metrics showed only weak intercorrelation. These correlations are likely to originate from a mixture of sensory, motor, and integrative mechanisms. The test–retest reliability of multiple distinct pursuit metrics represents a “pursuit identity” for individuals, but we argue against this ultimately contributing to an oculomotor biomarker. |
Chuyao Yan; Hao Wang; Xueyan Jiang; Zhiguo Wang Attention modulates subjective time perception across eye movements Journal Article In: Vision Research, vol. 227, pp. 1–9, 2025. @article{Yan2025, Prior research has established that actions, such as eye movements, influence time perception. However, the relationship between pre-saccadic attention, which is often associated with eye movement, and subjective time perception has not been explored. Our study examines the impact of pre-saccadic attention on the subjective experience of time during eye movements, particularly focusing on its influence on subjective time perception at the saccade target. Participants were presented with two clocks featuring spinning hands, positioned at distinct locations corresponding to fixation and the saccade target. They were required to report the perceived time of these clocks across the eye movements, enabling us to measure and compare both the perceived and actual timing at these specific clock locations. In Experiment 1, we observed that participants tended to report the timing of their eyes' arrival at the target location as occurring slightly ahead of the actual time. In contrast, in Experiment 2, when participants diverted their attention to the fixation clock prior to the imperative saccade, this perceptual bias diminished. These results indicate that subjective time perception is strongly impacted by attentional conditions across the two experiments. Together, these findings offer further evidence for the notion that stable time perception during eye movements is not solely an inherent property of the eye movement system but also encompasses other cognitive mechanisms, such as attention. Statement of relevance: While we often remain unaware of the frequent saccades (rapid eye movements) we make, they have a profound impact on our perception of the world and the flow of time. Nevertheless, the connection between pre-saccadic attention, often associated with eye movements, and our subjective perception of time remains largely unexplored. In our research, we investigated the relationship between attention and our subjective experience of time. Our findings revealed the crucial role of attention, serving as a bridge between the physical movements of our eyes and our internal sense of temporal continuity. In essence, although previous studies have demonstrated the impact of eye movements on time perception, our current study emphasizes the critical influence of attention during the preparatory phase of saccades on the subjective experience of time during eye movements. |
Jingjing Xu; Zhongling Pi; Meng Liu; Chaoqun Ye; Weiping Hu Effective learning through task motivation and learning scaffolding: Analyzing online collaborative interaction with eye tracking technology Journal Article In: Instructional Science, pp. 1–28, 2025. @article{Xu2025, Discussion has become a crucial method of interactive learning in online collaborative environments. This study aims to identify the impact of different task motivation compositions and learning scaffolding on attention, learning performance, and behavioral patterns. The 90 Chinese undergraduate and graduate students (Mage=20.38 |
Pei Xie; Han Bin Sang; Chao Zheng Huang; Ai Bao Zhou The effect of body-related information on food attentional bias in women with body weight dissatisfaction Journal Article In: Appetite, vol. 208, pp. 1–9, 2025. @article{Xie2025a, The eating behavior of individuals is susceptible to various factors. Emotion is an important factor that influences eating behaviors, especially in women who care about their body weight and are dissatisfied with their bodies. This study explored the effect of emotional cues on attentional bias toward food in women with body weight dissatisfaction (BWD). Based on Negative Physical Self Scale-Fatness scores, a total of 60 females were recruited: twenty-nine were assigned to the BWD group, and thirty-one were assigned to the no body weight dissatisfaction (NBWD) group. All participants completed the food dot-probe task after exposure to emotional cues, and their eye-tracking data were recorded. The results showed greater duration bias and first fixation direction bias for high-calorie food in the BWD group than in the NBWD group after exposure to negative emotional cues. After exposure to positive emotional cues, the BWD group showed greater first-fixation duration bias and duration bias for high-calorie food than for low-calorie food. The present study found an effect of emotion on the attention bias toward food in women with BWD, and it provided insight into the psychological mechanism of the relationship between emotion and eating behaviors in women with BWD. Our study suggests that both negative and positive emotional cues may lead women with BWD to focus on high-calorie foods. |
Iris Wiegand; Mariska Van Pouderoijen; Joukje M. Oosterman; Kay Deckers; Gernot Horstmann Contributions of distractor dwelling, skipping, and revisiting to age differences in visual search Journal Article In: Scientific Reports, vol. 15, pp. 1–28, 2025. @article{Wiegand2025, Visual search becomes slower with aging, particularly when targets are difficult to discriminate from distractors. Multiple distractor rejection processes may contribute independently to slower search times: dwelling on, skipping of, and revisiting of distractors, measurable by eye-tracking. The present study investigated how age affects each of the distractor rejection processes, and how these contribute to the final search times in difficult (inefficient) visual search. In a sample of healthy Dutch adults (19–85 years), we measured reaction times and eye movements during a target present/absent visual search task, with varying target-distractor similarity and visual set size. We found that older age was associated with longer dwelling and more revisiting of distractors, while skipping was unaffected by age. This suggests that increased processing time and reduced visuo-spatial memory for visited distractor locations contribute to age-related decline in visual search. Furthermore, independently of age, dwelling and revisiting contributed more strongly to search times than skipping of distractors. In conclusion, under conditions of poor guidance, dwelling and revisiting have a major contribution to search times and age-related slowing in difficult visual search, while skipping is largely negligible. |
Xin Wang; Lizhou Fan; Haiyun Li; Xiaochan Bi; Wenjing Jiang; Xin Ma Skip-AttSeqNet: Leveraging skip connection and attention-driven Seq2seq model to enhance eye movement event detection in Parkinson's disease Journal Article In: Biomedical Signal Processing and Control, vol. 99, pp. 1–17, 2025. @article{Wang2025a, To address the limitations of traditional algorithms in detecting eye movement events, particularly in Parkinson's disease (PD) patients, this study introduces Skip-AttSeqNet. It presents an innovative approach combining skip-connected, one-dimensional convolutional neural networks with an attention-enhanced, bidirectional long short-term memory network. This hybrid architecture significantly advances smooth pursuit (SP) event detection, as evidenced by its performance on both the GazeCom dataset and a unique dataset of PD patient eye movements. Key innovations in this work include the utilization of skip connections and attention mechanisms, along with optimized training–validation set division, collectively enhancing the model's accuracy while mitigating overfitting. Skip-AttSeqNet outperforms existing algorithms, achieving a 3.2% higher sample-level F1 score and a notable 6.2% increase in event-level F1 scores for SP detection. Furthermore, we established a smooth-pursuit experimental paradigm and identified significant differences in saccade and SP features between PD patients and healthy older adults through statistical analysis using the Mann–Whitney test. These findings underscore the potential of eye movement metrics as biomarkers for PD, thereby not only strengthening PD diagnosis but also enriching the intersection of computer vision and biomedical research domains. |
Rongwei Wang; Jianrong Jia Aperiodic pupil fluctuations at rest predict orienting of visual attention Journal Article In: Psychophysiology, vol. 62, no. 1, pp. 1–10, 2025. @article{Wang2025, The aperiodic exponent of the power spectrum of signals in several neuroimaging modalities has been found to be related to the excitation/inhibition balance of the neural system. Leveraging the rich temporal dynamics of resting-state pupil fluctuations, the present study investigated the association between the aperiodic exponent of pupil fluctuations and the neural excitation/inhibition balance in attentional processing. In separate phases, we recorded participants' pupil size during resting state and assessed their attentional orienting using the Posner cueing tasks with different cue validities (i.e., 100% and 50%). We found significant correlations between the aperiodic exponent of resting pupil fluctuations and both the microsaccadic and behavioral cueing effects. Critically, this relationship was particularly evident in the 50% cue-validity condition rather than in the 100% cue-validity condition. The microsaccadic responses mediated the association between the aperiodic exponent and the behavioral response. Further analysis showed that the aperiodic exponent of pupil fluctuations predicted the self-rated hyperactivity/impulsivity trait across individuals, suggesting its potential as a marker of attentional deficits. These findings highlight the rich information contained in pupil fluctuations and provide a new approach to assessing the neural excitation/inhibition balance in attentional processing. |
Carla A. Wall; Frederick Shic; Elizabeth A. Will; Quan Wang; Jane E. Roberts Similar gap-overlap profiles in children with fragile x syndrome and IQ-matched autism Journal Article In: Journal of Autism and Developmental Disorders, vol. 55, pp. 891–903, 2025. @article{Wall2025a, Purpose: Fragile X syndrome (FXS) is a single-gene disorder characterized by moderate to severe cognitive impairment and a high association with autism spectrum disorder (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD). Atypical visual attention is a feature of FXS, ASD, and ADHD. Thus, studying early attentional patterns in young children with FXS can offer insight into early emerging neurocognitive processes underlying challenges and contribute to our understanding of common and unique features of ASD and ADHD in FXS. Methods: The present study examined visual attention indexed by the gap-overlap paradigm in children with FXS (n = 39) compared to children with ASD matched on intellectual ability and age (n = 40) and age-matched neurotypical controls (n = 34). The relationship between gap-overlap performance and intellectual ability, ASD, and ADHD across groups was characterized. Saccadic reaction times (RT) were collected across baseline, gap, and overlap conditions. Results: Results indicate no group differences in RT for any conditions. However, RT of the ASD and NT groups became slower throughout the experiment whereas RT of the FXS group did not change, suggesting difficulties in habituation for the FXS group. There was no relationship between RT and intellectual ability, ADHD, or ASD symptoms in the FXS and ASD groups. In the NT group, slower RT was related to elevated ADHD symptoms only. Conclusion: Taken together, findings suggest that the social attention differences documented in FXS and ASD may be due to other cognitive factors, such as reward or motivation, rather than oculomotor control of visual attention. |
Carla A. Wall; Caitlin Hudac; Kelsey Dommer; Beibin Li; Adham Atyabi; Claire Foster; Quan Wang; Erin Barney; Yeojin Amy Ahn; Minah Kim; Monique Mahony; Raphael Bernier; Pamela Ventola; Frederick Shic Preserved but un-sustained responses to bids for dyadic engagement in school-age children with Autism Journal Article In: Journal of Autism and Developmental Disorders, pp. 1–9, 2025. @article{Wall2025, Purpose: Dynamic eye-tracking paradigms are an engaging and increasingly used method to study social attention in autism. While prior research has focused primarily on younger populations, there is a need for developmentally appropriate tasks for older children. Methods: This study introduces a novel eye-tracking task designed to assess school-aged children's attention to speakers involved in conversation. We focused on a primary outcome of attention to speakers' faces during conversation between three actors and during emulated bids for dyadic engagement (dyadic bids). Results: In a sample of 161 children (78 autistic, 83 neurotypical), children displayed significantly lower overall attention to faces compared to their neurotypical peers (p <.0001). Contrary to expectations, both groups demonstrated preserved attentional responses to dyadic bids, with no significant group differences. However, a divergence was observed following the dyadic bid: neurotypical children showed more attention to other conversational agents' faces than autistic children (p =.017). Exploratory analyses in the autism group showed that reduced attention to faces was associated with greater autism features during most experimental conditions. Conclusion: These findings highlight key differences in how autistic and neurotypical children engage with social cues, particularly in dynamic and interactive contexts. The preserved response to dyadic bids in autism, alongside the absence of post-bid attentional shifts, suggests nuanced and context-dependent social attention mechanisms that should be considered in future research and intervention strategies. |
Anne C. L. Vrijling; Minke J. Boer; Remco J. Renken; Jan-Bernard C. Marsman; Joost Heutink; Frans W. Cornelissen; Nomdo M. Jansonius Detecting and quantifying glaucomatous visual function loss with continuous visual stimulus tracking: A case-control study Journal Article In: Translational Vision Science & Technology, vol. 14, no. 2, pp. 1–14, 2025. @article{Vrijling2025, Purpose: Continuous visual stimulus tracking could be used as an easy alternative to standard automated perimetry (SAP) for visual function screening. With continuous visual stimulus tracking, we simplified the perimetric task to following a moving dot on a screen with the eyes. Here, we determined whether tracking performance (the agreement between gaze and stimulus position) enables the detection and quantification of glaucomatous visual function loss (in terms of SAP), and whether it shows a learning effect. Methods: We evaluated the tracking performance of 36 cases with early, moderate, or severe glaucoma (median with interquartile range [IQR] age = 70 [67-74] years) and 36 controls (median = 70 |
Naomi Vingron; Lea Alexandra Müller Karoza; Nancy Azevedo; Aaron Johnson; Evdokimos Konstantinidis; Panagiotis Bamidis; Melissa Võ; Eva Kehayia How words can guide our eyes: Increasing engagement with art through audio-guided visual search in young and older adults Journal Article In: The Mental Lexicon, pp. 1–12, 2025. @article{Vingron2025, Pursuing cognitively stimulating activities, such as engaging with art, is crucial to a healthy lifestyle. The current work simulates visits to an art museum in a laboratory setting. Using eye tracking, we explored how linguistically guided visual search may increase attention, enjoyment and retention of information when viewing art. Two groups of adults, young (under 35 years) and older (over 65 years), viewed ten paintings on a computer screen presented either with or without an accompanying audio-guide, while having their eye movements recorded. Audio-guides referred to specific areas of the painting, marked as Interest Areas (IA). Across age groups, as attested by gaze fixations, the audio-guides increased attention to these areas compared to free-viewing. Audio-guided viewing did not lead to a significant increase over free-viewing in information recall accuracy or feelings of enjoyment and engagement. Overall, older adults did report feeling more positively about both audio-guided and free viewing than young adults. Thus, the use of audio-guides, specifically the gamification through linguistically guided visual search, may be a useful tool to promote meaningful attentional interactions with art. |
Ana Vilotijević; Sebastiaan Mathôt The effect of covert visual attention on pupil size during perceptual fading Journal Article In: Cortex, vol. 182, pp. 112–123, 2025. @article{Vilotijevic2025, Pupil size is modulated by various cognitive factors such as attention, working memory, mental imagery, and subjective perception. Previous studies examining cognitive effects on pupil size mainly focused on inducing or enhancing a subjective experience of brightness or darkness (for example by asking participants to attend to/memorize a bright or dark stimulus), and then showing that this affects pupil size. Surprisingly, the inverse has never been done; that is, it is still unknown what happens when a subjective experience of brightness or darkness is eliminated or strongly reduced even though bright or dark stimuli are physically present. Here, we aim to answer this question by using perceptual fading, a phenomenon where a visual stimulus gradually fades from visual awareness despite its continuous presentation. The study contains two blocks: Fading and Non-Fading. In the Fading block, participants were presented with black and white patches with a fuzzy outline that were presented at the same location throughout the block, thus inducing strong perceptual fading. In contrast, in the Non-Fading block, the patches switched sides on each trial, thus preventing perceptual fading. Participants covertly attended to one of the two patches, indicated by a cue, and reported the offset of one of a set of circles that are displayed on top. We hypothesized that pupil size will be modulated by covert visual attention in the Non-Fading block, but that this effect will not (or to a lesser extent) arise in the Fading block. We found that covert visual attention to bright/dark does modulate pupil size even during perceptual fading (Fading block), but to a lesser extent than when the perceptual experience of brightness/darkness is preserved (Non-Fading block). This implies that pupil size is always modulated by covert attention, but that the effect decreases as subjective experience of brightness or darkness decreases. In broader terms, this suggests that cognitive modulations of pupil size reflect a mixture of high-level and lower-level visual processing. |
Martin R. Vasilev; Zeynep Ozkan; Julie A. Kirkby; Antje Nuthmann; Fabrice B. R. Parmentier Unexpected sounds induce a rapid inhibition of eye-movement responses Journal Article In: Psychophysiology, vol. 62, pp. 1–19, 2025. @article{Vasilev2025, Unexpected sounds have been shown to trigger a global and transient inhibition of motor responses. Recent evidence suggests that eye movements may also be inhibited in a similar way, but it is not clear how quickly unexpected sounds can affect eye-movement responses. Additionally, little is known about whether they affect only voluntary saccades or also reflexive saccades. In this study, participants performed a pro-saccade and an anti-saccade task while the timing of sounds relative to stimulus onset was manipulated. Pro-saccades are generally reflexive and stimulus-driven, whereas anti-saccades require the generation of a voluntary saccade in the opposite direction of a peripheral stimulus. Unexpected novel sounds inhibited the execution of both pro- and anti-saccades compared to standard sounds, but the inhibition was stronger for anti-saccades. Novel sounds affected response latencies as early as 150 ms before the peripheral cue to make a saccade, all the way to 25 ms after the cue to make a saccade. Interestingly, unexpected sounds also reduced anti-saccade task errors, indicating that they aided inhibitory control. Overall, these results suggest that unexpected sounds yield a global and rapid inhibition of eye-movement responses. This inhibition also helps suppress reflexive eye-movement responses in favor of more voluntarily generated ones. |
Timo Kerkoerle; Louise Pape; Milad Ekramnia; Xiaoxia Feng; Jordy Tasserie; Morgan Dupont; Xiaolian Li; Bechir Jarraya; Wim Vanduffel; Stanislas Dehaene; Ghislaine Dehaene-Lambertz Brain mechanisms of reversible symbolic reference: A potential singularity of the human brain Journal Article In: eLife, vol. 12, pp. 1–28, 2025. @article{Kerkoerle2025, The emergence of symbolic thinking has been proposed as a dominant cognitive criterion to distinguish humans from other primates during hominization. Although the proper definition of a symbol has been the subject of much debate, one of its simplest features is bidirectional attachment: the content is accessible from the symbol, and vice versa. Behavioral observations scattered over the past four decades suggest that this criterion might not be met in non-human primates, as they fail to generalize an association learned in one temporal order (A to B) to the reverse order (B to A). Here, we designed an implicit fMRI test to investigate the neural mechanisms of arbitrary audio-visual and visual-visual pairing in monkeys and humans and probe their spontaneous reversibility. After learning a unidirectional association, humans showed surprise signals when this learned association was violated. Crucially, this effect occurred spontaneously in both learned and reversed directions, within an extended network of high-level brain areas, including, but also going beyond the language network. In monkeys, by contrast, violations of association effects occurred solely in the learned direction and were largely confined to sensory areas. We propose that a human-specific brain network may have evolved the capacity for reversible symbolic reference. |
Ekin Tünçok; Lynne Kiorpes; Marisa Carrasco Opposite asymmetry in visual perception of humans and macaques Journal Article In: Current Biology, vol. 35, pp. 681–687, 2025. @article{Tuencok2025, In human adults, visual perception varies throughout the visual field. Performance decreases with eccentricity [1, 2] and varies around polar angle. At isoeccentric locations, performance is typically higher along the horizontal than vertical meridian (horizontal-vertical asymmetry [HVA]) and along the lower than the upper vertical meridian (vertical meridian asymmetry [VMA]). It is unknown whether the macaque visual system, the leading animal model for understanding human vision, also exhibits these performance asymmetries. Here, we investigated whether and how visual field asymmetries differ between these two groups. Human adults and adult macaque monkeys (Macaca nemestrina) performed a two-alternative forced choice (2AFC) motion direction discrimination task for a target presented among distractors at isoeccentric locations. Both groups showed heterogeneous visual sensitivity around the visual field, but there were striking differences between them. Human observers showed a large VMA—their sensitivity was poorest at the upper vertical meridian—a weak horizontal-vertical asymmetry, and lower sensitivity at intercardinal locations. Macaque performance revealed an inverted VMA—their sensitivity was poorest in the lower vertical meridian. The opposite pattern of VMA in macaques and humans revealed in this study may reflect adaptive behavior by increasing discriminability at locations with greater relevance for visuomotor integration. This study reveals that performance also varies as a function of polar angle for monkeys, but in a different manner than in humans, and highlights the need to investigate species-specific similarities and differences in brain and behavior to constrain models of vision and brain function. |
Duncan T. Tulimieri; Amelia Decarie; Tarkeshwar Singh; Jennifer A. Semrau Impairments in proprioceptively-referenced limb and eye movements in chronic stroke Journal Article In: Neurorehabilitation and Neural Repair, vol. 39, no. 1, pp. 47–57, 2025. @article{Tulimieri2025, Background: Upper limb proprioceptive impairments are common after stroke and affect daily function. Recent work has shown that stroke survivors have difficulty using visual information to improve proprioception. It is unclear how eye movements are impacted to guide action of the arm after stroke. Here, we aimed to understand how upper limb proprioceptive impairments impact eye movements in individuals with stroke. Methods: Control (N = 20) and stroke participants (N = 20) performed a proprioceptive matching task with upper limb and eye movements. A KINARM exoskeleton with eye tracking was used to assess limb and eye kinematics. The upper limb was passively moved by the robot and participants matched the location with either an arm or eye movement. Accuracy was measured as the difference between passive robot movement location and active limb matching (Hand-End Point Error) or active eye movement matching (Eye-End Point Error). Results: We found that individuals with stroke had significantly larger Hand (2.1×) and Eye-End Point (1.5×) Errors compared to controls. Further, we found that proprioceptive errors of the hand and eye were highly correlated in stroke participants (r = .67 |
Tommaso Tosato; Guillaume Dumas; Gustavo Rohenkohl; Pascal Fries Performance modulations phase-locked to action depend on internal state Journal Article In: iScience, vol. 28, no. 1, pp. 1–13, 2025. @article{Tosato2025, Previous studies have shown that perceptual performance can be modulated at specific frequencies phase-locked to self-paced motor actions, but findings have been inconsistent. To investigate this effect at the population level, we tested 50 participants who performed a self-paced button press followed by a threshold-level detection task, using both fixed- and random-effects analyses. Contrary to expectations, the aggregated data showed no significant action-related modulation. However, when accounting for internal states, we found that trials during periods of low performance or following a missed detection exhibited significant modulation at approximately 17 Hz. Additionally, participants with no false alarms showed similar modulation. These effects were significant in random effects tests, suggesting that they generalize to the population. Our findings indicate that action-related perceptual modulations are not always detectable but may emerge under specific internal conditions, such as lower attentional engagement or higher decision criteria, particularly in the beta-frequency range. |
Jan Theeuwes; Jonna Van Doorn; Dirk Van Moorselaar Suppression of fear-conditioned stimuli Journal Article In: Emotion, pp. 1–6, 2025. @article{Theeuwes2025, This study demonstrates that even objects generating acute fear through shock conditioning can be attentionally suppressed. Participants searched for shapes while a color singleton distractor was presented. In a preconditioning phase, participants learned to suppress a color singleton distractor frequently appearing in a specific location. Following fear conditioning, suppression remained in place even for those color distractors that were now associated with receiving an electric shock. This finding provides evidence that people can learn to suppress stimuli they fear. The current results are important as they challenge prevailing theories that suggest attentional capture by fearful stimuli is inflexible and driven by innate, bottom-up processes. Moreover, the finding that fearful stimuli can be suppressed opens up potential avenues for developing behavior modification techniques aimed at counteracting attentional biases toward fearful stimuli. |
Caleb Stone; Jason B. Mattingley; Dragan Rangelov Neural mechanisms of metacognitive improvement under speed pressure Journal Article In: Communications Biology, vol. 8, pp. 1–12, 2025. @article{Stone2025, The ability to accurately monitor the quality of one's choices, or metacognition, improves under speed pressure, possibly due to changes in post-decisional evidence processing. Here, we investigate the neural processes that regulate decision-making and metacognition under speed pressure using time-resolved analyses of brain activity recorded using electroencephalography. Participants performed a motion discrimination task under short and long response deadlines and provided a metacognitive rating following each response. Behaviourally, participants were faster, less accurate, and showed superior metacognition with short deadlines. These effects were accompanied by a larger centro-parietal positivity (CPP), a neural correlate of evidence accumulation. Crucially, post-decisional CPP amplitude was more strongly associated with participants' metacognitive ratings following errors under short relative to long response deadlines. Our results suggest that superior metacognition under speed pressure may stem from enhanced metacognitive readout of post-decisional evidence. |
Sophie Marie Stasch; Wolfgang Mack When automation fails - Investigating cognitive stability and flexibility in a multitasking scenario Journal Article In: Applied Ergonomics, vol. 125, pp. 1–12, 2025. @article{Stasch2025, Managing multiple tasks simultaneously often results in performance decrements due to limited cognitive resources. Task prioritization, requiring effective cognitive control, is a strategy to mitigate these effects and is influenced by the stability-flexibility dilemma. While previous studies have investigated the stability-flexibility dilemma in fully manual multitasking environments, this study explores how cognitive control modes interact with automation reliability. While no significant interaction between control mode and automation reliability was observed in single multitasking performance, our findings demonstrate that overall task performance benefits from a flexible cognitive control mode when automation is reliable. However, when automation is unreliable, a stable cognitive control mode improves manual takeover performance, though this comes at the expense of secondary task performance. Furthermore, cognitive control modes and automation reliability independently affect various eye-tracking metrics and mental workload. These findings underscore the need to integrate cognitive control and automation reliability into adaptive assistance systems, particularly during the perceive stage, to enhance safety in human-machine systems. |
Ramanujan Srinath; Amy M. Ni; Claire Marucci; Marlene R. Cohen; David H. Brainard Orthogonal neural representations support perceptual judgements of natural stimuli Journal Article In: Scientific Reports, vol. 15, pp. 1–17, 2025. @article{Srinath2025, In natural behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on simple backgrounds. Natural viewing, however, carries a set of challenges that are inaccessible using artificial stimuli, including neural responses to background objects that are task-irrelevant. An emerging body of evidence suggests that the visual abilities of humans and animals can be modeled through the linear decoding of task-relevant information from visual cortex. This idea suggests the hypothesis that irrelevant features of a natural scene should impair performance on a visual task only if their neural representations intrude on the linear readout of the task relevant feature, as would occur if the representations of task-relevant and irrelevant features are not orthogonal in the underlying neural population. We tested this hypothesis using human psychophysics and monkey neurophysiology, in response to parametrically variable naturalistic stimuli. We demonstrate that 1) the neural representation of one feature (the position of a central object) in visual area V4 is orthogonal to those of several background features, 2) the ability of human observers to precisely judge object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background are orthogonally represented by V4 neural responses. Our observations are consistent with the hypothesis that orthogonal neural representations support stable perception of objects and features despite the tremendous richness of natural visual scenes. |
Connor Spiech; Mikael Hope; Valentin Bégel Evoked and entrained pupillary activity while moving to preferred tempo and beyond Journal Article In: iScience, vol. 28, no. 1, pp. 1–10, 2025. @article{Spiech2025, People synchronize their movements more easily to rhythms with tempi closer to their preferred motor rates than with faster or slower ones. More efficient coupling at one's preferred rate, compared to faster or slower rates, should be associated with lower cognitive demands and better attentional entrainment, as predicted by dynamical system theories of perception and action. We show that synchronizing one's finger taps to metronomes at tempi outside of their preferred rate evokes larger pupil sizes, a proxy for noradrenergic attention, relative to passively listening. This demonstrates that synchronizing is more cognitively demanding than listening only at tempi outside of one's preferred rate. Furthermore, pupillary phase coherence increased for all tempi while synchronizing compared to listening, indicating that synchronous movements resulted in more efficiently allocated attention. Beyond their theoretical implications, our findings suggest that rehabilitation for movement disorders should be tailored to patients' preferred rates to reduce cognitive demands. |
Lauren N. Slivka; Kenna R. H. Clayton; Greg D. Reynolds Mask-wearing affects infants' selective attention to familiar and unfamiliar audiovisual speech Journal Article In: Frontiers in Developmental Psychology, vol. 3, pp. 1–8, 2025. @article{Slivka2025, This study examined the immediate effects of mask-wearing on infant selective visual attention to audiovisual speech in familiar and unfamiliar languages. Infants distribute their selective attention to regions of a speaker's face differentially based on their age and language experience. However, the potential impact wearing a face mask may have on infants' selective attention to audiovisual speech has not been systematically studied. We utilized eye tracking to examine the proportion of infant looking time to the eyes and mouth of a masked or unmasked actress speaking in a familiar or unfamiliar language. Six-month-old and 12-month-old infants (n = 42, 55% female, 91% White Non-Hispanic/Latino) were shown videos of an actress speaking in a familiar language (English) with and without a mask on, as well as videos of the same actress speaking in an unfamiliar language (German) with and without a mask. Overall, infants spent more time looking at the unmasked presentations compared to the masked presentations. Regardless of language familiarity or age, infants spent more time looking at the mouth area of an unmasked speaker and they spent more time looking at the eyes of a masked speaker. These findings indicate mask-wearing has immediate effects on the distribution of infant selective attention to different areas of the face of a speaker during audiovisual speech. |
Yiming Shi; Jiaming Zhang; Xingyi Li; Yuchong Han; Jiangheng Guan; Yilin Li; Jiawei Shen; Tzvetomir Tzvetanov; Dongyu Yang; Xinyi Luo; Yichuan Yao; Zhikun Chu; Tianyi Wu; Zhiping Chen; Ying Miao; Yufei Li; Qian Wang; Jiaxi Hu; Jianjun Meng; Xiang Liao; Yifeng Zhou; Louis Tao; Yuqian Ma; Jutao Chen; Mei Zhang; Rong Liu; Yuanyuan Mi; Jin Bao; Zhong Li; Xiaowei Chen; Tian Xue Non-image-forming photoreceptors improve visual orientation selectivity and image perception. Journal Article In: Neuron, vol. 113, pp. 486–500, 2025. @article{Shi2025, It has long been a decades-old dogma that image perception is mediated solely by rods and cones, while intrinsically photosensitive retinal ganglion cells (ipRGCs) are responsible only for non-image-forming vision, such as circadian photoentrainment and pupillary light reflexes. Surprisingly, we discovered that ipRGC activation enhances the orientation selectivity of layer 2/3 neurons in the primary visual cortex (V1) of mice by both increasing preferred-orientation responses and narrowing tuning bandwidth. Mechanistically, we found that the tuning properties of V1 excitatory and inhibitory neurons are differentially influenced by ipRGC activation, leading to a reshaping of the excitatory/inhibitory balance that enhances visual cortical orientation selectivity. Furthermore, light activation of ipRGCs improves behavioral orientation discrimination in mice. Importantly, we found that specific activation of ipRGCs in human participants through visual spectrum manipulation significantly enhances visual orientation discriminability. Our study reveals a visual channel originating from "non-image-forming photoreceptors" that facilitates visual orientation feature perception. |
Kaiyuan Sheng; Lian Liu; Feng Wang; Songnian Li; Xu Zhou An eye-tracking study on exploring children's visual attention to streetscape elements Journal Article In: Buildings, vol. 15, pp. 1–25, 2025. @article{Sheng2025, Urban street spaces play a crucial role in children's daily commuting and social activities. Therefore, the design of these spaces must give more consideration to children's perceptual preferences. Traditional street landscape perception studies often rely on subjective analysis, which lacks objective, data-driven insights. This study overcomes this limitation by using eye-tracking technology to evaluate children's preferences more scientifically. We collected eye-tracking data from 57 children aged 6–12 as they naturally viewed 30 images depicting school commuting environments. Data analysis revealed that the proportions of landscape elements in different street types influenced the visual perception characteristics of children in this age group. On well-maintained main and secondary roads, elements such as minibikes, people, plants, and grass attracted significant visual attention from children. In contrast, commercial streets and residential streets, characterized by greater diversity in landscape elements, elicited more frequent gazes. Children's eye-tracking behaviors were particularly influenced by vibrant elements like walls, plants, cars, signboards, minibikes, and trade. Furthermore, due to the developmental immaturity of children's visual systems, no significant gender differences were observed in visual perception. Understanding children's visual landscape preferences provides a new perspective for researching the sustainable development of child-friendly cities at the community level. These findings offer valuable insights for optimizing the design of child-friendly streets. |
Alexander J. Shackman; Jason F. Smith; Ryan D. Orth; Christina L. G. Savage; Paige R. Didier; Julie M. Mccarthy; Melanie E. Bennett; Jack J. Blanchard Blunted ventral striatal reactivity to social reward is associated with more severe motivation and pleasure deficits in psychosis Journal Article In: Schizophrenia Bulletin, pp. 1–36, 2025. @article{Shackman2025, Background and Hypothesis: Among individuals living with psychotic disorders, social impairment is common, debilitating, and challenging to treat. While the roots of this impairment are undoubtedly complex, converging lines of evidence suggest that social motivation and pleasure (MAP) deficits play a central role. Yet most neuroimaging studies have focused on monetary rewards, precluding decisive inferences. Study Design: Here we leveraged parallel social and monetary incentive delay functional magnetic resonance imaging paradigms to test whether blunted reactivity to social incentives in the ventral striatum—a key component of the distributed neural circuit mediating appetitive motivation and hedonic pleasure—is associated with more severe MAP symptoms in a transdiagnostic adult sample enriched for psychosis. To maximize ecological validity and translational relevance, we capitalized on naturalistic audiovisual clips of an established social partner expressing positive feedback. Study Results: Although both paradigms robustly engaged the ventral striatum, only reactivity to social incentives was associated with clinician-rated MAP deficits. This association remained significant when controlling for other symptoms, binary diagnostic status, or striatal reactivity to monetary incentives. Follow-up analyses suggested that this association predominantly reflects diminished activation during the presentation of social reward. Conclusions: These observations provide a neurobiologically grounded framework for conceptualizing the social-anhedonia symptoms and social impairments that characterize many individuals living with psychotic disorders and underscore the need to develop targeted intervention strategies. |
Irina A. Sekerina; Olga Parshina; Vladislava Staroverova; Natalia Gagarina Attention–language interface in Multilingual Assessment Instrument for Narratives Journal Article In: Journal of Experimental Child Psychology, vol. 249, pp. 1–19, 2025. @article{Sekerina2025, The current study employed the Multilingual Assessment Instrument for Narratives (MAIN) to test comprehension of narrative macrostructure in Russian in a visual world eye-tracking paradigm. The four MAIN visual narratives are structurally similar and question referents' goals and internal states (IS). Previous research revealed that children's MAIN comprehension differed among the four narratives in German, Swedish, Russian, and Turkish, but it is not clear why. We tested whether the difference in comprehension was (a) present, (b) caused by complicated inferences in understanding IS compared with goals, and (c) ameliorated by orienting visual attention to the referents whose IS was critical for accurate comprehension. Our findings confirmed (a) and (b) but found no effect of attentional cues on accuracy for (c). The multidimensional theory of narrative organization of children's knowledge of macrostructure needs to consider the type of inferences necessary for IS that are influenced by subjective interpretation and reasoning. |
Marie Schroth; Wim Fias; Muhammet Ikbal Sahan Eye movements follow the dynamic shifts of attention through serial order in verbal working memory Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Schroth2025, How are arbitrary sequences of verbal information retained and manipulated in working memory? Increasing evidence suggests that serial order in verbal WM is spatially coded and that spatial attention is involved in access and retrieval. Based on the idea that brain areas controlling spatial attention are also involved in oculomotor control, we used eye tracking to reveal how the spatial structure of serial order information is accessed in verbal working memory. In two experiments, participants memorized a sequence of auditory words in the correct order. While their eye movements were being measured, they named the memorized items in a self-determined order in Experiment 1 and in a cued order in Experiment 2. We tested the hypothesis that serial order in verbal working memory interacts with the spatial attention system whereby gaze patterns in visual space closely follow attentional shifts in the internal space of working memory. In both experiments, we found that the gaze shifts in visual space correlated with the spatial shifts of attention along the left-to-right one-dimensional mapping of serial order positions in verbal WM. These findings suggest that spatial attention is employed for dynamically searching through verbal WM and that eye movements reflect the spontaneous association of order and space even in the absence of visuospatial input. |
Jason F. Rubinstein; Noelia Gabriela Alcalde; Adrien Chopin; Preeti Verghese Oculomotor challenges in macular degeneration impact motion extrapolation Journal Article In: Journal of Vision, vol. 25, no. 1, pp. 1–19, 2025. @article{Rubinstein2025, Macular degeneration (MD), which affects the central visual field including the fovea, has a profound impact on acuity and oculomotor control. We used a motion extrapolation task to investigate the contribution of various factors that potentially impact motion estimation, including the transient disappearance of the target into the scotoma, increased position uncertainty associated with eccentric target positions, and increased oculomotor noise due to the use of a non-foveal locus for fixation and for eye movements. Observers performed a perceptual baseball task where they judged whether the target would intersect or miss a rectangular region (the plate). The target was extinguished before reaching the plate and participants were instructed either to fixate a marker or smoothly track the target before making the judgment. We tested nine eyes of six participants with MD and four control observers with simulated scotomata that matched those of individual participants with MD. Both groups used their habitual oculomotor locus—eccentric preferred retinal locus (PRL) for MD and fovea for controls. In the fixation condition, motion extrapolation was less accurate for controls with simulated scotomata than without, indicating that occlusion by the scotoma impacted the task. In both the fixation and pursuit conditions, MD participants with eccentric preferred retinal loci typically had worse motion extrapolation than controls with a matched artificial scotoma and foveal preferred retinal loci. Statistical analysis revealed occlusion and target eccentricity significantly impacted motion extrapolation in the pursuit condition, indicating that these factors make it challenging to estimate and track the path of a moving target in MD. |
Lilly Roth; Hans Christoph Nuerk; Felix Cramer; Gabriella Daroczy In: Psychological Research, vol. 89, no. 1, pp. 1–24, 2025. @article{Roth2025, Solving arithmetic word problems requires individuals to create a correct mental representation, and this involves both text processing and number processing. The latter comprises understanding the semantic meaning of numbers (i.e., their magnitudes) and potentially executing the appropriate mathematical operation. However, it is not yet clear whether number processing occurs after text processing or both take place simultaneously. We hypothesize that number processing occurs early and simultaneously with other problem-solving processes such as text processing. To test this hypothesis, we created non-solvable word problems that do not require any number processing and we manipulated the calculation difficulty using carry/borrow vs. non-carry/non-borrow within addition and subtraction problems. According to a strictly sequential model, this manipulation should not matter, because when problems are non-solvable, no calculation is required. In contrast, according to an interactive model, attention to numbers would be higher when word problems require a carry/borrow compared to a non-carry/non-borrow operation. Eye-tracking was used to measure attention to numbers and text in 63 adults, operationalized by static (duration and count of fixations and regressions) and dynamic measures (count of transitions). An interaction between difficulty and operation was found for all static and dynamic eye-tracking variables as well as for response times and error rates. The observed number processing in non-solvable word problems, which indicates that it occurs simultaneously with text processing, is inconsistent with strictly sequential models. |
Rishi Rajalingham; Hansem Sohn; Mehrdad Jazayeri Dynamic tracking of objects in the macaque dorsomedial frontal cortex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–16, 2025. @article{Rajalingham2025, A central tenet of cognitive neuroscience is that humans build an internal model of the external world and use mental simulation of the model to perform physical inferences. Decades of human experiments have shown that behaviors in many physical reasoning tasks are consistent with predictions from the mental simulation theory. However, evidence for the defining feature of mental simulation – that neural population dynamics reflect simulations of physical states in the environment – is limited. We test the mental simulation hypothesis by combining a naturalistic ball-interception task, large-scale electrophysiology in non-human primates, and recurrent neural network modeling. We find that neurons in the monkeys' dorsomedial frontal cortex (DMFC) represent task-relevant information about the ball position in a multiplexed fashion. At a population level, the activity pattern in DMFC comprises a low-dimensional neural embedding that tracks the ball both when it is visible and invisible, serving as a neural substrate for mental simulation. A systematic comparison of different classes of task-optimized RNN models with the DMFC data provides further evidence supporting the mental simulation hypothesis. Our findings provide evidence that neural dynamics in the frontal cortex are consistent with internal simulation of external states in the environment. |
Alma Rahimi; Azar Ayaz; Chloe Edgar; Gianna Jeyarajan; Darryl Putzer; Michael Robinson; Matthew Heath Sub-symptom threshold aerobic exercise improves executive function during the early stage of sport-related concussion recovery Journal Article In: Journal of Sports Sciences, pp. 1–14, 2025. @article{Rahimi2025, We examined whether persons with a sport-related concussion (SRC) derive a postexercise executive function (EF) benefit, and whether a putative benefit is related to an exercise-mediated increase in cerebral blood flow (CBF). Participants with an SRC completed the Buffalo Concussion Bike Test to determine the heart rate threshold (HRt) associated with symptom exacerbation and/or voluntary exhaustion. On a separate day, SRC participants – and healthy controls (HC group) – completed 20-min of aerobic exercise at 80% HRt while middle cerebral artery velocity (MCAv) was measured to estimate CBF. The antisaccade task (i.e. saccade mirror-symmetrical to target) was completed pre- and postexercise to evaluate EF. SRC and HC groups showed a comparable exercise-mediated increase in CBF (ps < .001), and both groups elicited a postexercise EF benefit (ps < .001); however, the benefit was unrelated to the magnitude of the MCAv change. Moreover, SRC symptomology was not increased when assessed immediately postexercise and showed a 24 h follow-up benefit. Accordingly, persons with an SRC demonstrated an EF benefit following a single bout of sub-symptom threshold aerobic exercise. Moreover, the exercise intervention did not result in symptom exacerbation and thus demonstrates that a tailored aerobic exercise program may support cognitive and symptom recovery following an SRC. |
Meizhen Qian; Jianbao Wang; Yang Gao; Ming Chen; Yin Liu; Dengfeng Zhou; Haidong D Lu; Xiaotong Zhang; Jia Ming Hu; Anna Wang Roe Multiple loci for foveolar vision in macaque monkey visual cortex Journal Article In: Nature Neuroscience, vol. 28, no. 1, pp. 137–149, 2025. @article{Qian2025, In humans and nonhuman primates, the central 1° of vision is processed by the foveola, a retinal structure that comprises a high density of photoreceptors and is crucial for primate-specific high-acuity vision, color vision and gaze-directed visual attention. Here, we developed high-spatial-resolution ultrahigh-field 7T functional magnetic resonance imaging methods for functional mapping of the foveolar visual cortex in awake monkeys. In the ventral pathway (visual areas V1–V4 and the posterior inferior temporal cortex), viewing of a small foveolar spot elicits a ring of multiple (eight) foveolar representations per hemisphere. This ring surrounds an area called the ‘foveolar core', which is populated by millimeter-scale functional domains sensitive to fine stimuli and high spatial frequencies, consistent with foveolar visual acuity, color and achromatic information and motion. Thus, this elaborate rerepresentation of central vision coupled with a previously unknown foveolar core area signifies a cortical specialization for primate foveation behaviors. |
Christian H. Poth; Werner X. Schneider Vision of objects happens faster and earlier for location than for identity Journal Article In: iScience, vol. 28, no. 2, pp. 1–11, 2025. @article{Poth2025, Visual perception of objects requires the integration of separate independent stimulus features, such as object identity and location. We ask whether the location and the identity of an object are processed with different efficiency for being consciously recognized and reported. Participants viewed a target letter at one out of several locations that were terminated by pattern masks at all possible locations. Participants reported the location of the target and/or its letter identity. Report performance as a function of the target duration before the mask enabled us to estimate the speed of visual processing and the minimum duration for processing to start. Visual processing was faster and started earlier for spatial location than for object identity, even though the processing of the features was (stochastically) independent. Together, these findings reveal an intrinsic preference of the human visual system for the perceptual processing of space as opposed to visual features such as categorical identity. |
Marlene Poncet; Sara Spotorno; Margaret C. Jackson Competition between emotional faces in visuospatial working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 51, no. 1, pp. 68–81, 2025. @article{Poncet2025, Visuospatial working memory (VSWM) helps track the identity and location of people during social interactions. Previous work showed better VSWM when all faces at encoding displayed a happy compared to an angry expression, reflecting a prosocial preference for monitoring who was where. However, social environments are not typically uniform, and certain expressions may more strongly compete for and bias face monitoring according to valence and/or arousal properties. Here, we used heterogeneous encoding displays in which two faces shared one emotion and two shared another, and asked participants to relocate a central neutral probe face after a blank delay. When considering the emotion of the probed face independently of the cooccurring emotion at encoding, an overall happy benefit was replicated. However, accuracy was modulated by the nonprobed emotion, with a relocation benefit for angry over sad, happy over fearful, and sad over happy faces. These effects did not depend on encoding fixation time, stimulus arousal, perceptual similarity, or response bias. Thus, emotional competition for faces in VSWM is complex and appears to rely on more than simple arousal- or valence-biased mechanisms. We propose a “social value (SV)” account to better explain when and why certain emotions may be prioritized in VSWM. |
Vincent Plikat; Pablo R. Grassi; Julius Frack; Andreas Bartels Hierarchical surprise signals in naturalistic violation of expectations Journal Article In: Imaging Neuroscience, vol. 3, pp. 1–23, 2025. @article{Plikata2025, Surprise responses signal both high-level cognitive alerts that information is missing, and increasingly specific back-propagating error signals that allow updates in processing nodes. Studying surprise is, hence, central for cognitive neuroscience to understand internal world representations and learning. Yet, only few prior studies used naturalistic stimuli targeting our high-level understanding of the world. Here, we use magic tricks in an fMRI experiment to investigate neural responses to violations of core assumptions held by humans about the world. We showed participants naturalistic videos of three types of magic tricks, involving objects appearing, changing color, or disappearing, along with control videos without any violation of expectation. Importantly, the same videos were presented with and without prior knowledge about the tricks' explanation. Results revealed generic responses in frontal and parietal areas, together with responses specific to each of the three trick types in posterior sensory areas. A subset of these regions, the midline areas of the default mode network (DMN), showed surprise activity that depended on prior knowledge. Equally, sensory regions showed sensitivity to prior knowledge, reflected in differing decoding accuracies. These results suggest a hierarchy of surprise signals involving generic processing of violation of expectations in frontal and parietal areas with concurrent surprise signals in sensory regions that are specific to the processed features. |
Alessandro Piras The role of the peripheral target in stimulating eye movements Journal Article In: Psychology of Sport & Exercise, vol. 76, pp. 1–10, 2025. @article{Piras2025, The present study investigated the role of top-down and bottom-up processes during a deceptive sports strategy called “no-look passes” and how microsaccades and small saccades modulate these processes. The first experiment examined the role of expertise in modulating the shift of covert attention with the bottom-up procedure. Results showed more saccades of greater amplitude and faster peak velocity in amateur than in expert groups. In the second experiment, the shift of covert attention between top-down and bottom-up conditions was investigated in a group of expert basketball players. Analysis showed that athletes make more microsaccades during the bottom-up condition; meanwhile, during the top-down condition, they were pushed to make more small saccades to decide where to send the ball. The findings suggested that the top-down process stimulates the eyes to move more than the bottom-up condition does. This could be explained by the fact that during the top-down condition, athletes do not have an “eyehold” that stimulates their attention. During the top-down condition, athletes had to shift their attention to both sides before making the pass, resulting in their eyes being more “hesitant” than in the situation in which they are peripherally stimulated. |
Zhongling Pi; Xuemei Huang; Yun Wen; Qin Wang; Xin Zhao; Xiying Li Happy facial expressions and mouse pointing enhance EFL vocabulary learning from instructional videos Journal Article In: British Journal of Educational Technology, vol. 56, pp. 388–409, 2025. @article{Pi2025, Given their easy accessibility and dual-channel model of content presentation, instructional videos have become a favoured tool for EFL vocabulary learning among many students. Teachers often use various nonverbal behaviours to elicit social reactions and guide learners' attention in instructional videos. The current study conducted three eye-tracking experiments to examine the circumstances under which a teacher's happy facial expressions are beneficial in instructional videos, with or without pointing gestures and mouse pointing. Experiments 1 and 2 demonstrated that the combination of happy facial expressions and pointing gestures attracted learners' attention to the teacher and hindered students' learning performance, regardless of the complexity of slides. Experiment 3 showed that in instructional videos with complex slides, using happy facial expressions along with mouse pointing can enhance students' learning performance. Teachers are advised to show happy facial expressions and avoid using pointing gestures when designing instructional videos. |
Jessica L. Parker; A. Caglar Tas The saccade target is prioritized for visual stability in naturalistic scenes Journal Article In: Vision Research, vol. 227, pp. 1–12, 2025. @article{Parker2025a, The present study investigated the mechanisms of visual stability using naturalistic scene images. In two experiments, we asked whether the visual system relies on spatial location of the saccade target, as previously found with simple dot stimuli, or relational positions of the objects in the scene during visual stability decisions. Using a modified version of the saccadic suppression of displacement task, we manipulated the information that is displaced in the scene as well as visual stability using intrasaccadic target blanking paradigm. There were four displacement conditions: saccade target, saccade source (Experiment 2 only), whole scene, and background. We also included a no-displacement control condition where everything remained stationary. Participants reported whether they detected any movement. The results showed that spatial displacements that occur in the saccade target object were more easily detected than any other displacements in the scene. Further, disrupting visual stability with blanking only improved displacement detection for the saccade target and saccade source objects, suggesting that saccade target and saccade source objects are both consulted in the establishment of visual stability, most likely due to both receiving selective attention before saccade execution. The present study is the first to show that the visual system uses similar visual stability mechanisms for simple dot stimuli and more naturalistic stimuli. |
Natalie A. Paquette; Joseph Schmidt How expectations alter search performance Journal Article In: Attention, Perception, & Psychophysics, pp. 1–20, 2025. @article{Paquette2025, We assessed how expected search difficulty impacts search performance when expectations match and do not match reality. Expectations were manipulated using a blocked design (75% of trials presented at the expected difficulty; target–distractor similarity increased with difficulty). Expectancy was assessed by examining the change in search performance between trials with accurate expectations and easier-than-expected or harder-than-expected trials, matched for search difficulty. Observers searched for Landolt-C targets (Exp-1) or real-world objects (Exp-2). Increased difficulty resulted in reduced accuracy, increased RT and object dwell times (targets and distractors; both experiments), and reduced guidance (Exp-2). Relative to the same level of search difficulty and when expectations were accurate, harder-than-expected search reduced accuracy, RT, and target object dwell times, whereas easier-than-expected search increased RT and target dwell times (Exp-1). While Experiment 2 showed somewhat muted expectancy effects, easier-than-expected search replicated the increased RT observed in Exp-1, with an additional guidance decrement and increased distractor dwell time. These results demonstrate that expectations shift search performance toward the expected difficulty level. Additionally, post hoc analyses revealed that observers who experience larger difficulty effects also experience larger expectancy effects in RT, guidance, and target dwell time. |
Yunxian Pan; Jie Xu Human-machine plan conflict and conflict resolution in a visual search task Journal Article In: International Journal of Human-Computer Studies, vol. 193, pp. 1–12, 2025. @article{Pan2025, With rapid technological development, humans are more likely to cooperatively work with intelligence systems in everyday life and work. Similar to interpersonal teamwork, the effectiveness of human-machine teams is affected by conflicts. Some human-machine conflict scenarios occur when neither the human nor the system was at fault, for example, when the human and the system formulated different but equally effective plans to achieve the same goal. In this study, we conducted two experiments to explore the effects of human-machine plan conflict and the different conflict resolution approaches (human adapting to the system, system adapting to the human, and transparency design) in a computer-aided visual search task. The results of the first experiment showed that when conflicts occurred, the participants reported higher mental load during the task, performed worse, and provided lower subjective evaluations towards the aid. The second experiment showed that all three conflict resolution approaches were effective in maintaining task performance, however, only the transparency design and the human adapting to the system approaches were effective in reducing mental load and improving subjective evaluations. The results highlighted the need to design appropriate human-machine conflict resolution strategies to optimize system performance and user experience. |
Chih-chung Ting; Sebastian Gluth High overall values mitigate gaze-related effects in perceptual and preferential choices Journal Article In: Journal of Experimental Psychology: General, pp. 1–14, 2025. @article{Overall2025, A growing literature has shown that people tend to make faster decisions when choosing between two high-intensity or high-utility options than when choosing between two low-intensity or low-utility options. However, the underlying cognitive mechanisms of this effect of overall value (OV) on response times (RT) remain controversial, partially due to inconsistent findings of OV effects on accuracy but also due to the lack of process-tracing studies testing this effect. Here, we set out to fill this gap by testing and modeling the influence of OV on choices, RT, and eye movements in both perceptual and preferential decisions in a preregistered eye-tracking experiment (N = 61). Across perceptual and preferential tasks, we observed significant and consistently negative correlations between OV and RT, replicating previous work. Accuracy tended to increase with OV, reaching significance in preferential choices only. Eye-tracking analyses revealed a reduction of different gaze-related effects under high OV: a reduced tendency to choose the longer fixated option in perceptual choice and a reduced tendency to choose the last fixated option in preferential choice. Modeling these data with the attentional drift-diffusion model showed that the nonfixated option value was discounted least in the high-OV condition, confirming that higher OV might mitigate the impact of gaze on choices. Our results suggest that OV jointly affects behavior and gaze influences and offer a mechanistic account for the puzzling phenomenon that decisions between options of higher OV tend to be faster, but not less accurate. |
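The attentional drift-diffusion model invoked in this abstract can be illustrated with a minimal simulation. The discounting parameter theta, drift scaling, noise level, boundary, and the simple alternating-gaze rule below are illustrative assumptions, not the authors' fitted values or exact model.

```python
import numpy as np

def simulate_addm_trial(v_left, v_right, theta=0.3, d=0.002,
                        sigma=0.02, boundary=1.0, dt_ms=10, rng=None):
    """Minimal attentional drift-diffusion model (aDDM) sketch: the value of
    the currently non-fixated option is down-weighted by theta. Gaze simply
    alternates between options every 400 ms, a simplifying assumption.
    Returns (choice, response time in ms)."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0
    while abs(evidence) < boundary:
        looking_left = (t // 400) % 2 == 0          # alternate fixations
        drift = d * (v_left - theta * v_right) if looking_left \
            else d * (theta * v_left - v_right)
        evidence += drift * dt_ms + sigma * np.sqrt(dt_ms) * rng.normal()
        t += dt_ms
    return ("left" if evidence > 0 else "right"), t

# Decisions between two high-value options tend to terminate faster than
# decisions between two low-value options with the same value difference.
rng = np.random.default_rng(1)
rt_low = np.mean([simulate_addm_trial(2, 1, rng=rng)[1] for _ in range(500)])
rt_high = np.mean([simulate_addm_trial(6, 5, rng=rng)[1] for _ in range(500)])
print(f"mean RT low OV: {rt_low:.0f} ms, high OV: {rt_high:.0f} ms")
```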
Wesley Orth; Shayne Sloggett; Masaya Yoshida Positive polarity items: An illusion of ungrammaticality Journal Article In: Language, Cognition and Neuroscience, pp. 1–25, 2025. @article{Orth2025, Negative Polarity Items (NPIs) produce an illusion of grammaticality in some contexts with negation. Many approaches to modelling the NPI illusion propose that it is driven by the processor's attempt to link an NPI to a negative element. We investigate an illusion effect observed with Positive Polarity Items (PPIs), another class of polarity sensitive element. While NPIs must be licensed by a negative element, PPIs are anti-licensed by negative elements. We find an illusion of ungrammaticality for PPIs in environments where an illusion of grammaticality is observed for NPIs. Thus, we argue there is a general polarity illusion. We find that several accounts of the NPI illusion either predict this PPI illusion or can capture this effect with a straightforward extension. The approaches which are able to predict this effect share a reliance on structural representation, highlighting the importance of both the licensing features of polarity items and the structural detail in sentence processing representations. |
Ryan M. O'Leary; Nicole M. Amichetti; Zoe Brown; Alexander J. Kinney; Arthur Wingfield Congruent prosody reduces cognitive effort in memory for spoken sentences: A pupillometric study with young and older adults Journal Article In: Experimental Aging Research, vol. 51, no. 1, pp. 35–58, 2025. @article{OLeary2025, Background: In spite of declines in working memory and other processes, older adults generally maintain good ability to understand and remember spoken sentences. In part this is due to preserved knowledge of linguistic rules and their implementation. Largely overlooked, however, is the support older adults may gain from the presence of sentence prosody (pitch contour, lexical stress, intra- and inter-word timing) as an aid to detecting the structure of a heard sentence. Methods: Twenty-four young and 24 older adults recalled recorded sentences in which the sentence prosody corresponded to the clausal structure of the sentence, when the prosody was in conflict with this structure, or when there was reduced prosody uninformative with regard to the clausal structure. Pupil size was concurrently recorded as a measure of processing effort. Results: Both young and older adults' recall accuracy was superior for sentences heard with supportive prosody than for sentences with uninformative prosody or for sentences in which the prosodic marking and clausal structure were in conflict. The measurement of pupil dilation suggested that the task was generally more effortful for the older adults, but with both groups showing a similar pattern of effort-reducing effects of supportive prosody. Conclusions: Results demonstrate the influence of prosody on young and older adults' ability to recall accurately multi-clause sentences, and the significant role effective prosody may play in preserving processing effort. |
Salar Nouri; Amirali Soltani Tehrani; Niloufar Faridani; Ramin Toosi; Jalaledin Noroozi; Mohammad Reza A. Dehaqani Microsaccade selectivity as discriminative feature for object decoding Journal Article In: iScience, vol. 28, no. 1, pp. 1–19, 2025. @article{Nouri2025, Microsaccades, a form of fixational eye movements, help maintain visual stability during stationary observations. This study examines the modulation of microsaccadic rates by various stimulus categories in monkeys and humans during a passive viewing task. Stimulus sets were grouped into four primary categories: human, animal, natural, and man-made. Distinct post-stimulus microsaccade patterns were identified across these categories, enabling successful decoding of the stimulus category with accuracy and recall of up to 85%. We observed that microsaccade rates are independent of pupil size changes. Neural data showed that category classification in the inferior temporal (IT) cortex peaks earlier than changes in microsaccade rates, suggesting feedback from the IT cortex influences eye movements after stimulus discrimination. These results contribute to neurobiological models, enhance human-machine interfaces, optimize experimental visual stimuli, and deepen understanding of microsaccades' role in object decoding. |
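A minimal sketch of the general idea of decoding stimulus category from post-stimulus microsaccade-rate profiles, as described above. The bin width, toy generative assumptions, classifier, and cross-validation scheme are illustrative choices, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def rate_profile(saccade_times_ms, t_max=1000, bin_ms=50):
    """Bin microsaccade onset times (ms after stimulus) into a rate profile (Hz)."""
    edges = np.arange(0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(saccade_times_ms, bins=edges)
    return counts / (bin_ms / 1000.0)

# Toy data: one trial = a handful of microsaccade onset times; the label is the
# stimulus category. The crude assumption is that category shifts the timing of
# the post-stimulus microsaccade suppression/rebound.
rng = np.random.default_rng(0)
X, y = [], []
for label, rebound_onset in [("human", 250), ("man-made", 400)]:
    for _ in range(100):
        onsets = rng.uniform(rebound_onset, 1000, size=rng.integers(2, 6))
        X.append(rate_profile(onsets))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```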
Sergio Navas‑León; Milagrosa Sánchez‑Martín; Ana Tajadura‑Jiménez; Lize De Coster; Mercedes Borda‑Mas; Luis Morales Exploring eye‑movement changes as digital biomarkers and endophenotypes in subclinical eating disorders: An eye tracking study Journal Article In: BMC Psychiatry, vol. 133, pp. 1–12, 2025. @article{Navas‑Leon2025, Objective: Previous research has indicated that patients with Anorexia Nervosa (AN) exhibit specific eye movement changes, identified through eye tracking sensor technology. These changes have been proposed as potential digital biomarkers and endophenotypes for early diagnosis and preventive clinical interventions. This study aims to explore whether these eye movement changes are also present in individuals with subclinical eating disorder (ED) symptomatology compared to control participants. Method: The study recruited participants using convenience sampling and employed the Eating Disorder Examination Questionnaire for initial screening. The sample was subsequently divided into two groups: individuals exhibiting subclinical ED symptomatology and control participants. Both groups performed various tasks, including a fixation task, prosaccade/antisaccade task, and memory‑guided task. Alongside these tasks, anxiety and premorbid intelligence were measured as potential confounding variables. The data were analyzed through means comparison and exploratory Pearson's correlations. Results: No significant differences were found between the two groups in the three eye tracking tasks. Discussion: The findings suggest that the observed changes in previous research might be more related to the clinical state of the illness rather than a putative trait. Implications for the applicability of eye movement changes as early biomarkers and endophenotypes for EDs in subclinical populations are discussed. Further research is needed to validate these findings and understand their implications for preventive diagnostics. |
William Narhi-Martinez; Yong Min Choi; Blaire Dube; Julie D. Golomb Allocation of spatial attention in human visual cortex as a function of endogenous cue validity Journal Article In: Cortex, vol. 185, pp. 4–19, 2025. @article{NarhiMartinez2025, Several areas of visual cortex contain retinotopic maps of the visual field, and neuroimaging studies have shown that covert attentional guidance will result in increases of activity within the regions representing attended locations. However, little research has been done to directly compare neural activity for different types of attentional cues. Here, we used fMRI to investigate how retinotopically-specific cortical activity would be modulated depending on whether we provided deterministic or probabilistic spatial information. On each trial, a four-item memory array was presented and participants' memory for one of the items would later be probed. Critically, trials began with a foveally-presented endogenous cue that was either 100% valid (deterministic runs), 70% valid (probabilistic runs), or neutral. By dividing visual cortex into quadrant-specific regions of interest (qROIs), we could examine how attention was spatially distributed across the visual field within each trial, depending on cue type and delay. During the anticipatory period prior to the memory array, we found increased activation at the cued location compared to noncued locations, with surprisingly comparable levels of facilitation for both deterministic and probabilistic cues. However, we found significantly greater facilitation on deterministic relative to probabilistic trials following the onset of the memory array, with only deterministic cue-related facilitation persisting through the presentation of the probe. These findings reveal how cue validity can drive differential allocations of neural resources over time across cued and noncued locations, and that the allocation of attention should not be assumed to invariably scale alongside the validity of a cue. |
Erin Morrow; David Clewett Distortion of overlapping memories relates to arousal and anxiety Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 25, pp. 154–172, 2025. @article{Morrow2025, Everyday experiences often overlap, challenging our ability to maintain distinct episodic memories. One way to resolve such interference is by exaggerating subtle differences between remembered events, a phenomenon known as memory repulsion. Here, we tested whether repulsion is influenced by emotional arousal, when resolving memory interference is perhaps most needed. We adapted an existing paradigm in which participants repeatedly studied object–face associations. Participants studied two different-colored versions of each object: a to-be-tested “target” and its not-to-be-tested “competitor” pair mate. The level of interference between target and competitor pair mates was manipulated by making the object colors either highly similar or less similar, depending on the participant group. To manipulate arousal, the competitor object–face associations were preceded by either a neutral tone or an aversive and arousing burst of white noise. Memory distortion for the color of the target objects was tested after each study round to examine whether memory distortions emerge after learning. We found that participants with greater sound-induced pupil dilations, an index of physiological arousal, showed greater memory attraction of target colors towards highly similar competitor colors. Greater memory attraction was also correlated with greater memory interference in the last round of learning. Additionally, individuals who self-reported higher trait anxiety showed greater memory attraction when one of the overlapping memories was associated with something aversive. Our findings suggest that memories of similar neutral and arousing events may blur together after repeated exposures, especially in individuals who show higher arousal responses and symptoms of anxiety. |
Vanessa Carneiro Morita; David Souto; Guillaume S. Masson; Anna Montagnini Anticipatory smooth eye movements scale with the probability of visual motion: Role of target speed and acceleration Journal Article In: Journal of Vision, vol. 25, no. 1, pp. 1–22, 2025. @article{Morita2025, Sensory-motor systems are able to extract statistical regularities in dynamic environments, allowing them to generate quicker responses and anticipatory behavior oriented towards expected events. Anticipatory smooth eye movements (aSEM) have been observed in primates when the temporal and kinematic properties of a forthcoming visual moving target are fully or partially predictable. However, the precise nature of the internal model of target kinematics which drives aSEM remains largely unknown, as well as its interaction with environmental predictability. In this study we investigated whether and how the probability of target speed or acceleration is taken into account for driving aSEM. We recorded eye movements in healthy human volunteers while they tracked a small visual target with either constant, accelerating or decelerating speed, keeping the direction fixed. Across experimental blocks, we manipulated the probability of the presented target motion properties, with either 100% probability of occurrence of one kinematic condition (fully-predictable sessions), or a mixture with different proportions of two conditions (mixture sessions). We show that aSEM are robustly modulated by the target kinematic properties. With constant-velocity targets, aSEM velocity scales linearly with target velocity across the blocked sessions, and it follows overall a probability-weighted average in the mixture sessions. Predictable target acceleration/deceleration does also have an influence on aSEM, but with more variability across participants. Finally, we show that the latency and eye acceleration at the initiation of visually-guided pursuit do also scale, overall, with the probability of target motion. This scaling is consistent with Bayesian integration of sensory and predictive information. |
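The probability-weighted-average pattern reported for the mixture sessions can be written as a one-line expectation. The linear gain below is a placeholder assumption for illustration, not an estimate from the paper.

```python
def expected_asem_velocity(p_v1, v1, v2, gain=0.35):
    """Anticipatory smooth eye movement (aSEM) velocity under a
    probability-weighted-average rule: anticipatory eye velocity scales with
    the expected target velocity across the two mixed conditions.
    The gain is an illustrative assumption, not a fitted value."""
    return gain * (p_v1 * v1 + (1 - p_v1) * v2)

# e.g., a mixture block with 70% of trials at 15 deg/s and 30% at 5 deg/s
print(expected_asem_velocity(0.7, 15.0, 5.0))  # -> 4.2 deg/s of anticipation
```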
Maria Eleonora Minissi; Alexia Antzaka; Simona Mancini; Marie Lallier Can playing video games enhance reading skills through more efficient serial visual search mechanisms? Insights from an eye tracking study Journal Article In: Language, Cognition and Neuroscience, vol. 40, no. 2, pp. 209–230, 2025. @article{Minissi2025, Reading disorders are associated with atypical top-down visual attention (VA) processes like reduced VA span and slower serial visual search (SVS). In contrast, expert action video game (AVG) players, known for their efficient top-down VA, exhibit improved reading abilities. It is unclear whether these benefits stem solely from AVGs or apply to other gaming experiences. To explore this, AVG players (AVGPs), players of genres excluding AVGs (VGPs), and non-players were evaluated on their VA span, and behavioural and oculomotor performance in SVS. VGPs, but not AVGPs, demonstrated enhanced performance and oculomotor behaviour in SVS compared to non-players, while both player groups showed a trend towards better VA span skills. Notably, reading-related skills were enhanced in the two player groups, but particularly more so in VGPs. These findings support the existence of potential benefits of playing video games different from classical AVGs for the development of top-down VA and reading-related abilities. |
Mylène Michaud; Annie Roy-Charland; Mélanie Perron Effects of explicit knowledge and attentional-perceptual processing on the ability to recognize fear and surprise Journal Article In: Behavioral Sciences, vol. 15, pp. 1–11, 2025. @article{Michaud2025, When participants are asked to identify expressed emotions from pictures, fear is often confused with surprise. The present study explored this confusion by utilizing one prototype of surprise and three prototypes of fear varying as a function of distinctive cues in the fear prototype (cue in the eyebrows, in the mouth or both zones). Participants were presented with equal numbers of pictures expressing surprise and fear. Eye movements were monitored when they were deciding if the picture was fear or surprise. Following each trial, explicit knowledge was assessed by asking the importance (yes vs. no) of five regions (mouth, nose, eyebrows, eyes, cheeks) in recognizing the expression. Results revealed that fear with both distinctive cues was recognized more accurately, followed by the prototype of surprise and fear with a distinctive cue in the mouth at a similar level. Finally, fear with a distinctive cue in the eyebrows was the least accurately recognized. Explicit knowledge discriminability results revealed that participants were aware of the relevant areas for each prototype but not equally so for all prototypes. Specifically, participants judged the eyebrow area as more important when the distinctive cue was in the eyebrows (fear–eyebrow) than when the cue was in the mouth (fear–mouth) or when both cues were present (fear–both). Results are discussed considering the attentional-perceptual and explicit knowledge limitation hypothesis. |
Elif Memis; Gizem Y. Yildiz; Gereon R. Fink; Ralph Weidner Hidden size: Size representations in implicitly coded objects Journal Article In: Cognition, vol. 256, pp. 1–15, 2025. @article{Memis2025, The perceived size of an object is not solely determined by its angular representation on the retina. Instead, contextual information is interpreted. We investigated the levels of processing at which this interpretation occurs. Combining three experimental paradigms, we explored whether masked and more implicitly coded objects are already size-rescaled. We induced object size rescaling using a modified variant of the Ebbinghaus illusion. In this variant, six dots altered the size of a central stimulus and served as inducers generating Object-Substitution Masking (OSM). Participants reported the average size of multiple circles using the size-averaging paradigm, allowing us to test the contribution of masked and non-masked central target circles. Our Ebbinghaus illusion variant altered perceived stimulus size and showed robust masking via OSM. Furthermore, size-averaging was sensitive enough to detect perceived size changes in the magnitude of the ones induced by the Ebbinghaus illusion. Finally, combining all three paradigms, we observed that masked and non-masked stimuli contributed to size averaging in a size-rescaled manner. In a control experiment testing the general effects of Ebbinghaus inducers, we observed a contrast-like effect on size averaging. Large inducers decreased the perceived average size, while small inducers increased it. In summary, our experiments indicate that context integration, induced by the Ebbinghaus illusion, alters size representations at an early stage. These modified size representations are independent of whether a target is recognisable. Moreover, perceived average size appears to be coded relative to surrounding perceptual groups. |
Kimberly Meier; Simon Warner; Miriam Spering; Deborah Giaschi Poor fixation stability does not account for motion perception deficits in amblyopia Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Meier2025, People with amblyopia show deficits in global motion perception, especially at slow speeds. These observers are also known to have unstable fixation when viewing stationary fixation targets, relative to healthy controls. It is possible that poor fixation stability during motion viewing interferes with the fidelity of the input to motion-sensitive neurons in visual cortex. To probe these mechanisms at a behavioral level, we assessed motion coherence thresholds in adults with amblyopia while measuring fixation stability. Consistent with prior work, participants with amblyopia had elevated coherence thresholds for the slow speed stimuli, but not the fast speed stimuli, using either the amblyopic or the fellow eye. Fixation stability was elevated in the amblyopic eye relative to controls across all motion stimuli, and not selective for conditions on which perceptual deficits were observed. Fixation stability was not related to visual acuity, nor did it predict coherence thresholds. These results suggest that motion perception deficits might not be a result of poor input to the motion processing system due to unstable fixation, but rather due to processing deficits in motion-sensitive visual areas. |
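Fixation stability in studies like this one is commonly summarized with the bivariate contour ellipse area (BCEA). The abstract does not name the metric used, so the standard formula below is offered only as an assumption about how such a measure can be computed from gaze samples.

```python
import numpy as np

def bcea(x_deg, y_deg, p=0.68):
    """Bivariate contour ellipse area (deg^2) covering proportion p of gaze
    samples; a standard fixation-stability index (assumed here, not
    necessarily the measure used in the study)."""
    k = -np.log(1 - p)                      # scaling for the chosen proportion
    sx, sy = np.std(x_deg, ddof=1), np.std(y_deg, ddof=1)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]   # horizontal/vertical gaze correlation
    return 2 * k * np.pi * sx * sy * np.sqrt(1 - rho ** 2)

# Toy example: 5000 gaze samples with ~0.3 deg horizontal and 0.4 deg vertical spread
rng = np.random.default_rng(0)
print(bcea(rng.normal(0, 0.3, 5000), rng.normal(0, 0.4, 5000)))
```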
Stella Mayer; Pankhuri Saxena; Max Arwed Crayen; Stefan Treue In: Journal of Neuroscience Methods, vol. 415, pp. 1–11, 2025. @article{Mayer2025, Background: Neuronal activity is modulated by behavior and cognitive processes. The combination of several neurotransmitter systems, acting directly or indirectly on specific populations of neurons, underlie such modulations. Most studies with non-human primates (NHPs) fail to capture this complexity, partly due to the lack of adequate methods for reliably and simultaneously measuring a broad spectrum of neurotransmitters while the animal engages in behavioral tasks. New Method: To address this gap, we introduce a novel implementation of brain microdialysis (MD), employing semi-chronically implanted guides and probes in awake, behaving NHPs facilitated by removable insets within a standard recording chamber over extrastriate visual cortex (here, the visual middle temporal area (MT)). This approach allows flexible access to diverse brain regions, including areas deep within the sulcus. Results: Reliable concentration measurements of GABA, glutamate, norepinephrine, epinephrine, dopamine, serotonin, and choline were achieved from small sample volumes (<20 µl) using ultra-performance liquid chromatography with electrospray ionization-mass spectrometry (UPLC-ESI-MS). Comparing two behavioral states – ‘active' and ‘inactive', we observe subtle concentration variations between the two behavioral states and a greater variability of concentrations in the active state. Additionally, we find positively and negatively correlated concentration changes for neurotransmitter pairs between the behavioral states. Conclusions: Therefore, this MD setup allows insights into the neurochemical dynamics in awake primates, facilitating comprehensive investigations into the roles and the complex interplay of neurotransmitters in cognitive and behavioral functions. |
Stanford Martinez; Carolina Ramirez-Tamayo; Syed Hasib Akhter Faruqui; Kal Clark; Adel Alaeddini; Nicholas Czarnek; Aarushi Aggarwal; Sahra Emamzadeh; Jeffrey R. Mock; Edward J. Golob Discrimination of radiologists' experience level using eye-tracking technology and machine learning: Case study Journal Article In: JMIR Formative Research, vol. 9, pp. 1–16, 2025. @article{Martinez2025, Background: Perception-related errors comprise most diagnostic mistakes in radiology. To mitigate this problem, radiologists use personalized and high-dimensional visual search strategies, otherwise known as search patterns. Qualitative descriptions of these search patterns, which involve the physician verbalizing or annotating the order in which he or she analyzes the image, can be unreliable due to discrepancies in what is reported versus the actual visual patterns. This discrepancy can interfere with quality improvement interventions and negatively impact patient care. Objective: The objective of this study is to provide an alternative method for distinguishing between radiologists by means of captured eye-tracking data such that the raw gaze (or processed fixation data) can be used to discriminate users based on subconscious behavior in visual inspection. Methods: We present a novel discretized feature encoding based on spatiotemporal binning of fixation data for efficient geometric alignment and temporal ordering of eye movement when reading chest x-rays. The encoded features of the eye-fixation data are used by machine learning classifiers to discriminate between faculty and trainee radiologists. A clinical trial case study was conducted using metrics such as the area under the curve, accuracy, F1-score, sensitivity, and specificity to evaluate the discriminability between the 2 groups regarding their level of experience. The classification performance was then compared with state-of-the-art methodologies. In addition, a repeatability experiment using a separate dataset, experimental protocol, and eye tracker was performed with 8 participants to evaluate the robustness of the proposed approach. Results: The numerical results from both experiments demonstrate that classifiers using the proposed feature encoding methods outperform the current state-of-the-art in differentiating between radiologists in terms of experience level. An average performance gain of 6.9% is observed compared with traditional features while classifying experience levels of radiologists. This gain in accuracy is also substantial across different eye tracker–collected datasets, with improvements of 6.41% using the Tobii eye tracker and 7.29% using the EyeLink eye tracker. These results signify the potential impact of the proposed method for identifying radiologists' level of expertise and those who would benefit from additional training. Conclusions: The effectiveness of the proposed spatiotemporal discretization approach, validated across diverse datasets and various classification metrics, underscores its potential for objective evaluation, informing targeted interventions and training strategies in radiology. This research advances reliable assessment tools, addressing challenges in perception-related errors to enhance patient care outcomes. |
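A minimal sketch of the kind of spatiotemporal discretization of fixation data described above, in which each fixation is assigned to a coarse spatial grid cell and a temporal bin and the resulting counts are flattened into a feature vector for classification. The grid size, number of time bins, toy data, and classifier are assumptions for illustration, not the authors' exact encoding.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def encode_fixations(fixations, img_w, img_h, grid=4, time_bins=3, t_max=20.0):
    """Encode (x_px, y_px, onset_s) fixations as counts in a grid x grid x time_bins
    spatiotemporal histogram, flattened to one feature vector per image reading."""
    feats = np.zeros((grid, grid, time_bins))
    for x, y, t in fixations:
        gx = min(int(x / img_w * grid), grid - 1)
        gy = min(int(y / img_h * grid), grid - 1)
        gt = min(int(t / t_max * time_bins), time_bins - 1)
        feats[gy, gx, gt] += 1
    return feats.ravel()

# Toy example: simulated "faculty" scan widely, "trainees" dwell centrally.
rng = np.random.default_rng(0)
X, y = [], []
for label, spread in [("faculty", 400), ("trainee", 150)]:
    for _ in range(60):
        n = rng.integers(20, 40)
        fix = np.column_stack([rng.normal(500, spread, n).clip(0, 999),
                               rng.normal(500, spread, n).clip(0, 999),
                               np.sort(rng.uniform(0, 20, n))])
        X.append(encode_fixations(fix, 1000, 1000))
        y.append(label)

clf = LogisticRegression(max_iter=2000)
print(cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```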
Soodeh Majidpour; Mehdi Sanayei; Reza Ebrahimpour; Sajjad Zabbah Better than expected performance effect depends on the spatial location of visual stimulus Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Majidpour2025, The process of perceptual decision-making in the real world involves the aggregation of pieces of evidence into a final choice. Visual evidence is usually presented in different pieces, distributed across time and space. We wondered whether adding variation in the location of the received information would lead to differences in how subjects integrated visual information. Seven participants viewed two pulses of random dot motion stimulus, separated by time gaps and presented at different locations within the visual field. Our findings suggest that subjects accumulate discontinuous information (over space or time) differently than when it is presented continuously, in the same location or with no gaps between them. These findings indicate that the discontinuity of evidence impacts the process of evidence integration in a manner more nuanced than that presumed by the theory positing perfect integration of evidence. |
Xingyang Lv; Zixin Yuan; Fang Wan; Tian Lan; Gila Oren Do tourists experience suffering when they touch the wailing wall? Journal Article In: Tourism Management, vol. 106, pp. 1–21, 2025. @article{Lv2025, Tactile engagement is a critical aspect of tourist experiences. Embodied cognition theory suggests a direct correlation between physical sensations and psychological perceptions. For example, touching the textured stones at the Wailing Wall, a revered religious site in Jerusalem, can evoke intense emotions in tourists. This study explores the impact of rough tactile sensations on dark experiences through six studies. We used content analysis, on-site surveys, eye movement experiments, and scenario experiments to validate these effects. Our findings emphasize the pivotal role of rough tactile sensations in shaping profound emotions and individual experiences while uncovering alternative routes for developing sensory strategies to enrich dark tourism experiences. |
Selma Lugtmeijer; Aleksandra M. Sobolewska; The Visual Brain Group; Edward H. F. Haan; H. Steven Scholte Visual feature processing in a large stroke cohort: Evidence against modular organization Journal Article In: Brain, pp. 1–11, 2025. @article{Lugtmeijer2025, Mid-level visual processing represents a crucial stage between basic sensory input and higher-level object recognition. The conventional model posits that fundamental visual qualities, such as colour and motion, are processed in specialized, retinotopic brain regions (e.g. V4 for colour, MT/V5 for motion). Using atlas-based lesion–symptom mapping and disconnectome maps in a cohort of 307 ischaemic stroke patients, we examined the neuroanatomical correlates underlying the processing of eight mid-level visual qualities. Contrary to the predictions of the standard model, our results did not reveal consistent relationships between processing impairments and damage to traditionally associated brain regions. Although we validated our methodology by confirming the established relationship between visual field defects and damage to primary visual areas (V1, V2 and V3), we found no reliable evidence linking processing deficits to specific regions in the posterior brain. These findings challenge the traditional modular view of visual processing and suggest that mid-level visual processing might be more distributed across neural networks than previously thought. This supports alternative models where visual maps represent constellations of co-occurring information rather than specific qualities. |
Mareike Ludwig; Matthew J. Betts; Dorothea Hämmerer Stimulate to remember? The effects of short burst of transcutaneous vagus nerve stimulation (taVNS) on memory performance and pupil dilation Journal Article In: Psychophysiology, vol. 62, no. 1, pp. 1–16, 2025. @article{Ludwig2025, The decline in noradrenergic (NE) locus coeruleus (LC) function in aging is thought to be implicated in episodic memory decline. Transcutaneous auricular vagus nerve stimulation (taVNS), which supports LC function, might serve to preserve or improve memory function in aging. However, taVNS effects are generally very heterogeneous, and it is currently unclear whether taVNS has an effect on memory. In this study, an emotional memory task with negative events involving the LC-NE system was combined with the short burst of event-related taVNS (3 s) in younger adults (N = 24). The aim was to investigate taVNS-induced changes in pupil dilation during encoding and possible taVNS-induced improvements in (emotional) memory performance for early and delayed (24 h) recognition. Negative events were associated with increased pupil dilation and better memory performance. Additionally, real as compared to sham or no stimulation selectively increased memory for negative events. Short bursts of stimulation, whether real or sham, led to an increase in pupil dilation and an improvement in memory performance over time, likely due to the attention-inducing sensory modulation of electrical stimulation. |
Óscar Loureda Lamas; Mathis Teucher; Celia Hernández Pérez; Adriana Cruz Rubio; Carlos Gelormini-Lezama (Re)categorizing lexical encapsulation: An experimental approach Journal Article In: Journal of Pragmatics, vol. 239, pp. 4–15, 2025. @article{LouredaLamas2025, Anaphoric encapsulation is a discursive mechanism by which a noun phrase recovers an explicature. This eye tracking study addresses the question of whether categorizing versus recategorizing encapsulation lead to different processing patterns. Results show that (1) encapsulating noun phrases are cognitively prominent areas, (2) recategorization is never less effortful than categorization, (3) the prominence and instructional asymmetry of the encapsulating noun phrase with respect to the antecedent is greater in cases of recategorizing encapsulation. Overall, encapsulating noun phrases initiate a complex cognitive operation due to the nature of their antecedent, which includes both encoded and inferred information. A distinctive processing pattern emerges for recategorizing encapsulating noun phrases: greater local efforts, due to the introduction of new information, do not result in higher total reading times. Beyond the introductory section, the structure of this study is as follows: Section 2 discusses the properties of categorizing and recategorizing mechanisms. Section 3 reviews experimental research on nominal anaphoric encapsulation in Spanish. Section 4 outlines the key aspects of the experimental design and execution. Finally, sections 5 and 6 present the results of the experiment and offer a theoretical discussion of the findings. |
Belén López Assef; Tania Zamuner Task effects in children's word recall: Expanding the reverse production effect Journal Article In: Journal of Child Language, pp. 1–13, 2025. @article{LopezAssef2025, Words said aloud are typically recalled more than words studied under other techniques. In certain circumstances, production does not lead to this memory advantage. We investigated the nature of this effect by varying the task during learning. Children aged five to six years were trained on novel words which required no action (Heard) compared to Verbal-Speech (production), Non-Verbal-Speech (stick out tongue), and Non-Verbal-Non-Speech (touch nose). Eye-tracking showed successful learning of novel words in all training conditions, but no differences between conditions. Both non-verbal tasks disrupted recall, demonstrating that encoding can be disrupted when children perform different types of concurrent actions. |
Yaohui Liu; Keren He; Kaiwen Man; Peida Zhan Exploring critical eye-tracking metrics for identifying cognitive strategies in Raven's Advanced Progressive Matrices: A data-driven perspective Journal Article In: Journal of Intelligence, vol. 13, no. 14, pp. 1–20, 2025. @article{Liu2025a, The present study utilized a recursive feature elimination approach in conjunction with a random forest algorithm to assess the efficacy of various features in predicting cognitive strategy usage in Raven's Advanced Progressive Matrices. In addition to item response accuracy (RA) and response time (RT), five key eye-tracking metrics were examined: proportional time on matrix (PTM), latency to first toggle (LFT), rate of latency to first toggle (RLT), number of toggles (NOT), and rate of toggling (ROT). The results indicated that PTM, RLT, and LFT were the three most critical features, with PTM emerging as the most significant predictor of cognitive strategy usage, followed by RLT and LFT. Clustering analysis of these optimal features validated their utility in effectively distinguishing cognitive strategies. The study's findings underscore the potential of specific eye-tracking metrics as objective indicators of cognitive processing while providing a data-driven method to identify strategies used in complex reasoning tasks. |
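The feature-selection pipeline summarized above (recursive feature elimination wrapped around a random forest) can be sketched in a few lines of scikit-learn. The feature names mirror the abstract's abbreviations, but the data, sample size, and strategy labels below are illustrative placeholders rather than the authors' materials.

```python
# Illustrative sketch of recursive feature elimination with a random forest.
# Feature names follow the abstract; the data themselves are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
# Hypothetical matrix: rows = participants, columns = the seven candidate features.
X = rng.normal(size=(120, 7))
y = rng.integers(0, 2, size=120)  # placeholder strategy labels

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=500, random_state=0),
    step=1,
    cv=StratifiedKFold(5),
    scoring="accuracy",
)
selector.fit(X, y)

feature_names = ["RA", "RT", "PTM", "LFT", "RLT", "NOT", "ROT"]
print("Selected features:", [f for f, keep in zip(feature_names, selector.support_) if keep])
print("Importance ranking:", dict(zip(feature_names, selector.ranking_)))
```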
Xinhe Liu; Zhiting Zhang; Lu Gan; Panke Yu; Ji Dai Medium spiny neurons mediate timing perception in coordination with prefrontal neurons in primates Journal Article In: Advanced Science, pp. 1–15, 2025. @article{Liu2025, Timing perception is a fundamental cognitive function that allows organisms to navigate their environment effectively, encompassing both prospective and retrospective timing. Despite significant advancements in understanding how the brain processes temporal information, the neural mechanisms underlying these two forms of timing remain largely unexplored. In this study, it aims to bridge this knowledge gap by elucidating the functional roles of various neuronal populations in the striatum and prefrontal cortex (PFC) in shaping subjective experiences of time. Utilizing a large-scale electrode array, it recorded responses from over 3000 neurons in the striatum and PFC of macaque monkeys during timing tasks. The analysis classified neurons into distinct groups and revealed that retrospective and prospective timings are governed by separate neural processes. Specifically, this study demonstrates that medium spiny neurons (MSNs) in the striatum play a crucial role in facilitating these timing processes. Through cell-type-specific manipulation, it identified D2-MSNs as the primary contributors to both forms of timing. Additionally, the findings indicate that effective processing of timing requires coordination between the PFC and the striatum. In summary, this study advances the understanding of the neural foundations of timing perception and highlights its behavioral implications. |
Zheng Liang; Riman Ga; Han Bai; Qingbai Zhao; Guixian Wang; Qing Lai; Shi Chen; Quanlei Yu; Zhijin Zhou Teaching expectancy improves video-based learning: Evidence from eye-movement synchronization Journal Article In: British Journal of Educational Technology, vol. 56, pp. 231–249, 2025. @article{Liang2025, Abstract: Video-based learning (VBL) is popular, yet students tend to learn video material passively. Instilling teaching expectancy is a strategy to promote active processing by learners, but it is unclear how effective it will be in improving VBL. This study examined the role of teaching expectancy on VBL by comparing the learning outcomes and metacognitive monitoring of 94 learners with different expectancies (teaching, test or no expectancy). Results showed that the teaching expectancy group had better learning outcomes and no significant difference in the metacognitive monitoring of three groups. We further explored the visual behaviour patterns of learners with different expectancies by using the indicator of eye-movement synchronization. It was found that synchronization was significantly lower in both the teaching and test expectancy groups than in the no expectancy group, and the test expectancy group was significantly lower than the teaching expectancy group. This result suggests that both teaching and test expectancy enhance the active processing of VBL. However, by sliding window analysis, we found that the teaching expectancy group used a flexible and planned attention allocation. Our findings confirmed the effectiveness of teaching expectancy in VBL. Also, this study provided evidence for the applicability of eye-tracking techniques to assess VBL. |
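Eye-movement synchronization between learners is typically quantified as the correlation of gaze signals over time. The sliding-window sketch below illustrates that general idea only, with synthetic gaze traces and assumed window parameters, not the paper's exact synchronization index.

```python
# Minimal sketch of a sliding-window gaze-synchronization measure between two
# learners watching the same video. Window length and hop size are assumptions.
import numpy as np

def sliding_window_sync(gaze_a, gaze_b, win=120, step=30):
    """Mean Pearson correlation of horizontal gaze position across sliding windows."""
    scores = []
    for start in range(0, len(gaze_a) - win + 1, step):
        a = gaze_a[start:start + win]
        b = gaze_b[start:start + win]
        if a.std() > 0 and b.std() > 0:
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores)) if scores else np.nan

# Toy example: one minute of gaze sampled at 60 Hz for two simulated learners.
t = np.linspace(0, 60, 60 * 60)
rng = np.random.default_rng(1)
gaze_a = np.sin(t) + rng.normal(0, 0.2, t.size)
gaze_b = np.sin(t) + rng.normal(0, 0.2, t.size)
print(round(sliding_window_sync(gaze_a, gaze_b), 3))
```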
Wenrui Li; Xiaofang Ma; Lei Huang; Jian Guan Scene inconsistency effect in object judgement: Evidence from semantic and syntactic separation Journal Article In: Current Psychology, pp. 1–11, 2025. @article{Li2025a, Objects are always situated within a scene context and have specific relationships with their environment. Understanding how scene context and the relationships between objects and their context affect object identification is crucial. Previous studies have indicated that scene-incongruent objects are detected faster than scene-congruent ones, and that “context cueing” can enhance object identification. However, no study has directly tested this relationship while considering the effects of bottom-up and top-down attention processes on object judgment. In our research, we explored the influence of context and its relationships by incorporating “context cueing” and categorizing these relationships into two types: semantic and syntactic, within an object judgment task. The behavioral results from Experiment 1 revealed that the recognition accuracy for syntactically incongruent objects was higher, with shorter response times. Eye-tracking data indicated that when semantic congruence was present, the first fixation duration on syntactically incongruent objects was shorter; conversely, when semantic incongruence was present, the first fixation duration on syntactically congruent objects was longer. In Experiment 2, which introduced context cueing, we found that the recognition accuracy for semantically congruent objects was higher, and they received more fixations. Notably, when syntactic incongruence was present, the first fixation duration on semantically congruent objects was longer. These findings suggest that under conditions without background cueing, syntactic processing has priority in scene processing. We interpret these results as evidence that top-down attention biases object processing, leading to reduced processing of scene-congruent objects compared to scene-incongruent ones. Thus, “context cueing” activates top-down attention, playing a pivotal role in object identification. |
Ting Xun Li; Chi Wen Liang In: Cognitive Therapy and Research, vol. 49, pp. 62–74, 2025. @article{Li2025, Background: Attentional bias modification (ABM) is a computerized treatment for anxiety. Most ABMs using a dot-probe task aim to direct anxious individuals' attention away from threats. Recently, a new ABM approach using a visual search task (i.e., ABM-positive-search) has been developed to facilitate the allocation of attention toward positive stimuli. This study examined the efficacies of two versions of ABM-positive-search in socially anxious individuals. Methods: Eighty-six participants were randomly assigned to the search positive in threat (SP-T; n = 28), search positive in neutral (SP-N; n = 29), or control training (CT) (n = 29) group. All participants completed four training sessions within two weeks. Attentional bias, attentional control, self-report social anxiety, and anxiety responses (i.e., subjective anxiety, psychophysiological reactivity, and gaze behavior) to the speech task were assessed pre-training and post-training. Results: Results showed that ABM-positive-search trainings facilitated disengagement from threats compared to CT. Regardless of group, participants exhibited a reduction in attention allocation to negative feedback during speech. However, only SP-N increased attention allocation to positive feedback. Participants in three groups showed a decrease in subjective anxiety but no changes in psychophysiological reactivity to speech challenge from pre-training to post-training. ABM-positive-search trainings had no beneficial effects on attentional control or self-report social anxiety when compared with CT. Conclusions: The findings do not support the efficacy of ABM-positive-search trainings for social anxiety. |
Haiting Lan; Sixin Liao; Jan Louis Kruger Do advertisements disrupt reading? evidence from eye movements Journal Article In: Applied Cognitive Psychology, vol. 39, pp. 1–19, 2025. @article{Lan2025, Reading online texts is often accompanied by visual distractors such as advertisements. Although previous studies have found that visual distractors are attention-demanding, little is known about how they impact reading. Drawing on text-based and word-based eye-movement measures, the current study examines how three types of ads (static image, flashing text and video) influence readers' reading comprehension and reading process. Results show that increasingly animated ads were more distracting than static ones at the text level, as evidenced by more and longer fixations, and more regressions. Moreover, the word frequency effect was stronger when reading with ads with flashing text than without ads on gaze duration and total reading time, suggesting that linguistic-related animated ads interfere with word processing. Although visual distractors reduced their reading speed and word processing efficiency, readers managed to maintain sufficient comprehension by adopting a more mindful reading strategy, indicating how metacognition functions in complex reading situations. |
Melanie Labusch; Manuel Perea The CASE of brand names during sentence reading Journal Article In: Psychological Research, vol. 89, no. 1, pp. 1–10, 2025. @article{Labusch2025, Brand names typically maintain a distinctive letter case (e.g., IKEA, Google). This element is essential for theoretical (word recognition models) and practical (brand design) reasons. In abstractionist models, letter case is considered irrelevant, whereas instance-based models use surface information like letter case during lexical retrieval. Previous brand identification tasks reported faster responses to brands in their characteristic letter case (e.g., IKEA and Google faster than ikea and GOOGLE), favoring instance-based models. We examined whether this pattern can be generalized to normal sentence reading: Participants read sentences in which well-known brand names were presented intact (e.g., IKEA, Google) or with a modified letter case (e.g., Ikea, GOOGLE). Results showed a cost for brands written in uppercase, independently of their characteristic letter case, in early eye fixation measures (probability of first-fixation, first-fixation duration). However, for later measures (gaze duration and total times), fixation times were longer when the brand's letter case was modified, restricted to those brands typically written in lowercase (e.g., GOOGLE > Google, whereas Ikea ≲ IKEA). Thus, during sentence reading, both the actual letter case and the typical letter case of brand names interact dynamically, posing problems for abstractionist models of reading. |
Marianna Kyriacou; Franziska Köder The cognitive underpinnings of irony comprehension: Fluid intelligence but not working memory modulates processing Journal Article In: Applied Psycholinguistics, vol. 45, pp. 1219–1250, 2025. @article{Kyriacou2025, The comprehension of irony involves a sophisticated inferential process requiring language users to go beyond the literal meaning of an utterance. Because of its complex nature, we hypothesized that working memory (WM) and fluid intelligence, the two main components of executive attention, would be involved in the understanding of irony: the former by maintaining focus and relevant information active during processing, the latter by disengaging irrelevant information and offering better problem-solving skills. In this eye-tracking reading experiment, we investigated how adults (N = 57) process verbal irony, based on their executive attention skills. The results indicated a null (or indirect) effect for WM, while fluid intelligence directly modulated the comprehension and processing of irony during reading. As fluid intelligence is an important individual-difference variable, the findings pave the way for future research on developmental and clinical populations who tend to struggle with nonliteral language. |
Jens Kürten; Christina Breil; Roxana Pittig; Lynn Huestegge; Anne Böckler How eccentricity modulates attention capture by direct face/gaze and sudden onset motion Journal Article In: Attention, Perception, & Psychophysics, pp. 1–13, 2025. @article{Kuerten2025, We investigated how processing benefits for direct face/gaze and sudden onset motion depend on stimulus presentation location, specifically eccentricity from fixation. Participants responded to targets that were presented on one of four stimuli that displayed a direct or averted face and gaze either statically or suddenly. Between participants, stimuli were presented at different eccentricities relative to central fixation, spanning 3.3°, 4.3°, 5.5° or 6.5° of the visual field. Replicating previous studies, we found processing advantages for direct (vs. averted) face/gaze and motion onset (vs. static stimuli). Critically, while the motion-onset advantage increased with increasing distance to the center, the face/gaze direction advantage was not significantly modulated by target eccentricity. Results from a control experiment with eye tracking indicate that face/gaze direction could be accurately discriminated even at the largest eccentricity. These findings demonstrate a distinction between the processing of basic facial and gaze signals and exogenous motion cues, which may be based on functional differences between central and peripheral retinal regions. Moreover, the results highlight the importance of taking specific stimulus properties into account when studying perception and attention in the periphery. |
Zhiming Kong; Chen Chen; Jianrong Jia Pupil responds spontaneously to visuospatial regularity Journal Article In: Journal of Vision, vol. 25, no. 1, pp. 1–10, 2025. @article{Kong2025, Beyond the light reflex, the pupil responds to various high-level cognitive processes. Multiple statistical regularities of stimuli have been found to modulate the pupillary response. However, most studies have used auditory or visual temporal sequences as stimuli, and it is unknown whether the pupil size is modulated by statistical regularity in the spatial arrangement of stimuli. In three experiments, we created perceived regular and irregular stimuli, matching physical regularity, to investigate the effect of spatial regularity on pupillary responses during passive viewing. Experiments using orientation (Experiments 1 and 2) and size (Experiment 3) as stimuli consistently showed that perceived irregular stimuli elicited more pupil constriction than regular stimuli. Furthermore, this effect was independent of the luminance of the stimuli. In conclusion, our study revealed that the pupil responds spontaneously to perceived visuospatial regularity, extending the stimulus regularity that influences the pupillary response into the visuospatial domain. |
Lua Koenig; Biyu J. He 2025. @book{Koenig2025, Perceptual awareness results from an intricate interaction between external sensory input and the brain's spontaneous activity. Pre-stimulus ongoing activity influencing conscious perception includes both brain oscillations in the alpha (7 to 14 Hz) and beta (14 to 30 Hz) frequency ranges and aperiodic activity in the slow cortical potential (SCP, <5 Hz) range. However, whether brain oscillations and SCPs independently influence conscious perception or do so through shared mechanisms remains unknown. Here, we addressed this question in 2 independent magnetoencephalography (MEG) data sets involving near-threshold visual perception tasks in humans using low-level (Gabor patches) and high-level (objects, faces, houses, animals) stimuli, respectively. We found that oscillatory power and large-scale SCP activity influence conscious perception through independent mechanisms that do not have shared variance. In addition, through mediation analysis, we show that pre-stimulus oscillatory power and SCP activity have different relations to pupil size-an index of arousal-in their influences on conscious perception. Together, these findings suggest that oscillatory power and SCPs independently contribute to perceptual awareness, with distinct relations to pupil-linked arousal. |
Anna R. Knippenberg; Sabrina Yavari; Gregory P. Strauss Negative auditory hallucinations are associated with increased activation of the defensive motivational system in schizophrenia Journal Article In: Schizophrenia Research: Cognition, vol. 39, pp. 1–6, 2025. @article{Knippenberg2025, Auditory hallucinations (AH) are the most common symptom of psychosis. The voices people hear make comments that are benign or even encouraging, but most often voices are threatening and derogatory. Negative AH are often highly distressing and contribute to suicide risk and violent behavior. Biological mechanisms underlying the valence of voices (i.e., positive, negative, neutral) are not well delineated. In the current study, we examined whether AH voice valence was associated with increased activation of the Defensive Motivational System, as indexed by central and autonomic system response to unpleasant stimuli. Data were evaluated from two studies that used a common symptom rating instrument, the Psychotic Symptom Rating Scale (PSY-RATS), to measure AH valence. Participants included outpatients diagnosed with SZ. Tasks included: Study 1: Trier Social Stress Task while heart rate was recorded via electrocardiography (N = 27); Study 2: Passive Viewing Task while participants were exposed to pleasant, unpleasant, and neutral images from the International Affective Picture System (IAPS) library while eye movements, pupil dilation, and electroencephalography were recorded (N = 25). Results indicated that negative voice content was significantly associated with: 1) increased heart rate during an acute social stressor, 2) increased pupil dilation to unpleasant images, 3) higher neural reactivity to unpleasant images, and 4) a greater likelihood of having bottom-up attention drawn to unpleasant stimuli. Findings suggest that negative AH are associated with greater Defensive Motivational System activation in terms of central and autonomic nervous system response. |
Michaela Klímová; Ilona M. Bloem; Sam Ling How does orientation-tuned normalization spread across the visual field? Journal Article In: Journal of Neurophysiology, vol. 133, no. 2, pp. 539–546, 2025. @article{Klimova2025, Visuocortical responses are regulated by gain control mechanisms, giving rise to fundamental neural and perceptual phenomena such as surround suppression. Suppression strength, determined by the composition and relative properties of stimuli, controls the strength of neural responses in early visual cortex, and in turn, the subjective salience of the visual stimulus. Notably, suppression strength is modulated by feature similarity; for instance, responses to a center-surround stimulus in which the components are collinear to each other are weaker than when they are orthogonal. However, this feature-tuned aspect of normalization, and how it may affect the gain of responses, has been understudied. Here, we examine the contribution of the tuned component of suppression to contrast response modulations across the visual field. To do so, we used functional magnetic resonance imaging (fMRI) to measure contrast response functions (CRFs) in early visual cortex (areas V1–V3) in 10 observers while they viewed full-field center-surround gratings. The center stimulus varied in contrast between 2.67% and 96% and was surrounded by a collinear or orthogonal surround at full contrast. We found substantially stronger suppression of responses when the surround was parallel to the center, manifesting as shifts in the population CRF. The magnitude of the CRF shift was strongly dependent on voxel spatial preference and seen primarily in voxels whose receptive field spatial preference corresponds to the area straddling the center-surround boundary in our display, with little-to-no modulation elsewhere. |
Leor N. Katz; Martin O. Bohlen; Gongchen Yu; Carlos Mejias-Aponte; Marc A. Sommer; Richard J. Krauzlis Optogenetic manipulation of covert attention in the nonhuman primate Journal Article In: Journal of Cognitive Neuroscience, vol. 37, no. 2, pp. 266–285, 2025. @article{Katz2025, Optogenetics affords new opportunities to interrogate neuronal circuits that control behavior. In primates, the usefulness of optogenetics in studying cognitive functions remains a challenge. The technique has been successfully wielded, but behavioral effects have been demonstrated primarily for sensorimotor processes. Here, we tested whether brief optogenetic suppression of primate superior colliculus can change performance in a covert attention task, in addition to previously reported optogenetic effects on saccadic eye movements. We used an attention task that required the monkey to detect and report a stimulus change at a cued location via joystick release, while ignoring changes at an uncued location. When the cued location was positioned in the response fields of transduced neurons in the superior colliculus, transient light delivery coincident with the stimulus change disrupted the monkey's detection performance, significantly lowering hit rates. When the cued location was elsewhere, hit rates were unaltered, indicating that the effect was spatially specific and not a motor deficit. Hit rates for trials with only one stimulus were also unaltered, indicating that the effect depended on selection among distractors rather than a low-level visual impairment. Psychophysical analysis revealed that optogenetic suppression increased perceptual threshold, but only for locations matching the transduced site. These data show that optogenetic manipulations can cause brief and spatially specific deficits in covert attention, independent of sensorimotor functions. This dissociation of effect, and the temporal precision provided by the technique, demonstrates the utility of optogenetics in interrogating neuronal circuits that mediate cognitive functions in the primate. |
Juliano Setsuo Violin Kanamota; Gerson Yukio Tomanari; William J. McIlvane Tracking eye fixations during stimulus generalization tests Journal Article In: Psychological Record, pp. 1–10, 2025. @article{Kanamota2025, In the analysis of operant behavior, there is little empirical research on the relationship between observing responses and primary stimulus generalization. This work aimed to investigate eye fixations when S+ and S- dimensions were varied on generalization tests. Ten university students participated. Their training consisted of a MULT VI 1 s EXT schedule followed by MULT VI 2 s EXT schedule. Discriminative stimuli were three Gabor line tilts. S+ and S- had 45º and 135º slopes, respectively. After participants achieved discrimination indices of 75%, generalization tests in extinction began. There were two different conditions: (1) S+ was replaced by stimuli with angles of 15º, 30º, 45º, 60º, and 75º (five participants). (2) S- was replaced by 105º, 120º, 135º, 150º, and 165º (five participants). In both training and tests, eye tracking equipment recorded observing responses defined as visual fixations. S+ variations yielded sharp observing response gradients. However, S- variations yielded flattened, bell-shaped, and U-shaped observing response gradients. These data contribute to the limited information on human observing during tests of primary stimulus generalization. The study provides a methodology for accomplishing a more complete characterization of behavioral processes that may be operative when normally capable adults are exposed to variations in S+ and S- on generalization tasks. |
Tristan Jurkiewicz; Audrey Vialatte; Yaffa Yeshurun; Laure Pisella Attentional modulation of peripheral pointing hypometria in healthy participants: An insight into optic ataxia? Journal Article In: Neuropsychologia, vol. 208, pp. 1–12, 2025. @article{Jurkiewicz2025, Damage to the superior parietal lobule and intraparietal sulcus (SPL-IPS) causes optic ataxia (OA), characterized by pathological gaze-centered hypometric pointing to targets in the affected peripheral visual field. The SPL-IPS is also involved in covert attention. Here, we investigated the possible link between attention and action. This study investigated the effect of attention on pointing performance in healthy participants and two OA patients. In invalid trials, targets appeared unpredictably across different visual fields and eccentricities. Valid trials involved cued targets at specific locations. The first experiment used a central cue with 75% validity, the second used a peripheral cue with 50% validity. The effect of attention on pointing variability (noise) or time was expected as a confirmation of cueing efficiency. Critically, if OA reflects an attentional deficit, then healthy participants, in the invalid condition (without attention), were expected to produce the gaze-centered hypometric pointing bias characteristic of OA. Results revealed main effects of validity on pointing biases in all participants with central predictive cueing, but not with peripheral low predictive cueing. This suggests that the typical underestimation of visual eccentricity in OA (visual field effect) at least partially results from impaired endogenous attention orientation toward the affected visual field. |
Yu Cin Jian; Leo Yuk Ting Cheung Prediction of text-and-diagram reading comprehension by eye-movement indicators: A longitudinal study in elementary schools Journal Article In: European Journal of Psychology of Education, vol. 40, no. 1, pp. 1–25, 2025. @article{Jian2025, Eye-movement technology has been often used to examine reading processes, but research has seldom examined the relationship between the reading process and comprehension performance, and whether the relationships are similar or different across grades. To investigate this, we conducted a 3-year longitudinal study starting at grade 4, with 175 effect samples to track the development data of eye movements on text-and-diagram reading. A series of temporal and spatial eye-movement predictors were identified to predict reading comprehension in various grades. The result of a hierarchical regression model established that total fixation duration measures (reflects processing level) and mean fixation duration (reflects decoding efficiency) were relatively better predictors of the post-reading tests at grades 5 and 6. That is, the readers made more mental efforts and had better decoding ability, which predict better post-reading test scores. Meanwhile, in grades 5 and 6, rereading total fixation duration on both the main text and diagrams consistently predicted the post-reading tests, indicating that the readers' self-regulated study time on re-processing some specific information is important for reading comprehension. Besides, a longitudinal structural equation modeling (SEM) revealed that the readers' fixation durations and text-and-diagram regression count in the lower fourth grade could significantly predict the same indicators in the following 2 years. In summary, this study identified the critical eye-movement indicators for predicting reading-test performance, and these predictions were more effective for the readers in upper grades than for those in the lower grades. |
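A hierarchical regression of the kind described above enters predictor blocks in steps and compares the variance explained at each step. The statsmodels sketch below uses synthetic data and placeholder variable names (grade, total and mean fixation duration, post-reading test score) purely to illustrate the block-wise logic, not the study's actual model.

```python
# Sketch of a hierarchical (block-wise) regression: does adding eye-movement
# measures improve prediction of post-reading test scores? Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 175
df = pd.DataFrame({
    "grade": rng.choice([4, 5, 6], size=n),
    "total_fixation_dur": rng.normal(200, 40, n),
    "mean_fixation_dur": rng.normal(250, 30, n),
})
df["post_test"] = (0.02 * df["total_fixation_dur"]
                   - 0.01 * df["mean_fixation_dur"]
                   + rng.normal(0, 1, n))

step1 = smf.ols("post_test ~ grade", data=df).fit()
step2 = smf.ols("post_test ~ grade + total_fixation_dur + mean_fixation_dur",
                data=df).fit()

# The change in R-squared across blocks is the usual hierarchical-regression statistic.
print(f"R2 step 1: {step1.rsquared:.3f}, R2 step 2: {step2.rsquared:.3f}")
print(f"Delta R2: {step2.rsquared - step1.rsquared:.3f}")
```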
Gianna Jeyarajan; Lian Buwadi; Azar Ayaz; Lindsay S. Nagamatsu; Denait Haile; Liye Zou; Matthew Heath Passive and active exercise do not mitigate mental fatigue during a sustained vigilance task Journal Article In: Experimental Brain Research, vol. 243, no. 1, pp. 1–13, 2025. @article{Jeyarajan2025, Executive function (EF) is improved following a single bout of exercise and impaired when an individual experiences mental fatigue (MF). These performance outcomes have been linked to a bi-directional change in cerebral blood flow (CBF). Here, we sought to determine whether MF-induced by a sustained vigilance task (i.e., psychomotor vigilance task: PVT) is mitigated when preceded by a single bout of exercise. Participants completed 20-min single bouts of active exercise (cycle ergometry involving volitional muscle activation), passive exercise (cycle ergometry involving a mechanical flywheel) and a non-exercise control intervention. EF was assessed pre- and post-intervention via the antisaccade task. Following each intervention, a 20-min PVT was completed to induce and assess MF, and transcranial Doppler ultrasound of middle cerebral artery velocity (MCAv) was used to estimate intervention- and PVT-based changes in CBF. Active and passive exercise provided a post-intervention reduction in antisaccade reaction times; that is, exercise benefitted EF. Notably, however, frequentist and Bayesian statistics indicated the EF benefit did not mitigate MF during the PVT. As well, although exercise (active and passive) and the PVT respectively increased and decreased CBF, these changes were not correlated with behavioral measures of EF or MF. Accordingly, a postexercise EF benefit does not mitigate MF during a sustained vigilance task and a bi-directional change in CBF does not serve as a primary mechanism associated with EF and MF changes. Such results provide a framework for future work to explore how different exercise types, intensities and durations may impact MF. |
Juyoen Hur; Rachael M. Tillman; Hyung Cho Kim; Paige Didier; Allegra S. Anderson; Samiha Islam; Melissa D. Stockbridge; Andres De Los Reyes; Kathryn A. DeYoung; Jason F. Smith; Alexander J. Shackman In: Journal of Psychopathology and Clinical Science, vol. 134, no. 1, pp. 41–56, 2025. @article{Hur2025, Social anxiety-which typically emerges in adolescence-lies on a continuum and, when extreme, can be devastating. Socially anxious individuals are prone to heightened fear, anxiety, and the avoidance of contexts associated with potential social scrutiny. Yet most neuroimaging research has focused on acute social threat. Much less attention has been devoted to understanding the neural systems recruited during the uncertain anticipation of potential encounters with social threat. Here we used a novel fMRI paradigm to probe the neural circuitry engaged during the anticipation and acute presentation of threatening faces and voices in a racially diverse sample of 66 adolescents selectively recruited to encompass a range of social anxiety and enriched for clinically significant levels of distress and impairment. Results demonstrated that adolescents with more severe social anxiety symptoms experience heightened distress when anticipating encounters with social threat, and reduced discrimination of uncertain social threat and safety in the bed nucleus of the stria terminalis (BST), a key division of the central extended amygdala (EAc). Although the EAc-including the BST and central nucleus of the amygdala-was robustly engaged by the acute presentation of threatening faces and voices, the degree of EAc engagement was unrelated to the severity of social anxiety. Together, these observations provide a neurobiologically grounded framework for conceptualizing adolescent social anxiety and set the stage for the kinds of prospective-longitudinal and mechanistic research that will be necessary to determine causation and, ultimately, to develop improved interventions for this often-debilitating illness. |
Qian Huangfu; Qianmei He; Sisi Luo; Weilin Huang; Yahua Yang Does teacher enthusiasm facilitate students' chemistry learning in video lectures regardless of students' prior chemistry knowledge levels? Journal Article In: Journal of Computer Assisted Learning, vol. 41, no. 1, pp. 1–14, 2025. @article{Huangfu2025, Background: Video lectures which include the teachers' presence have become increasingly common. As teacher enthusiasm is a nonverbal cue in video lectures, more and more studies are focusing on this topic. However, little research has been carried out on the interactions between teacher enthusiasm and prior knowledge when learning from video lectures. Objectives: We tested whether prior chemistry knowledge moderated the impact of teacher enthusiasm on students' chemistry learning during video lectures. Methods: One hundred and forty-two Grade 7 (low-prior chemistry knowledge) and Grade 9 (high-prior chemistry knowledge) Chinese students engaged with this research. Each group of students was randomised into viewing a video lecture with either a low or high degree of teacher enthusiasm. Outcomes were assessed by attention allocation, learning performance, cognitive load, learning satisfaction and student engagement. Results and Conclusions: Our findings revealed significant benefits of teacher enthusiasm and also showed that prior chemistry knowledge could moderate the impact of teacher enthusiasm on students' attention and cognitive outcomes (cognitive load and transfer). Visual attention mediates the effects on transfer. For students with low prior knowledge, there is more focus on the learning content, lower extraneous cognitive load, and higher transfer scores when watching videos with high levels of teacher enthusiasm; however, students with high prior knowledge do not show these differences. In addition, high prior chemistry knowledge had a significant beneficial impact on the motivational outcomes of the students (satisfaction and engagement). Implications: The results suggest that teacher enthusiasm in a video lecture may affect students' chemistry learning, and students' prior chemistry knowledge should be considered when choosing whether to display teacher enthusiasm. |
Lingshan Huang The cognitive processing of nouns and verbs in second language reading: An eye-tracking study Journal Article In: Linguistics Vanguard, no. 288, pp. 1–11, 2025. @article{Huang2025a, This study explores the cognitive processing of nouns and verbs in second language (L2) reading, aiming to investigate the potential differences and their effects on comprehension performance. Twenty-five Chinese students read an English text while their eye movements were recorded. A reading comprehension test evaluated the participants' L2 reading comprehension performance. The results reveal a significant difference in total reading time between nouns and verbs. Additionally, total reading time, gaze duration, and the number of fixations on both nouns and verbs are negatively correlated with L2 reading comprehension performance. These findings suggest that while the initial processing mechanisms of nouns and verbs may be similar, they diverge in late stages of processing. |
Jinghua Huang; Mingyan Wang; Ting Zhang; Dongliang Zhang; Yi Zhou; Lujin Mao; Mengyao Qi Investigating the effect of emoji position on eye movements and subjective evaluations on Chinese sarcasm comprehension Journal Article In: Ergonomics, vol. 68, no. 2, pp. 251–266, 2025. @article{Huang2025, Evidence indicated that emojis could influence sarcasm comprehension and sentence processing in English. However, the effect of emojis on Chinese sarcasm comprehension remains unclear. Therefore, this study investigated the impact of the smiley emoji position and semantics on eye movements and subjective assessments during Chinese online communication. Our results showed that the presence of a smiley emoji improved participants' interpretation and perception of sarcasm. We also found shorter dwell times on sarcastic words compared to literal words under the comment-final emoji condition. Additionally, we clarified the time course of emojified sentence processing during Chinese reading: the presence of emoji initially decreased first fixation durations compared to the absence of emoji and then the comment-final emoji shortened dwell times on sarcastic words compared to literal words in the critical area of interest. Our findings suggested that the comment-final emoji was the preferable choice for avoiding semantic comprehension bias in China. |
Jessica Heeman; Brian J. White; Stefan Van der Stigchel; Jan Theeuwes; Laurent Itti; Douglas P. Munoz Saliency response in superior colliculus at the future saccade goal predicts fixation duration during free viewing of dynamic scenes Journal Article In: The Journal of Neuroscience, vol. 45, no. 3, pp. 1–10, 2025. @article{Heeman2025, Eye movements in daily life occur in rapid succession and often without a predefined goal. Using a free viewing task, we examined how fixation duration prior to a saccade correlates to visual saliency and neuronal activity in the superior colliculus (SC) at the saccade goal. Rhesus monkeys (three male) watched videos of natural, dynamic, scenes while eye movements were tracked and, simultaneously, neurons were recorded in the superficial and intermediate layers of the superior colliculus (SCs and SCi, respectively), a midbrain structure closely associated with gaze, attention, and saliency coding. Saccades that were directed into the neuron's receptive field (RF) were extrapolated from the data. To interpret the complex visual input, saliency at the RF location was computed during the pre-saccadic fixation period using a computational saliency model. We analyzed if visual saliency and neural activity at the saccade goal predicted pre-saccadic fixation duration. We report three major findings: (1) Saliency at the saccade goal inversely correlated with fixation duration, with motion and edge information being the strongest predictors. (2) SC visual saliency responses in both SCs and SCi were inversely related to fixation duration. (3) SCs neurons, and not SCi neurons, showed higher activation for two consecutive short fixations, suggestive of concurrent saccade processing during free viewing. These results reveal a close correspondence between visual saliency, SC processing, and the timing of saccade initiation during free viewing and are discussed in relation to their implication for understanding saccade initiation during real-world gaze behavior. |
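The central analysis step in the study above relates model saliency sampled at the upcoming saccade goal to the duration of the fixation preceding that saccade. The toy sketch below illustrates that correlation step with synthetic values standing in for the saliency-model output and the recorded fixation data.

```python
# Toy sketch: correlate saliency at each upcoming saccade goal with the duration
# of the fixation that preceded the saccade. All values here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_fix = 500

# Hypothetical per-fixation values: saliency at the future saccade goal (a.u.)
# and duration of the current fixation (ms); higher saliency -> shorter fixation.
saliency_at_goal = rng.uniform(0, 1, n_fix)
fixation_dur = 300 - 120 * saliency_at_goal + rng.normal(0, 40, n_fix)

r, p = stats.pearsonr(saliency_at_goal, fixation_dur)
print(f"r = {r:.2f}, p = {p:.3g}")  # expect a negative correlation in this toy data
```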
Tobias Hausinger; Björn Probst; Stefan Hawelka; Belinda Pletzer Own‑gender bias in facial feature recognition yields sex differences in holistic face processing Journal Article In: Biology of Sex Differences, vol. 16, no. 14, pp. 1–15, 2025. @article{Hausinger2025, Introduction Female observers in their luteal cycle phase exhibit a bias towards a detail-oriented rather than global visuospatial processing style that is well-documented across cognitive domains such as pattern recognition, navigation, and object location memory. Holistic face processing involves an integration of global patterns and local parts into a cohesive percept and might thus be susceptible to the influence of sex and cycle-related processing styles. This study aims to investigate potential sex differences in the part-whole effect as a measure of holistic face processing and explores possible relationships with sex hormone levels. Methods 147 participants (74 male, 51 luteal, 22 non-luteal) performed a part-whole face recognition task while being controlled for cycle phase and sex hormone status. Eye tracking was used for fixation control and recording of fixation patterns. Results We found significant sex differences in the part-whole effect between male and luteal phase female participants. In particular, this sex difference was based on luteal phase participants exhibiting higher face part recognition accuracy than male participants. This advantage was exclusively observed for stimulus faces of women. Exploratory analyses further suggest a similar advantage of luteal compared to non-luteal participants, but no significant difference between non-luteal and male participants. Furthermore, testosterone emerged as a possible mediator for the observed sex differences. Conclusion Our results suggest a possible modulation of face encoding and/or recognition by sex and hormone status. Moreover, the established own-gender bias in face recognition, that is, the female advantage in recognition of faces of the same gender, might be based on more accurate representations of face parts. Plain English summary In this study, participants were required to recognize a previously encountered face from one of two options. The correct face and the distractor face differed only in one face part, that is, either the eyes, nose or mouth. When participants were presented only with the respective face parts instead of complete faces, female participants during their luteal cycle phase were more accurate in recognizing these parts than male participants. This advantage was observed only if female participants had to recognize face parts of women. Since previous studies have shown a female advantage in utilizing detail information, for instance when having to process local features within a global pattern or memorizing the location of features on a map, our findings represent a good fit with existing literature. Moreover, previous findings of better female recognition of women's faces may be attributed to enhanced memory for individual face parts. |
Jiaxu Han; Catharine E. Fairbairn; Walter James Venerable; Sarah Brown-Schmidt; Talia Ariss Examining social attention as a predictor of problem drinking behavior: A longitudinal study using eye-tracking Journal Article In: Alcohol, Clinical and Experimental Research, no. October 2024, pp. 153–164, 2025. @article{Han2025, Background: Researchers have long been interested in identifying objective markers for problem drinking susceptibility informed by the environments in which individuals drink. However, little is known of objective cognitive-behavioral indices relevant to the social contexts in which alcohol is typically consumed. Combining group-based alcohol administration, eye-tracking technology, and longitudinal follow-up over a 2-year span, the current study examined the role of social attention in predicting patterns of problem drinking over time. Methods: Young heavy drinkers (N = 246) were randomly assigned to consume either an alcoholic (target BAC 0.08%) or a control beverage in dyads comprising friends or strangers. Dyads completed a virtual video call in which half of the screen comprised a view of themselves (“self-view”) and half a view of their interaction partner (“other-view”). Participants' gaze behaviors, operationalized as the proportion of time spent looking at “self-view” and “other-view,” were tracked throughout the call. Problem drinking was assessed at the time of the laboratory visit and then every year subsequent for 2 years. Results: Significant interactions emerged between beverage condition and social attention in predicting binge drinking days. In cross-sectional analyses, among participants assigned to the control (but not alcohol) group, heightened self-focused attention was linked with increased binge days at baseline |
Elizabeth H. Hall; Joy J. Geng Object-based attention during scene perception elicits boundary contraction in memory Journal Article In: Memory & Cognition, vol. 53, no. 1, pp. 6–18, 2025. @article{Hall2025, Boundary contraction and extension are two types of scene transformations that occur in memory. In extension, viewers extrapolate information beyond the edges of the image, whereas in contraction, viewers forget information near the edges. Recent work suggests that image composition influences the direction and magnitude of boundary transformation. We hypothesize that selective attention at encoding is an important driver of boundary transformation effects, selective attention to specific objects at encoding leading to boundary contraction. In this study, one group of participants (N = 36) memorized 15 scenes while searching for targets, while a separate group (N = 36) just memorized the scenes. Both groups then drew the scenes from memory with as much object and spatial detail as they could remember. We asked online workers to provide ratings of boundary transformations in the drawings, as well as how many objects they contained and the precision of remembered object size and location. We found that search condition drawings showed significantly greater boundary contraction than drawings of the same scenes in the memorize condition. Search drawings were significantly more likely to contain target objects, and the likelihood to recall other objects in the scene decreased as a function of their distance from the target. These findings suggest that selective attention to a specific object due to a search task at encoding will lead to significant boundary contraction. |
Julian Gutzeit; Lynn Huestegge The impact of the degree of action voluntariness on sense of agency in saccades Journal Article In: Consciousness and Cognition, vol. 127, pp. 1–15, 2025. @article{Gutzeit2025, Experiencing a sense of agency (SoA), the feeling of being in control over one's actions and their outcomes, typically requires intentional and voluntary actions. Prior research has compared the association of voluntary versus completely involuntary actions with the SoA. Here, we leveraged unique characteristics of oculomotor actions to partially manipulate the degree of action voluntariness. Participants performed either highly automatized prosaccades or highly controlled (voluntary) anti-saccades, triggering a gaze-contingent visual action effect. We assessed explicit SoA ratings and temporal action and effect binding as an implicit SoA measure. Anti-saccades were associated with stronger action binding compared to prosaccades, demonstrating a robust association between higher action voluntariness and a stronger implicit sense of action agency. We conclude that our manipulation of action voluntariness may have impacted the implicit phenomenological feeling of bodily agency, but it did not affect the SoA over effect outcomes or explicit agency perception. |
Magdalena Gruner; Andreas Widmann; Stefan Wöhner; Erich Schröger; Jörg D. Jescheniak Semantic context effects in picture and sound naming: Evidence from event-related potentials and pupillometric data Journal Article In: Journal of cognitive neuroscience, vol. 37, no. 2, pp. 443–463, 2025. @article{Gruner2025, When a picture is repeatedly named in the context of semantically related pictures (homogeneous context), responses are slower than when the picture is repeatedly named in the context of unrelated pictures (heterogeneous context). This semantic interference effect in blocked-cyclic naming plays an important role in devising theories of word production. Wöhner, Mädebach, and Jescheniak [Wöhner, S., Mädebach, A., & Jescheniak, J. D. Naming pictures and sounds: Stimulus type affects semantic context effects. Journal of Experimental Psychology: Human Perception and Performance, 47, 716-730, 2021] have shown that the effect is substantially larger when participants name environmental sounds than when they name pictures. We investigated possible reasons for this difference, using EEG and pupillometry. The behavioral data replicated Wöhner and colleagues. ERPs were more positive in the homogeneous compared with the heterogeneous context over central electrode locations between 140-180 msec and 250-350 msec for picture naming and between 250 and 350 msec for sound naming, presumably reflecting semantic interference during semantic and lexical processing. The later component was of similar size for pictures and sounds. ERPs were more negative in the homogeneous compared with the heterogeneous context over frontal electrode locations between 400 and 600 msec only for sounds. The pupillometric data showed a stronger pupil dilation in the homogeneous compared with the heterogeneous context only for sounds. The amplitudes of the late ERP negativity and pupil dilation predicted naming latencies for sounds in the homogeneous context. The latency of the effects indicates that the difference in semantic interference between picture and sound naming arises at later, presumably postlexical processing stages closer to articulation. We suggest that the processing of the auditory stimuli interferes with phonological response preparation and self-monitoring, leading to enhanced semantic interference. |
Xizi Gong; Tao He; Qian Wang; Junshi Lu; Fang Fang Time course of orientation ensemble representation in the human brain Journal Article In: The Journal of Neuroscience, vol. 45, no. 7, pp. 1–13, 2025. @article{Gong2025, Natural scenes are filled with groups of similar items. Humans employ ensemble coding to extract the summary statistical information of the environment, thereby enhancing the efficiency of information processing, something particularly useful when observing natural scenes. However, the neural mechanisms underlying the representation of ensemble information in the brain remain elusive. In particular, whether ensemble representation results from the mere summation of individual item representations or it engages other specific processes remains unclear. In this study, we utilized a set of orientation ensembles wherein none of the individual item orientations were the same as the ensemble orientation. We recorded magnetoencephalography (MEG) signals from human participants (both sexes) when they performed an ensemble orientation discrimination task. Time-resolved multivariate pattern analysis (MVPA) and the inverted encoding model (IEM) were employed to unravel the neural mechanisms of the ensemble orientation representation and track its time course. First, we achieved successful decoding of the ensemble orientation, with a high correlation between the decoding and behavioral accuracies. Second, the IEM analysis demonstrated that the representation of the ensemble orientation differed from the sum of the representations of individual item orientations, suggesting that ensemble coding could further modulate orientation representation in the brain. Moreover, using source reconstruction, we showed that the representation of ensemble orientation manifested in early visual areas. Taken together, our findings reveal the emergence of the ensemble representation in the human visual cortex and advance the understanding of how the brain captures and represents ensemble information. |
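Time-resolved MVPA of MEG data generally amounts to fitting a cross-validated classifier on the sensor pattern at each time point. The scikit-learn sketch below decodes a binary label from synthetic sensor data and is only an illustration of that approach, not the authors' pipeline (which additionally used an inverted encoding model).

```python
# Minimal sketch of time-resolved MVPA: decode a binary stimulus label from
# sensor patterns at each time point of synthetic MEG-like data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_trials, n_sensors, n_times = 200, 64, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)
# Inject a weak signal from time index 20 onward so decoding rises above chance.
X[y == 1, :10, 20:] += 0.3

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean() for t in range(n_times)
])
print("Peak decoding accuracy:", round(accuracy.max(), 3),
      "at time index", int(accuracy.argmax()))
```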
Laurie Galas; Ian Donovan; Laura Dugué Attention rhythmically shapes sensory tuning Journal Article In: The Journal of Neuroscience, vol. 45, no. 7, pp. 1–11, 2025. @article{Galas2025, Attention is key to perception and human behavior, and evidence shows that it periodically samples sensory information (<20 Hz). However, this view has been recently challenged due to methodological concerns and gaps in our understanding of the function and mechanism of rhythmic attention. Here we used an intensive ∼22 h psychophysical protocol combined with reverse correlation analyses to infer the neural representation underlying these rhythms. Participants (male/female) performed a task in which covert spatial (sustained and exploratory) attention was manipulated and then probed at various delays. Our results show that sustained and exploratory attention periodically modulate perception via different neural computations. While sustained attention suppresses distracting stimulus features at the alpha (∼12 Hz) frequency, exploratory attention increases the gain around task-relevant stimulus feature at the theta (∼6 Hz) frequency. These findings reveal that both modes of rhythmic attention differentially shape sensory tuning, expanding the current understanding of the rhythmic sampling theory of attention. |
Lara Fontana; Javier Albayay; Letizia Zurlo; Viola Ciliberto; Massimiliano Zampini Olfactory modulation of visual attention and preference towards congruent food products: An eye tracking study Journal Article In: Food Quality and Preference, vol. 124, pp. 1–11, 2025. @article{Fontana2025, In multisensory environments, odours often accompany visual stimuli, directing attention towards congruent objects. While previous research shows that people fixate longer on objects that match a recently smelled odour, it remains unclear whether odours directly influence product choices. Since odours persist in real-world settings, we investigated the effects of repeated odour exposure on visual attention and product choice, accounting for potential olfactory habituation. In a within-participant design, 30 participants completed a task where either a lemon odour (experimental condition) or clean air (control) was paired with congruent lemon-based food images, which varied to prevent visual habituation. We measured eye movements and choice preferences for these food products. Results revealed that participants exhibited longer gaze durations and more frequent fixations on food products congruent with the lemon odour. Repeated odour exposure had no effect on gaze patterns, as participants consistently focused on odour-congruent products throughout the experiment. The intensity and pleasantness of the lemon odour remained stable over time, suggesting no olfactory habituation occurred with this food-related odour. Despite this stable visual attention and odour intensity and pleasantness, participants began to diversify their product choices, selecting fewer odour-congruent items over time. These findings suggest that while odours continue to direct attention towards matching products, repeated exposure may reduce their influence on product choice, highlighting the complex role of olfactory stimuli in decision-making. The study provides insights into how odours interact with visual cues and influence consumer behaviour in prolonged exposure scenarios. |