Cognitive Eye-Tracking Publications
All EyeLink cognitive and perception eye-tracking research publications up until 2025 (with some early 2026s) are listed below by year. You can search the eye-tracking publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2025 |
Yangpan Ou; Zhaobin Chen; Ying Wang; Huabing Li; Feng Liu; Ping Li; Dongsheng Lv; Yong Liu; Bing Lang; Jingping Zhao; Wenbin Guo In: BMC Psychiatry, vol. 25, no. 1, pp. 1–13, 2025. @article{Ou2025,Background: Clinical high-risk (CHR) refers to the prodromal phase before schizophrenia onset, characterized by attenuated psychotic symptoms and functional decline. Individuals at CHR exhibit similar but milder cognitive impairments, brain abnormalities, and eye movement changes compared with first-episode schizophrenia (FSZ). These alterations may increase vulnerability to transitioning to the disease. This study explores cognitive-related functional connectivity (FC) and eye movement abnormalities to examine differences in the progression of schizophrenia. Methods: Thirty drug-naive FSZ, 28 CHR, and 30 healthy controls (HCs) were recruited to undergo resting-state functional magnetic resonance imaging (rs-fMRI). Connectome-based predictive modeling (CPM) was employed to extract cognitive-related brain regions, which were then selected as seeds to form FC networks. Support vector machine (SVM) was used to distinguish FSZ from CHR. Smooth pursuit eye-tracking tasks were conducted to assess eye movement features. Results: FSZ displayed decreased cognitive-related FC between the right posterior cingulate cortex and right superior frontal gyrus compared with HCs and between the right amygdala and left inferior parietal gyrus (IPG) compared with CHR. SVM analysis indicated that a combination of BACS-SC and CFT-A scores and FC between the right amygdala and left IPG could serve as a potential biomarker for distinguishing FSZ from CHR with high sensitivity. FSZ also exhibited a wide range of eye movement abnormalities compared with HCs, which were associated with alterations in cognitive-related FC. Conclusions: FSZ and CHR exhibited different patterns of cognitive-related FC and eye movement alteration.
Our findings illustrate potential neuroimaging and cognitive markers for early identification of psychosis that could help in the intervention of schizophrenia in high-risk groups. |
Ulises Orbe; Hinze Hogendoorn; Stefan Bode; Gereon R. Fink; Ralph Weidner; Simone Vossel Load-dependent processing of prediction violations in task-irrelevant space Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–13, 2025. @article{Orbe2025,Attentive and predictive mechanisms crucially shape perception, but the interplay between these fundamental processes remains poorly understood. Studies on interactions between attention and prediction have yielded discrepant results, potentially because of differences in task demands. The present study examined whether the perceptual load (i.e., task difficulty) affects predictive processing in task-relevant and task-irrelevant hemifields. To this end, we developed a novel delayed match-to-reference task that orthogonally manipulated task-relevance, prediction, and perceptual load. We hypothesized that a low-load condition should facilitate the processing of prediction violations (oddball effects) in task-irrelevant space because of the availability of spare processing resources. We analyzed accuracy and response time (RT) data from 28 healthy young participants with separate repeated measures analyses of variance. The results confirmed the effectiveness of the load manipulation because a high perceptual load significantly increased RTs and decreased accuracy. Notably, the accuracy analysis yielded a significant three-way interaction between task-relevance, prediction, and load. Post-hoc tests revealed that load modulated the processing of prediction violations in the task-irrelevant hemifield. Importantly, the prediction violation, induced by a low-frequency and task-irrelevant feature (orientation), reduced accuracy in the low-load but not in the high-load condition. This finding suggests that predictive processing in task-irrelevant space is contingent on the availability of processing resources, with high perceptual load inhibiting the processing of unexpected events in task-irrelevant regions. 
The present study shows that load is a crucial factor in the interaction between task-relevance and prediction. |
Claire O'Callaghan; Frank H. Hezemans; Naresh Subramaniam; Rong Ye; Kamen A. Tsvetanov; Alexander G. Murley; Negin Holland; Isabella F. Orlando; Ralf Regenthal; Roger A. Barker; Caroline H. Williams-Gray; Luca Passamonti; Trevor W. Robbins; James B. Rowe Pharmacological and pupillary evidence for the noradrenergic contribution to reinforcement learning in Parkinson's disease Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–15, 2025. @article{OCallaghan2025,Noradrenaline plays an integral role in learning by optimising behavioural strategies and facilitating choice execution. Testing the noradrenergic framework of learning in the context of human diseases offers a test bed for current normative neuroscience theories and may also indicate therapeutic potential. Parkinson's disease is often considered as a model of dopamine deficits, including dopamine's role in reinforcement learning. However, noradrenergic function is also severely impaired by Parkinson's disease, contributing to cognitive deficits. Using a single dose of the noradrenaline reuptake inhibitor atomoxetine in people with Parkinson's disease (in a randomised double-blind placebo-controlled crossover design), we show improvements in learning compared to placebo. Computational cognitive modelling confirmed a substantial shift in the decision noise parameter, indicative of more exploitative choices. This response pattern closely resembled that of age-matched controls and simulations of optimal response strategies. Pupillometry revealed increased baseline pupil diameter under atomoxetine, which correlated with behavioural improvements, and a heightened phasic pupillary response to feedback. Our findings confirm the noradrenergic contribution to reinforcement learning, and in doing so they challenge the simple interpretation of tonic-phasic locus coeruleus firing patterns based on pupillometry. 
Noradrenergic modulation is a potential treatment strategy for cognitive symptoms in Parkinson's disease and related disorders. |
Ewa Niechwiej-Szwedo; Susana Wu; Deborah Giaschi; Linda Colpa; Agnes M. F. Wong; Lisa Christian Eye-hand coordination during a precision grasping and placement task in children with a history of amblyopia Journal Article In: Vision Research, vol. 237, pp. 1–11, 2025. @article{NiechwiejSzwedo2025a,Eye-hand coordination is a key aspect of visuomotor control essential for performing most daily activities. Disruption in visuomotor control, characterized by slower arm movements and grasping errors, has been documented in children with amblyopia. This study aimed to characterize the effects of amblyopia on the temporal pattern of eye and hand coordination during the performance of a task that involves reaching, precision grasping, and placement. The study recruited 28 children with a history of amblyopia and 56 typically developing peers (age range 6–14 years). Children performed a bead-threading task while their eye and hand movements were recorded concurrently. As hypothesized, children with amblyopia demonstrated poorer task performance, with greater deficits for the object manipulation compared to the reaching (transport) components. In comparison to their peers with normal vision, children with amblyopia had shorter reaction times for initiating eye and hand movements, longer object fixation durations to guide grasp execution and object placement, and a lower eye-hand latency difference for the second movement, indicating that the hand movement preceded eye initiation. These results suggest that children with amblyopia have poorer motor planning ability, which impacts movement execution. Longer fixations during object manipulations indicate that more time is required to transform the noisy visual input into a motor response. Overall, the study adds to the growing body of evidence highlighting deficits in visuomotor control in amblyopia. |
Matthias Nau; Austin Greene; Hannah Tarder-Stoll; Juan Antonio Lossio-Ventura; Francisco Pereira; Janice Chen; Christopher Baldassano; Chris I. Baker Neural and behavioral reinstatement jointly reflect retrieval of narrative events Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–15, 2025. @article{Nau2025,When recalling past events, patterns of gaze position and neural activity resemble those observed during the original experience. We hypothesized that these two phenomena, known as gaze reinstatement and neural reactivation, are linked through a common process that underlies the reinstatement of past experiences during memory retrieval. Here, we tested this proposal based on the viewing and recall of a narrative movie, which we assessed through functional magnetic resonance imaging, deep learning-based gaze prediction, and language modeling of spoken recall. In line with key predictions, gaze behavior adhered to the same principles as neural activity; it was event-specific, robust across individuals, and generalized across viewing and recall. Additionally, gaze-dependent brain activity overlapped substantially across tasks. Collectively, these results suggest that retrieval engages mechanisms similar to those that direct our eyes during natural vision, reflecting common constraints within the functional organization of the nervous system. Moreover, they highlight the importance of considering behavioral and neural reinstatement together in our understanding of remembering. |
Krishna S. Nair; Nicholas Hedger; Roana Liz George; Goutam Chandra; Kochupurackal P. Mohanakumar; Bhismadev Chakrabarti; Usha Rajamma Eye tracking demonstrates the influence of autistic traits on social attention in a community sample from India Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Nair2025,The ability to attend to social stimuli is fundamental for processing social cues and shaping social behavior, yet cultural variability in this capacity remains relatively unexplored. Social attention is typically tested using preferential-looking paradigms in labs, which have demonstrated that autistic individuals attend less to social stimuli. Such studies are limited by the fact that almost all have been conducted in Western Europe and the USA. To address this gap, our objective was to test the cultural generalizability of these results by investigating whether autistic symptoms are negatively associated with social attention in a traditionally understudied sample: Indian adults. Additionally, we tested the specificity of this relation by investigating whether a similar association exists with the traits of attention-deficit/hyperactivity disorder (ADHD). Our study involved 121 young adults from Kerala, India. Autistic and ADHD traits were evaluated using the Autism Spectrum Quotient (AQ) and Adult ADHD Self-Report Scale (ASRS), respectively. The participants' gaze behavior was recorded during a preferential-looking task, where pairs of social and non-social images were presented simultaneously. Individuals with higher autistic traits exhibited a reduced preference for social stimuli. No such association of social attention was noted with ADHD traits. Follow-up analysis of AQ subscales indicated that the association between gaze duration and autistic traits was driven by the social factor, and not the attention-to-detail factor, of autistic traits.
Our results provide new evidence for the cultural generalizability of the social attention task and offer the potential for culture-agnostic phenotypic assessments for adults with autism. |
Supriya Murali; Beshoy Agayby; Michael C. Schmid; Barbara F. Händel Multiunit and oscillatory activity in macaque V1 is modulated by blinking in a context-dependent way Journal Article In: Cerebral Cortex, vol. 35, no. 12, pp. 1–14, 2025. @article{Murali2025,Eye blinks modulate neural activity in visual areas even if the visual input is unchanged. Is the influence of blinking defined by the motor output of the blink? We analyzed blink-related neural activity with laminar resolution in V1 of two macaque monkeys in two conditions, viewing a video and at rest. During free viewing of a video, blinks induced a modulation of the local field potential (LFP) in the theta, beta, and gamma band with a granular/infragranular focus. The multiunit activity (MUA) decreased around blink execution. Surprisingly, when comparing the results to blinks executed during the rest condition, we found that MUA increased around blinks. The blink-related LFP power changes, while increasing after a blink in both conditions, were significantly different in amplitude and latency. Our findings show that the blink-induced modulation of V1 activity is not determined by the motor execution but depends on the condition in which the movement is executed. This suggests that interactions between movement and neural processes in sensory areas are context-dependent. These interactions may play an important role in predictive coding within the framework of active sensing. |
Sebastián Muñoz; Vladimir Maksimenko; Bastian Henriquez-Jara; Prateek Bansal; Omar David Perez In: Behavior Research Methods, vol. 57, no. 12, pp. 1–30, 2025. @article{Munoz2025,Eye-tracking has gained considerable attention across multiple research domains. Recently, web-based eye-tracking has become feasible, demonstrating reliable performance in perceptual and cognitive tasks. However, it has not been systematically evaluated in decision-making research. Here we compare a laboratory-based eye tracker (the EyeLink 1000 Plus) with a webcam-based method (WebGazer) across two discrete-choice experiments. We systematically manipulated display size to approximate common device classes (monitor, laptop, tablet, mobile) and task complexity (simple vs. complex choice matrices). We find that on larger displays and simpler tasks, WebGazer produces gaze patterns and parameter inferences from computational models of behavior comparable to EyeLink. However, reliability diminishes on smaller displays and with more complex choice matrices. These results provide the first systematic evaluation of web-based eye tracking for decision-making research and offer practical guidance regarding its viability for online behavioral studies. |
Gabriela Mueller de Melo; Isabella Oliveira Pitorri; Gustavo Rohenkohl Presaccadic modulation of lateral interactions Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–12, 2025. @article{MuellerdeMelo2025,Lateral interactions are pervasive in early visual processing, contributing directly to processes such as object grouping and segregation. This study examines whether saccade preparation — known to affect visual perception — modulates lateral interactions. In a psychophysical task, participants were instructed to detect a Gabor target flanked by two adjacent Gabors, while they either prepared a saccade to the target or maintained central fixation. Flanker gratings could be iso- or orthogonally oriented to the target and were positioned at three different distances (4λ, 8λ, and 16λ). Contrast thresholds for target detection were estimated in each condition using a 3-down/1-up staircase procedure. The results showed that in both presaccadic and fixation conditions, the target was suppressed at the shortest flanker distance (4λ), revealed by markedly higher thresholds in iso-oriented compared to orthogonal flanker configurations. Lateral interaction effects were completely abolished at the largest separation (16λ). Interestingly, at the intermediate flanker distance (8λ), target suppression seemed to increase during the presaccadic period, whereas no such effect was observed during fixation. This result suggests that saccade preparation can modulate lateral interactions, promoting suppressive effects over larger distances. These findings are consistent with the visual remapping phenomenon observed before saccade execution, especially the convergent remapping of receptive fields in oculomotor and visual areas.
Finally, this presaccadic expansion of inhibitory lateral interactions could assist target selection by suppressing homogeneous peripheral signals — such as iso-oriented collinear patterns — while prioritizing the processing of more salient visual information. |
Arasch Mostauli; Jonas Rauh; Matthias Gamer; Christian Büchel; Winfried Rief; Stefanie Brassen Placebo treatment entails resource-dependent downregulation of negative inputs Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Mostauli2025,Clinical trials with antidepressants reveal significant improvements in placebo groups, with effects of up to 80% compared to real treatment. While it has been suggested that treatment expectations rely on cognitive control, direct evidence for affective placebo effects is sparse. Here, we investigated how cognitive resources at both the behavioral and neural levels influence the effects of positive expectations on emotional processing. Forty-nine healthy volunteers participated in a cross-over fMRI study where positive expectations were induced through an alleged oxytocin nasal spray and verbal instruction. Participants completed a spatial cueing task that manipulated attention to emotional face distractors while being scanned and were characterized regarding their general attention control ability. Placebo treatment improved mood and reduced distractibility from fearful compared to happy faces, particularly when more attentional resources were available for processing face distractors. This aligned with changes in activation and functional coupling within prefrontal-limbic networks, suggesting that expectations induce top-down regulation of aversive inputs. Additionally, neurobehavioral effects correlated with individual control ability. Our findings highlight the critical role of cognitive resources in verbally instructed placebo effects. This may be particularly relevant in patients with major depressive disorder, who often demonstrate enhanced negativity processing but have limited cognitive control capacity. |
Nasrin Mortazavi; Puneet Talwar; Ekaterina Koshmanova; Roya Sharifpour; Elise Beckers; Alexandre Berger; Islay Campbell; Ilenia Paparella; Fermin Balda; Ismael Dardour Hamzaoui; Christian Berthomier; Christine Bastin; Christophe Phillips; Pierre Maquet; Fabienne Collette; Mikhail Zubkov; Laurent Lamalle; Gilles Vandewalle REM sleep quality is associated with balanced tonic activity of the locus coeruleus during wakefulness Journal Article In: Journal of Biomedical Science, vol. 32, no. 1, pp. 1–13, 2025. @article{Mortazavi2025,Background: Animal studies established that the locus coeruleus (LC) plays important roles in sleep and wakefulness regulation. Whether it contributes to sleep variability in humans is not yet established. Here, we investigated if the in vivo activity of the LC is related to the variability in the quality of Rapid Eye Movement (REM) sleep. Methods: We assessed the LC activity of 34 healthy younger (~ 22y) and 18 older (~ 61y) individuals engaged in bottom-up and top-down cognitive tasks using 7-Tesla functional Magnetic Resonance Imaging (fMRI). We further recorded their sleep electroencephalogram (EEG) to evaluate associations between LC fMRI measures and REM sleep EEG metrics. Results: Theta oscillation energy during REM sleep was positively associated with LC response in the top-down task. In contrast, REM sleep theta energy was negatively associated with LC activity in older individuals during the bottom-up task. Importantly, sigma oscillation power immediately preceding a REM sleep episode was positively associated with LC activity in the top-down task. Conclusions: LC activity during wakefulness was related to REM sleep intensity and to a transient EEG change preceding REM sleep, a feature causally related to LC activity in animal studies. The associations depend on the cognitive task, suggesting that a balanced level of LC tonic activity during wakefulness is required for optimal expression of REM sleep.
The findings may have implications for the high prevalence of sleep complaints reported in aging and for disorders such as insomnia, Alzheimer's, and Parkinson's disease, for which the LC may play pivotal roles through sleep. |
Joonsik Moon; Peter Bex Distinctive feature sensitivity of ocular following initiation during global motion perception Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–12, 2025. @article{Moon2025,We investigate how sensory and motor components of the visual system respond to carrier (first-order) and envelope (second-order) motion features for global motion perception. While both ocular following responses (OFRs) and perceptual judgments exhibit higher responsivity to envelope motion, carrier motion alone was insufficient, leading to large perceptual direction errors and speed biases and minimal OFRs. Shorter presentation times selectively impaired perceptual speed discriminability by decreasing the signal and increasing noise, with no corresponding effect on oculomotor responses. In direction discriminability analysis, in contrast, OFRs and perceptual responses have a similar relative noise pattern to motion features, suggesting shared noise sources in global motion processing. Trial-by-trial correlation analysis confirmed the dissociation where perceptual speed was uncorrelated with OFR speed, whereas perceptual direction showed a delayed correlation with eye direction relative to movement onset. This delayed correlation timing for direction suggests global motion modulates both systems via feedback control processes. |
Haneieh Molaei; Reza Abbas Farishta; Reza Farivar Letter distortion mapping in amblyopia: Spatial patterns, stability, and relationship to visual acuity Journal Article In: Investigative Ophthalmology and Visual Science, vol. 66, no. 15, pp. 1–12, 2025. @article{Molaei2025,PURPOSE. To investigate whether letter-based perceptual distortions in amblyopia follow spatially consistent patterns across different letters and to determine if these spatial distortion maps are letter specific or reflect a common underlying spatial organization of visual distortion in the amblyopic eye. METHODS. Twenty-one individuals with amblyopia completed a distortion mapping task using the letters A, D, and E, shown at 36 visual field locations. Each letter was first viewed with the fellow eye and then with the amblyopic eye. Participants reported distortions, which were recorded to generate binary spatial maps. The task was repeated over three sessions to assess within-subject consistency, and spatial correlations were analyzed across letters and subjects. RESULTS. Letter distortions were reported by 95% of participants and remained consistent across sessions. Within subjects, spatial distortion maps were significantly correlated across letters in 62% of cases (P ≤ 0.028), suggesting shared spatial patterns. However, across subjects, maps were largely uncorrelated, indicating individualized distortion profiles. No single letter consistently showed more distortion across the group, χ2(2) = 1.279 |
Anna Metzger; Callie Dugan; Matteo Toscani The similarity with a face presented in central vision improves face recognition in peripheral vision Journal Article In: Perception, vol. 54, no. 12, pp. 975–985, 2025. @article{Metzger2025,The fovea, with its high concentration of cone photoreceptors, results in increased sensitivity and visual acuity, while the periphery, with its lower contrast sensitivity and resolution, provides better spatial summation. Despite these differences, our experience of the visual field remains detailed and uniform, supported by the influence of central vision on peripheral vision. There is evidence that recognition of simple shapes in the periphery is enhanced by the presence of a similar shape in central vision. However, it is unclear whether such mechanisms generalise to more complex stimuli, such as faces. In a face matching task, we found that having a similar face in central vision improved face matching performance in the periphery. This suggests that general mechanisms govern the interaction between central and peripheral vision in recognising faces. |
Rebecca Jane McClements; Julie-Ann Jordan; David Curran; Donncha Hanna; John Paul Corrigan; Kevin F. W. Dyer The role of pre-existing assumptions and cognitive flexibility in the development of post-trauma cognitive processes - an analogue study Journal Article In: Behavioural and Cognitive Psychotherapy, vol. 53, pp. 349–364, 2025. @article{Mcclements2025,Objective: This experimental study investigated whether the trait factors of world assumptions and cognitive flexibility were predictive of levels of attentional bias to threat stimuli, memory integration, and data-driven processing. Methods: An opportunity sample of 74 participants took part in the investigation. Participants viewed a virtual reality film to induce mild distress to mimic processes that can occur in individuals when experiencing a traumatic event. A prospective experimental design was conducted involving measurements at pre-trauma exposure (Time 1), post-exposure (Time 2) and one-week follow-up (Time 3). Self-report measures of world assumptions, cognitive flexibility, and cognitive processing were administered. Eye-tracking equipment was used to assess attentional bias towards threat images, and a free recall task to assess memory integration. Results: A mixed effects linear model found increased cognitive bias towards trauma-related threat images pre/post-exposure, specifically for a maintenance attentional bias. Significantly greater data-driven processing was observed post-exposure, with greater conceptually driven processing observed at one-week follow-up. No significant findings were observed for memory integration. World assumptions were predictive of increased data-driven processing; the relative use of data-driven to conceptually driven processing; and trait anxiety. Cognitive flexibility was predictive of state anxiety.
Conclusion: These results provide additional support for the role of maintained attention, data-driven processing, and conceptually driven processing in post-trauma reactions as per established cognitive theories of post-traumatic stress disorder. More research is required to fully explore the roles of core beliefs, assumptions and cognitive flexibility in this area. |
Emma Krane Mathisen; Nicholas Allott; Camilo R. Ronderos Cognitive mechanisms in simile and metaphor comprehension Journal Article In: Language and Cognition, vol. 17, pp. 1–19, 2025. @article{Mathisen2025,This study investigates whether metaphors and similes are processed the same way or not. Comparison accounts of metaphor claim that metaphors and similes use the same cognitive mechanisms because metaphors are implicit similes, while Categorization accounts claim that the two figures of speech require different cognitive mechanisms. It is unclear which position has the most support. We address this by introducing the distinction between single and extended metaphors to this debate. Several experiments have shown that a metaphor preceded by another metaphor is read faster than a single metaphor. If similes in extended and non-extended contexts display a similar processing difference, this would support views saying that metaphors and similes are processed the same way. If not, it would be more in line with the view that they are processed differently. Using an eye-tracking reading paradigm, we find that the difference between processing single and extended metaphors does not hold in the case of simile comprehension. This is more compatible with Categorization accounts than with Comparison accounts; if the cognitive mechanism behind metaphor and simile processing is the same, we would expect there to be a comparable processing difference between metaphors and similes in the single and extended conditions. |
Andrea Massironi; Carlotta Lega; Luca Ronconi; Emanuela Bricolo Statistical learning re-shapes the center-surround inhibition of the visuo-spatial attentional focus Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–18, 2025. @article{Massironi2025,To effectively navigate a crowded and dynamic visual world, our neurocognitive system possesses the remarkable ability to extract and learn its statistical regularities to implicitly guide the allocation of spatial attention resources in the immediate future. The way through which we deploy attention in the visual space has been consistently outlined by a "center-surround inhibition" pattern, wherein a ring of sustained inhibition is projected around the center of the attentional focus to optimize the signal–noise ratio between goal-relevant targets and interfering distractors. While it has been observed that experience-dependent mechanisms could disrupt the inhibitory ring, whether statistical learning of spatial contingencies has an effect on such a surround inhibition and – if any – through which exact mechanisms it unravels are hitherto unexplored questions. Therefore, in a visual search psychophysical experiment, we aimed to fill this gap by entirely mapping the visuo-spatial attentional profile, asking subjects (N = 26) to detect and report the gap orientation of a 'C' letter appearing either as a color singleton (Baseline Condition) or as a non-salient probe (Probe Condition) – among other irrelevant objects – at progressively increasing probe-to-singleton distances. Critically, we manipulated the color singleton spatial contingency so as to make it appear more frequently adjacent to the probe, specifically at a spatial distance where attending the color singleton generates surround-inhibition on the probe, hindering attentional performance.
Results showed that statistical learning markedly reshaped the attentional focus, transforming the center-surround inhibition profile into a non-linear gradient one through a performance gain over the high-probability probe-to-singleton distance. Notably, such reshaping was uneven in time and asymmetric, as it varied across blocks and specifically appeared only within manipulated visual quadrants, leaving the unmanipulated ones unaltered. Our findings offer insights of theoretical interest in understanding how environmental regularities orchestrate the way we allocate attention in space through plastic re-weighting of spatial priority maps. Additionally, going beyond the physical dimension, our data provide interesting implications about how visual information is coded within working memory representations, especially under scenarios of heightened uncertainty. |
Rotem Mairon; Ohad Ben-Shahar Stimulus center bias persists irrespective of its position on the display Journal Article In: Journal of Eye Movement Research, vol. 18, no. 6, pp. 1–17, 2025. @article{Mairon2025a,Since the earliest studies on human eye movements, it has been repeatedly demonstrated that observers fixate the center of visual stimuli more than their periphery, regardless of visual content. Subsequent research suggested little effect of typical biases in experimental setups, such as the observer's position relative to the screen or the relative location of the cue marker. While comparative studies of the screen center vs. stimulus center revealed that both conspire in the process, much of the prior art is still confounded by experimental details that leave the origins of the center bias debatable. We thus propose methodological novelties to rigorously test the effect of the stimulus center, isolated from other factors. In particular, eye movements were tracked in a free-viewing experiment in which stimuli were presented at a wide range of horizontal displacements from a counterbalanced cue marker in a wide visual field. Stimuli spanned diverse natural scene images to allow inherent biases to surface in the pooled data. Various analyses of the first few fixations show a robust bias toward the center of the stimulus, independent of its position on the display, but affected by its distance to the cue marker. Center bias is thus a tangible phenomenon related to the stimulus. |
Lauren Luther; Rebecca F. Mathis; William R. Keller; Robert W. Buchanan; James M. Gold; James I. Koenig; Gregory P. Strauss Aberrant visual attention is associated with social judgements of attractiveness and negative symptoms in schizophrenia Journal Article In: Schizophrenia Research, vol. 286, pp. 1–8, 2025. @article{Luther2025,Accurate perception of facial attractiveness supports normative social motivation and approach behaviors, in part via its association with endogenous oxytocin levels. Individuals with schizophrenia (SZ) display impaired social functioning that is associated with endogenous oxytocin levels. However, it is unclear whether judgements of facial attractiveness and the attentional processes that support them contribute to social abnormalities in SZ. The current study examined whether judgements of facial attractiveness and gaze behavior were associated with negative symptoms, social functioning, and oxytocin. Forty-one individuals with SZ and 23 healthy controls (CN) rated male and female facial stimuli for levels of attractiveness while gaze behavior was recorded via eye-tracking. Fixation count and gaze duration in facial regions of interest were used to evaluate facial scanning behavior. Plasma oxytocin concentrations were derived via radioimmunoassay. CN and SZ did not significantly differ on perceptions of facial attractiveness; however, SZ displayed an aberrant visual scan pattern characterized by reduced attention to salient facial features on both male and female faces. Further, this aberrant scanning pattern was associated with greater negative symptoms and reduced social functioning in SZ. Oxytocin was not associated with attractiveness perceptions or gaze behavior. Findings suggest that negative symptoms and social functioning are associated with diminished judgements of facial attractiveness and corresponding patterns of aberrant gaze behavior. 
Attention training programs focused on increasing gaze to salient facial features may support better social functioning and lower negative symptoms in SZ. |
Marie Loescher; Patrick Haggard; Catherine Tallon-Baudry Interoception vs. exteroception: Cardiac interoception competes with tactile perception, yet also facilitates self-relevance encoding Journal Article In: PNAS, vol. 122, no. 49, pp. 1–12, 2025. @article{Loescher2025,Internal bodily signals, notably the heartbeat, influence our perception of the external world—but the nature of this influence remains unclear. Different frameworks, originating in opposing views of the function of interoception, have developed largely in parallel. One line of evidence (Internal/External Competition) indicates that interoceptive and exteroceptive inputs compete for neural resources. Another line (Self-related Facilitation) shows a link between interoceptive and self-related processing, which might include computing the self-relevance of exteroceptive inputs. We contrasted these accounts within a single experimental task for which they yielded distinct predictions. We measured heartbeat-evoked potentials (HEPs, a measure of cardiac interoception) with electroencephalogram and manipulated the self-relevance of an audio-tactile stimulus by placing the audio source either inside or outside the peripersonal space immediately around the body. On the one hand, prestimulus HEP amplitudes over the somatosensory cortex were linked to slower reaction times and affected audio-tactile stimulus-evoked responses in the same area, indicating competition for shared neural resources. On the other hand, prestimulus HEPs over integrative sensorimotor and default-mode network regions facilitated stimulus self-relevance encoding, both in reaction times and audio-tactile evoked responses. Importantly, Competition and Facilitation effects were spatially and statistically independent from each other. 
We therefore reconcile the two views by showing the coexistence of two independent mechanisms: one that allocates neural resources to either internal bodily signals or the external world, and another by which interoception and exteroception are combined to determine the self-relevance of external signals. Our results highlight the multidimensionality of HEPs and of internal states more generally. |
Tina T. Liu; Michael C. Granovetter; Anne Margarette S. Maallo; Sophia Robert; Jason Z. Fu; Christina Patterson; David C. Plaut; Marlene Behrmann Cross-sectional and longitudinal changes in category selectivity in visual cortex following pediatric cortical resection Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–18, 2025. @article{Liu2025p,The topographic organization of category-selective responses in human ventral occipitotemporal cortex (VOTC) and its relationship to regions subserving language functions is remarkably uniform across individuals. This arrangement is thought to result from the clustering of neurons responding to similar inputs, constrained by intrinsic architecture and tuned by experience. We examine the malleability of this organization in individuals with unilateral resection of VOTC during childhood for the management of drug-resistant epilepsy. In cross-sectional and longitudinal functional imaging studies, we compare the topography and neural representations of 17 category-selective regions in individuals with a VOTC resection, a ‘control patient' with a resection outside VOTC, and typically developing matched controls. We demonstrate both adherence to and deviation from the standard topography, particularly with respect to the hemispheric lateralization of category-selective regions, and uncover fine-grained competitive dynamics between word- and face-selectivity over time in the single, preserved VOTC. The findings elucidate the nature and extent of cortical plasticity and highlight the potential for remodeling of extrastriate architecture and function. |
Shun Liu; Wenpeng Hu; Xiqin Liu Different effects of verbal and visual working memory loads on language prediction Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Liu2025o,Mounting studies suggest that working memory (WM) plays a crucial role in language prediction, but how varying types of WM loads influence language prediction remains unclear. This study investigated whether verbal and visual WM loads differentially impact language predictions during speech comprehension. Using a dual-task paradigm combined with eye-tracking in a visual world setting, we asked 48 participants to complete a sentence comprehension task under concurrent WM load conditions. Participants were divided into two groups, one of which performed a visual dots memory task and the other completed a visual words memory task, with memory load being applied in half of the trials. Results revealed anticipatory gaze towards target objects, suggesting the prediction of upcoming linguistic information. Notably, early fixations during the tonal cue window indicated tonal prediction in spoken sentence processing. Furthermore, WM load significantly disrupted participants' language prediction effects, highlighting the involvement of working memory resources in this process. Importantly, the verbal memory task imposed a more severe disruption to language prediction than the visual memory task, suggesting differential roles of WM subtypes in linguistic prediction. This offers novel insights into how verbal WM and visual-spatial WM differentially influence predictive language processing. |
Guoyang Liu; Yueyuan Zheng; Michelle Hei Lam Tsang; Yazhou Zhao; Janet H. Hsiao Understanding the role of eye movement pattern and consistency during face recognition through EEG decoding Journal Article In: npj Science of Learning, vol. 10, no. 1, pp. 1–13, 2025. @article{Liu2025i,Eye movement patterns and consistency during face recognition are both associated with recognition performance. We examined whether they reflect different mechanisms through EEG decoding. Eighty-four participants performed an old-new face recognition task with eye movement pattern and consistency quantified using eye movement analysis with hidden Markov models (EMHMM). Temporal dynamics of neural representation quality for face recognition were assessed through decoding old vs new faces using a support vector machine classifier. Results showed that a more eye-focused pattern was associated with higher decoding accuracy in the high-alpha band, reflecting better neural representation quality. In contrast, higher eye movement consistency was associated with shorter latency of peak decoding accuracy in the high-alpha band, which suggested more efficient neural representation development, in addition to higher ERP decoding accuracy. Thus, eye movement patterns are associated with neural representation effectiveness, whereas eye movement consistency reflects neural representation development efficiency, unraveling different aspects of cognitive processes. |
Chunyu Liang; Yili Chen; Yongyun Zhu; Yangfan Zhu; Jieyu Chen; Chenxi Liu; Fang Wang; Xinglong Yang Construction of a mild cognitive impairment prediction model for Parkinson's disease patients on the basis of multimodal data Journal Article In: npj Parkinson's Disease, vol. 11, no. 1, pp. 1–13, 2025. @article{Liang2025b,This research aimed to establish a model predicting mild cognitive impairment in Parkinson's disease patients (PDMCI) by integrating multimodal indicators. We prospectively collected general demographic data, clinical scales, gait parameters, eye tracking parameters, and neuroimaging parameters from 50 PDMCI patients, 50 Parkinson's disease patients with normal cognition (PDNCs), and 20 healthy controls (HCs). Support Vector Machine (SVM) classifiers and nested cross-validation were used to evaluate 31 feature combinations. Results demonstrated that the combination of clinical, gait, eye tracking, Diffusion Tensor Image Analysis along the Perivascular Space (DTI-ALPS), and Global Functional Connectivity Density (gFCD) features achieved an average accuracy of 0.9135 and an average area under the curve of 0.9602 on the test dataset. Notably, the combination of eye tracking and gait features also showed superior performance. These findings indicate that multimodal data integrated with machine learning (ML) can effectively distinguish between PDMCI and PDNC patients. |
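The Liang et al. entry above evaluates multimodal feature combinations with SVM classifiers and nested cross-validation. As a rough illustration of the nested scheme only — not the authors' pipeline — here is a minimal pure-Python sketch in which the classifier, fold counts, and candidate parameters are all placeholder assumptions:

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        test_set = set(test)
        train = [i for i in range(n) if i not in test_set]
        yield train, test
        start += size

def nested_cv_accuracy(X, y, fit, candidate_params, outer_k=5, inner_k=3):
    """Outer loop estimates generalization; inner loop picks hyperparameters.

    fit(X_train, y_train, param) must return a callable model(x) -> label.
    """
    outer_scores = []
    for tr, te in k_fold_indices(len(X), outer_k):
        # Inner loop: choose the parameter with best mean inner-fold accuracy,
        # using only the outer-training split.
        best_p, best_s = None, -1.0
        for p in candidate_params:
            scores = []
            for itr, ite in k_fold_indices(len(tr), inner_k):
                model = fit([X[tr[i]] for i in itr], [y[tr[i]] for i in itr], p)
                scores.append(sum(model(X[tr[i]]) == y[tr[i]] for i in ite) / len(ite))
            s = sum(scores) / len(scores)
            if s > best_s:
                best_p, best_s = p, s
        # Refit on the full outer-training split with the chosen parameter,
        # then score once on the held-out outer fold.
        model = fit([X[i] for i in tr], [y[i] for i in tr], best_p)
        outer_scores.append(sum(model(X[i]) == y[i] for i in te) / len(te))
    return sum(outer_scores) / len(outer_scores)
```

Because the inner loop never sees the outer test fold, the outer-fold accuracy stays an honest estimate of generalization — the property that motivates nested over plain cross-validation when hyperparameters are tuned.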
Siwei Li; Jingwen Chen; Cong Zhang; Shiming Tang; Yang Xie; Liping Wang Flexible use of limited resources for sequence working memory in macaque prefrontal cortex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–18, 2025. @article{Li2025m,Our brain is remarkably limited in how many items it can hold simultaneously, but it can also represent unbounded novel items through generalization. How the brain rationally uses limited resources in working memory (WM) remains unexplored. We investigated mechanisms of WM resource allocation using calcium imaging and electrophysiological recording in the prefrontal cortex of monkeys performing sequence WM (SWM) tasks. We found that changes in the neural representation of SWM, including geometry, generalizable and separate rank subspaces, reflected WM load. SWM resources, represented by neurons' signal strength and spatial tuning projected onto each rank subspace, were shared flexibly between ranks. Crucially, the prefrontal cortex dynamically utilized shared tuning neurons to ensure generalization, while engaging disjoint and spatially shifted neurons to minimize interference, thus achieving a trade-off between behavioral and neural costs within capacity. The allocated resources can predict monkeys' behavior. Thus, the geometry of compositionality underlies the flexible use of limited resources in SWM. |
Hongxiao Li; Jiashen Li; Xin Hao; Wei Liu Behavioral and eye-tracking investigation of event segmentation following short video watching Journal Article In: npj Science of Learning, vol. 10, no. 1, pp. 1–15, 2025. @article{Li2025c,The proliferation of short-video platforms prompts critical investigation of their effects on human cognitive functions. We hypothesized that the frequent, user-driven content shifts inherent to short-video watching impair event segmentation, a cognitive process critical for continuous memory encoding. Combining behavioral, eye-tracking, and self-report data, we revealed that acute exposure to randomly selected short videos was associated with poorer memory for continuous movies, particularly in participants with more frequent daily short-video viewing. This effect was absent after viewing personalized short videos and did not apply to static image encoding tasks. Intersubject correlation analysis of eye movements revealed that random short video watching attenuated eye synchronization at event boundaries. Furthermore, Hidden Markov Model analysis indicated that personalized and random short videos induced qualitatively different latent event structures. These findings indicate that the algorithmic curation of content, not merely the short-video format, is a crucial factor shaping event segmentation and subsequent memory. |
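The Li et al. entry above uses intersubject correlation (ISC) of eye movements to index gaze synchronization across viewers. In its simplest form, ISC is the mean pairwise Pearson correlation of equal-length gaze traces; a minimal sketch under that assumption (the paper's exact computation may differ):

```python
from math import sqrt
from itertools import combinations

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

def intersubject_correlation(gaze_by_subject):
    """Mean pairwise Pearson correlation of gaze traces across subjects.

    gaze_by_subject: one equal-length 1-D trace per subject, e.g. horizontal
    gaze position sampled per video frame.
    """
    pairs = list(combinations(range(len(gaze_by_subject)), 2))
    return sum(pearson(gaze_by_subject[i], gaze_by_subject[j])
               for i, j in pairs) / len(pairs)
```

Computed in a sliding window around event boundaries, a drop in this quantity would correspond to the attenuated eye synchronization the abstract describes.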
Bao Li; Li Tong; Chi Zhang; Panpan Chen; Long Cao; Hui Gao; Zi Ya Yu; Lin Yuan Wang; Bin Yan An fMRI dataset on occluded image interpretation for human amodal completion research Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–10, 2025. @article{Li2025b,In everyday environments, partially occluded objects are more common than fully visible ones. Despite their visual incompleteness, the human brain can reconstruct these objects to form coherent perceptual representations, a phenomenon referred to as amodal completion. However, current computer vision systems still struggle to accurately infer the hidden portions of occluded objects. While the neural mechanisms underlying amodal completion have been partially explored, existing findings often lack consistency, likely due to limited sample sizes and varied stimulus materials. To address these gaps, we introduce a novel fMRI dataset, the Occluded Image Interpretation Dataset (OIID), which captures human perception during image interpretation under different levels of occlusion. This dataset includes fMRI responses and behavioral data from 65 participants. The OIID enables researchers to identify the brain regions involved in processing occluded images and examine individual differences in functional responses. Our work contributes to a deeper understanding of how the human brain interprets incomplete visual information and offers valuable insights for advancing both theoretical research and related practical applications in amodal completion fields. |
Jad Laaboudi; Anne Hillairet de Boisferon; Céline Paeye Pre-saccadic attention (and not arousal) modulates the size-eccentricity effect Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Laaboudi2025,Peripherally located objects are often perceived to be smaller than centrally located objects. This perceptual phenomenon, known as the Size-Eccentricity Effect (SEE), is mainly due to the structural properties of the visual system and is further modulated by covert attention. In this study, we evaluated whether pre-saccadic attention could also compensate for this effect. Participants performed a judgment task where they compared a test disk of varying size, briefly presented in peripheral vision, to a reference disk appearing 450 ms later in foveal vision. When no saccade was made towards the location of the test disk, the SEE was observed. However, when participants initiated saccades about 200 ms after the test disk extinction, points of subjective equality were close to objective equality. The second experiment aimed at excluding an explanation involving non-specific arousal mechanisms, also known to enhance visual perception. Participants executed a keypress or an antisaccade instead of a saccade. The SEE disappeared only in the saccade condition, confirming the crucial role of pre-saccadic attention shifts in this SEE compensation. Therefore, pre-saccadic attention improves not only the processing of orientation, contrast and spatial frequency (as previously demonstrated), but also the processing of peripheral object size. |
Yuna Kwak; Nina M. Hanning; Marisa Carrasco Saccade direction modulates the temporal dynamics of presaccadic attention Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–15, 2025. @article{Kwak2025,Presaccadic attention enhances visual perception at the upcoming saccade target location. While this enhancement is often described as obligatory and temporally stereotyped, recent studies indicate that its strength varies depending on saccade direction. Here, we investigated whether the time course of presaccadic attention also differs across saccade directions. Participants performed a two-alternative forced-choice orientation discrimination task during saccade preparation. Tilt angles were individually titrated in a fixation baseline condition to equate task difficulty across the upper and lower vertical meridians. Sensitivity was then assessed at different time points relative to saccade onset and cue onset, allowing us to characterize the temporal dynamics of attentional enhancement. We found that presaccadic attention built up faster and reached higher levels preceding downward than upward saccades. Linear model fits revealed significant slope differences but no differences in intercepts, suggesting that the observed asymmetries reflect differences in attentional deployment during saccade preparation rather than preexisting differences in sensitivity. Saccade parameters did not account for these asymmetries. Our findings demonstrate that the temporal dynamics of presaccadic attention vary with saccade direction, which may be a potential mechanism underlying previously observed differences in presaccadic benefit at the upper and lower vertical meridians. This temporal flexibility challenges the view of a uniform presaccadic attention mechanism and suggests that presaccadic attentional deployment is shaped by movement goals. Our results provide new insights into how the visual and oculomotor systems coordinate under direction-specific demands. |
Jan W. Kurzawski; Brenda S. Qiu; Najib J. Majaj; Noah C. Benson; Denis G. Pelli; Jonathan Winawer Human V4 size predicts crowding distance Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–11, 2025. @article{Kurzawski2025,Visual recognition is limited by both object size (acuity) and spacing. The spacing limit, called “crowding”, is the failure to recognize an object in the presence of other objects. Here, we take advantage of individual differences in crowding to investigate its biological basis. Crowding distance, the minimum object spacing needed for recognition, varies 2-fold among healthy adults. We test the conjecture that this variation in psychophysical crowding distance is due to variation in cortical map size. To test this, we make paired measurements of brain and behavior in 49 observers. We use psychophysics to measure crowding distance and calculate λ, the number of letters that fit into each observer's visual field without crowding. In the same observers, we use functional magnetic resonance imaging (fMRI) to measure the surface area A of retinotopic maps V1, V2, V3, and V4. Across observers, λ is proportional to the surface area of V4 but is uncorrelated with the surface area of V1 to V3. The proportional relationship of λ to area of V4 indicates conservation of cortical crowding distance across individuals: letters can be recognized if they are spaced by at least 1.4 mm on the V4 map, irrespective of map size and psychophysical crowding distance. We conclude that the size of V4 predicts the spacing limit of visual perception. |
Sharif I. Kronemer; Victoria E. Gobo; Shruti Japee; Elisha P. Merriam; Benjamin Osborne; Peter A. Bandettini; Tina T. Liu Eye metrics often reflect visual conscious awareness, conscious content, and neural processing in cerebral blindness Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1724, 2025. @article{Kronemer2025a,Cerebral blindness is caused by damage to the primary visual pathway. Some people with cerebral blindness retain degraded vision and non-visual sensations and can perform visually guided behaviors within their blind visual field. These cases raise questions about visual conscious perception and residual neural processing in cerebral blindness. A major challenge in this research is that subjective reporting on experiences in the blind field can be unreliable. Alternatively, eye metrics offer a promising objective marker of conscious awareness, conscious content, and brain activity. In this study, we recorded visual stimulus-evoked pupil size, blink, and microsaccade responses in neurotypical participants and both the sighted and blind fields of cerebrally blind participants. For most patients, we found that eye metrics inferred conscious awareness in the blind field. Also, pupil size responded to both real and illusory stimulus luminance in the sighted field but not in the blind field. Furthermore, eye metrics were linked to visual stimulus-evoked occipital cortical field potentials in the blind field, suggesting residual cortical processing. These findings support eye metrics as an indicator of visual conscious perception and neural processing in cerebral blindness, with potential applications for tracking vision recovery following damage to the primary visual pathway. |
Diana Kollenda; Anna Sophia Reher; Benjamin de Haas Individual gaze predicts individual scene descriptions Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–9, 2025. @article{Kollenda2025,Do different people looking at the same scene perceive individual versions of what's in front of them? If perception is individual, which mechanisms mediate our particular view of the world? Recent findings have shown systematic observer differences in gaze, but it is unclear whether individual fixation biases translate to divergent impressions of the same scene. Here, we find systematic differences in the scene descriptions individual observers provide for identical complex scenes. Crucially, observer differences in fixation patterns predicted pairwise differences in scene descriptions, particularly the use of nouns, even for out-of-sample images. Part of this could be explained by the individual tendency to fixate text and people predicting corresponding description references. Our results strongly suggest that subjective scene perception is shaped by individual gaze. |
Jamie Koerner; Erin Zou; Jessica A. Karl; Cynthia Poon; Leo Verhagen Metman; Charles G. Sodini; Vivienne Sze; Fabian J. David; Thomas Heldt In: npj Parkinson's Disease, vol. 11, no. 1, 2025. @article{Koerner2025,Early detection and monitoring of Parkinson's disease (PD) remain challenging, highlighting the need for accessible, cost-effective tools. Saccadic eye movement abnormalities are promising noninvasive biomarkers for PD screening and monitoring. Here, we present an iPad-based system that uses a deep learning algorithm to extract saccade metrics and validate these metrics against the clinical-grade EyeLink 1000 Plus. Twenty-five participants (10 with PD, 15 controls) completed pro-saccade, anti-saccade, memory-guided-saccade, and self-generated-saccade tasks. Relative to the EyeLink, the iPad system achieved average subject-level errors of 2 ms for latency and 0.7° for amplitude in pro-, anti-, and memory-guided saccades, and 0.003 s⁻¹ for inter-saccadic rate and 1.6° for amplitude in self-generated saccades. A review of 22 studies on PD-related saccadic impairments established benchmarks for clinically meaningful effects. The iPad-based system meets or exceeds these benchmarks, supporting its use as a scalable and cost-effective tool for screening and monitoring PD. |
Dirk Kerzel Electrophysiological evidence for the optimal tuning of attention Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Kerzel2025,Optimal tuning of attention refers to shifts in goal-driven attention that increase the difference between the representation of the target and nontarget features. Evidence for optimal tuning comes from studies measuring the memory representation of the target and, to a lesser degree, from studies measuring attentional selectivity. In one study on attentional selectivity, cueing effects were found to be greater for cue colors deviating away from the nontarget color compared to cue colors deviating toward the nontarget color, suggesting that participants' search goal was optimally tuned. To address alternative accounts, we measured event-related potentials (ERPs) elicited by different cue colors at posterior electrodes PO7/PO8. We found that ERPs associated with attentional orienting (N1pc) or selection (N2pc) were larger for cue colors deviating away from the nontarget color, which is consistent with the optimal tuning of attention. In contrast, the results are difficult to reconcile with alternative accounts such as rapid disengagement or object updating. Further, we aimed to evaluate contributions from sensory adaptation by analyzing the Ppc component, a lateralized ERP in the P1 time range. Two control conditions, however, suggested that the Ppc was more likely driven by imbalanced saliency than sensory adaptation. |
Krista R. Kelly; Mina Nouradanesh; Reed M. Jost; Christina S. Cheng-Patel; Eileen E. Birch; Serena X. Wang; James Y. Tung; Ewa Niechwiej-Szwedo Eye-hand coordination during visually-guided reaching in children with monocular deprivation amblyopia Journal Article In: Vision Research, vol. 237, pp. 1–10, 2025. @article{Kelly2025,Monocular deprivation (MD) amblyopia caused by a dense unilateral congenital or infantile cataract leads to both sensory and ocular motor deficits, which can in turn affect motor performance. Previous research shows reduced fine motor skills in children with MD amblyopia on standardized tasks. Here, we evaluate eye-hand coordination during visually-guided reaching in MD amblyopia. A group of 17 children aged 7–15 years with MD amblyopia resulting from a unilateral cataract and a group of 41 age-similar control children were enrolled. During binocular viewing, children's reaching movements (LEAP Motion Controller) and eye movements (EyeLink 1000 binocular eye tracker) were recorded as they reached to touch a dot displayed at one of four locations (±5 deg or ±10 deg) on a computer monitor. Saccade and reach kinematic measures were assessed between groups, and factors associated with impairments in the MD amblyopia group were evaluated. The MD amblyopia group as a whole had impaired saccade (lower saccade gain, reduced saccade precision, more reach-related saccades) and reach (longer total reach duration, slower peak velocity, reduced touch accuracy) kinematics compared to controls. However, performance was worse in those with a poorer visual acuity outcome (≥0.7 logMAR) compared to good visual acuity outcome (≤0.6 logMAR). MD amblyopia impacts the development of eye-hand coordination during reaching, particularly in those with a poorer visual acuity outcome. 
Longer deceleration in the final approach and more reach-related saccades may suggest an inability to adapt or form an efficient compensatory strategy and may also be indicative of impaired on-line control. |
Srijita Karmakar; Miguel P. Eckstein The psychophysics of dynamic gaze-following saccades during search Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–33, 2025. @article{Karmakar2025,The ability to quickly and precisely follow another person's gaze reflects critical evolutionary mechanisms underlying social interactions, such as attention modulation and the prediction of others' future actions. Recent studies show that observers use another person's gaze direction and peripheral scene information to make anticipatory saccades toward the gaze goal. However, it remains unclear how these eye movements are influenced by complex features of natural scenes, such as a foveal gazer, multiple peripheral gaze goals, and the relative distance between gazer and goal. We presented dynamic stimuli (videos) of real-world scenes with or without a gazer shifting their head to gaze at other individuals (gaze goals). Participants were instructed to search for a specific target individual in the videos while their eye movements were recorded. We measured the accuracy of the first saccade in locating the gaze goal. First, we found that the absence of a foveal gazer significantly increased saccade error, but only when the goal was at least approximately 9 degrees of visual angle from the initial fixation. First saccade amplitude and onset latency were higher in the gazer-present condition. Second, when there were multiple potential gaze goals in the periphery, the first saccade was directed to the individual closer to the initial fixation (gazer) location. Finally, the presence of multiple peripheral gaze goals shortened saccade latencies and increased the frequency of anticipatory saccades made before the gazer completed their head movement. These findings extend our understanding of gaze following in complex, naturalistic scenes and inform theories of attention and real-world decision-making. |
Maryam Nouri Kadijani; Theda Backen; Kaustubh Manchanda; Sandeep K. Mody; Stefan Treue; Julio C. Martinez-Trujillo Bilateral field advantage of spatial attention in macaque lateral prefrontal cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 37, no. 12, pp. 2430–2444, 2025. @article{Kadijani2025,Allocating visual attention to behaviorally relevant stimuli is easier when distractors are in the opposite visual hemifield relative to when they are in the same hemifield. The neural mechanisms underlying this bilateral field advantage remain unclear. We documented this effect in two macaques performing a covert spatial attention task in two different conditions: when the target and distractor were positioned in different hemifields (across condition), and when they were positioned on the top and bottom quadrants within the same visual hemifield (within condition). The animals' behavioral performance at detecting a change in the attended stimulus was higher in the across relative to the within condition. We recorded the responses of lateral prefrontal cortex (LPFC, area 8A) neurons in one animal. The proportion of LPFC neurons encoding the allocation of attention was larger in the across relative to the within condition. The latter was accompanied by an increase in the ability of single neurons to discriminate the allocation of attention in the across relative to the within condition. Finally, we used linear classifiers to decode the allocation of attention from the activity of neuronal ensembles and found a similar bilateral field advantage in decoding performance in the across relative to the within condition that generalized to different integration time windows and number of neurons used by the classifier. Our findings provide a neural correlate of the bilateral field advantage reported in behavioral studies of attention and suggest a role for the LPFC circuitry in its origin. |
Nathalie John; Sebastian P. Korinth; Mareike Kunter; Franziska Baier-Mosch Gaze cluster analysis reveals heterogeneity in attention allocation and predicts learning outcomes Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{John2025,Instructional videos need to maintain learners' attention to foster learning, therefore, a fine-grained measurement of attention is required. Existing gaze measures like inter-subject correlation (ISC) assume a singular focal point deemed meaningful for indicating attention. We argue that multiple meaningful foci can exist and propose an automatically generated gaze measure labeled gaze cluster membership (GCM). By applying the density-based clustering in spatial databases (DBSCAN) algorithm to gaze position data from over 100 participants, we categorize viewers as attentive when they are part of a cluster and as inattentive when they are not. Using two videos, we demonstrate that our settings of DBSCAN generate meaningful clusters. We show that low ISC values (neuronal and eye tracking data) during multiple meaningful foci do not necessarily indicate a lack of attention. Additionally, GCM predicts participants' self-reported mental effort and their tested knowledge. Our innovative approach is of high value for assessing learner attention and designing instructional videos. |
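The gaze cluster membership (GCM) measure in the John et al. entry above rests on DBSCAN, which labels points in low-density regions as noise rather than forcing them into a cluster. A minimal pure-Python sketch of that idea — the `eps`/`min_pts` values and function names here are illustrative assumptions, not the paper's settings:

```python
from math import hypot

def dbscan_labels(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster id per point, or -1 for noise.

    points: list of (x, y) gaze positions; eps: neighborhood radius in the
    same units (e.g., pixels); min_pts: density threshold for a core point.
    """
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if hypot(xi - xj, yi - yj) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cluster         # i is a core point: seed a new cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:     # noise reachable from a core point: border
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:  # j is itself a core point: keep expanding
                queue.extend(jn)
        cluster += 1
    return labels

def gaze_cluster_membership(points, eps, min_pts):
    """GCM-style flag per viewer: True if their gaze falls inside any cluster."""
    return [lab != -1 for lab in dbscan_labels(points, eps, min_pts)]
```

Applied per video frame to one gaze sample per viewer, a viewer whose point receives label -1 would be flagged inattentive under this scheme, while multiple dense clusters can coexist — the property that distinguishes GCM from single-focus measures like ISC.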
Akram Jamali; Tourandokht Baluchnejadmojarad; Hajar Mehdizadeh; Seyede Zohreh Jazaeri; Soheila Fallah; Ghorban Taghizadeh Impact of post stroke fatigue on saccadic eye movement control and learning through inflammatory mechanisms Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Jamali2025,This study aimed to (1) examine the impact of post-stroke fatigue (PSF) on saccadic control, (2) assess the effect of PSF on saccade adaptation, and (3) explore the correlation between serum levels of interleukin-6 (IL-6) and high-sensitivity C-reactive protein (hsCRP) with saccade control and adaptation in chronic stroke survivors. Fatigue was assessed in stroke survivors with high fatigue (HF-stroke |
Liat Israeli-Ran; Tamar Kadosh Laor; Florina Uzefovsky Emotion regulation dynamics in empathy in young children Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–13, 2025. @article{IsraeliRan2025,The capacity to empathize plays a pivotal role in most forms of social interaction, contributing significantly to adaptive social behavior. Empathy entails experiencing others' emotions, making the ability to regulate one's emotional reactions to both positive and negative emotions of others crucial for effective empathy. Both empathy and emotion regulation are capacities that develop within the context of parenting, yet the dynamics of this process are not well understood. Moreover, while there has been considerable research on empathy towards others' distress, there is less understanding of how people regulate their emotions in response to the positive emotions of others. This lack of knowledge is particularly pronounced in childhood. To address these gaps, our study focused on young children (, years and 8 months, months, female), observing their attention patterns through eye-tracking as they watched video clips designed to elicit empathic responses, both positive and negative. Additionally, we collected mothers' reports on the children's behavioral symptoms. Our findings revealed a decline in the children's attention to the face over time. However, this decline was slower in situations eliciting positive empathy and faster in those eliciting negative empathy. Interestingly, this pattern varied with the children's behavioral problems. Specifically, children with higher internalizing problems maintained their attention in positive empathy situations, whereas those with medium to high levels of externalizing symptoms initially showed a decline, followed by an increase in attention to other's negative emotional expressions. 
These results indicate that individual differences in behavioral issues are linked to distinct approaches to regulating emotions in empathic contexts. |
Cenlou Hu; Ziwen Luo; Sai Huang; Bao Zhang Coarse matching was sufficient to capture attention by working memory representations unless matching features with the target Journal Article In: BMC Psychology, vol. 13, no. 1, pp. 1–11, 2025. @article{Hu2025,Background: In most theoretical frameworks, the effectiveness of attentional selection relies significantly on the perceptual similarity between the target template and visual input. Nevertheless, ambiguity exists surrounding whether attentional capture triggered by irrelevant representations in Working Memory (WM) is influenced by the perceptual similarity levels of features between WM content and its matching distractors. Methods: We designed a hybrid WM and visual search task, varying such perceptual similarity of colors across three levels: exact, high-similar, and low-similar matching. To quantify the extent of the capture effect, we compared these conditions against a neutral baseline (i.e., completely different color) using eye movement and behavioral data in two experiments. Results: We consistently observed robust attentional capture effects across two experiments, evident in both eye movement indices and manual reaction times. In Experiment 1, where WM representations solely matched features to visual search distractors (task-irrelevant scenario), we found that changes in perceptual similarity did not influence attentional capture. Conversely, in Experiment 2, where WM representations had the potential to match the visual search target (task-relevant scenario), we observed a significantly more robust attentional capture effect for high-similar matching compared to low-similar matching conditions. Conclusions: These findings imply that coarse matching between distractors and WM contents is sufficient to capture attention, unless the matching features potentially correspond to the visual target. 
Furthermore, task relevance sharpens perceptual sensitivity to visual input, highlighting distinct mechanisms underlying attentional capture by irrelevant representations and target templates within WM. |
Alexandra Hibble; Hannah Smithson; Paul Azzopardi Visual motion thresholds mapped to midget and parasol ganglion cell topography in the human retina Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–13, 2025. @article{Hibble2025,Motion in visual images can be described in terms of changes in phases of Fourier components (phase cues), or displacements in the position of specific features (position cues) over time. Human observers are able to perceive motion using both cues, where perceived direction of motion is biased in favour of phase cues at higher spatial and temporal frequencies, and in favour of position cues at lower spatial and temporal frequencies. This suggests the existence of separable mechanisms for processing phase and position cues. We propose that these mechanisms receive separate inputs from the parasol (magnocellular) and midget (parvocellular) retinal ganglion cells. Using two-frame apparent motion Gabor stimuli that isolated phase and position cues, we measured displacement thresholds for motion direction discrimination across the visual field (from 0 to 15 degrees eccentricity) for 7 observers. Thresholds for positional displacements decreased significantly more steeply with eccentricity than those for phase displacements, mirroring precisely the decline with increasing eccentricity of the linear densities of the midget and parasol retinal ganglion cell populations respectively. These results suggest that the magnocellular and parvocellular visual pathways could constitute separable neural substrates for first-order (Fourier) and third-order (feature-tracking) motion perception. |
Dorottya Hetenyi; Joost Haarsma; Peter Kok Contents of visual predictions oscillate at alpha frequencies Journal Article In: The Journal of Neuroscience, vol. 45, no. 49, pp. 1–12, 2025. @article{Hetenyi2025,Predictions of future events have a major impact on how we process sensory signals. However, it remains unclear how the brain keeps predictions online in anticipation of future inputs. Here, we combined magnetoencephalography (MEG) and multivariate decoding techniques to investigate the content of perceptual predictions and their frequency characteristics. Thirty-two participants (23 female) were engaged in a shape discrimination task, while auditory cues predicted which specific shape would likely appear. Frequency analysis revealed significant oscillatory fluctuations of predicted shape representations in the pre-stimulus window in the alpha band (10–11 Hz). Furthermore, we found that this stimulus-specific alpha power was linked to expectation effects on shape discrimination behavior. Our findings demonstrate that sensory predictions are embedded in pre-stimulus alpha oscillations and modulate subsequent perceptual performance, providing a neural mechanism through which the brain deploys perceptual predictions. |
Jason Helbing; Dejan Draschkow; Melissa L. H. Võ Incidental encoding of objects during search is stronger than intentional memorization due to increased recollection rather than familiarity Journal Article In: Journal of Cognitive Neuroscience, vol. 37, no. 12, pp. 2538–2557, 2025. @article{Helbing2025,Most memory is not formed deliberately but as a by-product of natural behavior. These incidental representations, when generated during visual search, can be stronger than intentionally memorized content (search superiority effect). However, it is unknown if the search superiority effect is purely quantitative (stronger memory) or also driven by differences in the degrees of recollection and familiarity, two hallmark processes supporting recognition memory. Here, we use signal detection modeling, introspective judgments, event-related EEG potentials, and eye tracking measures to answer this question. In a preregistered study, 30 participants searched for objects in scenes and intentionally memorized others before completing a surprise recognition memory test. Behavioral data from remember-know judgments and receiver operating characteristics indicate that search targets were more often recollected compared with intentionally memorized objects, whereas the two tasks did not lead to differences in familiarity. Surprisingly, the neural signatures did not fully align with the behavioral findings regarding recollection and familiarity. That is, both search targets and intentionally memorized objects elicited a more positive-going mid-frontal negativity peaking at around 400 msec post stimulus onset (FN400), which is associated with familiarity, as well as a more positive-going parietal late component (LPC), indicative of recollection. Both components showed no differences between tasks, indicating equal contributions of recollection and familiarity to remembering searched and memorized objects. 
Furthermore, the LPC was, as expected, sensitive to differences between recollected and familiar objects when these were intentionally memorized, but it was not affected by these differences for searched objects. Overall, our findings indicate that search superiority relies predominantly on increased recollection. The fact that established neural markers of recollection (LPC) behaved as anticipated for intentionally memorized objects but carried no predictive power for incidentally memorized objects implies that memories established in more ecologically valid tasks might involve neural processes different from those activated in commonly used settings that are more reductionist. |
Seyed-Reza Hashemirad; Mojtaba Abbaszadeh; Ali Ghazizadeh Prefrontal cortex temporally multiplexes slow and fast dynamics in value learning and memory Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–17, 2025. @article{Hashemirad2025,Balancing stability and flexibility is a fundamental challenge in value-based learning: how does the brain maintain long-term value memories while adapting to new environmental contingencies? To address this, we propose a reinforcement learning model composed of two distinct processes with fast and slow dynamics for updating and forgetting object values. Using a combined theoretical and experimental approach in male macaque monkeys, we validate a key behavioral prediction of this two-rate system—spontaneous recovery of prior value memories following value reversal. At the neural level, we show that single neurons in the ventrolateral prefrontal cortex (vlPFC) temporally multiplex these dynamics, with distinct firing components reflecting fast and slow learning processes. Together, these findings suggest that reward learning and memory are supported by a two-rate system that enables both flexibility and stability, and identify the vlPFC as a critical neural substrate for this mechanism. |
Zirui Gu; Christian N. L. Olivers; Mieke Donk Distinguishing a central selection bias from a central fixation bias: The role of retinal eccentricity in visual selection Journal Article In: Vision Research, vol. 237, pp. 1–12, 2025. @article{Gu2025c,Earlier work has shown that the eyes preferably select stimuli that are presented close to central fixation over stimuli presented further away, suggesting the existence of a central selection bias. However, so far studies have confounded retinal eccentricity with distance from the center of a display, and the observed effects may thus have been driven by what is known as the central fixation bias, which is the preference for items near the center of a display rather than the center of the retina. This study aimed to dissociate the central selection bias from the central fixation bias, and to uncover its time course. In two experiments, participants were instructed to make a single eye movement to one of two simultaneously presented singletons. The singletons were always presented at the same distance from the center of the display (thus controlling for the central fixation bias) but their eccentricity relative to the initial fixation point was varied (thus allowing for a central selection bias to operate). When the two singletons were displayed at different eccentricities, participants preferred selecting the nearest item. This central selection bias occurred rapidly and transiently, peaking around 230 ms and lasting until approximately 320 ms after display onset. Together, these results suggest that retinal eccentricity is a major factor when multiple objects compete for selection. |
Charlotte Grosse Wiesmann; Katrin Rothmaler; Esra Hasan; Kathrine Habdank; Chen Yang; Emanuela Yeung; Victoria Southgate The self-reference memory bias is preceded by an other-reference bias in infancy Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–8, 2025. @article{GrosseWiesmann2025,One of the most established biases in human memory is that we remember information better when it refers to ourselves. We investigated the development of this self-reference effect and its relationship with the emergence of a self-concept. We presented 18-month-old infants with objects that were assigned either to them, or to another agent. Infants were then tested on their memory for the objects by presenting them with an image of each object, alongside a modified version of it. Mirror self-recognition served as an index of self-concept emergence. Infants who recognize themselves in the mirror remember objects assigned to themselves better than those assigned to the other. In contrast, non-self-recognizers only remember the objects assigned to the other rather than themselves. This difference is not explained by differences in infants' age or inhibitory abilities. This suggests that the self-reference effect emerges with the development of self-concept in the second year. Prior to the emergence of a self-concept, however, infants instead seem to exhibit an other-reference effect. This reversal of the classic self-reference effect suggests that early in life, when infants are heavily reliant on others for information, they may be biased towards encoding the world as it relates to others. |
Marius Grandjean; Louise Kauffmann; Alexia Roux-Sibilon; Valérie Goffaux Does radial bias contribute to fast saccades toward faces in the periphery? Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–22, 2025. @article{Grandjean2025,Saccadic choice studies have shown that humans initiate faster saccades toward faces than other visual categories. Here, we tested whether the saccadic advantage for faces observed in past studies is partly due to stimuli being typically presented along the horizontal meridian (HM). Our previous work suggests that the radial bias along the HM facilitates access to the horizontal structure of faces, which optimally drives human face-specialized processing. We expected to corroborate the saccadic advantage for faces along the HM, where the radial bias facilitates access to horizontal content, and to observe a reduction of this advantage along the vertical meridian (VM), especially in participants showing a strong horizontal tuning for face recognition. Fifty participants performed a saccadic choice task targeting faces or vehicles presented at 15° eccentricity along the HM and VM. We also assessed the strength of the radial bias and the horizontal tuning for face identity recognition in each individual. As expected, saccades were faster and more accurate toward faces than vehicles; they were also faster along the HM than the VM. Contrary to our hypothesis, the saccadic face advantage did not differ between meridians, suggesting the robustness of face saccadic advantage. However, the saccadic face advantage along the VM correlated with the strength of the horizontal tuning of face identity recognition. Additionally, the radial bias predicted saccade latency toward faces along the HM. These findings indicate that low-level radial biases and high-level face-specialized mechanisms independently contribute to distinct functional aspects of the ultra-fast saccadic responses toward faces. |
Luise P. Graichen; Magdalena S. Linder; Lars Keuter; Ole Jensen; Christian F. Doeller; Claus Lamm; Tobias Staudigl; Isabella C. Wagner Entorhinal grid-like codes for visual space during memory formation Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–15, 2025. @article{Graichen2025,Eye movements, such as saccades, allow us to gather information about the environment and, in this way, can shape memory. In non-human primates, saccades are associated with the activity of grid cells in the entorhinal cortex. Grid cells are essential for spatial navigation, but whether saccade-based grid-like signals play a role in human memory formation is currently unclear. Here, human participants undergo functional magnetic resonance imaging and continuous eye gaze monitoring while studying scene images. Recognition memory is probed immediately thereafter. Results reveal saccade-based grid-like codes in the left entorhinal cortex that are specific to later remembered trials during study, a finding that we replicate with an independent data set. The grid-related effects are time-locked to activation increases in the frontal eye fields. Unexpectedly, lower saccade-based grid-like codes are associated with better subsequent recognition memory performance. Our findings suggest an entorhinal map of visual space that is timed with neural activity in oculomotor regions, and negatively associated with subsequent memory. |
Matthias Grabenhorst; David Poeppel; Georgios Michalareas Neural signatures of temporal anticipation in human cortex represent event probability density Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–17, 2025. @article{Grabenhorst2025,Temporal prediction is a fundamental function of neural systems. Recent results show that humans anticipate future events by calculating probability density functions, rather than hazard rates. However, direct neural evidence for this hypothesized mechanism is lacking. We recorded neural activity using magnetoencephalography as participants anticipated auditory and visual events distributed in time. We show that temporal anticipation, measured as reaction times, approximates the event probability density function, but not hazard rate. Temporal anticipation manifests as spatiotemporally patterned activity in three anatomically and functionally distinct parieto-temporal and sensorimotor cortical areas. Each of these areas revealed a marked neural signature of anticipation: Prior to sensory cues, activity in a specific frequency range of neural oscillations, spanning alpha and beta ranges, encodes the event probability density function. These neural signals predicted reaction times to imminent sensory cues. These results demonstrate that supra-modal representations of probability density across cortex underlie the anticipation of future events. |
Dongyu Gong; Dejan Draschkow; Anna C. Nobre Focusing attention in working and long-term memory through dissociable mechanisms Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–14, 2025. @article{Gong2025a,We developed an experimental approach to compare how attentional orienting facilitates retrieval from spatial working memory (WM) and long-term memory (LTM), and how selective attention within these two memory types impacts incoming sensory information processing. In three experiments with healthy young adults, retrospective attention cues prioritize an item represented in WM or LTM. Participants then retrieve a memory item or perform a perceptual task. The retrocue is informative for the retrieval task but not for the perceptual task. We show that attentional orienting benefits performance for both WM and LTM, with stronger effects for WM. Eye-tracking reveals significant gaze shifts and microsaccades correlated with attention in WM, but no statistically significant gaze biases were found for LTM. Visual discrimination of unrelated visual stimuli is consistently improved for items matching attended WM locations. Similar effects occur at LTM locations but less consistently. The findings suggest at least partly dissociable attention-orienting processes for different memory types. Although our conclusions are necessarily constrained to the type of WM and LTM representations relevant to our task, they suggest that, under certain conditions, attentional prioritization in LTM can operate independently from WM. Future research should explore whether similar dissociations extend to non-spatial or more complex forms of LTM. |
Jessica N. Goetz; Mark B. Neider MATCH: A toolbox to assess the primary color of real-world objects and generate color-matching stimuli Journal Article In: Behavior Research Methods, vol. 57, no. 12, pp. 1–22, 2025. @article{Goetz2025,Real-world stimuli can be difficult to manipulate and control in experimental psychology studies. Color information is frequently used as a variable, and researchers often rely on subjective color labels that imprecisely describe the color information within real-world objects. Here, we describe a new toolbox called MATCH (Matching And Transforming Closely Hued objects) that can easily and objectively quantify and manipulate color information within real-world objects to generate object pairs that match in color. MATCH was designed incorporating theoretical frameworks and conceptual understanding from visual cognition research. Additionally, MATCH provides critical information on the distribution of color and the specific color values of any stimulus set. We also present two experimental studies to validate whether MATCH produces images that are consistent with human visual perception. In the first study, we provide evidence that the stimuli generated by MATCH are perceptually closer in color to a reference object compared to human categorization of object–color pairs. In the second study, we investigated the search for real-world objects with distractors generated by MATCH that matched the target object's color. We found patterns of data that are consistent with current theories of human search behavior. In summary, MATCH allows researchers to carefully control the color of real-world stimuli used in their studies. |
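The core operation MATCH performs — quantifying an object's primary color and ranking candidate objects by color similarity — can be sketched in miniature. This is not the MATCH toolbox itself: the mean-RGB summary, the Euclidean distance, and the toy pixel data are simplifying assumptions (a perceptual color space such as CIELAB would be closer to what a color-matching toolbox presumably uses).

```python
# Hedged sketch (not the MATCH toolbox): summarize an object's primary color
# as its mean RGB, then rank candidate objects by distance to that color.
import numpy as np

def primary_color(pixels):
    """Mean RGB over an object's pixels; pixels: (n, 3) array in [0, 255]."""
    return np.asarray(pixels, dtype=float).mean(axis=0)

def rank_by_color_match(target_pixels, candidates):
    """Sort candidate object names by Euclidean RGB distance to the target.

    candidates: dict mapping name -> (n, 3) pixel array. Plain RGB distance
    is an illustrative simplification of perceptual color difference.
    """
    ref = primary_color(target_pixels)
    dists = {name: float(np.linalg.norm(primary_color(px) - ref))
             for name, px in candidates.items()}
    return sorted(dists, key=dists.get)

target = np.full((100, 3), [200, 30, 30])        # a predominantly red object
cands = {"tomato": np.full((80, 3), [190, 40, 35]),
         "lime":   np.full((80, 3), [60, 200, 60])}
print(rank_by_color_match(target, cands))  # ['tomato', 'lime']
```

Selecting the top-ranked candidate yields a color-matched distractor for a search display, which is the kind of stimulus-pair generation the abstract describes.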
Jessica Galli; Marika Vezzoli; Erika Loi; Serena Micheletti; Anna Molinaro; Lucia Tagliavento; Stefano Calza; Alexander N. Sokolov; Marina A. Pavlova; Elisa Fazzi Alterations in looking at face-pareidolia images in autism Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Galli2025,Face tuning is vital for adaptive and effective social cognition and interaction. This capability is impaired in a wide range of mental conditions including autism spectrum disorder (ASD). Yet the origins of this deficit are largely unknown. Here, an eye-tracking methodology had been implemented in adolescents with high-functioning ASD and in typically developing (TD) matched controls while administering a face-pareidolia task. The spatial distributions of eye fixation in five regions of interest [face, eyes, mouth, CFA (complementary face area, a face area beyond eyes and mouth) and non-face area (a screen area outside a face)] were recorded during spontaneous recognition of a set of Arcimboldo-like Face-n-Food images presented in a predetermined order from the least to most resembling a face. Individuals with ASD gave significantly fewer face responses and looked more often at the mouth, CFA, and non-face areas. By contrast, TD controls mostly fixated the face and eyes areas. The atypical visual scanning strategies could, at least partly, account for the lower face tuning in ASD, supporting the eye avoidance hypothesis, according to which ASD individuals concentrate less on the eyes because the eyes represent a source of emotional information that may make them feel uncomfortable. |
Bachman P. Fulmer; Gregory J. Gerard In: International Journal of Accounting Information Systems, vol. 56, pp. 1–16, 2025. @article{Fulmer2025,The widespread availability of digital financial statements across different platforms presents challenges related to potentially misdirecting visual cues and inconsistent terminology. This study employs a quasi-experimental design to analyze the influence of misdirecting visual cues and alternative terminology on attention during information search behavior, while also examining how accounting domain knowledge moderates these effects. All participants demonstrated more efficient search behavior over time, but the effect was moderated by accounting domain knowledge. Those with high accounting domain knowledge showed significantly greater performance improvements, underscoring the role of domain knowledge in the search process. Participants with high accounting domain knowledge searched more efficiently and adapted better to irrelevant cues and inconsistent terminology, illustrating the advantage of a cognitive schema even in a basic tabular search task. |
Davide Frattini; Mariagrazia Benassi; Tobias Wibble; Mattias Nilsson; Roberto Bolzani; Tony Pansell Temporal visual processing deficits in post concussion syndrome Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Frattini2025,Post-concussive (PCS) motion hypersensitivity represents a common sequela of mild traumatic brain injury. This study investigated whether PCS alters visual temporal resolution thresholds in psychophysical measures that sustain motion detection. Fifteen PCS patients and fifteen age-matched controls underwent critical flicker fusion (CFF) threshold assessments across visual-field eccentricities. A Generalized linear mixed model tested group differences in CFF thresholds, treating eccentricity as a repeated factor and including CFF variability as a covariate. Pupil measurements and catch trials controlled for fatigue and alertness. Nonparametric correlations assessed relationships among time from injury, symptom severity, and CFF measures. Results showed CFF variability heightening CFF thresholds in the PCS group to a significantly larger extent compared to controls. Absence of significant CFF variability differences between groups, and modulation by eccentricity, suggests perceptual noise more strongly influences the overall visual temporal sensitivity in PCS. Days since injury negatively correlated with variability, indicating compensatory stabilization of temporal sensitivity over time. Symptom severity did not correlate with CFF measures. In conclusion, PCS motion hypersensitivity may reflect disturbances in visual temporal processing parameters, potentially involving altered internal neural noise. Although some recalibration occurs post-injury, persistent abnormalities underscore the need for further research into early, clinical interventions targeting perceptual noise. |
Johannes B. Finke; Anna M. Schippers; Tim Klucken Intra-individual comparison of appetitive trace and delay conditioning in humans across acquisition and extinction Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Finke2025a,Temporal contiguity between conditioned (CS) and unconditioned stimuli (US) is a crucial factor in Pavlovian learning, yet little is known about its role in appetitive conditioning and extinction. In a within-subject design, 60 participants underwent both a delay (DC) and trace conditioning (TC) session with partial reinforcement (75%) by monetary rewards (US) and varying interval between CS offset and US onset (DC: 0s; TC: 4s). In addition to self-report indices (reward expectancy, arousal, valence), psychophysiological markers (pupil dilation, heart-period and startle reflex modulation) were recorded during acquisition and extinction training. For most measures, significant differential conditioned responses emerged, irrespective of temporal contiguity, with no major differences observed between TC and DC during acquisition (except for potentially diminished startle attenuation in TC). Despite overall similar patterns in conditioned responding (with small to moderate effects on physiological measures), there was no intraindividual concordance between sessions, yet evidence for differential TC effects on extinction learning. Specifically, smaller reductions in differential reward expectancy, heart-period deceleration and startle modulation after extinction in TC suggested relatively diminished extinction learning. Conditioned pupil dilation (0–2 s after CS onset) remained comparatively stable. Taken together, our findings extend evidence of differences in underlying learning mechanisms between TC and DC to the context of reward learning. |
Mariana Ferreira; João Pedro Marques; Miguel Raimundo; Hugo Quental; Miguel Castelo-Branco Improvements induced by retinal gene therapy with voretigene neparvovec depend on visual cortical hemispheric dominance mechanisms Journal Article In: Communications Medicine, vol. 5, no. 1, pp. 1–9, 2025. @article{Ferreira2025,Background: RPE65-associated retinal degeneration (RPE65-RD) causes severe visual deficits. Gene therapy with AAV2-hRPE65v2 is a breakthrough but it is currently unknown which visual pathways benefit from treatment and if cortical mechanisms can amplify retinal improvements. Methods: In this within-subject design, ten patients with biallelic RPE65-RD underwent sub-retinal injection of AAV2-hRPE65v2. Psychophysical full-field stimulus threshold determination and functional magnetic resonance imaging were performed before and 12 months after treatment. Population receptive fields (pRF) were computed in V1 and visual responses assessed using contrast-reversed checkerboards (3 contrast levels). Results: Here we show significant improvement in light sensitivity at low-luminance and neural response enhancements under low-luminance conditions specifically in the right hemisphere, which is known to show dominance in attentional and visual pooling of spatial information. Changes in pRF size also reflect known hemispheric spatial asymmetries (left/right biased for local/global analysis, respectively). Conclusions: Our findings show a contribution of known early and high-level cortical dominance mechanisms on improvement, which constrain the effects of therapy and are therefore a target for neurorehabilitation. These findings provide insight into the limits of clinical benefits of gene therapy and suggest that neurorehabilitation approaches may be needed to enhance improvements, similarly to cochlear implants. |
Tingting Feng; Yun Zhang; Wenhao Han; Xiaoling Luo; Yifei Han; Wenjie Wei; Hong Qu; Shenbing Kuang; Tao Zhang; Yi Zhang Hierarchical and distinct biological motion processing in macaque visual cortical areas MT and MST Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–14, 2025. @article{Feng2025,It is widely accepted that biological motion (BM) perception involves the posterior superior temporal sulcus (pSTS). Yet, how individual neurons and neural circuits in pSTS encode BM remains unclear. Here we combined electrophysiological recordings with neural network modeling to elucidate BM computations in two subregions of pSTS. We recorded single-cell activity from the middle temporal area (MT) and the medial superior temporal area (MST) of three macaque monkeys when they viewed point-light displays portraying BM walking in different directions (left vs. right), orientations (upright vs. inverted), and forms (intact vs. scrambled). We found that, while individual neurons in both MT and MST showed selectivity for these features, neural populations in MST but not MT exhibited BM-specific encoding, i.e., preferential representation of intact upright BM—the defining characteristic of BM recognition. A neural network model trained to replicate these neurophysiological findings implied that BM-specific encoding in MST may arise from feedforward connectivity patterns, i.e., MT subpopulations selective for linear translational motion and nonlinear optic flow projected preferentially to distinct MST cells. Taken together, our findings highlight hierarchical and distinct BM processing in MT and MST, advancing our understanding of BM computations in pSTS at the single-cell and neural circuit levels in the primate brain. |
Esmaeil Farhang; Ramin Toosi; Behnam Karami; Roxana Koushki; Narges Kheirkhah; Farideh Shakerian; Jalaledin Noroozi; Ehsan Rezayat; Abdol Hossein Vahabie; Mohammad Reza A. Dehaqani The impact of spatial frequency on hierarchical category representation in macaque temporal cortex Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–15, 2025. @article{Farhang2025,Objects are recognized in three hierarchical levels: superordinate, mid-level, and subordinate. Psychophysics shows that mid-level categories and low spatial frequency (LSF) information are rapidly recognized. However, the interaction between spatial frequency (SF) and abstraction is not well understood. To address this, we examine neural responses in the inferior temporal cortex and superior temporal sulcus of two male macaque monkeys. Our findings reveal that mid-level categories are well represented at both LSF and high SF (HSF), suggesting robust mid-level boundary maps in these areas, unaffected by SF changes. Conversely, superordinate category representation depends on HSF, indicating its crucial role in encoding global category information. The absence of subordinate representation in both LSF and HSF compared to intact stimuli further implies that full SF content is essential for fine-category processing. A supporting human psychophysics task confirms that superordinate categorization relies on HSF, while subordinate object recognition requires both LSF and HSF. |
Rania Ezzo; Bogeng Song; Bas Rokers; Marisa Carrasco Eyes on hold: Motion task difficulty jointly delays microsaccade and pupil responses Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–16, 2025. @article{Ezzo2025,Microsaccades and pupil dynamics exhibit canonical temporal profiles, providing insights into perceptual and cognitive processes. Microsaccades are typically suppressed with respect to expected stimulus onset and followed by a rebound to baseline rates. Here, we investigated whether and how the temporal dynamics of microsaccades and pupil dilation vary with task difficulty for a motion perception task. We hypothesized that difficulty jointly delays the rebound of microsaccade rates and the time of peak pupil dilation when discriminating motion direction. Human observers discriminated motion direction (clockwise or counterclockwise) in a briefly presented perifoveal drifting stimulus, which varied according to two 'easy' vs 'hard' difficulty manipulations – cardinal vs oblique motion directions, and large vs small tilt offsets from the discriminated direction. We found that (1) increased task difficulty strengthened and prolonged microsaccade inhibition, resulting in delayed rebounds, (2) peak pupillary responses were both larger in amplitude and delayed for more difficult conditions, and (3) discrimination response time correlated with microsaccade rebounds and peak pupillary responses. We conclude that the delays in these microsaccade rebound and pupil responses are due to a prolonged period of sensory evidence accumulation, and that their correlated temporal dynamics support a shared neural mechanism underlying both pupil and microsaccade responses. |
Léa Entzmann; Árni Kristjánsson; Árni Gunnar Ásgeirsson Saccade endpoints reflect attentional templates in visual search: Evidence from feature distribution learning Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–19, 2025. @article{Entzmann2025a,In visual search, our gaze is guided by mental representations of stimulus features, known as attentional templates. These templates are thought to be probabilistic, shaped by environmental regularities. For example, participants can learn to distinguish between the shapes of different distractor color distributions in visual search. The present study assessed whether such subtle differences in distractor color distributions (Gaussian vs. uniform) are reflected in saccade endpoints. We conducted two experiments, each consisting of learning trials, designed to prime a specific distractor color distribution, and test trials, where target color varied in its distance from the mean of previously presented distractor distributions. Saccade endpoint deviations were observed through the global effect, where the saccades tended to land between two nearby stimuli. The experiments differed in difficulty, with test trials in Experiment 2 involving more distractors and colors. During test trials, reaction times and saccade endpoints were affected by target distance from the mean of the preceding distractor distribution. The farther the target color was from this mean, the less the saccade deviated from the target and the lower the reaction times. However, saccade endpoints did not reflect the shape of distractor color distributions, an effect that was observed only on reaction times in Experiment 2. Overall, color priming affects both reaction times and saccade deviations, but distractor feature distribution learning depends on search difficulty and response measures, with saccade endpoints less sensitive to subtle differences in the shape of color distributions. |
Karen Emmorey; Emily M. Akers; Emily Saunders; Marzieh Bannazadeh; Elizabeth Droubi; Frances G. Cooley; Elizabeth R. Schotter Assessing the effects of sign language experience versus deafness on the leftward reading span Journal Article In: Cognitive Science, vol. 49, no. 12, pp. 1–21, 2025. @article{Emmorey2025,Both deafness and sign language experience impact the distribution of visual attention, and either factor could affect reading span size, the area around fixation from which useful information is obtained. In contrast to the typical asymmetrical span (smaller on the left), deaf signers have a larger leftward span than skill-matched hearing readers. We investigated whether this enhanced span is due to changes in visual attention associated with early deafness or sign language experience (right-handed signs fall in the left periphery). A gaze-contingent moving-window paradigm was used to assess the leftward reading span of hearing early signers, deaf early signers, and hearing nonsigners with similar reading abilities. The size of the leftward span for deaf and hearing signers was the same (10 characters) and was larger than that of hearing nonsigners (4 characters). Thus, sign language experience appears to be at least one source of the larger leftward span in deaf signers. However, deaf signers were more efficient readers than both hearing groups (faster reading rate, more skipped words, fewer regressions), suggesting that their greater reading efficiency does not stem solely from a larger leftward span. |
Katharina Duecker; Kimron L. Shapiro; Simon Hanslmayr; Benjamin J. Griffiths; Yali Pan; Jeremy M. Wolfe; Ole Jensen Guided visual search is associated with target boosting and distractor suppression in early visual cortex Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–11, 2025. @article{Duecker2025,Visual attention paradigms have revealed that neural excitability in higher-order visual areas is modulated according to a priority map guiding attention towards task-relevant locations. Neural activity in early visual regions, however, has been argued to be modulated based on bottom-up salience. Here, we combined Magnetoencephalography (MEG) and Rapid Invisible Frequency Tagging (RIFT) in a classic visual search paradigm to study feature-guidance in early human visual cortex. Our results demonstrate evidence for both target boosting and distractor suppression when the participants were informed about the task-relevant and -irrelevant colour (guided search) compared to when they were not (unguided search). These results were conceptually replicated using both a magnitude-squared coherence approach and a General Linear Model based on a single-trial measure of the RIFT response. The present findings reveal that feature-guidance in visual search affects neuronal excitability as early as primary visual cortex, possibly contributing to a priority-map-based mechanism. |
Bernd T. Douze; Antonia F. Ten Brink; H. Chris Dijkerman; Christoph Strauch Pupil responses objectively index pharmacologically altered tactile sensitivity Journal Article In: Cortex, vol. 193, pp. 90–104, 2025. @article{Douze2025,Tactile perception is a subjective experience, yet it can be physiologically quantified. This offers new avenues for studying sensory processing in contexts where verbal feedback is limited or unreliable. A growing body of research uses changes in pupil size, showing that stronger tactile stimuli lead to greater pupil dilation. Building on this, we investigated whether pupil responses could serve as an objective measure of tactile sensitivity. To explore this, we pharmacologically manipulated tactile sensitivity in healthy participants (n = 32). In separate sessions, an anaesthetic cream or a placebo cream was applied to one forearm. At the beginning and/or end of each session, Von Frey assessments and a vibrotactile detection task were conducted to confirm the efficacy of the anaesthetic cream in reducing tactile sensitivity. During each session, pupil responses to vibrotactile stimuli applied to both the cream and non-cream arms were recorded. Our results confirmed that the anaesthetic cream significantly reduced the perceived intensity of tactile stimulation, an effect that persisted throughout the session. Crucially, we observed weaker pupil dilation responses to vibrotactile stimuli applied to the anaesthetised arm compared to the placebo or non-cream arm. Exploratory analyses showed that participants for whom the anaesthetic cream was most effective in reducing tactile sensitivity also showed the weakest pupil responses when the anaesthetised arm was stimulated. Overall, these findings demonstrate that the pupil response is a reliable and objective index of tactile sensitivity, highlighting its potential for studying sensory processing in populations where verbal feedback is limited or unreliable. |
Temenuzhka Dimova; Nicolas Lapique; Raphael Rosenberg Brief glance, lasting effect: How pointing gestures influence the perception of paintings Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, pp. 1–21, 2025. @article{Dimova2025,Pointing fingers are among the most common hand gestures in early modern painting. Art historians have long assumed that they directly guide the viewer's gaze toward key elements. However, this assumption has never been tested empirically. To investigate how depicted pointing gestures influence eye movements, we digitally removed the pointed forefingers from 15 paintings (16th–17th century) and compared the perception of these edited versions with the originals. While pointing fingers attract few direct fixations, their fleeting perception significantly reshapes how other elements are viewed. Semantically related areas—such as the targets of pointing gestures and the faces of pointing characters—receive increased direct attention. Also, our novel areas-of-interest dwell-time correlation method reveals that viewers establish different semantic connections between characters and objects when pointing gestures are present. In the second phase of the study, open-ended interviews revealed that interpretations of the original paintings differ from those of the edited versions. These results underscore the central role of depicted pointing gestures. They reshape the narrative connections between elements, ultimately leading to different interpretations. Among all the artworks we analyzed, Caravaggio's and Raphael's compositions were the most affected by the removal of pointing gestures. This confirms that the effectiveness of pointing gestures in art also depends on individual artistic approaches and the artist's mastery in guiding the viewer's attention. |
Thibault J. Desbordes; Nadia Alahyane; Alain Guillaume Task-relevant information availability shapes eye movements and perceptual judgment confidence Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–16, 2025. @article{Desbordes2025,Humans continuously decide where to look to gather task-relevant information. While affective rewards such as money are known to bias gaze direction, it remains unclear whether non-affective informational value can similarly shape oculomotor decisions. Here, we modulated the availability of task-relevant visual information at saccade targets by probabilistically varying its presentation duration, in a perceptual judgment task performed by human participants. Results showed that participants developed implicit biases, increasingly avoiding an experimentally introduced low-information region. These learned preferences were associated with longer saccade latencies toward non-preferred regions, similar to patterns observed with affective reward learning. However, saccade peak velocity remained unchanged across locations. Perceptual accuracy was not influenced either. When participants' confidence ratings reliably distinguished correct from incorrect responses, confidence was higher for preferred regions, suggesting a dissociation between perceptual and metacognitive performance. These findings demonstrate that the probability of accessing easily usable information can be implicitly learned to guide eye movement decisions, much like reward. Moreover, subjective confidence can be linked to learned preferences, without modulation of perceptual performance. Our results highlight that informational value, independent of affective cues, shapes oculomotor decision-making and post-perceptual judgment confidence. |
Mario Dalmaso; Anna Lorenzoni; Giovanni Galfano; Marta Riva; Luigi Castelli Uncovering everyday attention in the lab: Front-viewed heads boost overt social orienting Journal Article In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–12, 2025. @article{Dalmaso2025a,Social attention can be defined as the tendency to orient attentional resources in response to spatial cues provided by others, such as their gaze or head direction. This mechanism is essential for navigating real-world environments, where rapidly and accurately interpreting others' behaviour is often critical. Regarding head-driven orienting, research studies suggest that social attention can be enhanced when a front-facing head cue establishes eye contact (vs. no eye contact) with the observer, but also when the head cue is viewed from behind (vs. from the front), and hence, eye contact cannot be established. Across three experiments, we directly compared these two scenarios—which are highly common in everyday life—by presenting a central head cue showing either the front (establishing eye contact) or back, followed by a turn to the left or right. In Experiments 1 and 2, participants were required to manually respond to peripheral targets while ignoring the head cue, whereas in Experiment 3, oculomotor responses were recorded. Although the initial view of the head did not affect manual responses, eye movement data revealed enhanced social attention when the head was initially viewed from the front. These results suggest that eye movements provide a sensitive measure for detecting potential social modulations of attention. Moreover, eye contact confirms here its role as a powerful social signal for humans, capable of boosting overt orienting responses. Future research should explore these effects in more dynamic and ecologically valid settings, such as real social interactions. |
Mrugank Dake; Clayton E. Curtis Perturbing human V1 degrades the fidelity of visual working memory Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–8, 2025. @article{Dake2025,Decades of macaque research established the importance of prefrontal cortex for working memory. Surprisingly, recent human neuroimaging studies demonstrated that the contents of working memory can be decoded from primary visual cortex (V1). However, the necessity of this mnemonic information remains unknown and contentious. Here we provide causal evidence that transcranial magnetic stimulation targeting human V1 disrupted the fidelity of visual working memory. Errors increased only for targets remembered in the portion of the visual field disrupted by stimulation. Moreover, concurrently measured electroencephalography confirmed that stimulation disrupted not only memory behavior, but also neurophysiological signatures of working memory. These results change the question from whether visual cortex is necessary for working memory to what mechanisms it uses to support memory. Moreover, they point to models in which the mechanisms supporting working memory are distributed across brain regions, including sensory areas that here we show are critical for memory storage. |
Andrew W. Corcoran; Arthur Le Coz; Jakob Hohwy; Thomas Andrillon When your heart isn't in it anymore: Cardiac correlates of task disengagement Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–16, 2025. @article{Corcoran2025,Neuroscience is beginning to uncover the role of interoceptive feedback in perception, learning, and decision-making; however, the relation between spontaneous visceral and cognitive dynamics has received surprisingly little scrutiny. Here, we investigate how subjective, physiological, and behavioural indicators of arousal and attentional state vary in relation to ongoing cardiac activity and brain-heart coupling. Electroencephalogram, electrocardiogram, and pupillometric records were obtained from 65 adults during the performance of a sustained attention to response task (SART). Thought probes were intermittently administered during the SART to collect subjective reports of attentional state (on-task, mind-wandering, mind-blanking) and vigilance level (alertness vs. sleepiness). Mind-wandering and mind-blanking reports increased in frequency with time-on-task and were accompanied by decreases in alertness and pupil-linked arousal, but evinced distinct psychophysiological and behavioural profiles: While mind-wandering was associated with greater heart-rate variability and late modulation of the heartbeat-evoked potential, mind-blanking was characterised by more profound decreases in heart-rate, pupil size, and brain-heart coupling. Lower heart-rate predicted decreased vigilance and pupil size, in addition to slower, less-biased responses; increased heart-rate variability predicted more impulsive behaviour and pupil dilation. Together, these findings reveal that cardiac parameters and brain-heart connectivity measures afford complementary information about arousal states and attentional dynamics during task performance. |
Anthony Clément; Catherine Tallon-Baudry In: The Journal of Neuroscience, vol. 45, no. 49, pp. 1–14, 2025. @article{Clement2025,How is spatial attention deployed in mental images? Mental imagery is often assumed to share mechanisms with visual perception and visual working memory. Top-down, endogenous spatial attention in both visual perception and working memory modulates behavior and parieto-occipital alpha-band activity. However, working memory captures only a subset of mental imagery, which can also draw upon long-term memory. Here, we ask whether and how spatial attention operates in mental images derived from general knowledge in long-term memory and whether it recruits the same neural mechanisms as visual perception. We recorded EEG in 28 healthy volunteers (13 males, 15 females) as they performed two discrimination tasks with spatial cues (70% valid): one involving the mental visualization of a long-term memory map (a map of France) and the other using visual stimuli. We show that spatial attention shortens response times in both tasks, but through distinct mechanisms. Behavioral attentional benefits were uncorrelated across tasks, and spatial attention in mental imagery engaged distinct neural mechanisms, with frontal rather than posterior alpha activity modulation. We further reveal fundamental differences in the spatial structures of mental imagery and visual perception. Altogether, our results show that mental images drawn from long-term semantic memory are spatially organized and are amenable to spatial attention deployment, but the underlying neural mechanisms differ from those of visual perception. Our results thus point to marked differences between mental imagery from long-term memory and visual perception. |
Agnieszka Chmiel; Nicoletta Spinolo; Paweł Korpal; Christian Olalla-Soler; Paulina Rozkrut; Marta Kajzer-Wietrzny; Serena Ghiselli The impact of remote interpreting settings on interpreter experience and performance Journal Article In: Translation and Interpreting Studies, vol. 20, no. 2, pp. 212–243, 2025. @article{Chmiel2025,This study investigates the effect of different remote simultaneous interpreting (RSI) settings on interpreter performance, experience, anxiety, and cognitive load. Thirty-six professional English-Polish and Spanish-Italian interpreters performed RSI in three conditions: with a co-located boothmate, a not co-located boothmate communicating via chat, and a boothmate in a virtual booth. Interpreter renditions, questionnaire responses, and eye-tracking data were analyzed. Objective accuracy and self-assessed performance were scored lowest in the not co-located setting, with little difference between the co-located and virtual conditions, suggesting that virtual booths may effectively replicate traditional booths. Unexpectedly, boothmate presence did not affect cognitive load, anxiety, or user experience, demonstrating interpreters' adaptability to diverse RSI setups. Findings also suggest positive attitudes toward technology and high technological competence improve user experience and facilitate more structured visual attention. The study enhances our understanding of RSI and underscores interpreters' ability to navigate visually complex environments. |
Tzu-Yao Chiu; Isabel Jaen; Julie D. Golomb Spatiotemporal predictability of saccades modulates postsaccadic feature interference Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–16, 2025. @article{Chiu2025a,Spatial attention and eye movements jointly contribute to efficient sampling of visual information in the environment, but maintaining precise spatial attention across saccades becomes challenging due to the drastic retinal shifts. Previous studies have provided evidence that spatial attention may remap imperfectly across saccades, incurring systematic feature interference with ongoing perception, yet the role of saccade predictability remains largely untested. In the current study, we investigated whether spatiotemporal predictability of saccades influences postsaccadic remapping and feature perception. In two preregistered experiments, we implemented the postsaccadic feature report paradigm and manipulated spatiotemporal predictability of saccades. Experiment 1 manipulated spatial and temporal saccade predictability together, whereas Experiment 2 dissociated the roles of spatial and temporal predictability in separate conditions. In addition to spatial and temporal saccade predictability both improving general task performance, we found that spatial saccade predictability specifically modulated postsaccadic feature interference. When saccades were spatially unpredictable, "swap errors" occurred at the early postsaccadic time point, where participants misreported the retinotopic color instead of the spatiotopic target color. However, these swap errors were reduced when saccades were made spatially predictable. These results suggest that systematic feature interference associated with postsaccadic remapping is malleable to expectations of the upcoming saccade target location, highlighting the role of predictions in maintaining perceptual stability across saccades. |
Siyi Chen; Si Cheng; Thomas Geyer; Hermann J. Müller; Zhuanghua Shi Distinct hippocampus codes for contextual cueing: Learning contexts and their predictive associations with targets in visual search Journal Article In: NeuroImage, vol. 323, pp. 1–11, 2025. @article{Chen2025k,Humans can learn to exploit repeated distractor arrangements to optimize attentional selection of targets, producing contextual facilitation. The hippocampus is thought to support context representations acquired from repeatedly searching a given scene layout. However, it remains unclear whether the hippocampus primarily encodes context–target associations, in which the distractor layout directly predicts the target location, or whether it additionally encodes associations among distractors, enabling target prioritization indirectly via context suppression. To examine the neural mechanisms of contextual learning, we combined functional magnetic resonance imaging with a two-phase visual search paradigm: Phase 1 presented predictive (repeated) distractor layouts with consistent target locations, affording contextual cueing; Phase 2 rendered these layouts non-predictive by randomizing target locations, fostering context suppression. Contextual facilitation was compared against a baseline of non-repeated arrangements. We found that both context–target (Phase 1) and distractor–distractor (Phase 2) associations were reliably decoded from the hippocampus using correlation-based multi-voxel pattern analysis. A functional dissociation emerged along the hippocampal axis: anterior and posterior hippocampal regions identified in the whole-brain univariate analyses exhibited relatively greater contributions to Phase 1 context-target and Phase 2 distractor–distractor associations, respectively, indicating their stronger involvement in the corresponding memory representations. 
Connectivity modeling showed the temporoparietal junction (TPJ) interacted with the hippocampus in different ways depending on context predictivity. These findings indicate anatomically separable hippocampal circuits represent predictive context–target and non-predictive distractor–distractor relations, with their attentional effects gated by the TPJ. Significance Statement: Although the hippocampus supports the encoding and retrieval of recurrent spatial distractor–target relations in visual search, its distinct roles in representing context–target relations (associating the target location with the configuration of distractors) versus sole-context (distractor–distractor) configurations have not been dissociated. The present study decoded both forms of contextual representation in the hippocampus: the anterior hippocampus preferentially encoded context–target associations, whereas the posterior hippocampus maintained sole-context memory. The signals generated by these distinct hippocampal regions regulate how the target is prioritized for attentional selection, with the temporoparietal junction (TPJ) dynamically adjusting the mode of prioritization in response to the learnt configural structure and how reliably it predicts the target location. |
Lixiang Chen; Radoslaw Martin Cichy; Daniel Kaiser Representational shifts from feedforward to feedback rhythms index phenomenological integration in naturalistic vision Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–5, 2025. @article{Chen2025f,How does the brain integrate complex and dynamic visual inputs into phenomenologically seamless percepts? Previous results demonstrate that when visual inputs are organized coherently across space and time, they are more strongly encoded in feedback-related alpha rhythms, and less strongly in feedforward-related gamma rhythms. Here, we tested whether this representational shift from feedforward to feedback rhythms is linked to the phenomenological experience of coherence. In an Electroencephalography (EEG) study, we manipulated the degree of spatiotemporal coherence by presenting two segments from the same video across visual hemifields, either synchronously or asynchronously (with a delay between segments). We asked participants whether they perceived the stimulus as coherent or incoherent. When stimuli were presented at the perceptual threshold (i.e., when the same stimulus was judged as coherent 50% of times), perception co-varied with stimulus coding across alpha and gamma rhythms: When stimuli were perceived as coherent, they were represented in alpha activity; when stimuli were perceived as incoherent, they were represented in gamma activity. Whether the same visual input is perceived as coherent or incoherent thus depends on representational shifts between feedback-related alpha and feedforward-related gamma rhythms. |
Seah Chang; Julie D. Golomb From the eye to the world: Spatial suppression is primarily coded in retinotopic coordinates but can be learned in spatiotopic coordinates Journal Article In: Psychonomic Bulletin & Review, vol. 32, no. 6, pp. 3009–3024, 2025. @article{Chang2025c,Attention is multifaceted, with evidence for distinct mechanisms of attentional facilitation and suppression processes. Interestingly, much less is known about the spatial coordinate system of suppression compared to that of facilitation. The present study examined the coordinate system of spatial suppression by manipulating gaze position and distractor regularities, asking whether suppression is coded in retinotopic (eye-centered) and/or spatiotopic (world-centered) coordinates, and if this varies with more ecological and dynamic contexts. In the current study, we demonstrate that learned spatial suppression primarily transfers across gaze position in retinotopic coordinates; however, in more dynamic contexts favoring spatiotopic information, spatial suppression can be learned in spatiotopic coordinates. These results suggest that the default coordinate system of spatial suppression is retinotopic under static contexts, but suppression can be rapidly learned in spatiotopic coordinates when a spatiotopic representation is beneficial in more naturalistic dynamic contexts. |
Jiří Čeněk; Daniela Halámková; Jan Caha; David Lacko; Petra Kalenská; Zdeněk Stachoň; Jie Li Tsai; Albert Ahenkan; Thomas Dresler; Jana Lüdtke; Nicol Dostálová; Alžběta Šašinková; Pavel Ugwitz; Čeněk Šašinka Cross-cultural analysis of eye-movement patterns in visual scene perception: A comparison of seven cultural samples Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Cenek2025,This eye-tracking research investigates cross-cultural similarities and differences in visual attention during free viewing of complex scenes. The study utilizes 70 real-world photos with one or two focal objects as stimulus materials. The study examines the amount of time spent on focal objects, saccadic lengths, temporal changes in saccadic lengths, and factors that influence these metrics. Data were collected between 2020 and 2022 from seven cultural samples in Africa, East Asia, Europe, and the Near East (N = 408). Contrary to initial hypotheses, the findings challenge the expected order of countries in terms of attention toward objects. Participants from Taiwan, assumed to exhibit holistic patterns, displayed the most holistic viewing pattern. Surprisingly, participants from Germany and Czechia did not significantly differ from those in Taiwan. Furthermore, participants from Ghana and Türkiye, expected to be moderate, showed the most analytic pattern. This challenges preconceived notions and contributes to understanding patterns of scene perception in underrepresented countries. Additional analyses explored the relationship between the number and size of focal objects and dwell time, as well as the potential influence of sociodemographic variables on dwell time. |
Chloe Brittenham; Antoinette Sabatino DiCriscio; Vanessa Troiani; Yirui Hu; Jennifer B. Wagner Task-evoked pupil responses during free-viewing of hierarchical figures in relation to autistic traits in adults Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–15, 2025. @article{Brittenham2025,Sensory processing differences, particularly within the visual domain, are common in neurodevelopmental conditions, including autism. Studies examining hierarchical processing of figures containing global (i.e., gist) and local (i.e., detail) elements are inconsistent but converge on a common theme in relation to autism: slowed global processing and a locally-oriented default. We examined behavioral and pupillary responses in adults with varying levels of autistic traits during a free-viewing hierarchical processing task. Results showed that participants were both more likely and faster to report global elements, but contrary to our hypothesis, differences in level of autistic traits were unrelated to spontaneous reporting of global vs. local elements. When examining phase-based analysis of pupillary responses, participants high on autistic traits showed more early and less later constriction within trials. Further, trajectory-based pupillary analysis revealed two trajectories, one characterized by constriction and the other dilation, and results showed that the dilation group disproportionately included low traits individuals. Findings suggest that although high and low traits groups showed similar behavioral responses, visual strategies used may differ, as indicated by pupillometry. This study advances our understanding of the relationship between autistic traits and visual processing, laying groundwork for further investigations into neurodivergent visual processing mechanisms. |
Fabian Breuer; Anne Sophie Hildebrand; Johannes B. Finke; Leandra Bucher; Udo Dannlowski; Tim Klucken; Kati Roesmann; Elisabeth Johanna Leehr Antisaccade performance in spider phobia and its association with multimodal correlates of fear Journal Article In: Journal of Anxiety Disorders, vol. 116, pp. 1–9, 2025. @article{Breuer2025,Introduction: This study explored inhibitory control in spider phobic (SP) and healthy control (HC) individuals using an emotional antisaccade task. Attentional control theory (ACT) suggests anxiety-related deficits in inhibitory control, yet studies on antisaccade performance in anxiety-disordered patients are sparse. This study addressed this research gap and additionally aimed to explore putative associations of antisaccade performance with multimodal measures of fear of spiders. Methods: A sample of 76 participants (41 SP, 35 HC) completed an antisaccade task employing schematic pictures of spiders and flowers. We measured antisaccade latencies and error rates. In a free-viewing task, we obtained psychophysiological and subjective fear responses to pictures of spiders. Self-rated fear of spiders was assessed via questionnaires, and avoidance behavior was assessed in a behavioral avoidance test. Results: Contrary to ACT predictions, SP exhibited shorter antisaccade latencies irrespective of stimulus category, indexing more efficient inhibitory control, while showing no differences in antisaccade error rates when compared to HC. Consistent with prior findings, SP participants showed elevated psychophysiological responding, fear ratings, and avoidance behavior. No significant associations emerged between inhibitory control performance and these measures of fear. Discussion: Our findings suggest enhanced inhibitory control efficiency in SP compared to HC, contrasting with the impairments predicted by ACT and observed in subclinical anxiety.
These findings may indicate a compensatory adaptation in anxiety disorders, enabling rapid attentional avoidance of threat. Our results also imply that inhibitory control may be differentially affected across various anxiety disorders, depending on their predisposition towards fear or anxiety, while also being independent from diverse measures of fear and anxiety. |
Schea Fissel Brannick; Arianna N. LaCroix Blinking indexes dynamic attending during and after music listening Journal Article In: Scientific Reports, vol. 15, no. 1, 2025. @article{Brannick2025,Music's rhythmic and acoustic structure can shape how attention unfolds over time, but little is known about how music listening influences the temporal dynamics of attention. This study examined whether blinking, a marker linked to attention, entrains to acoustic features of music, and whether this entrainment predicts changes in attention post-listening. Fifty-seven middle-aged and older adults listened to either high dynamic (fast, perceptually complex), low dynamic (slow, perceptually stable), or no music for 10 min before and after completing the Attention Network Test (ANT). Blink probabilities were analyzed in relation to perceptual dynamics during music listening (spectral novelty) and at task-relevant timepoints during the ANT. Spectral novelty in the music predicted non-linear fluctuations in blinking, with high dynamic music eliciting early blink–music coupling and low dynamic music producing delayed, later-stage entrainment. After listening, alerting effects differed by music condition: low dynamic music was associated with reduced blinking on double-cue trials, suggesting greater cue-based attentional readiness, whereas high dynamic listeners showed increased blinking probabilities, possibly reflecting internally guided task preparation. The low dynamic group also showed enhanced executive control, marked by increased and earlier blinking on high-conflict trials, with greater entrainment also predicting earlier blink onsets. These results suggest that music entrainment supports flexible attentional coordination and may enhance attention in aging through distinct cognitive pathways. |
Shailendra Bhandari; Pedro Lencastre; Rujeena Mathema; Alexander Szorkovszky; Anis Yazidi; Pedro G. Lind Modeling eye gaze velocity trajectories using GANs with spectral loss for enhanced fidelity Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–13, 2025. @article{Bhandari2025,Accurate modeling of eye gaze dynamics is essential for advances in human-computer interaction, neurological diagnostics, and cognitive research. Traditional generative models like Markov models often fail to capture the complex temporal dependencies and distributional nuances inherent in eye gaze trajectory data. This study introduces a Generative Adversarial Network (GAN) framework employing Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) generators and discriminators to generate high-fidelity synthetic eye gaze velocity trajectories. We conducted a comprehensive evaluation of four GAN architectures (CNN-CNN, LSTM-CNN, CNN-LSTM, and LSTM-LSTM) trained under two conditions: using only an adversarial loss and using a weighted combination of adversarial and spectral losses. Our findings reveal that the LSTM-CNN architecture trained with this new loss function exhibits the closest alignment to the real data distribution, effectively capturing both the distribution tails and the intricate temporal dependencies. The inclusion of spectral regularization significantly enhances the GANs' ability to replicate the spectral characteristics of eye gaze movements, leading to a more stable learning process and improved data fidelity. Comparative analysis with a Hidden Markov Model (HMM) optimized to four hidden states further highlights the advantages of the LSTM-CNN GAN. Statistical metrics show that the HMM-generated data significantly diverges from the real data in terms of mean, standard deviation, skewness, and kurtosis.
In contrast, the LSTM-CNN model closely matches the real data across these statistics, affirming its capacity to model the complexity of eye gaze dynamics effectively. These results position the spectrally regularized LSTM-CNN GAN as a robust tool for generating synthetic eye gaze velocity data with high fidelity. Its ability to accurately replicate both the distributional and temporal properties of real data holds significant potential for applications in simulation environments, training systems, and the development of advanced eye-tracking technologies, ultimately contributing to more naturalistic and responsive human-computer interactions. |
Yevgeni Berzak; Jonathan Malmaud; Omer Shubi; Yoav Meiri; Ella Lion; Roger Levy OneStop: A 360-participant English eye tracking dataset with different reading regimes Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–15, 2025. @article{Berzak2025,We present OneStop Eye Movements, a large-scale corpus of eye movements in reading, in which native (L1) speakers read newswire texts in English and answer reading comprehension questions. OneStop has 152 hours of eye movement recordings from 360 participants for 2.6 million word tokens, more data than all the existing public broad coverage English L1 eye tracking datasets combined. The eye movement data was collected for extensively piloted reading comprehension materials comprising 486 reading comprehension questions and auxiliary text annotations geared towards behavioral analyses of reading comprehension. Importantly, OneStop includes multiple reading regimes: ordinary reading, information seeking, repeated reading of the same text, and reading simplified text. The combination of the unprecedented size, high-quality reading comprehension materials and multiple reading scenarios, aims to enable new research avenues in the study of reading and human language processing. It further aims to facilitate the integration of eye tracking data in Natural Language Processing (NLP), Artificial Intelligence (AI), Human Computer Interaction (HCI) and educational applications. |
Burcu Bayram; David Meijer; Roberto Barumerli; Michelle Spierings; Robert Baumgartner; Ulrich Pomper Bayesian prior uncertainty and surprisal elicit distinct neural patterns during sound localization in dynamic environments Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–18, 2025. @article{Bayram2025,Estimating the location of a stimulus is a key function in sensory processing, and widely considered to result from the integration of prior information and sensory input according to Bayesian principles. A deviation of sensory input from the prior elicits surprisal, depending on the uncertainty of the prior. While this mechanism is increasingly understood in the visual domain, much less is known about its implementation in audition, especially regarding spatial localization. Here, we combined human EEG with computational modeling to study auditory spatial inference in a noisy, volatile environment and analyzed behavioral and neural patterns associated with prior uncertainty and surprisal. First, our results demonstrate that participants indeed used prior information during periods of stable environmental statistics, but showed evidence of surprisal and discarded prior information following environmental changes. Second, we observed distinct EEG activity patterns associated with prior uncertainty and surprisal in both the time- and time–frequency domain, which are in line with previous studies using visual tasks. Third, these EEG activity patterns were predictive of our participants' sound localization error, response uncertainty, and prior bias on a trial-by-trial basis. In summary, our work provides novel behavioral and neural evidence for Bayesian inference during dynamic auditory localization. |
Cemre Baykan; Alexander C. Schütz Eye movements do not preferentially test inferences in the blind spot Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–10, 2025. @article{Baykan2025a,Humans make eye movements to regions of high uncertainty to maximize the information gain in their visual search. Along with veridical sensory information, there is also perceptually inferred information that arises at the gaps caused by anatomical or environmental factors. It is unclear how those inferences are treated in comparison to veridical information during search behavior. Here, in two experiments, we tested if eye movements are preferentially directed towards the blind spot as an area of high uncertainty and high information gain in a monocular visual search task. The results show that the first saccade was not directed primarily to the blind spot when “invisible” targets in the left and right blind spot occurred interleaved. Only when viewing conditions were blocked, such that “invisible” targets always occurred on the same blind spot side, did participants learn to look preferentially at the blind spot. These results show that perceptual inferences in the blind spot are not preferentially tested by eye movements in general, but that they can be optimized by using contextual information. |
Ali Batikh; Éric Koun; Roméo Salemme; Alessandro Farnè; Denis Pélisson The effect of spatial attention on saccadic adaptation Journal Article In: Journal of Vision, vol. 25, no. 14, pp. 1–26, 2025. @article{Batikh2025a,Eye movements and spatial attention are both crucial to visual perception. Orienting gaze to objects of interest is achieved by voluntary saccades (VSs) driven by internal goals or reactive saccades (RSs) triggered automatically by sudden environmental changes. Both VSs and RSs are known to undergo plastic adjustments to maintain their accuracy throughout life, driven by saccadic adaptation processes. Spatial attention enhances visual processing within a restricted zone, and it can be shifted voluntarily following our internal goals (endogenous) or automatically in response to unexpected changes in sensory stimulation (exogenous). Despite the widely accepted notion that saccadic and attention shifts are governed by distinct but highly interconnected systems, the relationship between saccadic adaptation and spatial attention is still unclear. To address this relationship, we conducted two experiments combining modified versions of the double-step adaptation paradigm and the attention-orienting paradigm. Experiment 1 tested the effect of shifting exogenous attention by a tactile cue near or away from the saccade's target on RS adaptation. Experiment 2 also used tactile cueing but now to investigate the effect of shifting endogenous attention on VS adaptation. Although we were unable to obtain direct evidence for an effect of spatial attention on saccadic adaptation, correlation analyses indicated that both the rate and magnitude of saccadic adaptation were positively correlated with the allocation of attention toward the saccade target and negatively correlated with attention directed away from the target. |
Dana Basel; Rotem Asher; Amit Lazarov Reward learning in obsessive-compulsive disorder: An attentional perspective Journal Article In: Motivation and Emotion, vol. 49, no. 6, pp. 717–730, 2025. @article{Basel2025,Obsessive-compulsive disorder (OCD) has been recently associated with aberrant reward learning. In the realm of attention, a previous study using an eye-tracking version of the reward-based value modulated attentional capture (VMAC) task showed a greater interference by high-reward signaling distractors than low-reward signaling distractors among obsessive-compulsive individuals compared with controls. However, this study used individuals with high and low levels of obsessive-compulsive symptoms, restricting generalizability to clinical OCD, while also inherently confounding OCD and anxiety. The present study addressed both these limitations. The eye-tracking-based VMAC task was completed by clinically diagnosed OCD participants (n=32), participants with anxiety disorders (AN; n=30), and healthy controls (HC; n=31). Attentional capture by high and low reward-signaling distractors was assessed via number of fixations on these distractors and number of first saccades toward them. Both fixation and saccade data showed a heightened VMAC effect (i.e., higher attentional capture by high-reward signaling distractors than low-reward signaling distractors) among OCD participants, compared with both AN and HC participants. Surprisingly, the AN group showed no VMAC effect, reflecting blunted attentional reward learning. The current VMAC task did not include an extinction phase, and hence could not examine the endurance of the VMAC effect when no reward is available. Also, the study did not explore potential performance differences across different OCD subtypes. |
Yaqian Borogjoon Bao; Xingshan Li; Victor Kuperman The eye movement database of passage reading in vertically written traditional Mongolian Journal Article In: Scientific Data, vol. 12, no. 1, pp. 1–12, 2025. @article{Bao2025,This paper introduces an eye-tracking corpus of passage reading data in the vertical writing system of traditional Mongolian. This corpus extends the Multilingual Eye Movement Corpus (MECO) database and includes data from 66 native readers of traditional Mongolian script reading 12 texts comprising 99 sentences and 2,592 words. This traditional Mongolian MECO corpus aims to address the research gap in reading studies on understudied languages. As one of the very few actively used vertical writing systems, these data offer unique insights into the cognitive and visual processing demands of vertical reading. The paper provides reliability estimates for the data and reports lexical benchmark effects of word frequency and length. Additionally, the corpus provides a valuable opportunity for cross-linguistic comparisons of eye movement data, especially with horizontal writing systems, contributing to a better understanding of how reading direction influences cognitive processing. |
Bilikis Banire; Hailey Burns; Dawson Sutherland; Youna McGowan; Sherry H. Stewart; Raymond M. Klein; Sandra Meier Role of social competence in emotion processing among emerging adults with anxiety Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–14, 2025. @article{Banire2025,Individuals with anxiety disorders tend to gravitate their attention to faces showing anger, which may reinforce fears associated with social situations and impact their social competence. Yet it is unclear whether social competence may explain differences in attention allocation to emotional faces in anxiety disorders. This study used eye-tracking to assess gaze patterns in 57 females aged 15 to 24 who viewed emotional faces (angry and neutral) on a screen. It explored whether latency to first fixation and dwell time on emotional faces (female and male) are dependent on anxiety symptoms and social competence, and if social competence accounts for the association of anxiety with attention allocation. With increasing anxiety symptoms, participants' dwell time on neutral compared to angry female faces increased, yet no effects were observed for male faces. Similarly, with decreasing social competence, participants' dwell time on neutral compared to angry female faces increased, yet no differences were observed for male faces. Contrary to the hypothesis, social competence did not account for the effects of anxiety on attention allocation. No effects were observed for latency to first fixation. Anxiety and social competence are both independently associated with attentional biases toward facial expressions in female participants. Yet, these associations seemed to depend on the gender of the face seen. |
Lukas K. Amann; Virginia Casasnovas; Alexander Gail Visual target and task-critical feedback uncertainty impair different stages of reach planning in motor cortex Journal Article In: Nature Communications, vol. 16, no. 1, pp. 1–15, 2025. @article{Amann2025,Sensory uncertainty jeopardizes accurate movement. During reaching, visual uncertainty can affect the estimation of hand position (feedback) and the desired movement endpoint (target). While impairing motor learning, it is unclear how either form of uncertainty affects cortical reach goal encoding. We show that reach trajectories vary more with higher visual uncertainty of the target, but not the feedback. Accordingly, cortical motor goal activities in male rhesus monkeys are less accurate during planning and movement initiation under target but not feedback uncertainty. Yet, when monkeys critically depend on visual feedback to conduct reaches via a brain-computer interface, then visual feedback uncertainty impairs reach accuracy and neural motor goal encoding around movement initiation. Neural state space analyses reveal a dimension that separates population activity by uncertainty level in all tested conditions. Our findings demonstrate that while both target and feedback uncertainty always reflect in neural activity, uncertain feedback only deteriorates neural reach goal information and behavior when it is task-critical, i.e., when having to rely on the sensory feedback and no other more reliable sensory modalities are available. Further, uncertain target and feedback impair reach goal encoding in a time-dependent manner, suggesting that they are integrated during different stages of reach planning. |
Yusuke Akiyama; Hiroshi Yamada; Masayuki Matsumoto; Jun Kunimatsu Sustained visual signals in the primate cerebellar dentate nucleus drive associative learning Journal Article In: Communications Biology, vol. 8, no. 1, pp. 1–12, 2025. @article{Akiyama2025,A number of studies have suggested that the cerebellum has cognitive functions; however, the underlying neuronal mechanisms remain unclear. In this study, we demonstrated that sustained visual signals in the cerebellar dentate nucleus represent the visuomotor associative information. We recorded neuronal activity from the dentate nucleus when monkeys performed a learning task involving the association between visual objects and saccade directions. We found that sustained visual activity was greater during learning than during memory retrieval. This enhancement disappeared under the uncertain reward condition, in which the monkeys did not engage in learning behavior. Furthermore, sustained visual signals changed the response to visual objects depending on the associated saccade direction. This direction selectivity was positively correlated with modulation during learning. These results suggest that sustained visual signals in the dentate nucleus reflect learning related motivation and drive learning by increasing the strength of discrimination among visual objects. |
Domenica Abad-Malo; Omar Alvarado-Cando; Hakan Karsilar The role of spontaneous eye blinks in temporal perception: An eye tracking study Journal Article In: Journal of Eye Movement Research, vol. 18, no. 6, pp. 1–10, 2025. @article{AbadMalo2025,Our interaction with the world depends on our ability to process temporal information, which is a key component of human cognition that directly impacts decision-making, planning, and prediction of events. Visual information plays a crucial role in shaping our subjective perception of time, and even brief interruptions, such as those caused by eye blinks, can disrupt the continuity of our perception and alter how we estimate durations. The purpose of this study is to investigate the relationship between spontaneous eye blinks and time perception using a temporal bisection task. In particular, we focus on how blinks preceding stimulus presentation impact the perceived duration of that stimulus. The results of fitting a generalized linear mixed-effects model revealed that blinking can indeed influence the duration estimation. Specifically, the presence of a single blink before the stimulus presentation had a significant effect on subjective time perception; participants were more likely to categorize a duration as shorter compared to when they did not blink. In contrast, two or more blinks before stimulus presentation did not have a significant effect compared to not blinking. This study further elucidates the complex interaction between the momentary suppression of visual input and the perception of time. |
Artyom Zinchenko; Markus Conci; Hermann J. Müller; Thomas Geyer Eye on context: Individual differences reveal the mechanisms of statistical learning Journal Article In: Quarterly Journal of Experimental Psychology, vol. 78, no. 11, pp. 2570–2582, 2025. @article{Zinchenko2025,If a searched-for target object is consistently encountered within repeating spatial distractor arrangements, target detection becomes more efficient relative to nonrepeated, that is, random arrangements (contextual cueing [CC] effect). However, target location changes within otherwise unchanged distractor arrays substantially weaken the cueing effect. Previous studies reported substantial variations in individual participants' abilities to learn and relearn invariant contexts. Therefore, the current study examined how individual differences in attentional control and focus, as indexed by the well-established Stroop and Navon tasks, respectively, relate to CC in a learning phase/relocation phase design. During the visual search, we recorded behavioural reaction times (RTs) and fixation locations, the latter permitting us to decompose search RTs into search- and motor-related substages. We could thus evaluate the processes responsible for CC and the lack thereof after target relocation while also testing whether search and motor components of CC are different for individuals depending on their Stroop/Navon scores. Repeated contexts yielded faster RTs (and reduced fixation numbers), though there was a substantial decrease in cueing from learning to adaptation, consistent with previous studies. Critically, contextual learning, but not relearning, varied across individuals: participants with high-Stroop interference displayed overall larger CC during early target search, while a more local Navon task bias was associated with increased CC during later processes of target response decisions. 
Our results demonstrate that analysing individual differences can help validate the processes responsible for CC in search tasks, particularly distinguishing between early search and later response-related mechanisms. |
Yu-Wan Zhao; Jing-Wen Xiang; Yong-Chun Cai Modulation of alerting and orienting attention on spatial suppression Journal Article In: Vision Research, vol. 236, pp. 1–12, 2025. @article{Zhao2025e,Spatial suppression is a phenomenon in which, for high-contrast stimuli, larger stimuli typically elicit weaker neural responses and produce worse perceptual performance compared to smaller stimuli. This phenomenon is thought to arise from inhibitory connections between neurons. Although recent studies have suggested that feedback connections from higher areas can influence these inhibitory processes, implying that attention may modulate spatial suppression, direct evidence for such modulation remains scarce. In particular, the impact of an important component of attention, alerting, has been overlooked. The present study aimed to explore the effects of two distinct components of attention, alerting and orienting, on spatial suppression. Our results indicate that alerting enhances spatial suppression. Furthermore, upon isolating the influence of orienting after controlling for alerting levels, we discovered that the influence of orienting on spatial suppression is feature-dependent. Specifically, while orienting attention to orientation enhances spatial suppression, orienting to contrast does not elicit the same effect. Our results indicate that spatial suppression is a flexible processing mechanism subject to widespread high-level cognitive modulations. |
Mengying Yuan; Min Gao; Xinzhong Cui; Sa Lu; Xiaoyu Tang Breaking the silence: Exploring the influence of auditory singularity on visual search Journal Article In: Quarterly Journal of Experimental Psychology, vol. 78, no. 12, pp. 2627–2642, 2025. @article{Yuan2025a,The pip-and-pop effect describes the phenomenon of auditory pure-tone stimuli (pips) causing a simultaneously presented visual target to pop out. This study utilised a dynamic visual search paradigm and conducted two eye movement experiments (Experiment 1: set size = 24 items; Experiment 2: set size = 48 items) to explore the influence of auditory singularity on the pip-and-pop effect through a single-sound condition (singularity) and a multiple-sound condition (non-singularity). In Experiment 1, there were no significant differences between the no-sound, single-sound, and multiple-sound conditions in terms of reaction time, accuracy, or fixation number. In Experiment 2, compared with the no-sound condition, both the single-sound and multiple-sound conditions significantly reduced the search times (RTs), accuracy, and fixation numbers when the target was present. Both Experiments 1 and 2 revealed that fixation duration under the single-sound condition was significantly longer than that under the no-sound condition. These findings suggest that the singularity of auditory stimuli is not a necessary condition for the pip-and-pop effect. Audiovisual interaction is more likely to be a prerequisite for the occurrence of the pip-and-pop effect. |
Shimpei Yamagishi; Shigeto Furukawa Microsaccade direction reveals the variation in auditory selective attention processes Journal Article In: The Journal of Neuroscience, vol. 45, no. 45, pp. 1–12, 2025. @article{Yamagishi2025,Selective spatial attention plays a critical role in perception in the daily environment where multiple sensory stimuli exist. Even covertly directing attention to a specific location facilitates the brain's information processing of stimuli at the attended location. Previous behavioral and neurophysiological studies have shown that microsaccades (MSs), tiny involuntary saccadic eye movements, reflect such a process in terms of visual space and can be a marker of spatial attention. However, it is unclear whether auditory spatial attention processes that are supposed to interact with visual attention processes influence MSs and vice versa. Here, we examine the relationship between MS direction and auditory spatial attention during dichotic oddball sound detection tasks with human participants of both sexes. The results showed that the MS direction was generally biased contralateral to the ear to which the oddball sound was presented or that to which sustained auditory attention was directed. The postoddball modulation of MS direction was associated with the behavioral performance of the detection task. The results suggest that the inhibition of stimulus-directed MSs occurs to reduce erroneous orientation of ocular responses during selective detection tasks. We also found that the correlation between MS direction and neural response to the tone originated from the auditory brainstem (frequency-following response). Overall, the present study suggests that MSs can be a marker of auditory spatial attention and that the auditory neural activity fluctuates over time with the states of attention and the oculomotor system, also involving auditory subcortical processes. |
Yiyang Wu; Xiangbin Teng; Yi Du Eye blinks synchronize with musical beats during music listening Journal Article In: PLoS Biology, vol. 23, no. 11, pp. 1–26, 2025. @article{Wu2025b,Auditory-motor synchronization, the alignment of body movements with rhythmic patterns in music, is a universal human behavior, yet its full scope remains incompletely understood. Through four experiments with 123 young nonmusicians, integrating eye-tracking, neurophysiological recordings, white matter structural imaging, and behavioral analysis, we reveal a previously unrecognized form of synchronization: spontaneous eye blinks synchronize with musical beats. Blinks robustly synchronized with beats across a range of tempi and independently of melodic cues. Electroencephalogram recordings revealed a dynamic correspondence between blink timing and neural beat tracking. Blink synchronization performance was linked to white matter microstructure variation in the left superior longitudinal fasciculus, a key sensorimotor pathway. Additionally, the strength of blink synchronization reflected the modulation of dynamic auditory attention. These findings establish blink synchronization as a novel behavioral paradigm, expanding the auditory-motor synchronization repertoire and highlighting the intricate interplay between music rhythms and oculomotor activity. This discovery underscores a cross-modal active sensing mechanism, offering new insights into embodied music perception, rhythm processing, and their potential clinical applications. |
Micaela Wiseman; Rachel Yep; Madeline Wood Alexander; Christopher B. Pople; Lucas Perri; Georgia Gopinath; Maria Vasileiadi; Jessica Robin; Michael J. Spilka; William Simpson; Yana Yunusova; Douglas P. Munoz; Brian C. Coe; Donald Brien; Sean Nestor; Nir Lipsman; Peter Giacobbe; Jennifer S. Rabin Objective speech measures capture depressive symptoms and associated cognitive difficulties Journal Article In: Translational Psychiatry, pp. 1–9, 2025. @article{Wiseman2025,Psychiatry lacks objective biomarkers for assessing depression, relying instead on subjective measures, such as the Hamilton Depression Rating Scale (HAMD-17). This study examined whether speech features could serve as objective markers of depressive symptoms and their associated cognitive difficulties. Sixty-six individuals with major depressive disorder (MDD) and 54 non-depressed control participants completed a speech assessment, responding to the prompt: “Please tell me how you are feeling today.” Linguistic (valence, emotional intensity, agency) and acoustic (pitch, pitch variance, speech rate, time spent pausing) features were derived from natural language processing. These speech features were analyzed individually and collectively as a composite score representing overall speech disturbance. A subset of participants (40 with MDD, 38 controls) also completed a validated executive function task. ANCOVA models compared speech features between groups. Linear regression models examined associations between speech features, depression severity (HAMD-17), and performance on an executive function task. Compared to controls, individuals with MDD used language that was more negatively valenced, more emotionally intense, and less agentic. They also demonstrated lower pitch, slower speech rate, and more time spent pausing. The composite speech score also differed between groups. Speech features and executive function were not associated with depression severity, as measured by the HAMD-17.
However, several speech features were associated with executive function. Taken together, these findings suggest that speech features may provide a scalable, objective method for detecting depressive symptoms and associated executive difficulties. |
Andi Wang; Ana Pellicer-Sánchez Exploring L2 learners' processing of unknown words during subtitled viewing through self-reports Journal Article In: International Review of Applied Linguistics in Language Teaching, vol. 63, no. 4, pp. 2379–2408, 2025. @article{Wang2025d,Studies have shown the benefits of subtitled viewing for incidental vocabulary learning, but the effects of different subtitling types varied across studies. The effectiveness of different types of subtitled viewing could be related to how unknown vocabulary is processed during viewing. However, no studies have investigated L2 learners' processing of unknown words in viewing beyond exploring learners' attention allocation. The present research followed a qualitative approach to explore L2 learners' processing of unknown words during subtitled viewing under three conditions (i.e., captions, L1 subtitles, and bilingual subtitles) by tapping into learners' reported awareness of the unknown words and the vocabulary processing strategies used to engage with unknown words. According to stimulated recall data (elicited by eye-tracking data) from 45 intermediate-to-advanced-level Chinese learners of English, captions led to increased awareness of the unknown words. Moreover, the types of strategies learners used to cope with unknown vocabulary were determined by subtitling type. |
Michael C. S. Trumbo; Aaron P. Jones; Bradley M. Robert; Mason S. Briggs; Vincent P. Clark Using eye tracking to elucidate the mechanisms underlying stimulation-enhanced visual target detection Journal Article In: International Journal of Cognitive Sciences, vol. 1, no. 1, pp. 1–18, 2025. @article{Trumbo2025,Transcranial direct current stimulation (tDCS) is a noninvasive form of brain stimulation that involves passing a weak electrical current between electrodes on the scalp to modulate underlying neural tissue. tDCS has been shown to modulate cognition in a variety of domains, including memory, attention, and visual processing. Prior work from our laboratory has shown positive effects of tDCS on learning to detect target objects hidden in complex naturalistic visual scenes and to learn rules for categorizing images, though the mechanism for these benefits remains unknown. One possibility is that tDCS optimizes visual search by modulating visual attention or by reducing search errors. One method of quantifying visual attention is to use eye tracking to record search patterns and determine if and how visual search is adjusted under verum stimulation conditions. Eye-tracking data allow errors to be classified into types: sampling errors (failing to look at the relevant region), recognition errors (fixating the critical portion of a scene but failing to recognize it as such), and decision-making errors (fixating the relevant portion of a scene but making the wrong determination). Our results indicate that the benefit tDCS confers on visual search for targets stems from a reduction in decision-making errors when targets are present (Cohen's d = 0.86). We also report a replication of previous findings showing a tDCS-dependent improvement in learning this task (learning score: Cohen's d = 0.88; d': Cohen's d = 1.00). 
This provides support for moving tDCS into applied settings by pairing it with analysts whose performance is limited by the type of search error that stimulation corrects. |
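The three error types described in the Trumbo et al. abstract can be operationalized from fixation data. The sketch below is a hypothetical simplification for illustration only; the rectangular region format, the dwell-time threshold as a proxy for recognition, and the function name are our assumptions, not the authors' criteria:

```python
def classify_search_error(fixations, target_region, response_correct,
                          dwell_threshold_ms=100):
    """Classify a target-present trial into the error taxonomy above.

    fixations: list of (x, y, duration_ms) tuples.
    target_region: (x_min, y_min, x_max, y_max) bounding the target.
    """
    if response_correct:
        return "no_error"
    x0, y0, x1, y1 = target_region
    # Dwell times of fixations that landed inside the target region
    on_target = [d for (x, y, d) in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1]
    if not on_target:
        return "sampling_error"      # never looked at the relevant region
    if max(on_target) < dwell_threshold_ms:
        return "recognition_error"   # glanced at it, no evidence of recognition
    return "decision_error"          # fixated the target, yet responded wrong
```

For example, a miss with a single 300 ms fixation inside the target region would be labeled a decision error, whereas a miss with no on-target fixations would be a sampling error.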
Brian Szekely; Paul R. MacNeilage Dynamic contrast sensitivity during human locomotion Journal Article In: Journal of Vision, vol. 25, no. 13, pp. 1–14, 2025. @article{Szekely2025,Locomotion poses a challenge to clear and stable vision. Reflexive head and eye movements act to stabilize the retinal image, but these do not act perfectly, so retinal image motion is increased during walking compared with standing. We nevertheless perceive the world as clear and stable during locomotion, suggesting that the visual system is well-adapted to meet the challenges posed by locomotion. To better understand these processes, we assessed dynamic contrast sensitivity during locomotion by presenting brief (24 ms) foveal Gabor targets (6°, 11 cpd) at threshold contrast to observers walking on a treadmill in an otherwise darkened room. Head and ankle motion were tracked, and presentation time was randomized, which allowed post hoc binning of responses according to stride-cycle timing to investigate how sensitivity is impacted by head motion and stride-cycle timing. Contrast sensitivity was improved during walking compared with standing over large portions of the stride cycle, except for epochs aligned with heel strikes, which drive large and unpredictable perturbations. This resulted in periodicity in contrast sensitivity at two cycles per stride, with additional oscillations observed at four and six cycles per stride. Pupil size was found to be moderately larger during walking compared with standing and also exhibited periodic fluctuations that were phase-locked to the stride cycle. Perceptual oscillations reflect the entrainment of visual processing by active behaviors. Robust contrast sensitivity during walking may be supported by action-contingent effects of locomotion on visual cortical activity that have been observed in several animal models. |
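The post hoc stride-cycle binning described in the Szekely and MacNeilage abstract amounts to expressing each trial's presentation time as a phase between successive same-foot heel strikes and aggregating responses per phase bin. A minimal sketch under that assumption (not the authors' code; names and the hit-rate summary are illustrative):

```python
import bisect

def bin_by_stride_phase(trials, heel_strikes, n_bins=8):
    """Hit rate per stride-phase bin (phase 0 = heel strike).

    trials: list of (onset_time, hit) with hit in {0, 1}.
    heel_strikes: sorted times of successive same-foot heel strikes.
    """
    hits = [0] * n_bins
    counts = [0] * n_bins
    for t, hit in trials:
        i = bisect.bisect_right(heel_strikes, t) - 1
        if i < 0 or i + 1 >= len(heel_strikes):
            continue  # trial falls outside a complete stride
        stride = heel_strikes[i + 1] - heel_strikes[i]
        phase = (t - heel_strikes[i]) / stride
        b = min(int(phase * n_bins), n_bins - 1)
        counts[b] += 1
        hits[b] += hit
    return [h / c if c else None for h, c in zip(hits, counts)]
```

In the actual study, sensitivity rather than raw hit rate would be estimated per bin, but the phase-assignment logic is the same.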
Paul Justin Connor Smith; Niko A. Busch Spontaneous alpha-band lateralization extends persistence of visual information in iconic memory by modulating cortical excitability Journal Article In: The Journal of Neuroscience, vol. 45, no. 48, pp. 1–10, 2025. @article{Smith2025c,Pre-stimulus alpha oscillations in the visual cortex modulate neuronal excitability, influencing sensory processing and decision-making. While this relationship has been demonstrated mostly in detection tasks with low visibility stimuli, interpretations of such effects can be ambiguous due to biases, making it difficult to clearly distinguish between perception-related and decision-related effects. In this study, we investigated how spontaneous fluctuations in pre-stimulus alpha power affect iconic memory, a high-capacity, ultra-short visual memory store. Data from 49 healthy adults (34 female and 15 male) was analyzed. We employed a partial report task, where a brief display of six stimuli was followed by a report cue indicating the target stimulus. In this paradigm, accuracy at short stimulus-cue onset asynchronies (SOAs) is typically high, reflecting the initial availability of sensory information, but it rapidly declines at intermediate SOAs due to the decay of the iconic memory trace, stabilizing at a low asymptote at long SOAs, representing the limited capacity of short-term memory. Crucially, performance in this task is constrained by the temporal persistence of sensory information, not by low visibility or response bias. We found that strong pre-stimulus alpha power enhanced performance by amplifying initial stimulus availability without affecting the speed of iconic decay. This effect partially reflects stronger pre-stimulus alpha power in the hemisphere ipsilateral to the to-be-reported target, likely suppressing neuronal excitability of neurons coding irrelevant stimuli. 
Our findings underscore the role of alpha oscillations in modulating neuronal excitability and visual perception, independent of decision-making strategies implicated in prior studies. |
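The partial-report time course described in the Smith and Busch abstract is commonly summarized as an exponential decay from an initial-availability level to a short-term-memory asymptote. The sketch below is a hedged illustration of that descriptive model; the function name and all parameter values are ours, not the paper's estimates:

```python
import math

def partial_report_accuracy(soa_ms, initial, asymptote, tau_ms):
    """Iconic-memory decay curve: accuracy starts at the initial
    availability, decays exponentially with time constant tau, and
    settles at the short-term-memory asymptote."""
    return asymptote + (initial - asymptote) * math.exp(-soa_ms / tau_ms)

# The reported alpha effect can be modeled as higher initial
# availability with an unchanged decay constant and asymptote:
soas = (0, 200, 1000)
high_alpha = [partial_report_accuracy(s, 0.95, 0.40, 200) for s in soas]
low_alpha = [partial_report_accuracy(s, 0.85, 0.40, 200) for s in soas]
```

Under this parameterization the two curves differ most at short SOAs and converge toward the same asymptote, matching the pattern of an amplified initial availability without a change in decay speed.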
