All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up until 2024 (with some early 2025s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2025 |
Rotem Mairon; Ohad Ben-shahar The polar saccadic flow model: Re-modeling the center bias from fixations to saccades Journal Article In: Vision Research, vol. 228, pp. 1–12, 2025. @article{Mairon2025, Research indicates that a significant component of human eye movement behavior constitutes a set of consistent biases independent of visual content, the most well-known of which is the central bias. While all prior art focuses on representing saccadic motion and biases in Cartesian retinotopic coordinates, here we propose the Polar Saccadic Flow model, a novel approach for modeling saccades' space-dependent biases in a polar representation. By breaking saccades into orientation and amplitude, the Polar Saccadic Flow model enables more accurate modeling of these components, leading also to a better understanding of the saccadic bias. Moreover, the polar representation also uncovers hitherto unknown patterns and biases in eye movement data, allowing for a more detailed and nuanced analysis of saccadic behavior. These findings have implications for the study of human visual perception, can help to develop more accurate eye movement models, and also may improve eye tracking technologies. |
Xingyang Lv; Zixin Yuan; Fang Wan; Tian Lan; Gila Oren Do tourists experience suffering when they touch the Wailing Wall? Journal Article In: Tourism Management, vol. 106, pp. 1–21, 2025. @article{Lv2025, Tactile engagement is a critical aspect of tourist experiences. Embodied cognition theory suggests a direct correlation between physical sensations and psychological perceptions. For example, touching the textured stones at the Wailing Wall, a revered religious site in Jerusalem, can evoke intense emotions in tourists. This study explores the impact of rough tactile sensations on dark experiences through six studies. We used content analysis, on-site surveys, eye movement experiments, and scenario experiments to validate these effects. Our findings emphasize the pivotal role of rough tactile sensations in shaping profound emotions and individual experiences while uncovering alternative routes for developing sensory strategies to enrich dark tourism experiences. |
Selma Lugtmeijer; Aleksandra M. Sobolewska; The Visual Brain Group; Edward H. F. de Haan; H. Steven Scholte Visual feature processing in a large stroke cohort: Evidence against modular organization Journal Article In: Brain, pp. 1–11, 2025. @article{Lugtmeijer2025, Mid-level visual processing represents a crucial stage between basic sensory input and higher-level object recognition. The conventional model posits that fundamental visual qualities, such as colour and motion, are processed in specialized, retinotopic brain regions (e.g. V4 for colour, MT/V5 for motion). Using atlas-based lesion–symptom mapping and disconnectome maps in a cohort of 307 ischaemic stroke patients, we examined the neuroanatomical correlates underlying the processing of eight mid-level visual qualities. Contrary to the predictions of the standard model, our results did not reveal consistent relationships between processing impairments and damage to traditionally associated brain regions. Although we validated our methodology by confirming the established relationship between visual field defects and damage to primary visual areas (V1, V2 and V3), we found no reliable evidence linking processing deficits to specific regions in the posterior brain. These findings challenge the traditional modular view of visual processing and suggest that mid-level visual processing might be more distributed across neural networks than previously thought. This supports alternative models where visual maps represent constellations of co-occurring information rather than specific qualities. |
Mareike Ludwig; Matthew J. Betts; Dorothea Hämmerer Stimulate to remember? The effects of short bursts of transcutaneous auricular vagus nerve stimulation (taVNS) on memory performance and pupil dilation Journal Article In: Psychophysiology, vol. 62, no. 1, pp. 1–16, 2025. @article{Ludwig2025, The decline in noradrenergic (NE) locus coeruleus (LC) function in aging is thought to be implicated in episodic memory decline. Transcutaneous auricular vagus nerve stimulation (taVNS), which supports LC function, might serve to preserve or improve memory function in aging. However, taVNS effects are generally very heterogeneous, and it is currently unclear whether taVNS has an effect on memory. In this study, an emotional memory task with negative events involving the LC-NE system was combined with short bursts of event-related taVNS (3 s) in younger adults (N = 24). The aim was to investigate taVNS-induced changes in pupil dilation during encoding and possible taVNS-induced improvements in (emotional) memory performance for early and delayed (24 h) recognition. Negative events were associated with increased pupil dilation and better memory performance. Additionally, real as compared to sham or no stimulation selectively increased memory for negative events. Short bursts of stimulation, whether real or sham, led to an increase in pupil dilation and an improvement in memory performance over time, likely due to the attention-inducing sensory modulation of electrical stimulation. |
Óscar Loureda Lamas; Mathis Teucher; Celia Hernández Pérez; Adriana Cruz Rubio; Carlos Gelormini-Lezama (Re)categorizing lexical encapsulation: An experimental approach Journal Article In: Journal of Pragmatics, vol. 239, pp. 4–15, 2025. @article{LouredaLamas2025, Anaphoric encapsulation is a discursive mechanism by which a noun phrase recovers an explicature. This eye tracking study addresses the question of whether categorizing versus recategorizing encapsulation lead to different processing patterns. Results show that (1) encapsulating noun phrases are cognitively prominent areas, (2) recategorization is never less effortful than categorization, (3) the prominence and instructional asymmetry of the encapsulating noun phrase with respect to the antecedent is greater in cases of recategorizing encapsulation. Overall, encapsulating noun phrases initiate a complex cognitive operation due to the nature of their antecedent, which includes both encoded and inferred information. A distinctive processing pattern emerges for recategorizing encapsulating noun phrases: greater local efforts, due to the introduction of new information, do not result in higher total reading times. Beyond the introductory section, the structure of this study is as follows: Section 2 discusses the properties of categorizing and recategorizing mechanisms. Section 3 reviews experimental research on nominal anaphoric encapsulation in Spanish. Section 4 outlines the key aspects of the experimental design and execution. Finally, sections 5 and 6 present the results of the experiment and offer a theoretical discussion of the findings. |
Belén López Assef; Tania Zamuner Task effects in children's word recall: Expanding the reverse production effect Journal Article In: Journal of Child Language, pp. 1–13, 2025. @article{LopezAssef2025, Words said aloud are typically recalled more than words studied under other techniques. In certain circumstances, production does not lead to this memory advantage. We investigated the nature of this effect by varying the task during learning. Children aged five to six years were trained on novel words which required no action (Heard) compared to Verbal-Speech (production), Non-Verbal-Speech (stick out tongue), and Non-Verbal-Non-Speech (touch nose). Eye-tracking showed successful learning of novel words in all training conditions, but no differences between conditions. Both non-verbal tasks disrupted recall, demonstrating that encoding can be disrupted when children perform different types of concurrent actions. |
Zhiwei Liu; Yan Li; Jingxin Wang Flexible word position encoding in Chinese reading: Evidence from parafoveal preprocessing Journal Article In: Visual Cognition, pp. 1–9, 2025. @article{Liu2025b, Accurately encoding word positions plays a critical role in fluent reading, allowing readers to facilitate efficient comprehension. However, whether word position information can be encoded parafoveally remains unknown, particularly in unspaced languages like Chinese. This study investigated whether Chinese readers can extract word order information from parafoveal vision using the boundary paradigm and eye-tracking. Participants read sentences containing identical, transposed, or unrelated preview words, which were replaced by the target words upon the eyes crossing an invisible boundary. Results showed that reading times on the target words were longer for transposed compared to identical previews but shorter than unrelated previews. These findings suggest that word positional information can be encoded parafoveally during Chinese reading, but not in a strictly precise manner. The implications of the findings for the Chinese reading model are discussed. |
Yaohui Liu; Keren He; Kaiwen Man; Peida Zhan Exploring critical eye-tracking metrics for identifying cognitive strategies in Raven's Advanced Progressive Matrices: A data-driven perspective Journal Article In: Journal of Intelligence, vol. 13, no. 14, pp. 1–20, 2025. @article{Liu2025a, The present study utilized a recursive feature elimination approach in conjunction with a random forest algorithm to assess the efficacy of various features in predicting cognitive strategy usage in Raven's Advanced Progressive Matrices. In addition to item response accuracy (RA) and response time (RT), five key eye-tracking metrics were examined: proportional time on matrix (PTM), latency to first toggle (LFT), rate of latency to first toggle (RLT), number of toggles (NOT), and rate of toggling (ROT). The results indicated that PTM, RLT, and LFT were the three most critical features, with PTM emerging as the most significant predictor of cognitive strategy usage, followed by RLT and LFT. Clustering analysis of these optimal features validated their utility in effectively distinguishing cognitive strategies. The study's findings underscore the potential of specific eye-tracking metrics as objective indicators of cognitive processing while providing a data-driven method to identify strategies used in complex reasoning tasks. |
Xinhe Liu; Zhiting Zhang; Lu Gan; Panke Yu; Ji Dai Medium spiny neurons mediate timing perception in coordination with prefrontal neurons in primates Journal Article In: Advanced Science, pp. 1–15, 2025. @article{Liu2025, Timing perception is a fundamental cognitive function that allows organisms to navigate their environment effectively, encompassing both prospective and retrospective timing. Despite significant advancements in understanding how the brain processes temporal information, the neural mechanisms underlying these two forms of timing remain largely unexplored. In this study, we aim to bridge this knowledge gap by elucidating the functional roles of various neuronal populations in the striatum and prefrontal cortex (PFC) in shaping subjective experiences of time. Utilizing a large-scale electrode array, we recorded responses from over 3000 neurons in the striatum and PFC of macaque monkeys during timing tasks. The analysis classified neurons into distinct groups and revealed that retrospective and prospective timings are governed by separate neural processes. Specifically, this study demonstrates that medium spiny neurons (MSNs) in the striatum play a crucial role in facilitating these timing processes. Through cell-type-specific manipulation, we identified D2-MSNs as the primary contributors to both forms of timing. Additionally, the findings indicate that effective processing of timing requires coordination between the PFC and the striatum. In summary, this study advances the understanding of the neural foundations of timing perception and highlights its behavioral implications. |
Zheng Liang; Riman Ga; Han Bai; Qingbai Zhao; Guixian Wang; Qing Lai; Shi Chen; Quanlei Yu; Zhijin Zhou Teaching expectancy improves video-based learning: Evidence from eye-movement synchronization Journal Article In: British Journal of Educational Technology, vol. 56, pp. 231–249, 2025. @article{Liang2025, Video-based learning (VBL) is popular, yet students tend to learn video material passively. Instilling teaching expectancy is a strategy to promote active processing by learners, but it is unclear how effective it will be in improving VBL. This study examined the role of teaching expectancy on VBL by comparing the learning outcomes and metacognitive monitoring of 94 learners with different expectancies (teaching, test or no expectancy). Results showed that the teaching expectancy group had better learning outcomes and no significant difference in the metacognitive monitoring of three groups. We further explored the visual behaviour patterns of learners with different expectancies by using the indicator of eye-movement synchronization. It was found that synchronization was significantly lower in both the teaching and test expectancy groups than in the no expectancy group, and the test expectancy group was significantly lower than the teaching expectancy group. This result suggests that both teaching and test expectancy enhance the active processing of VBL. However, by sliding window analysis, we found that the teaching expectancy group used a flexible and planned attention allocation. Our findings confirmed the effectiveness of teaching expectancy in VBL. Also, this study provided evidence for the applicability of eye-tracking techniques to assess VBL. |
Wenrui Li; Xiaofang Ma; Lei Huang; Jian Guan Scene inconsistency effect in object judgement: Evidence from semantic and syntactic separation Journal Article In: Current Psychology, pp. 1–11, 2025. @article{Li2025a, Objects are always situated within a scene context and have specific relationships with their environment. Understanding how scene context and the relationships between objects and their context affect object identification is crucial. Previous studies have indicated that scene-incongruent objects are detected faster than scene-congruent ones, and that “context cueing” can enhance object identification. However, no study has directly tested this relationship while considering the effects of bottom-up and top-down attention processes on object judgment. In our research, we explored the influence of context and its relationships by incorporating “context cueing” and categorizing these relationships into two types: semantic and syntactic, within an object judgment task. The behavioral results from Experiment 1 revealed that the recognition accuracy for syntactically incongruent objects was higher, with shorter response times. Eye-tracking data indicated that when semantic congruence was present, the first fixation duration on syntactically incongruent objects was shorter; conversely, when semantic incongruence was present, the first fixation duration on syntactically congruent objects was longer. In Experiment 2, which introduced context cueing, we found that the recognition accuracy for semantically congruent objects was higher, and they received more fixations. Notably, when syntactic incongruence was present, the first fixation duration on semantically congruent objects was longer. These findings suggest that under conditions without background cueing, syntactic processing has priority in scene processing. We interpret these results as evidence that top-down attention biases object processing, leading to reduced processing of scene-congruent objects compared to scene-incongruent ones. Thus, “context cueing” activates top-down attention, playing a pivotal role in object identification. |
Ting Xun Li; Chi Wen Liang In: Cognitive Therapy and Research, vol. 49, pp. 62–74, 2025. @article{Li2025, Background: Attentional bias modification (ABM) is a computerized treatment for anxiety. Most ABMs using a dot-probe task aim to direct anxious individuals' attention away from threats. Recently, a new ABM approach using a visual search task (i.e., ABM-positive-search) has been developed to facilitate the allocation of attention toward positive stimuli. This study examined the efficacies of two versions of ABM-positive-search in socially anxious individuals. Methods: Eighty-six participants were randomly assigned to the search positive in threat (SP-T; n = 28), search positive in neutral (SP-N; n = 29), or control training (CT) (n = 29) group. All participants completed four training sessions within two weeks. Attentional bias, attentional control, self-report social anxiety, and anxiety responses (i.e., subjective anxiety, psychophysiological reactivity, and gaze behavior) to the speech task were assessed pre-training and post-training. Results: Results showed that ABM-positive-search trainings facilitated disengagement from threats compared to CT. Regardless of group, participants exhibited a reduction in attention allocation to negative feedback during speech. However, only SP-N increased attention allocation to positive feedback. Participants in three groups showed a decrease in subjective anxiety but no changes in psychophysiological reactivity to speech challenge from pre-training to post-training. ABM-positive-search trainings had no beneficial effects on attentional control or self-report social anxiety when compared with CT. Conclusions: The findings do not support the efficacy of ABM-positive-search trainings for social anxiety. |
Matthew Lehet; Beier Yao; Ivy F. Tso; Vaibhav A. Diwadkar; Jessica Fattal; Jacqueline Bao; Katharine N. Thakkar Altered effective connectivity within a thalamocortical corollary discharge network in individuals with schizophrenia Journal Article In: Schizophrenia Bulletin, pp. 1–14, 2025. @article{Lehet2025, Background and Hypothesis: Sequential saccade planning requires corollary discharge (CD) signals that provide information about the planned landing location of an eye movement. These CD signals may be altered among individuals with schizophrenia (SZ), providing a potential mechanism to explain passivity and anomalous self-experiences broadly. In healthy controls (HC), a key oculomotor CD network transmits CD signals from the thalamus to the frontal eye fields (FEF) and the intraparietal sulcus (IPS) and also remaps signals from FEF to IPS. Study Design: Here, we modeled fMRI data using dynamic causal modeling (DCM) to examine patient-control differences in effective connectivity evoked by a double-step (DS) task (30 SZ, 29 HC). The interrogated network was formed from a combination of (1) functionally identified FEF and IPS regions that robustly responded on DS trials and (2) anatomically identified thalamic regions involved in CD transmission. We also examined the relationship between clinical symptoms and effective connectivity parameters associated with task modulation of network pathways. Study Results: Network connectivity was indeed modulated by the DS task, which involves CD transmission. More importantly, we found reduced effective connectivity from thalamus to IPS in SZ, which was further correlated with passivity symptom severity. Conclusions: These results reaffirm the importance of IPS and thalamocortical connections in oculomotor CD signaling and provide mechanistic insights into CD alterations and consequently agency disturbances in schizophrenia. |
Yao-Tung Lee; Ying-Hsuan Tai; Yi-Hsuan Chang; Cesar Barquero; Shu-Ping Chao; Chin-An Wang Disrupted microsaccade responses in late-life depression Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–11, 2025. @article{Lee2025, Late-life depression (LLD) is a psychiatric disorder in older adults, characterized by high prevalence and significant mortality rates. Thus, it is imperative to develop objective and cost-effective methods for detecting LLD. Individuals with depression often exhibit disrupted levels of arousal, and microsaccades, as a type of fixational eye movement that can be measured non-invasively, are known to be modulated by arousal. This makes microsaccades a promising candidate as biomarkers for LLD. In this study, we used a high-resolution, video-based eye-tracker to examine microsaccade behavior in a visual fixation task between LLD patients and age-matched healthy controls (CTRL). Our goal was to determine whether microsaccade responses are disrupted in LLD compared to CTRL. LLD patients exhibited significantly higher microsaccade peak velocities and larger amplitudes compared to CTRL. Although microsaccade rates were lower in LLD than in CTRL, these differences were not statistically significant. Additionally, while both groups displayed microsaccadic inhibition and rebound in response to changes in background luminance, this modulation was significantly blunted in LLD patients, suggesting dysfunction in the neural circuits responsible for microsaccade generation. Together, these findings, for the first time, demonstrate significant alterations in microsaccade behavior in LLD patients compared to CTRL, highlighting the potential of these disrupted responses as behavioral biomarkers for identifying individuals at risk for LLD. |
Haiting Lan; Sixin Liao; Jan-Louis Kruger Do advertisements disrupt reading? Evidence from eye movements Journal Article In: Applied Cognitive Psychology, vol. 39, pp. 1–19, 2025. @article{Lan2025, Reading online texts is often accompanied by visual distractors such as advertisements. Although previous studies have found that visual distractors are attention-demanding, little is known about how they impact reading. Drawing on text-based and word-based eye-movement measures, the current study examines how three types of ads (static image, flashing text and video) influence readers' reading comprehension and reading process. Results show that increasingly animated ads were more distracting than static ones at the text level, as evidenced by more and longer fixations, and more regressions. Moreover, the word frequency effect was stronger when reading with ads with flashing text than without ads on gaze duration and total reading time, suggesting that linguistic-related animated ads interfere with word processing. Although visual distractors reduced their reading speed and word processing efficiency, readers managed to maintain sufficient comprehension by adopting a more mindful reading strategy, indicating how metacognition functions in complex reading situations. |
Melanie Labusch; Manuel Perea The CASE of brand names during sentence reading Journal Article In: Psychological Research, vol. 89, no. 1, pp. 1–10, 2025. @article{Labusch2025, Brand names typically maintain a distinctive letter case (e.g., IKEA, Google). This element is essential for theoretical (word recognition models) and practical (brand design) reasons. In abstractionist models, letter case is considered irrelevant, whereas instance-based models use surface information like letter case during lexical retrieval. Previous brand identification tasks reported faster responses to brands in their characteristic letter case (e.g., IKEA and Google faster than ikea and GOOGLE), favoring instance-based models. We examined whether this pattern can be generalized to normal sentence reading: Participants read sentences in which well-known brand names were presented intact (e.g., IKEA, Google) or with a modified letter case (e.g., Ikea, GOOGLE). Results showed a cost for brands written in uppercase, independently of their characteristic letter case, in early eye fixation measures (probability of first-fixation, first-fixation duration). However, for later measures (gaze duration and total times), fixation times were longer when the brand's letter case was modified, restricted to those brands typically written in lowercase (e.g., GOOGLE > Google, whereas Ikea ≲ IKEA). Thus, during sentence reading, both the actual letter case and the typical letter case of brand names interact dynamically, posing problems for abstractionist models of reading. |
Marianna Kyriacou; Franziska Köder The cognitive underpinnings of irony comprehension: Fluid intelligence but not working memory modulates processing Journal Article In: Applied Psycholinguistics, vol. 45, pp. 1219–1250, 2025. @article{Kyriacou2025, The comprehension of irony involves a sophisticated inferential process requiring language users to go beyond the literal meaning of an utterance. Because of its complex nature, we hypothesized that working memory (WM) and fluid intelligence, the two main components of executive attention, would be involved in the understanding of irony: the former by maintaining focus and relevant information active during processing, the latter by disengaging irrelevant information and offering better problem-solving skills. In this eye-tracking reading experiment, we investigated how adults (N = 57) process verbal irony, based on their executive attention skills. The results indicated a null (or indirect) effect for WM, while fluid intelligence directly modulated the comprehension and processing of irony during reading. As fluid intelligence is an important individual-difference variable, the findings pave the way for future research on developmental and clinical populations who tend to struggle with nonliteral language. |
Jens Kürten; Christina Breil; Roxana Pittig; Lynn Huestegge; Anne Böckler How eccentricity modulates attention capture by direct face/gaze and sudden onset motion Journal Article In: Attention, Perception, & Psychophysics, pp. 1–13, 2025. @article{Kuerten2025, We investigated how processing benefits for direct face/gaze and sudden onset motion depend on stimulus presentation location, specifically eccentricity from fixation. Participants responded to targets that were presented on one of four stimuli that displayed a direct or averted face and gaze either statically or suddenly. Between participants, stimuli were presented at different eccentricities relative to central fixation, spanning 3.3°, 4.3°, 5.5° or 6.5° of the visual field. Replicating previous studies, we found processing advantages for direct (vs. averted) face/gaze and motion onset (vs. static stimuli). Critically, while the motion-onset advantage increased with increasing distance to the center, the face/gaze direction advantage was not significantly modulated by target eccentricity. Results from a control experiment with eye tracking indicate that face/gaze direction could be accurately discriminated even at the largest eccentricity. These findings demonstrate a distinction between the processing of basic facial and gaze signals and exogenous motion cues, which may be based on functional differences between central and peripheral retinal regions. Moreover, the results highlight the importance of taking specific stimulus properties into account when studying perception and attention in the periphery. |
Sharif I. Kronemer; Victoria E. Gobo; Catherine R. Walsh; Joshua B. Teves; Diana C. Burk; Somayeh Shahsavarani; Javier Gonzalez-Castillo; Peter A. Bandettini Cross‑species real‑time detection of trends in pupil size fluctuation Journal Article In: Behavior Research Methods, vol. 57, no. 1, pp. 1–14, 2025. @article{Kronemer2025, Pupillometry is a popular method because pupil size is easily measured and sensitive to central neural activity linked to behavior, cognition, emotion, and perception. Currently, there is no method for online monitoring phases of pupil size fluctuation. We introduce rtPupilPhase—an open-source software that automatically detects trends in pupil size in real time. This tool enables novel applications of real-time pupillometry for achieving numerous research and translational goals. We validated the performance of rtPupilPhase on human, rodent, and monkey pupil data, and we propose future implementations of real-time pupillometry. |
Aylin König; Uwe Thomas; Frank Bremmer; Stefan Dowiasch Quantitative comparison of a mobile, tablet‑based eye‑tracker and two stationary, video‑based eye‑trackers Journal Article In: Behavior Research Methods, vol. 57, no. 1, pp. 1–19, 2025. @article{Koenig2025a, The analysis of eye movements is a noninvasive, reliable and fast method to detect and quantify brain (dys)function. Here, we investigated the performance of two novel eye-trackers—the Thomas Oculus Motus-research mobile (TOM-rm) and the TOM-research stationary (TOM-rs)—and compared them with the performance of a well-established video-based eye-tracker, i.e., the EyeLink 1000 Plus (EL). The TOM-rm is a fully integrated, tablet-based mobile device that presents visual stimuli and records head-unrestrained eye movements at 30 Hz without additional infrared illumination. The TOM-rs is a stationary, video-based eye-tracker that records eye movements at either high spatial or high temporal resolution. We compared the performance of all three eye-trackers in two different behavioral tasks: pro- and anti-saccade and free viewing. We collected data from 30 human subjects while running all three eye-tracking devices in parallel. Parameters requiring a high spatial or temporal resolution (e.g., saccade latency or gain), as derived from the data, differed significantly between the EL and the TOM-rm in both tasks. Differences between results derived from the TOM-rs and the EL were most likely due to experimental conditions, which could not be optimized for both systems simultaneously. We conclude that the TOM-rm can be used for measuring basic eye-movement parameters, such as the error rate in a typical pro- and anti-saccade task, or the number and position of fixations in a visual foraging task, reliably at comparably low spatial and temporal resolution. The TOM-rs, on the other hand, can provide high-resolution oculomotor data at least on a par with an established reference system. |
Zhiming Kong; Chen Chen; Jianrong Jia Pupil responds spontaneously to visuospatial regularity Journal Article In: Journal of Vision, vol. 25, no. 1, pp. 1–10, 2025. @article{Kong2025, Beyond the light reflex, the pupil responds to various high-level cognitive processes. Multiple statistical regularities of stimuli have been found to modulate the pupillary response. However, most studies have used auditory or visual temporal sequences as stimuli, and it is unknown whether the pupil size is modulated by statistical regularity in the spatial arrangement of stimuli. In three experiments, we created perceived regular and irregular stimuli, matching physical regularity, to investigate the effect of spatial regularity on pupillary responses during passive viewing. Experiments using orientation (Experiments 1 and 2) and size (Experiment 3) as stimuli consistently showed that perceived irregular stimuli elicited more pupil constriction than regular stimuli. Furthermore, this effect was independent of the luminance of the stimuli. In conclusion, our study revealed that the pupil responds spontaneously to perceived visuospatial regularity, extending the stimulus regularity that influences the pupillary response into the visuospatial domain. |
Lua Koenig; Biyu J. He 2025. @book{Koenig2025, Perceptual awareness results from an intricate interaction between external sensory input and the brain's spontaneous activity. Pre-stimulus ongoing activity influencing conscious perception includes both brain oscillations in the alpha (7 to 14 Hz) and beta (14 to 30 Hz) frequency ranges and aperiodic activity in the slow cortical potential (SCP, <5 Hz) range. However, whether brain oscillations and SCPs independently influence conscious perception or do so through shared mechanisms remains unknown. Here, we addressed this question in 2 independent magnetoencephalography (MEG) data sets involving near-threshold visual perception tasks in humans using low-level (Gabor patches) and high-level (objects, faces, houses, animals) stimuli, respectively. We found that oscillatory power and large-scale SCP activity influence conscious perception through independent mechanisms that do not have shared variance. In addition, through mediation analysis, we show that pre-stimulus oscillatory power and SCP activity have different relations to pupil size-an index of arousal-in their influences on conscious perception. Together, these findings suggest that oscillatory power and SCPs independently contribute to perceptual awareness, with distinct relations to pupil-linked arousal. |
Anna R. Knippenberg; Sabrina Yavari; Gregory P. Strauss Negative auditory hallucinations are associated with increased activation of the defensive motivational system in schizophrenia Journal Article In: Schizophrenia Research: Cognition, vol. 39, pp. 1–6, 2025. @article{Knippenberg2025, Auditory hallucinations (AH) are the most common symptom of psychosis. The voices people hear sometimes make comments that are benign or even encouraging, but most often the voices are threatening and derogatory. Negative AH are often highly distressing and contribute to suicide risk and violent behavior. Biological mechanisms underlying the valence of voices (i.e., positive, negative, neutral) are not well delineated. In the current study, we examined whether AH voice valence was associated with increased activation of the Defensive Motivational System, as indexed by central and autonomic nervous system response to unpleasant stimuli. Data were evaluated from two studies that used a common symptom rating instrument, the Psychotic Symptom Rating Scale (PSY-RATS), to measure AH valence. Participants included outpatients diagnosed with schizophrenia (SZ). Tasks included: Study 1: Trier Social Stress Task while heart rate was recorded via electrocardiography (N = 27); Study 2: Passive Viewing Task while participants were exposed to pleasant, unpleasant, and neutral images from the International Affective Picture System (IAPS) library while eye movements, pupil dilation, and electroencephalography were recorded (N = 25). Results indicated that negative voice content was significantly associated with: 1) increased heart rate during an acute social stressor, 2) increased pupil dilation to unpleasant images, 3) higher neural reactivity to unpleasant images, and 4) a greater likelihood of having bottom-up attention drawn to unpleasant stimuli. Findings suggest that negative AH are associated with greater Defensive Motivational System activation in terms of central and autonomic nervous system response. |
Michaela Klímová; Ilona M. Bloem; Sam Ling How does orientation-tuned normalization spread across the visual field? Journal Article In: Journal of Neurophysiology, vol. 133, no. 2, pp. 539–546, 2025. @article{Klimova2025, Visuocortical responses are regulated by gain control mechanisms, giving rise to fundamental neural and perceptual phenomena such as surround suppression. Suppression strength, determined by the composition and relative properties of stimuli, controls the strength of neural responses in early visual cortex, and in turn, the subjective salience of the visual stimulus. Notably, suppression strength is modulated by feature similarity; for instance, responses to a center-surround stimulus in which the components are collinear to each other are weaker than when they are orthogonal. However, this feature-tuned aspect of normalization, and how it may affect the gain of responses, has been understudied. Here, we examine the contribution of the tuned component of suppression to contrast response modulations across the visual field. To do so, we used functional magnetic resonance imaging (fMRI) to measure contrast response functions (CRFs) in early visual cortex (areas V1–V3) in 10 observers while they viewed full-field center-surround gratings. The center stimulus varied in contrast between 2.67% and 96% and was surrounded by a collinear or orthogonal surround at full contrast. We found substantially stronger suppression of responses when the surround was parallel to the center, manifesting as shifts in the population CRF. The magnitude of the CRF shift was strongly dependent on voxel spatial preference and seen primarily in voxels whose receptive field spatial preference corresponds to the area straddling the center-surround boundary in our display, with little-to-no modulation elsewhere. |
Leor N. Katz; Martin O. Bohlen; Gongchen Yu; Carlos Mejias-Aponte; Marc A. Sommer; Richard J. Krauzlis Optogenetic manipulation of covert attention in the nonhuman primate Journal Article In: Journal of Cognitive Neuroscience, vol. 37, no. 2, pp. 266–285, 2025. @article{Katz2025, Optogenetics affords new opportunities to interrogate neuronal circuits that control behavior. In primates, the usefulness of optogenetics in studying cognitive functions remains a challenge. The technique has been successfully wielded, but behavioral effects have been demonstrated primarily for sensorimotor processes. Here, we tested whether brief optogenetic suppression of primate superior colliculus can change performance in a covert attention task, in addition to previously reported optogenetic effects on saccadic eye movements. We used an attention task that required the monkey to detect and report a stimulus change at a cued location via joystick release, while ignoring changes at an uncued location. When the cued location was positioned in the response fields of transduced neurons in the superior colliculus, transient light delivery coincident with the stimulus change disrupted the monkey's detection performance, significantly lowering hit rates. When the cued location was elsewhere, hit rates were unaltered, indicating that the effect was spatially specific and not a motor deficit. Hit rates for trials with only one stimulus were also unaltered, indicating that the effect depended on selection among distractors rather than a low-level visual impairment. Psychophysical analysis revealed that optogenetic suppression increased perceptual threshold, but only for locations matching the transduced site. These data show that optogenetic manipulations can cause brief and spatially specific deficits in covert attention, independent of sensorimotor functions. 
This dissociation of effects, together with the temporal precision afforded by the technique, demonstrates the utility of optogenetics for interrogating neuronal circuits that mediate cognitive functions in the primate. |
Dmytro Katrychuk; Dillon J. Lohr; Oleg V. Komogortsev Oculomotor plant mathematical model in Kalman filter form with peak velocity-based neural pulse for continuous gaze prediction Journal Article In: IEEE Access, vol. 13, pp. 11544–11559, 2025. @article{Katrychuk2025, An oculomotor plant mathematical model (OPMM) employs physical and neurological characteristics of the human visual system to define its dynamics. One of its most prominent applications in modern eye-tracking pipelines was hypothesized to be latency reduction by means of eye movement prediction. However, this use case was only explored with OPMMs originally designed for saccade simulation. Such models typically relied on the neural pulse control being estimated from the intended saccade amplitude - a property that becomes fully observed only after a saccade has already ended, which greatly limits the model's prediction capabilities. We present the first OPMM designed with the prediction task in mind. We draw our inspiration from the "peak velocity - amplitude" main sequence relationship and propose to use a saccade's peak velocity for neural pulse estimation. We additionally extend the prior work by evaluating the proposed model on the largest pool to date, 322 subjects, against the naive zero-displacement baseline and a long short-term memory (LSTM) neural network. |
Juliano Setsuo Violin Kanamota; Gerson Yukio Tomanari; William J. McIlvane Tracking eye fixations during stimulus generalization tests Journal Article In: Psychological Record, pp. 1–10, 2025. @article{Kanamota2025, In the analysis of operant behavior, there is little empirical research on the relationship between observing responses and primary stimulus generalization. This work aimed to investigate eye fixations when S+ and S- dimensions were varied on generalization tests. Ten university students participated. Their training consisted of a MULT VI 1 s EXT schedule followed by a MULT VI 2 s EXT schedule. Discriminative stimuli were three Gabor line tilts. S+ and S- had 45º and 135º slopes, respectively. After participants achieved discrimination indices of 75%, generalization tests in extinction began. There were two different conditions: (1) S+ was replaced by stimuli with angles of 15º, 30º, 45º, 60º, and 75º (five participants). (2) S- was replaced by 105º, 120º, 135º, 150º, and 165º (five participants). In both training and tests, eye tracking equipment recorded observing responses defined as visual fixations. S+ variations yielded sharp observing response gradients. However, S- variations yielded flattened, bell-shaped, and U-shaped observing response gradients. These data contribute to the limited information on human observing during tests of primary stimulus generalization. The study provides a methodology for accomplishing a more complete characterization of behavioral processes that may be operative when normally capable adults are exposed to variations in S+ and S- on generalization tasks. |
Tristan Jurkiewicz; Audrey Vialatte; Yaffa Yeshurun; Laure Pisella Attentional modulation of peripheral pointing hypometria in healthy participants: An insight into optic ataxia? Journal Article In: Neuropsychologia, vol. 208, pp. 1–12, 2025. @article{Jurkiewicz2025, Damage to the superior parietal lobule and intraparietal sulcus (SPL-IPS) causes optic ataxia (OA), characterized by pathological gaze-centered hypometric pointing to targets in the affected peripheral visual field. The SPL-IPS is also involved in covert attention. Here, we investigated the possible link between attention and action by examining the effect of attention on pointing performance in healthy participants and two OA patients. In invalid trials, targets appeared unpredictably across different visual fields and eccentricities. Valid trials involved cued targets at specific locations. The first experiment used a central cue with 75% validity, the second a peripheral cue with 50% validity. An effect of attention on pointing variability (noise) or time was expected as a confirmation of cueing efficiency. Critically, if OA reflects an attentional deficit, then healthy participants in the invalid condition (without attention) were expected to produce the gaze-centered hypometric pointing bias characteristic of OA. Results revealed main effects of validity on pointing biases in all participants with central predictive cueing, but not with peripheral low-predictive cueing. This suggests that the typical underestimation of visual eccentricity in OA (visual field effect) at least partially results from impaired endogenous attention orientation toward the affected visual field. |
Yu Cin Jian; Leo Yuk Ting Cheung Prediction of text-and-diagram reading comprehension by eye-movement indicators: A longitudinal study in elementary schools Journal Article In: European Journal of Psychology of Education, vol. 40, no. 1, pp. 1–25, 2025. @article{Jian2025, Eye-movement technology has often been used to examine reading processes, but research has seldom examined the relationship between the reading process and comprehension performance, or whether these relationships are similar or different across grades. To investigate this, we conducted a 3-year longitudinal study starting at grade 4, with 175 effective samples, to track developmental eye-movement data on text-and-diagram reading. A series of temporal and spatial eye-movement predictors were identified to predict reading comprehension in various grades. A hierarchical regression model established that total fixation duration measures (reflecting processing level) and mean fixation duration (reflecting decoding efficiency) were relatively better predictors of the post-reading tests at grades 5 and 6. That is, readers who made more mental effort and had better decoding ability obtained better post-reading test scores. Meanwhile, in grades 5 and 6, rereading total fixation duration on both the main text and diagrams consistently predicted the post-reading tests, indicating that readers' self-regulated study time spent re-processing specific information is important for reading comprehension. In addition, longitudinal structural equation modeling (SEM) revealed that readers' fixation durations and text-and-diagram regression counts in fourth grade significantly predicted the same indicators in the following 2 years. In summary, this study identified critical eye-movement indicators for predicting reading-test performance, and these predictions were more effective for readers in the upper grades than for those in the lower grades. |
Gianna Jeyarajan; Lian Buwadi; Azar Ayaz; Lindsay S. Nagamatsu; Denait Haile; Liye Zou; Matthew Heath Passive and active exercise do not mitigate mental fatigue during a sustained vigilance task Journal Article In: Experimental Brain Research, vol. 243, no. 1, pp. 1–13, 2025. @article{Jeyarajan2025, Executive function (EF) is improved following a single bout of exercise and impaired when an individual experiences mental fatigue (MF). These performance outcomes have been linked to a bi-directional change in cerebral blood flow (CBF). Here, we sought to determine whether MF-induced by a sustained vigilance task (i.e., psychomotor vigilance task: PVT) is mitigated when preceded by a single bout of exercise. Participants completed 20-min single bouts of active exercise (cycle ergometry involving volitional muscle activation), passive exercise (cycle ergometry involving a mechanical flywheel) and a non-exercise control intervention. EF was assessed pre- and post-intervention via the antisaccade task. Following each intervention, a 20-min PVT was completed to induce and assess MF, and transcranial Doppler ultrasound of middle cerebral artery velocity (MCAv) was used to estimate intervention- and PVT-based changes in CBF. Active and passive exercise provided a post-intervention reduction in antisaccade reaction times; that is, exercise benefitted EF. Notably, however, frequentist and Bayesian statistics indicated the EF benefit did not mitigate MF during the PVT. As well, although exercise (active and passive) and the PVT respectively increased and decreased CBF, these changes were not correlated with behavioral measures of EF or MF. Accordingly, a postexercise EF benefit does not mitigate MF during a sustained vigilance task and a bi-directional change in CBF does not serve as a primary mechanism associated with EF and MF changes. Such results provide a framework for future work to explore how different exercise types, intensities and durations may impact MF. |
Ivan Iotzov; Lucas C. Parra Effects of noise and reward on pupil size and electroencephalographic speech tracking in a word-detection task Journal Article In: European Journal of Neuroscience, vol. 61, pp. 1–12, 2025. @article{Iotzov2025, Speech is hard to understand when there is background noise. Speech intelligibility and listening effort both affect our ability to understand speech, but the relative contribution of these factors is hard to disentangle. Previous studies suggest that speech intelligibility could be assessed with EEG speech tracking and listening effort via pupil size. However, these measures may be confounded, because poor intelligibility may require a larger effort. To address this, we developed a novel word-detection paradigm that allows for a rapid behavioural assessment of speech processing. In this paradigm, words appear on the screen during continuous speech, similar to closed captioning. In two listening experiments with a total of 51 participants, we manipulated intelligibility by changing signal-to-noise ratios (SNRs) and modulated effort by varying monetary reward. Increasing SNR improved detection performance along with EEG speech tracking. Additionally, we find that pupil size increases with increased SNR. Surprisingly, when we modulated both reward and SNR, we found that reward modulated only pupil size, whereas SNR modulated only EEG speech tracking. We interpret this as the effects of arousal and listening effort on pupil size and of intelligibility on EEG speech tracking. The experimental paradigm |
Juyoen Hur; Rachael M. Tillman; Hyung Cho Kim; Paige Didier; Allegra S. Anderson; Samiha Islam; Melissa D. Stockbridge; Andres De Los Reyes; Kathryn A. DeYoung; Jason F. Smith; Alexander J. Shackman In: Journal of Psychopathology and Clinical Science, vol. 134, no. 1, pp. 41–56, 2025. @article{Hur2025, Social anxiety-which typically emerges in adolescence-lies on a continuum and, when extreme, can be devastating. Socially anxious individuals are prone to heightened fear, anxiety, and the avoidance of contexts associated with potential social scrutiny. Yet most neuroimaging research has focused on acute social threat. Much less attention has been devoted to understanding the neural systems recruited during the uncertain anticipation of potential encounters with social threat. Here we used a novel fMRI paradigm to probe the neural circuitry engaged during the anticipation and acute presentation of threatening faces and voices in a racially diverse sample of 66 adolescents selectively recruited to encompass a range of social anxiety and enriched for clinically significant levels of distress and impairment. Results demonstrated that adolescents with more severe social anxiety symptoms experience heightened distress when anticipating encounters with social threat, and reduced discrimination of uncertain social threat and safety in the bed nucleus of the stria terminalis (BST), a key division of the central extended amygdala (EAc). Although the EAc-including the BST and central nucleus of the amygdala-was robustly engaged by the acute presentation of threatening faces and voices, the degree of EAc engagement was unrelated to the severity of social anxiety. 
Together, these observations provide a neurobiologically grounded framework for conceptualizing adolescent social anxiety and set the stage for the kinds of prospective-longitudinal and mechanistic research that will be necessary to determine causation and, ultimately, to develop improved interventions for this often-debilitating illness. |
Qian Huangfu; Qianmei He; Sisi Luo; Weilin Huang; Yahua Yang Does teacher enthusiasm facilitate students' chemistry learning in video lectures regardless of students' prior chemistry knowledge levels? Journal Article In: Journal of Computer Assisted Learning, vol. 41, no. 1, pp. 1–14, 2025. @article{Huangfu2025, Background: Video lectures which include the teachers' presence have become increasingly common. As teacher enthusiasm is a nonverbal cue in video lectures, more and more studies are focusing on this topic. However, little research has been carried out on the interactions between teacher enthusiasm and prior knowledge when learning from video lectures. Objectives: We tested whether prior chemistry knowledge moderated the impact of teacher enthusiasm on students' chemistry learning during video lectures. Methods: One hundred and forty-two Grade 7 (low prior chemistry knowledge) and Grade 9 (high prior chemistry knowledge) Chinese students participated in this research. Each group of students was randomised into viewing a video lecture with either a low or high degree of teacher enthusiasm. Outcomes were assessed by attention allocation, learning performance, cognitive load, learning satisfaction and student engagement. Results and Conclusions: Our findings revealed significant benefits of teacher enthusiasm and also showed that prior chemistry knowledge could moderate the impact of teacher enthusiasm on students' attention and cognitive outcomes (cognitive load and transfer). Visual attention mediated the effects on transfer. For students with low prior knowledge, there was more focus on the learning content, lower extraneous cognitive load, and higher transfer scores when watching videos with high levels of teacher enthusiasm; however, students with high prior knowledge did not show these differences. In addition, high prior chemistry knowledge had a significant beneficial impact on the motivational outcomes of the students (satisfaction and engagement). 
Implications: The results suggest that teacher enthusiasm in a video lecture may affect students' chemistry learning, and students' prior chemistry knowledge should be considered when choosing whether to display teacher enthusiasm. |
Lingshan Huang The cognitive processing of nouns and verbs in second language reading: An eye-tracking study Journal Article In: Linguistics Vanguard, no. 288, pp. 1–11, 2025. @article{Huang2025a, This study explores the cognitive processing of nouns and verbs in second language (L2) reading, aiming to investigate the potential differences and their effects on comprehension performance. Twenty-five Chinese students read an English text while their eye movements were recorded. A reading comprehension test evaluated the participants' L2 reading comprehension performance. The results reveal a significant difference in total reading time between nouns and verbs. Additionally, total reading time, gaze duration, and the number of fixations on both nouns and verbs are negatively correlated with L2 reading comprehension performance. These findings suggest that while the initial processing mechanisms of nouns and verbs may be similar, they diverge in late stages of processing. |
Jinghua Huang; Mingyan Wang; Ting Zhang; Dongliang Zhang; Yi Zhou; Lujin Mao; Mengyao Qi Investigating the effect of emoji position on eye movements and subjective evaluations on Chinese sarcasm comprehension Journal Article In: Ergonomics, vol. 68, no. 2, pp. 251–266, 2025. @article{Huang2025, Evidence indicated that emojis could influence sarcasm comprehension and sentence processing in English. However, the effect of emojis on Chinese sarcasm comprehension remains unclear. Therefore, this study investigated the impact of the smiley emoji position and semantics on eye movements and subjective assessments during Chinese online communication. Our results showed that the presence of a smiley emoji improved participants' interpretation and perception of sarcasm. We also found shorter dwell times on sarcastic words compared to literal words under the comment-final emoji condition. Additionally, we clarified the time course of emojified sentence processing during Chinese reading: the presence of emoji initially decreased first fixation durations compared to the absence of emoji and then the comment-final emoji shortened dwell times on sarcastic words compared to literal words in the critical area of interest. Our findings suggested that the comment-final emoji was the preferable choice for avoiding semantic comprehension bias in China. |
Ignace T. C. Hooge; Roy S. Hessels; Diederick C. Niehorster; Richard Andersson; Marta K. Skrok; Robert Konklewski; Patrycjusz Stremplewski; Maciej Nowakowski; Szymon Tamborski; Anna Szkulmowska; Maciej Szkulmowski; Marcus Nyström Eye tracker calibration: How well can humans refixate a target? Journal Article In: Behavior Research Methods, vol. 57, no. 1, pp. 1–10, 2025. @article{Hooge2025, Irrespective of the precision, the inaccuracy of a pupil-based eye tracker is about 0.5°. This paper delves into two factors that potentially increase the inaccuracy of the gaze signal, namely: (1) pupil-size changes and the pupil-size artefact (PSA), and (2) the putative inability of experienced individuals to precisely refixate a visual target. Experiment 1 utilizes a traditional pupil-CR eye tracker, while Experiment 2 employs a retinal eye tracker, the FreezeEye tracker, eliminating the pupil-based estimation. Results reveal that the PSA significantly affects gaze accuracy, introducing up to 0.5° inaccuracies during calibration and validation. Corrections based on the relation between pupil size and apparent gaze shift substantially reduce inaccuracies, underscoring the PSA's influence on eye-tracking quality. Conversely, Experiment 2 demonstrates humans' precise refixation abilities, suggesting that the accuracy of the gaze signal is not limited by human refixation inconsistencies. |
Jessica Heeman; Brian J. White; Stefan Van der Stigchel; Jan Theeuwes; Laurent Itti; Douglas P. Munoz Saliency response in superior colliculus at the future saccade goal predicts fixation duration during free viewing of dynamic scenes Journal Article In: The Journal of Neuroscience, vol. 45, no. 3, pp. 1–10, 2025. @article{Heeman2025, Eye movements in daily life occur in rapid succession and often without a predefined goal. Using a free viewing task, we examined how fixation duration prior to a saccade correlates to visual saliency and neuronal activity in the superior colliculus (SC) at the saccade goal. Rhesus monkeys (three male) watched videos of natural, dynamic, scenes while eye movements were tracked and, simultaneously, neurons were recorded in the superficial and intermediate layers of the superior colliculus (SCs and SCi, respectively), a midbrain structure closely associated with gaze, attention, and saliency coding. Saccades that were directed into the neuron's receptive field (RF) were extrapolated from the data. To interpret the complex visual input, saliency at the RF location was computed during the pre-saccadic fixation period using a computational saliency model. We analyzed if visual saliency and neural activity at the saccade goal predicted pre-saccadic fixation duration. We report three major findings: (1) Saliency at the saccade goal inversely correlated with fixation duration, with motion and edge information being the strongest predictors. (2) SC visual saliency responses in both SCs and SCi were inversely related to fixation duration. (3) SCs neurons, and not SCi neurons, showed higher activation for two consecutive short fixations, suggestive of concurrent saccade processing during free viewing. 
These results reveal a close correspondence between visual saliency, SC processing, and the timing of saccade initiation during free viewing and are discussed in relation to their implication for understanding saccade initiation during real-world gaze behavior. |
Tobias Hausinger; Björn Probst; Stefan Hawelka; Belinda Pletzer Own‑gender bias in facial feature recognition yields sex differences in holistic face processing Journal Article In: Biology of Sex Differences, vol. 16, no. 14, pp. 1–15, 2025. @article{Hausinger2025, Introduction: Female observers in their luteal cycle phase exhibit a bias towards a detail-oriented rather than global visuospatial processing style that is well-documented across cognitive domains such as pattern recognition, navigation, and object location memory. Holistic face processing involves an integration of global patterns and local parts into a cohesive percept and might thus be susceptible to the influence of sex- and cycle-related processing styles. This study aims to investigate potential sex differences in the part-whole effect as a measure of holistic face processing and explores possible relationships with sex hormone levels. Methods: 147 participants (74 male, 51 luteal, 22 non-luteal) performed a part-whole face recognition task while being controlled for cycle phase and sex hormone status. Eye tracking was used for fixation control and recording of fixation patterns. Results: We found significant sex differences in the part-whole effect between male and luteal phase female participants. In particular, this sex difference was based on luteal phase participants exhibiting higher face part recognition accuracy than male participants. This advantage was exclusively observed for stimulus faces of women. Exploratory analyses further suggest a similar advantage of luteal compared to non-luteal participants, but no significant difference between non-luteal and male participants. Furthermore, testosterone emerged as a possible mediator for the observed sex differences. Conclusion: Our results suggest a possible modulation of face encoding and/or recognition by sex and hormone status. 
Moreover, the established own-gender bias in face recognition, that is, the female advantage in recognizing faces of the same gender, might be based on more accurate representations of face parts. Plain English summary: In this study, participants were required to recognize a previously encountered face from one of two options. The correct face and the distractor face differed only in one face part, that is, either the eyes, nose, or mouth. When participants were presented only with the respective face parts instead of complete faces, female participants in their luteal cycle phase were more accurate in recognizing these parts than male participants. This advantage was observed only when female participants had to recognize face parts of women. Since previous studies have shown a female advantage in utilizing detail information, for instance when processing local features within a global pattern or memorizing the location of features on a map, our findings fit well with the existing literature. Moreover, previous findings of better female recognition of women's faces may be attributed to enhanced memory for individual face parts. |
Jiaxu Han; Catharine E. Fairbairn; Walter James Venerable; Sarah Brown-Schmidt; Talia Ariss Examining social attention as a predictor of problem drinking behavior: A longitudinal study using eye-tracking Journal Article In: Alcohol, Clinical and Experimental Research, no. October 2024, pp. 153–164, 2025. @article{Han2025, Background: Researchers have long been interested in identifying objective markers for problem drinking susceptibility informed by the environments in which individuals drink. However, little is known of objective cognitive-behavioral indices relevant to the social contexts in which alcohol is typically consumed. Combining group-based alcohol administration, eye-tracking technology, and longitudinal follow-up over a 2-year span, the current study examined the role of social attention in predicting patterns of problem drinking over time. Methods: Young heavy drinkers (N = 246) were randomly assigned to consume either an alcoholic (target BAC 0.08%) or a control beverage in dyads comprising friends or strangers. Dyads completed a virtual video call in which half of the screen comprised a view of themselves (“self-view”) and half a view of their interaction partner (“other-view”). Participants' gaze behaviors, operationalized as the proportion of time spent looking at “self-view” and “other-view,” were tracked throughout the call. Problem drinking was assessed at the time of the laboratory visit and then every year subsequent for 2 years. Results: Significant interactions emerged between beverage condition and social attention in predicting binge drinking days. In cross-sectional analyses, among participants assigned to the control (but not alcohol) group, heightened self-focused attention was linked with increased binge days at baseline |
Elizabeth H. Hall; Joy J. Geng Object-based attention during scene perception elicits boundary contraction in memory Journal Article In: Memory & Cognition, vol. 53, no. 1, pp. 6–18, 2025. @article{Hall2025, Boundary contraction and extension are two types of scene transformations that occur in memory. In extension, viewers extrapolate information beyond the edges of the image, whereas in contraction, viewers forget information near the edges. Recent work suggests that image composition influences the direction and magnitude of boundary transformation. We hypothesize that selective attention at encoding is an important driver of boundary transformation effects, with selective attention to specific objects at encoding leading to boundary contraction. In this study, one group of participants (N = 36) memorized 15 scenes while searching for targets, while a separate group (N = 36) simply memorized the scenes. Both groups then drew the scenes from memory with as much object and spatial detail as they could remember. We asked online workers to rate boundary transformations in the drawings, as well as how many objects they contained and the precision of remembered object size and location. We found that search-condition drawings showed significantly greater boundary contraction than drawings of the same scenes in the memorize condition. Search drawings were significantly more likely to contain target objects, and the likelihood of recalling other objects in the scene decreased as a function of their distance from the target. These findings suggest that selective attention to a specific object due to a search task at encoding leads to significant boundary contraction. |
Maha Habibi; Brian C. Coe; Donald C. Brien; Jeff Huang; Heidi C. Riek; Frank Bremmer; Lars Timmermann; Annette Janzen; Wolfgang H. Oertel; Douglas P. Munoz Saccade, pupil, and blink abnormalities in prodromal and manifest Journal Article In: Journal of Parkinson's Disease, pp. 1–11, 2025. @article{Habibi2025, Background: Saccade, pupil, and blink control are impaired in patients with α-synucleinopathies (αSYN): Parkinson's disease (PD) and multiple system atrophy (MSA). Isolated REM (rapid eye movement) Sleep Behavior Disorder (iRBD) is a prodromal stage of PD and MSA and a prime candidate for investigating early oculo-pupillo-motor abnormalities that may precede or predict conversion to clinically manifest αSYN. Objective: Determine whether saccade, pupil, and blink responses in iRBD are normal or similar to those identified in PD and MSA. Methods: Video-based eye-tracking was conducted with 68 patients with iRBD, 49 with PD, 17 with MSA, and 95 healthy controls (CTRL) performing an interleaved pro-/anti-saccade task that probed sensory, motor, and cognitive processes involved in eye movement control. Results: Horizontal saccade and blink behavior was intact in iRBD, but abnormal in PD and MSA. iRBD patients, however, demonstrated reduced pupil dilation size, which closely resembled the changes found in PD and MSA. In the iRBD group, the extent of these pupillary changes appeared to correlate with the degree of hyposmia and reduction in dopamine transporter imaging signal. Conclusions: Pupil abnormalities were present in iRBD, but blink and horizontal saccade responses were intact. Future longitudinal studies are required to determine which prodromal pupil abnormalities predict conversion from iRBD to PD or MSA and to identify the time window, in relation to conversion, when horizontal saccade responses become abnormal. |
Julian Gutzeit; Lynn Huestegge The impact of the degree of action voluntariness on sense of agency in saccades Journal Article In: Consciousness and Cognition, vol. 127, pp. 1–15, 2025. @article{Gutzeit2025, Experiencing a sense of agency (SoA), the feeling of being in control over one's actions and their outcomes, typically requires intentional and voluntary actions. Prior research has compared the association of voluntary versus completely involuntary actions with the SoA. Here, we leveraged unique characteristics of oculomotor actions to partially manipulate the degree of action voluntariness. Participants performed either highly automatized prosaccades or highly controlled (voluntary) anti-saccades, triggering a gaze-contingent visual action effect. We assessed explicit SoA ratings and temporal action and effect binding as an implicit SoA measure. Anti-saccades were associated with stronger action binding compared to prosaccades, demonstrating a robust association between higher action voluntariness and a stronger implicit sense of action agency. We conclude that our manipulation of action voluntariness may have impacted the implicit phenomenological feeling of bodily agency, but it did not affect the SoA over effect outcomes or explicit agency perception. |
Magdalena Gruner; Andreas Widmann; Stefan Wöhner; Erich Schröger; Jörg D. Jescheniak Semantic context effects in picture and sound naming: Evidence from event-related potentials and pupillometric data Journal Article In: Journal of cognitive neuroscience, vol. 37, no. 2, pp. 443–463, 2025. @article{Gruner2025, When a picture is repeatedly named in the context of semantically related pictures (homogeneous context), responses are slower than when the picture is repeatedly named in the context of unrelated pictures (heterogeneous context). This semantic interference effect in blocked-cyclic naming plays an important role in devising theories of word production. Wöhner, Mädebach, and Jescheniak [Wöhner, S., Mädebach, A., & Jescheniak, J. D. Naming pictures and sounds: Stimulus type affects semantic context effects. Journal of Experimental Psychology: Human Perception and Performance, 47, 716-730, 2021] have shown that the effect is substantially larger when participants name environmental sounds than when they name pictures. We investigated possible reasons for this difference, using EEG and pupillometry. The behavioral data replicated Wöhner and colleagues. ERPs were more positive in the homogeneous compared with the heterogeneous context over central electrode locations between 140 and 180 msec and between 250 and 350 msec for picture naming and between 250 and 350 msec for sound naming, presumably reflecting semantic interference during semantic and lexical processing. The later component was of similar size for pictures and sounds. ERPs were more negative in the homogeneous compared with the heterogeneous context over frontal electrode locations between 400 and 600 msec only for sounds. The pupillometric data showed a stronger pupil dilation in the homogeneous compared with the heterogeneous context only for sounds. The amplitudes of the late ERP negativity and pupil dilation predicted naming latencies for sounds in the homogeneous context. 
The latency of the effects indicates that the difference in semantic interference between picture and sound naming arises at later, presumably postlexical processing stages closer to articulation. We suggest that the processing of the auditory stimuli interferes with phonological response preparation and self-monitoring, leading to enhanced semantic interference. |
Xizi Gong; Tao He; Qian Wang; Junshi Lu; Fang Fang Time course of orientation ensemble representation in the human brain Journal Article In: The Journal of Neuroscience, vol. 45, no. 7, pp. 1–13, 2025. @article{Gong2025, Natural scenes are filled with groups of similar items. Humans employ ensemble coding to extract the summary statistical information of the environment, thereby enhancing the efficiency of information processing, something particularly useful when observing natural scenes. However, the neural mechanisms underlying the representation of ensemble information in the brain remain elusive. In particular, whether ensemble representation results from the mere summation of individual item representations or it engages other specific processes remains unclear. In this study, we utilized a set of orientation ensembles wherein none of the individual item orientations were the same as the ensemble orientation. We recorded magnetoencephalography (MEG) signals from human participants (both sexes) when they performed an ensemble orientation discrimination task. Time-resolved multivariate pattern analysis (MVPA) and the inverted encoding model (IEM) were employed to unravel the neural mechanisms of the ensemble orientation representation and track its time course. First, we achieved successful decoding of the ensemble orientation, with a high correlation between the decoding and behavioral accuracies. Second, the IEM analysis demonstrated that the representation of the ensemble orientation differed from the sum of the representations of individual item orientations, suggesting that ensemble coding could further modulate orientation representation in the brain. Moreover, using source reconstruction, we showed that the representation of ensemble orientation manifested in early visual areas. 
Taken together, our findings reveal the emergence of the ensemble representation in the human visual cortex and advance the understanding of how the brain captures and represents ensemble information. |
Helena Ghorbani; Gülcenur Özturan; Andrea Albonico; Jason J. S. Barton Reading words versus seeing font or handwriting style: A study of hemifield processing Journal Article In: Experimental Brain Research, vol. 243, no. 2, pp. 1–9, 2025. @article{Ghorbani2025, Tachistoscopic studies have established a right field advantage for the perception of visually presented words, which has been interpreted as reflecting a left hemispheric specialization. However, it is not clear whether this is driven by the linguistic task of word processing, or also occurs when processing properties such as the style and regularity of text. We had 23 subjects perform a tachistoscopic study while they viewed five-letter words in either computer font or handwriting. The task in one block was to respond if the word in the peripheral field matched a word just seen in the central field. In a second block with the same stimuli, the task was to respond if the style (handwriting or font) matched. We found a main effect of task: there was a right-field advantage for reading the word, but no field advantage for reporting the style of text. There was no effect of stimulus type and no interaction between task and stimulus type. We conclude that the field advantage for processing text is driven by the task, being specific to processing the identity of the word and not the perception of the style of the text. We did not find evidence to support prior assertions that the type of text and its regularity influenced the field advantage during the word-reading task. |
Laurie Galas; Ian Donovan; Laura Dugué Attention rhythmically shapes sensory tuning Journal Article In: The Journal of Neuroscience, vol. 45, no. 7, pp. 1–11, 2025. @article{Galas2025, Attention is key to perception and human behavior, and evidence shows that it periodically samples sensory information (<20 Hz). However, this view has been recently challenged due to methodological concerns and gaps in our understanding of the function and mechanism of rhythmic attention. Here we used an intensive ∼22 h psychophysical protocol combined with reverse correlation analyses to infer the neural representation underlying these rhythms. Participants (male/female) performed a task in which covert spatial (sustained and exploratory) attention was manipulated and then probed at various delays. Our results show that sustained and exploratory attention periodically modulate perception via different neural computations. While sustained attention suppresses distracting stimulus features at the alpha (∼12 Hz) frequency, exploratory attention increases the gain around task-relevant stimulus feature at the theta (∼6 Hz) frequency. These findings reveal that both modes of rhythmic attention differentially shape sensory tuning, expanding the current understanding of the rhythmic sampling theory of attention. |
Zuzanna Fuchs; Olga Parshina; Irina A. Sekerina; Maria Polinsky Processing of verbal versus adjectival agreement: Implications for syntax and psycholinguistics Journal Article In: Glossa: a journal of general linguistics, vol. 10, no. 1, pp. 1–35, 2025. @article{Fuchs2025, Linguistic theories distinguish between external and internal agreement (e.g., noun-verb agreement vs. noun-modifier agreement, the latter also known as concord) and model them using different mechanisms. While this distinction has garnered considerable attention in syntactic theory, it remains largely unexplored in experimental work. In an effort to address this gap, we conducted two studies of external/internal agreement in Russian using self-paced reading and eye-tracking while reading. We measured the response to violations generated when native speakers encounter a noun that mismatches the features on an earlier element inflected for agreement (verb, modifying adjective, and predicative adjective). Both experimental studies found strong effects of ungrammaticality: participants were sensitive to agreement mismatches between the agreeing element and the trigger. However, there was no interaction observed between the effect of grammaticality and the type of agreeing element, suggesting that, while participants are sensitive to mismatches, the processing of the mismatches does not differ between external and internal agreement. Despite the cross-methodological replication of the null interaction effect, interpreting this result is necessarily tentative. We discuss possible implications, should the result be further replicated by future high-powered studies. We suggest that this outcome may indicate that differences in real-time processing of internal vs. external agreement may not be observable in time-course measures, or that the lack of such differences constitutes support for analyses of agreement as a two-step process, with one step in syntax, and the other, post-syntactic. We invite future work to 
test these hypotheses further. |
Elana J. Forbes; Jeggan Tiego; Joshua Langmead; Kathryn E. Unruh; Matthew W. Mosconi; Amy Finlay; Kathryn Kallady; Lydia Maclachlan; Mia Moses; Kai Cappel; Rachael Knott; Tracey Chau; Vishnu Priya Mohanakumar Sindhu; Alessio Bellato; Madeleine J. Groom; Rebecca Kerestes; Mark A. Bellgrove; Beth P. Johnson Oculomotor function in children and adolescents with autism, ADHD or co-occurring autism and ADHD Journal Article In: Journal of Autism and Developmental Disorders, pp. 1–17, 2025. @article{Forbes2025, Oculomotor characteristics, including accuracy, timing, and sensorimotor processing, are considered sensitive intermediate phenotypes for understanding the etiology of neurodevelopmental conditions, such as autism and ADHD. Oculomotor characteristics have predominantly been studied separately in autism and ADHD. Despite the high rates of co-occurrence between these conditions, only one study has investigated oculomotor processes among those with co-occurring autism + ADHD. Four hundred and five (n = 405; 226 males) Australian children and adolescents aged 4 to 18 years (M = 9.64 years; SD = 3.20 years) with ADHD (n = 64), autism (n = 66), autism + ADHD (n = 146), or neurotypical individuals (n = 129) were compared across four different oculomotor tasks: visually guided saccade, anti-saccade, sinusoidal pursuit and step-ramp pursuit. Confirmatory analyses were conducted using separate datasets acquired from the University of Nottingham UK (n = 17 autism |
Lara Fontana; Javier Albayay; Letizia Zurlo; Viola Ciliberto; Massimiliano Zampini Olfactory modulation of visual attention and preference towards congruent food products: An eye tracking study Journal Article In: Food Quality and Preference, vol. 124, pp. 1–11, 2025. @article{Fontana2025, In multisensory environments, odours often accompany visual stimuli, directing attention towards congruent objects. While previous research shows that people fixate longer on objects that match a recently smelled odour, it remains unclear whether odours directly influence product choices. Since odours persist in real-world settings, we investigated the effects of repeated odour exposure on visual attention and product choice, accounting for potential olfactory habituation. In a within-participant design, 30 participants completed a task where either a lemon odour (experimental condition) or clean air (control) was paired with congruent lemon-based food images, which varied to prevent visual habituation. We measured eye movements and choice preferences for these food products. Results revealed that participants exhibited longer gaze durations and more frequent fixations on food products congruent with the lemon odour. Repeated odour exposure had no effect on gaze patterns, as participants consistently focused on odour-congruent products throughout the experiment. The intensity and pleasantness of the lemon odour remained stable over time, suggesting no olfactory habituation occurred with this food-related odour. Despite this stable visual attention and odour intensity and pleasantness, participants began to diversify their product choices, selecting fewer odour-congruent items over time. These findings suggest that while odours continue to direct attention towards matching products, repeated exposure may reduce their influence on product choice, highlighting the complex role of olfactory stimuli in decision-making. 
The study provides insights into how odours interact with visual cues and influence consumer behaviour in prolonged exposure scenarios. |
Nico A. Flierman; Sue Ann Koay; Willem S. Hoogstraten; Tom J. H. Ruigrok; Pieter Roelfsema; Aleksandra Badura; Chris I. De Zeeuw Encoding of cerebellar dentate neuron activity during visual attention in rhesus macaques Journal Article In: eLife, vol. 13, pp. 1–23, 2025. @article{Flierman2025, The role of cerebellum in controlling eye movements is well established, but its contribution to more complex forms of visual behavior has remained elusive. To study cerebellar activity during visual attention we recorded extracellular activity of dentate nucleus (DN) neurons in two non-human primates (NHPs). NHPs were trained to read the direction indicated by a peripheral visual stimulus while maintaining fixation at the center, and report the direction of the cue by performing a saccadic eye movement into the same direction following a delay. We found that single-unit DN neurons modulated spiking activity over the entire time course of the task, and that their activity often bridged temporally separated intra-trial events, yet in a heterogeneous manner. To better understand the heterogeneous relationship between task structure, behavioral performance, and neural dynamics, we constructed a behavioral, an encoding, and a decoding model. Both NHPs showed different behavioral strategies, which influenced the performance. Activity of the DN neurons reflected the unique strategies, with the direction of the visual stimulus frequently being encoded long before an upcoming saccade. Moreover, the latency of the ramping activity of DN neurons following presentation of the visual stimulus was shorter in the better performing NHP. 
Labeling with the retrograde tracer Cholera Toxin B in the recording location in the DN indicated that these neurons predominantly receive inputs from Purkinje cells in the D1 and D2 zones of the lateral cerebellum as well as neurons of the principal olive and medial pons, all regions known to connect with neurons in the prefrontal cortex contributing to planning of saccades. Together, our results highlight that DN neurons can dynamically modulate their activity during a visual attention task, comprising not only sensorimotor but also cognitive attentional components. |
Leigh B. Fernandez; Muzna Shehzad; Lauren V. Hadley Younger adults may be faster at making semantic predictions, but older adults are more efficient Journal Article In: Psychology and Aging, pp. 1–8, 2025. @article{Fernandez2025, While there is strong evidence that younger adults use contextual information to generate semantic predictions, findings from older adults are less clear. Age affects cognition in a variety of different ways that may impact prediction mechanisms; while the efficiency of memory systems and processing speed decrease, life experience leads to complementary increases in vocabulary size, real-world knowledge, and even inhibitory control. Using the visual world paradigm, we tested prediction in younger (n = 30, between 18 and 35 years of age) and older adults (n = 30, between 53 and 78 years of age). Importantly, we differentiated early stage predictions based on simple spreading activation from the more resource-intensive tailoring of predictions when additional constraining information is provided. We found that older adults were slower than younger adults in generating early stage predictions but then quicker than younger adults to tailor those predictions given additional information. This suggests that while age may lead to delays in first activating relevant lexical items when listening to speech, increased linguistic experience nonetheless increases the efficiency with which contextual information is used. These findings are consistent with reports of age having positive as well as negative impacts on cognition and suggest conflation of different stages of prediction as a basis for the inconsistency in the aging-related literature to date. |
Alejandro J Estudillo Exploring the role of foveal and extrafoveal processing in emotion recognition: A gaze-contingent study Journal Article In: Behavioral Sciences, vol. 15, pp. 1–11, 2025. @article{Estudillo2025, Although the eye-tracking technique has been widely used to passively study emotion recognition, no studies have utilised this technique to actively manipulate eye-gaze strategies during the recognition of facial emotions. The present study aims to fill this gap by employing a gaze-contingent paradigm. Observers were asked to determine the emotion displayed by centrally presented upright or inverted faces. Under the window condition, only a single fixated facial feature was available at a time, only allowing for foveal processing. Under the mask condition, the fixated facial feature was masked while the rest of the face remained visible, thereby disrupting foveal processing but allowing for extrafoveal processing. These conditions were compared with a full-view condition. The results revealed that while both foveal and extrafoveal information typically contribute to emotion identification, at a standard conversation distance, the latter alone generally suffices for efficient emotion identification. |
Lea Entzmann; Arni Gunnar Asgeirsson; Arni Kristjansson How does color distribution learning affect goal-directed visuomotor behavior? Journal Article In: Cognition, vol. 254, pp. 1–14, 2025. @article{Entzmann2025, While the visual world is rich and complex, importantly, it nevertheless contains many statistical regularities. For example, environmental feature distributions tend to remain relatively stable from one moment to the next. Recent findings have shown how observers can learn surprising details of environmental color distributions, even when the colors belong to actively ignored stimuli such as distractors in visual search. Our aim was to determine whether such learning influences orienting in the visual environment, measured with saccadic eye movements. In two visual search experiments, observers had to find an odd-one-out target. Firstly, we tested cases where observers selected targets by fixating them. Secondly, we measured saccadic eye movements when observers made judgments on the target and responded manually. Trials were structured in blocks, containing learning trials where distractors came from the same color distribution (uniform or Gaussian) while on subsequent test trials, the target was at different distances from the mean of the learning distractor distribution. For both manual and saccadic measures, performance improved throughout the learning trials and was better when the distractor colors came from a Gaussian distribution. Moreover, saccade latencies during test trials depended on the distance between the color of the current target and the distractors on learning trials, replicating results obtained with manual responses. Latencies were slowed when the target color was within the learning distractor color distribution and also revealed that observers learned the difference between uniform and Gaussian distributions. 
The importance of several variables in predicting saccadic and manual reaction times was studied using random forests, revealing similar rankings for both modalities, although previous distractor color had a higher impact on free eye movements. Overall, our results demonstrate learning of detailed characteristics of environmental color distributions that affects early attentional selection rather than later decisional processes. |
Thomas W. Elston; Joni D. Wallis Context-dependent decision-making in the primate hippocampal–prefrontal circuit Journal Article In: Nature Neuroscience, vol. 28, pp. 374–382, 2025. @article{Elston2025, What is good in one scenario may be bad in another. Despite the ubiquity of such contextual reasoning in everyday choice, how the brain flexibly uses different valuation schemes across contexts remains unknown. We addressed this question by monitoring neural activity from the hippocampus (HPC) and orbitofrontal cortex (OFC) of two monkeys performing a state-dependent choice task. We found that HPC neurons encoded state information as it became available and then, at the time of choice, relayed this information to the OFC via theta synchronization. During choice, the OFC represented value in a state-dependent manner; many OFC neurons uniquely coded for value in only one state but not the other. This suggests a functional dissociation whereby the HPC encodes contextual information that is broadcast to the OFC via theta synchronization to select a state-appropriate value subcircuit, thereby allowing for contextual reasoning in value-based choice. |
Joshua O. Eayrs; Haya Serena Tobing; S. Tabitha Steendam; Nicoleta Prutean; Wim Notebaert; Jan R. Wiersema; Ruth M. Krebs; C. Nico Boehler Reward and efficacy modulate the rate of anticipatory pupil dilation Journal Article In: Psychophysiology, vol. 62, no. 1, pp. 1–12, 2025. @article{Eayrs2025, Pupil size is a well-established marker of cognitive effort, with greater efforts leading to larger pupils. This is particularly true for pupil size during task performance, whereas findings on anticipatory effort triggered by a cue stimulus are less consistent. For example, a recent report by Frömer et al. found that in a cued-Stroop task, behavioral performance and electrophysiological markers of preparatory effort allocation were modulated by cued reward and ‘efficacy’ (the degree to which rewards depended on good performance), but pupil size did not show a comparable pattern. Here, we conceptually replicated this study, employing an alternative approach to the pupillometry analyses. In line with previous findings, we found no modulation of absolute pupil size in the cue-to-target interval. Instead, we observed a significant difference in the rate of pupil dilation in anticipation of the target: pupils dilated more rapidly for high-reward trials in which rewards depended on good performance. This was followed by a significant difference in absolute pupil size within the first hundreds of milliseconds following Stroop stimulus onset, likely reflecting a lagging effect of anticipatory effort allocation. Finally, the slope of pupil dilation was significantly correlated with behavioral response times, and this association was strongest for the high-reward, high-efficacy trials, further supporting that the rate of anticipatory pupil dilation reflects anticipatory effort. We conclude that pupil size is modulated by anticipatory effort, but in a highly temporally-specific manner, which is best reflected by the rate of dilation in the moments just prior to stimulus onset. |
S. Duschek; T. Rainer; P. Piwkowski; J. Vorwerk; L. Riml; U. Ettinger Neural correlates of proactive and reactive control investigated using a novel precued antisaccade paradigm Journal Article In: Psychophysiology, vol. 62, pp. 1–17, 2025. @article{Duschek2025, This ERP study investigated central nervous correlates of proactive and reactive control using a novel precued antisaccade paradigm. Proactive control refers to preparatory processes during anticipation of a behaviorally relevant event; reactive control is activated after such an event to ensure goal attainment. A 64-channel EEG was obtained in 35 subjects; video-based eye tracking was applied for ocular recording. In the task, a target (probe) appeared left or right of the fixation point 1800ms after a visual cue; subjects had to move their gaze to the probe (prosaccade) or its mirror image position (antisaccade). Probes were emotional face expressions; their frame colors instructed task requirements. The cue informed about antisaccade probability (70% vs. 30%) in a trial. High antisaccade probability was associated with larger CNV amplitude than low antisaccade probability. In trials with incongruence between expected and actual task requirements, the probe N2 and P3a amplitudes were larger than in congruent trials. The P3a was smaller for affective than neutral probes. Task accuracy and speed were lower in incongruent trials and varied according to affective probe valence. EEG source imaging suggested origin of the ERPs in the orbitofrontal cortex and superior frontal gyrus. The difference for the CNV indicates greater cortical activity during higher proactive control demands. The larger probe N2 and P3a in incongruent trials reflect greater resource allocation to conflict monitoring and conflict resolution, i.e., reactive control. The influence of probe valence on the P3a suggests reduction of processing capacity due to higher emotional arousal. |
Alenka Doyle; Kamilla Volkova; Nicholas Crotty; Nicole Massa; Michael A. Grubb Information-driven attentional capture Journal Article In: Attention, Perception, & Psychophysics, pp. 1–7, 2025. @article{Doyle2025, Visual attention, the selective prioritization of sensory information, is crucial in dynamic, information-rich environments. That both internal goals and external salience modulate the allocation of attention is well established. However, recent empirical work has found instances of experience-driven attention, wherein task-irrelevant, physically non-salient stimuli reflexively capture attention in ways that are contingent on an observer's unique history. The prototypical example of experience-driven attention relies on a history of reward associations, with evidence attributing the phenomenon to reward-prediction errors. However, a mechanistic account, differing from the reward-prediction error hypothesis, is needed to explain how, in the absence of monetary reward, a history of target-seeking leads to attentional capture. Here we propose that what drives attentional capture in such cases is not target-seeking, but an association with instrumental information. To test this hypothesis, we used pre-cues to render the information provided by a search target either instrumental or redundant. We found that task-irrelevant, physically non-salient distractors associated with instrumental information were more likely to draw eye movements (a sensitive metric of information sampling) than were distractors associated with redundant information. Furthermore, saccading to an instrumental-information-associated distractor led to a greater behavioral cost: response times were slowed more severely. Crucially, the distractors had equivalent histories as sought targets, so any attentional differences between them must be due to different information histories resulting from our experimental manipulation. 
These findings provide strong evidence for the information history hypothesis and offer a method for exploring the neural signature of information-driven attentional capture. |
Sydney Doré; Jonathan Coutinho; Aarlenne Z. Khan; Philippe Lefèvre; Gunnar Blohm Latency and amplitude of catch-up saccades to accelerating targets Journal Article In: Journal of Neurophysiology, vol. 133, no. 1, pp. 3–13, 2025. @article{Dore2025, To track moving targets, humans move their eyes using both saccades and smooth pursuit. If pursuit eye movements fail to accurately track the moving target, catch-up saccades are initiated to rectify the tracking error. It is well known that retinal position and velocity errors determine saccade latency and amplitude, but the extent to which retinal acceleration error influences these aspects is not well quantified. To test this, 13 adult human participants performed an experiment where they pursued accelerating/decelerating targets. During the ongoing pursuit, we introduced a randomly sized target step to evoke a catch-up saccade and analyzed its latency and amplitude. We observed that retinal acceleration error (computed over a 200 ms range centered 100 ms before the saccade) was a statistically significant predictor of saccade amplitude and latency. A multiple linear regression supported our hypothesis that retinal acceleration errors influence saccade amplitude in addition to the influence of retinal position and velocity errors. We also found that saccade latencies were shorter when retinal acceleration error increased the tracking error and vice versa. In summary, our findings support a model in which retinal acceleration error is used to compute a predicted position error ~100 ms into the future to trigger saccades and determine saccade amplitude. |
Gregory J. DiGirolamo; Federico Sorcini; Zachary Zaniewski; Jonathan B. Kruskal; Max P. Rosen In: Radiology, vol. 314, no. 2, pp. 1–7, 2025. @article{DiGirolamo2025, Background: Diagnostic error rates for detecting small lung nodules on chest CT scans remain high at 50%, despite advances in imaging technology and radiologist training. These failure rates may stem from limitations in conscious recognition processes. However, successful visual processes may be detecting the nodule independent of the radiologist's report. Purpose: To investigate visual processing in radiologists during the assessment of chest nodules to determine if radiologists have successful non-conscious processes that detect lung nodules on chest CT scans even when not consciously recognized or considered, as evidenced by changes in how long they look (dwell time) and pupil size to missed nodules. Materials and Methods: This prospective study, conducted from August 2014 to September 2023, compared six experienced radiologists with six medically naive control participants. Participants viewed 18 chest CT scans (nine abnormal with 16 nodules, nine normal) to detect lung nodules. High-speed video eye tracking measured gaze duration and pupil size (indicating physiological arousal) at missed nodule locations and the same locations on normal CT scans. The reference standard was the known presence or absence of nodules (as determined by a four-radiologist consensus panel) on abnormal and normal CT scans, respectively. Primary outcome measures were detection rates of nodules, and dwell time and pupil size at nodule locations versus normal tissue. Paired t tests were used for statistical analysis. Results: Twelve participants (six radiologists with an average of 9.3 years of radiologic experience and six controls with no radiologic experience) performed the evaluations. Radiologists missed on average 59% (9.5 of 16) of these lung nodules. 
For the missed nodules, radiologists exhibited longer dwell times (mean, 228 msec vs 175 msec; P = .005) and larger pupil size (mean, 1446 pixels vs 1349 pixels; P = .04) than for normal tissue. Control participants showed no differences in dwell time (mean, 197 msec vs 180 msec; P = .64) or pupil size (mean, 1426 pixels vs 1714 pixels; P = .23) for missed nodules versus normal tissue locations. Conclusion: Radiologists' non-conscious processes during visual assessment of CT scans can detect lung nodules on chest CT scans even when conscious recognition fails, as evidenced by increased dwell time and larger pupil size. This successful non-conscious detection is a result of general radiology training. |
Nathan Didier; Dingcai Cao; Andrea C. King The eyes have it: Alcohol-induced eye movement impairment and perceived impairment in older adults with and without alcohol use disorder Journal Article In: Alcohol: Clinical and Experimental Research, no. November, pp. 1–11, 2025. @article{Didier2025, Background: While alcohol has been shown to impair eye movements in young adults, little is known about alcohol-induced oculomotor impairment in older adults with longer histories of alcohol use. Here, we examined whether older adults with chronic alcohol use disorder (AUD) exhibit more acute tolerance than age-matched light drinkers (LD), evidenced by less alcohol-induced oculomotor impairment and perceived impairment. Method: Two random-order, double-blinded laboratory sessions with administration of alcohol (0.8 g/kg) or placebo. Participants (n = 117; 55 AUD, 62 LD) were 40–65 years of age. Eye tracking outcomes (pupil size, smooth pursuit gain, pro- and anti-saccadic velocity, latency, and accuracy) were measured at baseline and repeated at peak and declining breath alcohol intervals. Participants rated their perceived impairment during rising and declining intervals. Results: Following alcohol consumption, older adults with AUD (vs. LD) showed less impairment on smooth pursuit gain and reported lower perceived impairment, but both groups showed similar pupil dilation and impairment on saccadic measures. Conclusions: While alcohol impaired older adults with AUD less than LD in terms of their ability to track a predictably moving object (i.e., smooth pursuit), both drinking groups were equally sensitive to alcohol-induced delays in reaction time, reductions in velocity, and deficits in accuracy to randomly appearing objects (i.e., saccade tasks). Thus, despite decades of chronic excessive drinking, older adults with AUD exhibited similar oculomotor tolerance on pro- and anti-saccade eye movements relative to their light-drinking counterparts. 
Given that these individuals also perceived less impairment during intoxication, they may be at risk for injury and harm when they engage in real-life drinking bouts. |
Sean Devine; Y. Doug Dong; Martin Sellier Silva; Mathieu Roy; A. Ross Otto Increased attention towards progress information near a goal state Journal Article In: Psychonomic Bulletin & Review, pp. 1–9, 2025. @article{Devine2025, A growing body of evidence across psychology suggests that (cognitive) effort exertion increases in proximity to a goal state. For instance, previous work has shown that participants respond more quickly, but not less accurately, when they near a goal—as indicated by a filling progress bar. Yet it remains unclear when, over the course of a cognitively demanding task, people monitor progress information: Do they continuously monitor their goal progress over the course of a task, or attend more frequently to it as they near their goal? To answer this question, we used eye-tracking to examine trial-by-trial changes in progress monitoring as participants completed blocks of an attentionally demanding oddball task. Replicating past work, we found that participants increased cognitive effort exertion near a goal, as evinced by an increase in correct responses per second. More interestingly, we found that the rate at which participants attended to goal progress information—operationalized here as the frequency of gazes towards a progress bar—increased steeply near a goal state. In other words, participants extracted information from the progress bar at a higher rate when goals were proximal (versus distal). In exploratory analysis of tonic pupil diameter, we also found that tonic pupil size increased sharply as participants approached a goal state, mirroring the pattern of gaze. These results support the view that people attend to progress information more as they approach a goal. |
Jack Dempsey; Anna Tsiola; Nigel Bosch; Kiel Christianson; Mallory Stites Eye-movement indices of reading while debugging Python source code Journal Article In: Journal of Cognitive Psychology, vol. 37, no. 2, pp. 89–107, 2025. @article{Dempsey2025, Unlike text reading, the eye-movement behaviours associated with reading Python, a computer programming language, are largely understudied through a psycholinguistic lens. A general understanding of the eye movements involved in reading while troubleshooting Python, and how these behaviours compare to proofreading text, is critical for developing educational interventions and interactive tools for helping programmers debug their code. These data may also highlight to what extent humans use their underlying text reading ability when reading source code. The current work provides a profile of global reading behaviours associated with reading Python source code for debugging purposes. To this end, we recorded experienced programmers' eye movements while they determined whether 21 different Python functions would produce the desired output, an incorrect output, or an error message. Some reading behaviours seem to mirror those found in text reading (e.g. effects of stimulus complexity), while others may be specific to reading code. Results suggest that semantic errors that produce undesired outputs in programming source code may influence early stages of processing, likely due to the largely top-down strategy employed by experienced programmers when reading source code. The findings are framed to invigorate discussion and further exploration into psycholinguistic analysis of human source code reading. |
Edan Daniel-Hertz; Jewelia K. Yao; Sidney Gregorek; Patricia M. Hoyos; Jesse Gomez An eccentricity gradient reversal across high-level visual cortex Journal Article In: The Journal of Neuroscience, vol. 45, no. 2, pp. 1–14, 2025. @article{DanielHertz2025, Human visual cortex contains regions selectively involved in perceiving and recognizing ecologically important visual stimuli such as people and places. Located in the ventral temporal lobe, these regions are organized consistently relative to cortical folding, a phenomenon thought to be inherited from how centrally or peripherally these stimuli are viewed with the retina. While this eccentricity theory of visual cortex has been one of the best descriptions of its functional organization, whether or not it accurately describes visual processing in all category-selective regions is not yet clear. Through a combination of behavioral and functional MRI measurements in 27 participants (17 females), we demonstrate that a limb-selective region neighboring well-studied face-selective regions shows tuning for the visual periphery in a cortical region originally thought to be centrally biased. We demonstrate that the spatial computations performed by the limb-selective region are consistent with visual experience and in doing so, make the novel observation that there may in fact be two eccentricity gradients, forming an eccentricity reversal across high-level visual cortex. These data expand the current theory of cortical organization to provide a unifying principle that explains the broad functional features of many visual regions, showing that viewing experience interacts with innate wiring principles to drive the location of cortical specialization. |
Jiahong Cui; Wenbo Yu; Lei Hu; Yuxuan Wang; Zhihan Liu The effect of transcranial random noise stimulation (tRNS) over bilateral parietal cortex in visual cross-modal conflicts Journal Article In: Scientific Reports, vol. 15, no. 1, pp. 1–12, 2025. @article{Cui2025, In complex sensory environments, visual cross-modal conflicts often affect auditory performance. The inferior parietal cortex (IPC) is involved in processing visual conflicts, namely when cognitive control processes such as inhibitory control and working memory are required. This study investigated the effect of bilateral IPC tRNS on reducing visual cross-modal conflicts and explored whether its efficacy is dependent on the conflict type. Forty-four young adults were randomly allocated to receive either active tRNS (100–640 Hz, 2-mA for 20 min) or sham stimulation. Participants repeatedly performed tasks in three phases: before, during, and after stimulation. Results showed that tRNS significantly enhanced task accuracy across both semantic and non-semantic conflicts compared to sham, as well as a greater benefit in semantic conflict after stimulation. Correlation analyses indicated that individuals with lower baseline performance benefited more from active tRNS during stimulation in the non-semantic conflict task. There were no significant differences between groups in reaction time for each conflict type task. These findings provide important evidence for the use of tRNS in reducing visual cross-modal conflicts, particularly in suppressing semantic distractors, and highlight the critical role of bilateral IPC in modulating visual cross-modal conflicts. |
Gabriela Cruz; María Melcón; Leonardo Sutandi; Matias M. Palva; Satu Palva; Gregor Thut Oscillatory brain activity in the canonical alpha-band conceals distinct mechanisms in attention Journal Article In: The Journal of Neuroscience, vol. 45, no. 1, pp. 1–17, 2025. @article{Cruz2025, Brain oscillations in the alpha-band (8-14 Hz) have been linked to specific processes in attention and perception. In particular, decreases in posterior alpha-amplitude are thought to reflect activation of perceptually relevant brain areas for target engagement, while alpha-amplitude increases have been associated with inhibition for distractor suppression. Traditionally, these alpha-changes have been viewed as two facets of the same process. However, recent evidence calls for revisiting this interpretation. Here, we recorded MEG/EEG in 32 participants (19 females) during covert visuospatial attention shifts (spatial cues) and two control conditions (neutral cue, no-attention cue), while tracking fixational eye movements. In disagreement with a single, perceptually relevant alpha-process, we found the typical alpha-modulations contra- and ipsilateral to the attention focus to be triple dissociated in their timing, topography, and spectral features: Ipsilateral alpha-increases occurred early, over occipital sensors, at a high alpha-frequency (10–14 Hz) and were expressed during spatial attention (alpha spatial cue > neutral cue). In contrast, contralateral alpha-decreases occurred later, over parietal sensors, at a lower alpha-frequency (7–10 Hz) and were associated with attention deployment in general (alpha spatial and neutral cue < no-attention cue). Additionally, the lateralized early alpha-increases but not alpha-decreases during spatial attention coincided in time with directionally biased microsaccades. 
Overall, this suggests that the attention-related early alpha-increases and late alpha-decreases reflect distinct, likely reflexive versus endogenously controlled attention mechanisms. We conclude that there is more than one perceptually relevant posterior alpha-oscillation, which need to be dissociated for a detailed account of their roles in perception and attention. |
Sarah C. Creel Connecting the tots: Strong looking-pointing correlations in preschoolers' word learning and implications for continuity in language development Journal Article In: Child Development, vol. 96, pp. 87–103, 2025. @article{Creel2025, How does one assess developmental change when the measures themselves change with development? Most developmental studies of word learning use either looking (infants) or pointing (preschoolers and older). With little empirical evidence of the relationship between the two measures, developmental change is difficult to assess. This paper analyzes 914 pointing, looking children (451 female, varied ethnicities, 2.5–6.5 years, dates: 2009–2019) in 36 word- or sound-learning experiments with two-alternative test trials. Looking proportions and pointing accuracy correlated strongly (r = .7). Counter to the “looks first” hypothesis, looks were not sensitive to incipient knowledge that pointing missed: when pointing is at chance, looking proportions are also. Results suggest one possible path forward for assessing continuous developmental change. Methodological best practices are discussed. |
Arianna Compostella; Marta Tagliani; Maria Vender; Denis Delfitto 2025. @book{Compostella2025, In this study, we examine how implicit statistical learning (ISL) interacts with the cognitive bias of the alternation advantage in serial reaction time (SRT) tasks. Our aim was to disentangle perceptual from motor aspects of learning, as well as to shed light on the cognitive sources of this alternation effect. We developed a manual (Study 1) and an oculomotor (Study 2) two-choice SRT task, with visual stimuli following the regularities of two binary artificial grammars (Fibonacci and its modification Skip). While these grammars share some deterministic transitional regularities, they differ in their probabilistic transitional regularities and distributional properties. The pattern of manual RTs in Study 1 provides evidence for ISL, showing that subjects learned the deterministic and probabilistic transitions in the two grammars. We also found a bias toward alternation (vs. repetition) at non-deterministic points, regardless of their statistical properties in the grammars. Study 2 provides further evidence for both ISL and the alternation advantage, in terms of shorter manual RTs and higher accuracy rates of anticipatory eye movements. Saccadic responses preceding stimulus onset allow us to argue for the perceptual nature of ISL: participants detected regularities in the string by forming S-S associations based on the sequence of the perceived stimuli. Moreover, we propose that shifts in visuospatial attention preceding oculomotor programming play a role in the occurrence of the alternation advantage, and that such an effect is driven by the spatial location of the stimulus. These findings are also discussed with respect to the presence of two (possibly interacting) parsing strategies: statistical generalizations on the string vs. local hierarchical reconstruction. |
Alasdair D. F. Clarke; Amelia R. Hunt Learn more from your data with asymptotic regression Journal Article In: Journal of Experimental Psychology: General, pp. 1–18, 2025. @article{Clarke2025, All measures of behavior have a temporal context. Changes in behavior over time often take a similar form: monotonically decreasing or increasing toward an asymptote. Whether these behavioral dynamics are the object of study or a nuisance variable, their inclusion in models of data makes conclusions more complete, robust, and well-specified, and can contribute to theory development. Here, we demonstrate that asymptotic regression is a relatively simple tool that can be applied to repeated-measures data to estimate three parameters: starting point, rate of change, and asymptote. Each of these parameters has a meaningful interpretation in terms of ecological validity, behavioral dynamics, and performance limits, respectively. They can also be used to help decide how many trials to include in an experiment and as a principled approach to reducing noise in data. We demonstrate the broad utility of asymptotic regression for modeling the effect of the passage of time within a single trial and for changes over trials of an experiment, using two existing data sets and a set of new visual search data. An important limit of asymptotic regression is that it cannot be applied to data that are stationary or change nonmonotonically. But for data that have performance changes that progress steadily toward an asymptote, as many behavioral measures do, it is a simple and powerful tool for describing those changes. |
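The three-parameter curve this abstract describes is commonly written as y(t) = asymptote + (start - asymptote) * exp(-rate * t). A minimal sketch of fitting it to repeated-measures data with SciPy, on synthetic response times (all values invented for illustration; this is the general technique, not the authors' code):

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic(t, start, rate, asymptote):
    """Monotonic approach from `start` toward `asymptote` at speed `rate`."""
    return asymptote + (start - asymptote) * np.exp(-rate * t)

# Synthetic "response time over trials" data: begins near 800 ms and
# decays toward a 500 ms asymptote, with measurement noise.
rng = np.random.default_rng(0)
trials = np.arange(60)
rt = asymptotic(trials, 800.0, 0.1, 500.0) + rng.normal(0, 10, trials.size)

# Fit the three interpretable parameters: starting point, rate, asymptote.
(start_hat, rate_hat, asym_hat), _ = curve_fit(
    asymptotic, trials, rt, p0=(rt[0], 0.05, rt[-1])
)
print(round(start_hat), round(asym_hat))
```

The fitted starting point, rate, and asymptote map directly onto the interpretations the abstract gives them: initial performance, speed of change, and the performance limit the data approach.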
Andriana L. Christofalos; Nicole M. Arco; Madison Laks; Heather Sheridan The impact of interword spacing on inference processing during text reading: Evidence from eye movements Journal Article In: Discourse Processes, vol. 62, no. 1, pp. 1–15, 2025. @article{Christofalos2025, Removing interword spacing has been shown to disrupt lower-level oculomotor processes and word identification during text reading. However, the impact of these disruptions on higher-level processes remains unclear. To examine the influence of spacing on inferential processing, we monitored eye movements while participants read spaced and unspaced passages that were strongly or weakly constrained toward an inference. Removing spaces disrupted reading fluency, as evidenced by longer reading times, longer fixation durations, reduced skipping, and shorter saccades. We also observed the effects of inferential constraint for spaced passages as characterized by longer reading times, more regressions, and longer regression-path durations for weakly than strongly constrained passages. However, these constraint effects were absent for unspaced passages, suggesting that removing spaces disrupts inferential processing. Our results are consistent with models of reading and discourse processing that assume that higher-level reading processes depend on the quality of lexical representations developed at earlier, word-level reading stages. |
Jürgen Cholewa; Annika Kirschenkern; Frederike Steinke; Thomas Günther In: Journal of Speech, Language, and Hearing Research, pp. 1–19, 2025. @article{Cholewa2025, Purpose: Predictive language comprehension has become a major topic in psycholinguistic research. The study described in this article aims to investigate if German children with developmental language disorder (DLD) use grammatical gender agreement to predict the continuation of noun phrases in the same way as it has been observed for typically developing (TD) children. The study also seeks to differentiate between specific and general deficits in predictive processing by exploring the anticipatory use of semantic information. Additionally, the research examines whether the processing of gender and semantic information varies with the speed of stimulus presentation. Method: The study included 30 children with DLD (average age = 8.7 years) and 26 TD children (average age = 8.4 years) who participated in a visual-world eye-tracking study. Noun phrases, consisting of an article, an adjective, and a noun, were presented that matched with only one of two target pictures. The phrases contained a gender cue, a semantic cue, a combination of both, or none of these cues. The cues were provided by the article and/or adjective and could be used to identify the target picture before the noun itself was presented. Results: Both groups, TD children and those with DLD, utilized predictive processing strategies in response to gender agreement and semantic information when decoding noun phrases. However, children with DLD were only able to consider gender cues when noun phrases were presented at a slower speech rate, and even then, their predictive certainty remained below the typical level for their age. 
Conclusion: Based on these findings, the article discusses the potential relevance of the prediction framework for explaining comprehension deficits in children with DLD, as well as the clinical implications of the results. |
Jui‐Tai T. Chen; Yi Hsuan Chang; Cesar Barquero; Moeka Mong Jia Teo; Nai Wen Kan; Chin An Wang Microsaccade behavior associated with inhibitory control in athletes in the antisaccade task Journal Article In: Psychology of Sport and Exercise, vol. 78, pp. 1–13, 2025. @article{Chen2025a, The ability to achieve a state of readiness before upcoming tasks, known as a preparatory set, is critical for athletic performance. Here, we investigated these preparatory processes associated with inhibitory control using the anti-saccade paradigm, in which participants are instructed, prior to target appearance, either to automatically look at the target (pro-saccade) or to suppress this automatic response and intentionally look in the opposite direction (anti-saccade). We focused on microsaccadic eye movements that happen before saccade responses in either pro- or anti-saccade tasks, as these microsaccades reflect ongoing preparatory processes during saccade planning before execution. We hypothesized that athletes, compared to non-athletes, would demonstrate better preparation, given research generally indicating higher inhibitory control in athletes. Our findings showed that microsaccade rates decreased before target appearance, with lower rates observed during anti-saccade preparation compared to pro-saccade preparation. However, microsaccade rates and metrics did not differ significantly between athletes and non-athletes. Moreover, reduced microsaccade rates were associated with improved task performance in non-athletes, leading to higher accuracy and faster saccade reaction times (SRTs) in trials without microsaccades. For athletes, only SRTs were affected by microsaccade occurrence. Moreover, the modulation of microsaccadic inhibition on accuracy was more pronounced in non-athletes compared to athletes. In conclusion, while microsaccade responses were modulated by task preparation, differences between athletes and non-athletes were non-significant. 
These findings, for the first time, highlight the potential of using microsaccades as an online objective index to study preparatory sets in sports science research. |
He Chen; Jun Kunimatsu; Tomomichi Oya; Yuri Imaizumi; Yukiko Hori; Masayuki Matsumoto; Yasuhiro Tsubo; Okihide Hikosaka; Takafumi Minamimoto; Yuji Naya; Hiroshi Yamada Formation of brain-wide neural geometry during visual item recognition in monkeys Journal Article In: iScience, vol. 28, no. 3, pp. 1–17, 2025. @article{Chen2025, Neural dynamics are thought to reflect computations that relay and transform information in the brain. Previous studies have identified the neural population dynamics in many individual brain regions as a trajectory geometry, preserving a common computational motif. However, whether these populations share particular geometric patterns across brain-wide neural populations remains unclear. Here, by mapping neural dynamics widely across temporal/frontal/limbic regions in the cortical and subcortical structures of monkeys, we show that 10 neural populations, including 2,500 neurons, propagate visual item information in a stochastic manner. We found that visual inputs predominantly evoked rotational dynamics in the higher-order visual area, TE, and its downstream striatum tail, while curvy/straight dynamics appeared frequently downstream in the orbitofrontal/hippocampal network. These geometric changes were not deterministic but rather stochastic according to their respective emergence rates. Our meta-analysis results indicate that visual information propagates as a heterogeneous mixture of stochastic neural population signals in the brain. |
Yi Hsuan Chang; Rachel Yep; Chin An Wang In: Psychophysiology, vol. 62, pp. 1–22, 2025. @article{Chang2025, Pupil size is a non-invasive index for autonomic arousal mediated by the locus coeruleus–norepinephrine (LC-NE) system. While pupil size and its derivative (velocity) are increasingly used as indicators of arousal, limited research has investigated the relationships between pupil size and other well-known autonomic responses. Here, we simultaneously recorded pupillometry, heart rate, skin conductance, pulse wave amplitude, and respiration signals during an emotional face–word Stroop task, in which task-evoked (phasic) pupil dilation correlates with LC-NE responsivity. We hypothesized that emotional conflict and valence would affect pupil and other autonomic responses, and trial-by-trial correlations between pupil and other autonomic responses would be observed during both tonic and phasic epochs. Larger pupil dilations, higher pupil size derivative, and lower heart rates were observed in the incongruent condition compared to the congruent condition. Additionally, following incongruent trials, the congruency effect was reduced, and arousal levels indexed by previous-trial pupil dilation were correlated with subsequent reaction times. Furthermore, linear mixed models revealed that larger pupil dilations correlated with higher heart rates, higher skin conductance responses, higher respiration amplitudes, and lower pulse wave amplitudes on a trial-by-trial basis. Similar effects were seen between positive and negative valence conditions. Moreover, tonic pupil size before stimulus presentation significantly correlated with all other tonic autonomic responses, whereas tonic pupil size derivative correlated with heart rates and skin conductance responses. 
These results demonstrate a trial-by-trial relationship between pupil dynamics and other autonomic responses, highlighting pupil size as an effective real-time index for autonomic arousal during emotional conflict and valence processing. |
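The trial-by-trial pupil–autonomic correlations reported above were estimated with linear mixed models; the core idea can be sketched with a simpler within-subject-centering analysis on synthetic data (the subject counts, the 8.0 coupling coefficient, and all other numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 20 subjects x 50 trials. Heart rate tracks pupil
# dilation trial-by-trial, on top of a subject-specific baseline.
n_subj, n_trials = 20, 50
baseline = rng.normal(70, 5, n_subj)[:, None]        # per-subject offset (bpm)
pupil = rng.normal(0.5, 0.1, (n_subj, n_trials))     # pupil dilation (mm)
heart = baseline + 8.0 * pupil + rng.normal(0, 0.3, (n_subj, n_trials))

# Within-subject centering removes between-subject baseline differences,
# so the remaining correlation reflects trial-by-trial coupling only.
pupil_c = pupil - pupil.mean(axis=1, keepdims=True)
heart_c = heart - heart.mean(axis=1, keepdims=True)
r = np.corrcoef(pupil_c.ravel(), heart_c.ravel())[0, 1]
print(round(r, 2))
```

A mixed model with random intercepts per subject accomplishes the same separation of between- and within-subject variance more formally, which is presumably why the study used that approach.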
Rita Cersosimo; Paul E. Engelhardt; Leigh Fernandez; Filippo Domaneschi Novel metaphor processing in dyslexia: A visual world eye-tracking study Journal Article In: Discourse Processes, pp. 1–21, 2025. @article{Cersosimo2025, Metaphor comprehension has been investigated in neurodevelopmental disorders, but studies devoted to adults with dyslexia are few and present inconsistent results. The present study sought to investigate how adults with dyslexia process novel metaphors. Individual differences in vocabulary, working memory, and Theory of Mind were also assessed. An online metaphor comprehension task based on the Visual World Paradigm was carried out with eye-tracking. Metaphors and corresponding literal sentences were aurally presented in isolation, and participants were asked to select a picture that best corresponded to the sentence they heard. Our results indicated that participants with dyslexia chose metaphor interpretations at a similar rate as did the control group. However, online processing data indicated generally slower response times, with a particular delay in processing metaphorical utterances. Eye movement analyses provided further insights into the underlying nature of the processing slowdowns, highlighting specific challenges encountered by individuals with dyslexia when interpreting figurative language. |
Dries Cavents; July De Wilde; Jelena Vranjes Towards a multimodal approach for analysing interpreter's management of rapport challenge in onsite and video remote interpreting Journal Article In: Journal of Pragmatics, vol. 235, pp. 220–237, 2025. @article{Cavents2025, Recently, interpreters' management of rapport is increasingly being investigated. Yet little attention has been directed towards the role of the interpreter's non-verbal behaviour when managing rapport and to the influence of video mediated forms of interpreting on the use of non-verbal behaviour. Therefore, this study proposes a multimodal micro-interactional framework for analysing interpreters' management of rapport challenge in both onsite (OSI) and video remote interpreting (VRI) interaction. The paper introduces a multimodal coding scheme based on Spencer-Oatey's Rapport Management Theory (2008), which is then applied to a dataset of video recorded interpreter-mediated interactions to examine how interpreters employ verbal, paraverbal, and non-verbal resources to multimodally address rapport challenge. Data were collected from simulated interactions involving professional public service interpreters and role-players adopting the role of primary participants in a reception centre for asylum seekers. The findings reveal that in OSI interpreters use a wide range of non-verbal resources when conveying rapport challenges, whereas VRI imposes constraints on non-verbal communication, often necessitating more disruptive verbal strategies to manage rapport. The study underscores the importance of a multimodal approach to interpreting research, highlighting how non-verbal behaviours significantly contribute to the management of interpersonal relations in interpreter-mediated talk. |
Yuqing Cai; Christoph Strauch; Stefan Van der Stigchel; Antonia F. Ten Brink; Frans W. Cornelissen; Marnix Naber Mapping simulated visual field defects with movie-viewing pupil perimetry Journal Article In: Graefe's Archive for Clinical and Experimental Ophthalmology, pp. 1–10, 2025. @article{Cai2025, Purpose: Assessing the quality of the visual field is important for the diagnosis of ophthalmic and neurological diseases and, consequently, for rehabilitation. Visual field defects (VFDs) are typically assessed using standard automated perimetry (SAP). However, SAP requires participants to understand instructions, maintain fixation and sustained attention, and provide overt responses. These aspects make SAP less suitable for very young or cognitively impaired populations. Here we investigate the feasibility of a new and less demanding form of perimetry. This method assesses visual sensitivity based on pupil responses while performing the perhaps simplest task imaginable: watching movies. Method: We analyzed an existing dataset, with healthy participants (n = 70) freely watching movies with or without gaze-contingent simulated VFDs, either hemianopia (left- or right-sided) or glaucoma (large nasal arc, small nasal arc, and tunnel vision). Meanwhile, their gaze and pupil size were recorded. Using a recently published toolbox (Open-DPSM), we modeled the relative contribution of visual events to the pupil responses to indicate relative visual sensitivity across the visual field and to dissociate between conditions with and without simulated VFDs. Result: Conditions with and without simulated VFDs could be dissociated, with an AUC ranging from 0.85 to 0.97, depending on the specific simulated VFD condition. In addition, the dissociation was better when more movies were included in the modeling, but a model with as few as 10 movies was sufficient for good classification (AUC ranging from 0.84 to 0.96). 
Conclusion: Movie-viewing pupil perimetry is promising in providing complementary information for the diagnosis of VFDs, especially for those who are unable to perform conventional perimetry. |
Andrew M Burleson; Pamela E Souza The time course of cognitive effort during disrupted speech Journal Article In: Quarterly Journal of Experimental Psychology, pp. 1–18, 2025. @article{Burleson2025, Listeners often find themselves in scenarios where speech is disrupted, misperceived, or otherwise difficult to recognise. In these situations, many individuals report exerting additional effort to understand speech, even when repairing speech may be difficult or impossible. This investigation aimed to characterise cognitive effort across time during both sentence listening and a post-sentence retention interval by observing the pupillary response of participants with normal to borderline-normal hearing in response to two interrupted speech conditions: sentences interrupted by gaps of silence or bursts of noise. The pupillary response serves as a measure of the cumulative resources devoted to task completion. Both interruption conditions resulted in significantly greater levels of pupil dilation than the uninterrupted speech condition. Just prior to the end of a sentence, trials periodically interrupted by bursts of noise elicited greater pupil dilation than the silent-interrupted condition. Compared to the uninterrupted condition, both interruption conditions resulted in increased dilation after sentence end but before repetition, possibly reflecting sustained processing demands. Understanding pupil dilation as a marker of cognitive effort is important for clinicians and researchers when assessing the additional effort exerted by listeners with hearing loss who may use cochlear implants or hearing aids. Even when successful perceptual repair is unlikely, listeners may continue to exert increased effort when processing misperceived speech, which could cause them to miss upcoming speech or may contribute to heightened listening fatigue. |
Laurence Bruggeman; Evan Kidd; Rachel Nordlinger; Anne Cutler Incremental processing in a polysynthetic language (Murrinhpatha) Journal Article In: Cognition, vol. 257, pp. 1–7, 2025. @article{Bruggeman2025, Language processing is rapidly incremental, but evidence bearing upon this assumption comes from very few languages. In this paper we report on a study of incremental processing in Murrinhpatha, a polysynthetic Australian language, which expresses complex sentence-level meanings in a single verb, the full meaning of which is not clear until the final morph. Forty native Murrinhpatha speakers participated in a visual world eyetracking experiment in which they viewed two complex scenes as they heard a verb describing one of the scenes. The scenes were selected so that the verb describing the target scene had either no overlap with a possible description of the competitor image, or overlapped from the start (onset overlap) or at the end of the verb (rhyme overlap). The results showed that, despite meaning only being clear at the end of the verb, Murrinhpatha speakers made incremental predictions that differed across conditions. The findings demonstrate that processing in polysynthetic languages is rapid and incremental, yet unlike in commonly studied languages like English, speakers make parsing predictions based on information associated with bound morphs rather than discrete words. |
Rossella Breveglieri; Riccardo Brandolani; Stefano Diomedi; Markus Lappe; Claudio Galletti; Patrizia Fattori Role of the medial posterior parietal cortex in orchestrating attention and reaching Journal Article In: The Journal of Neuroscience, vol. 45, no. 1, pp. 1–11, 2025. @article{Breveglieri2025, The interplay between attention, alertness, and motor planning is crucial for our manual interactions. To investigate the neural bases of this interaction and challenge the views that attention cannot be disentangled from motor planning, we instructed human volunteers of both sexes to plan and execute reaching movements while attending to the target, while attending elsewhere, or without constraining attention. We recorded reaction times to reach initiation and pupil diameter and interfered with the functions of the medial posterior parietal cortex (mPPC) with online repetitive transcranial magnetic stimulation to test the causal role of this cortical region in the interplay between spatial attention and reaching. We found that mPPC plays a key role in the spatial association of reach planning and covert attention. Moreover, we have found that alertness, measured by pupil size, is a good predictor of the promptness of reach initiation only if we plan a reach to attended targets, and mPPC is causally involved in this coupling. Different from previous understanding, we suggest that mPPC is neither involved in reach planning per se, nor in sustained covert attention in the absence of a reach plan, but it is specifically involved in attention functional to reaching. |
Laurel Brehm; Nora Kennis; Christina Bergmann When is a ranana a banana? Disentangling the mechanisms of error repair and word learning Journal Article In: Language, Cognition and Neuroscience, pp. 1–21, 2025. @article{Brehm2025, When faced with an ambiguous novel word such as ‘ranana', how do listeners decide whether they heard a mispronunciation of a familiar target (‘banana') or a label for an unfamiliar novel item? We examined this question by combining visual-world eye-tracking with an offline forced-choice judgment paradigm. In two studies, we show evidence that participants entertain repair and novel label interpretations of novel words that were created by editing a familiar target word in multiple phonetic features (Experiment 1) or a single phonetic feature (Experiment 2). Repair (‘ranana' = a banana) and learning (‘ranana' = a novel referent) were both common interpretation strategies, and learning was strongly associated with visual attention to the novel image after it was referred to in a sentence. This indicates that repair and learning are both valid strategies for understanding novel words that depend upon a set of similar mechanisms, and suggests that attention during listening is causally related to whether one learns or repairs. |
Martina Bovo; Sebastián Moyano; Giulia Calignano; Eloisa Valenza; María Ángeles Ballesteros-Duperon; María Rosario Rueda The modulating effect of gestational age on attentional disengagement in toddlers Journal Article In: Infant Behavior and Development, vol. 78, pp. 1–12, 2025. @article{Bovo2025, Gestational Age (GA) at birth plays a crucial role in identifying potential vulnerabilities to long-term difficulties in cognitive and behavioral development. The present study aims to explore the influence of gestational age on the efficiency of early visual attention orienting, as a potential marker for the development of specific high-level socio-cognitive skills. We administered the Gap-Overlap task to measure the attentional orienting and disengagement performance of 16-month-olds born between the 34th and 41st weeks of gestation. Our findings indicate that GA might be a significant predictor of attentional disengagement performance, with lower GAs associated with slower orienting of visual attention in the gap condition. Additionally, we discuss a possible influence of endogenous attention control on disengagement accuracy at this age, particularly among full-term infants. Overall, the findings highlight the role of GA as a key factor in evaluating early visual attention development, acting as a marker for detecting early vulnerabilities. |
Floortje G. Bouwkamp; Floris P. Lange; Eelke Spaak Spatial predictive context speeds up visual search by biasing local attentional competition Journal Article In: Journal of Cognitive Neuroscience, vol. 37, no. 1, pp. 28–42, 2025. @article{Bouwkamp2025, The human visual system is equipped to rapidly and implicitly learn and exploit the statistical regularities in our environment. Within visual search, contextual cueing demonstrates how implicit knowledge of scenes can improve search performance. This is commonly interpreted as spatial context in the scenes becoming predictive of the target location, which leads to a more efficient guidance of attention during search. However, what drives this enhanced guidance is unknown. First, it is under debate whether the entire scene (global context) or more local context drives this phenomenon. Second, it is unclear how exactly improved attentional guidance is enabled by target enhancement and distractor suppression. In the present magnetoencephalography experiment, we leveraged rapid invisible frequency tagging to answer these two outstanding questions. We found that the improved performance when searching implicitly familiar scenes was accompanied by a stronger neural representation of the target stimulus, at the cost specifically of those distractors directly surrounding the target. Crucially, this biasing of local attentional competition was behaviorally relevant when searching familiar scenes. Taken together, we conclude that implicitly learned spatial predictive context improves how we search our environment by sharpening the attentional field. |
Cemre Baykan; Alexander C. Schütz Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes Journal Article In: Journal of Cognitive Neuroscience, vol. 37, no. 1, pp. 227–238, 2025. @article{Baykan2025, Perceptual completion is ubiquitous when estimating properties such as the shape, size, or number of objects in partially occluded scenes. Behavioral experiments showed that the number of hidden objects is underestimated in partially occluded scenes compared with an estimation based on the density of visible objects and the amount of occlusion. It is still unknown at which processing level this (under)estimation of the number of hidden objects occurs. We studied this question using a passive viewing task in which observers viewed a game board that was initially partially occluded and later was uncovered to reveal its hidden parts. We simultaneously measured the electroencephalographic responses to the partially occluded board presentation and its uncovering. We hypothesized that if the underestimation is a result of early sensory processing, it would be observed in the activities of P1 and N1, whereas if it is because of higher level processes such as expectancy, it would be reflected in P3 activities. Our data showed that P1 amplitude increased with numerosity in both occluded and uncovered states, indicating a link between P1 and simple stimulus features. The N1 amplitude was highest when both the initially visible and uncovered areas of the board were completely filled with game pieces, suggesting that the N1 component is sensitive to the overall Gestalt. Finally, we observed that P3 activity was reduced when the density of game pieces in the uncovered parts matched the initially visible parts, implying a relationship between the P3 component and expectation mismatch. Overall, our results suggest that inferences about the number of hidden items are reflected in high-level processing. |
Matthias P. Baumann; Anna F. Denninger; Ziad M. Hafed Perisaccadic perceptual mislocalization strength depends on the visual appearance of saccade targets Journal Article In: Journal of Neurophysiology, vol. 133, pp. 85–100, 2025. @article{Baumann2025, We normally perceive a stable visual environment despite eye movements. To achieve such stability, visual processing integrates information across a given saccade, and laboratory hallmarks of such integration are robustly observed by presenting brief perisaccadic visual probes. In one classic phenomenon, probe locations are grossly mislocalized. This mislocalization is believed to depend, at least in part, on corollary discharge associated with saccade-related neuronal movement commands. However, we recently found that superior colliculus motor bursts, a known source of corollary discharge, can be different for different image appearances of the saccade target. Therefore, here we investigated whether perisaccadic mislocalization also depends on saccade target appearance. We asked human participants to generate saccades to either low (0.5 cycles/°) or high (5 cycles/°) spatial frequency gratings. We always placed a high-contrast target spot at grating center, to ensure matched saccades across image types. We presented a single, brief perisaccadic probe, which was high in contrast to avoid saccadic suppression, and the subjects pointed (via mouse cursor) at the seen probe location. We observed stronger perisaccadic mislocalization for low-spatial frequency saccade targets and for upper visual field probe locations. This was despite matched saccade metrics and kinematics across conditions, and it was also despite matched probe visibility for the different saccade target images (low vs. high spatial frequency). 
Assuming that perisaccadic visual mislocalization depends on corollary discharge, our results suggest that such discharge might relay more than just spatial saccade vectors to the visual system; saccade target visual features can also be transmitted. NEW & NOTEWORTHY Brief visual probes are grossly mislocalized when presented in the temporal vicinity of saccades. Although the mechanisms of such mislocalization are still under investigation, one component of them could derive from corollary discharge signals associated with saccade movement commands. Here, we were motivated by the observation that superior colliculus movement bursts, one source of corollary discharge, vary with saccade target image appearance. If so, then perisaccadic mislocalization should also do so, which we confirmed. |
Pablo A. Barrionuevo; Alexander C. Schütz; Karl R. Gegenfurtner Increased brightness assimilation in rod vision Journal Article In: iScience, vol. 28, no. 2, pp. 1–15, 2025. @article{Barrionuevo2025, Our visual system uses contextual cues to estimate the brightness of surfaces: brightness can shift toward (assimilation) or away from (contrast) the brightness of the surroundings. We investigated brightness induction at different light levels and found a potential influence of rod photoreceptors on brightness induction. We then used a novel tetrachromatic display to generate stimuli differentially exciting rods or cones at a fixed light adaptation level. Under rod vision, brightness assimilation was enhanced while brightness contrast was not altered in comparison to cone vision. We ruled out that this effect was mediated by the low resolution of night vision. Our findings suggest that rod vision affects the high-level interpretation of visual scenes that results in differences in brightness assimilation but not contrast. Our results imply that the visual system employs more perceptual inferences under rod vision than under cone vision to solve visual ambiguities in complex spatial displays. |
Dale J. Barr; Hanna Sirniö; Beáta Kovács; Kieran J. O'Shea; Shannon McNee; Alistair Beith; Heather Britain; Qintong Li Perspective conflict disrupts pragmatic inference in real-time language comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–19, 2025. @article{Barr2025, In two visual-world eyetracking experiments, we investigated how effectively addressees use information about a speaker's perspective to resolve temporary ambiguities in spoken expressions containing prenominal scalar adjectives (e.g., the small candle). The experiments used a new “Display Change” task to create situations where an addressee's perspective conflicted with that of a speaker, allowing the point of disambiguation (early vs. late) to be specified independently from each perspective. Contrary to existing perspective-taking theories, the only situation in which addressees resolved references early was when both perspectives afforded early disambiguation. When perspectives conflicted, addressees exhibited a lower rate of preferential looks to the target and slower response times. This disruption to contrastive inference reflects either the suspension of pragmatic inferencing or cognitive limitations on the simultaneous representation and use of incompatible perspectives. |
Emma L. Axelsson; Jessica S. Horst; Samantha L. Playford; Amanda I. Winiger Toddlers' looking behaviours during referent selection and relationships with immediate and delayed retention Journal Article In: Journal of Memory and Language, vol. 141, pp. 1–15, 2025. @article{Axelsson2025, The current study investigates whether children's attempts to solve referential ambiguity are best explained as a process-of-elimination or a novelty bias. We measured 2.5-year-old children's pointing and eye movements during referent selection trials and assessed whether this changes across repeated exposures. We also tested children's retention of novel words and how much focusing on novel targets during referent selection supports immediate and delayed retention as well as the effect of hearing the words ostensively named after referent selection. Time course analyses of children's looking during referent selection indicated that soon after noun onsets, in familiar target trials there was a greater focus on targets relative to chance, but in novel target trials, children focussed on targets less than chance, suggesting an initial focus on competitors. Children also took longer to focus on and point to novel compared to familiar targets. Thus, this converging evidence suggests referent selection is best described as a process-of-elimination. Ostensive naming also led to faster pointing at novel targets in subsequent trials and better delayed retention than the non-ostensive condition. In addition, a greater focus on novel targets during referent selection was associated with better immediate retention for the ostensive naming condition, but better delayed retention for the non-ostensive condition. Therefore, a focus on novelty may supplement weaker encoding, facilitating later retention. |
Ralph Andrews; Michael Melnychuk; Sarah Moran; Teigan Walsh; Sophie Boylan; Paul Dockree Paced breathing associated with pupil diameter oscillations at the same rate and reduced lapses in attention Journal Article In: Psychophysiology, vol. 62, no. 2, pp. 1–22, 2025. @article{Andrews2025, A dynamical systems model proposes that respiratory, locus coeruleus—noradrenaline (LC-NA), and cortical attentional systems interact, producing emergent states of attention. We tested a prediction that fixing respiratory pace (versus spontaneous respiration) stabilizes oscillations in pupil diameter (LC-NA proxy) and attentional state. Primary comparisons were between ‘Instructed Breath' (IB) and ‘No Instructed Breath' (NIB) groups. Secondarily, we investigated the effects of shifting respiratory frequency in the IB group from 0.15 to 0.1–0.15 Hz in Experiment 1 (n = 55) and 0.15–0.1 Hz only in Experiment 2 (n = 48) (replication). In the Paced Auditory Cue Entrainment (PACE) task, participants heard two auditory tones, alternating higher and lower pitches, cycling continuously. Tones acted as a breath guide for IB and an attention monitor for both groups. Participants gave rhythmic mouse responses to the transition points between tones (left for high-to-low, right for low-to-high). We derived accuracy of mouse click timing (RTm), variability in click timing (RTVL), and counts of erroneously inverting the left/right rhythm (IRs and Switches). Despite no differences between groups in RTm or RTVL, IB committed significantly fewer IRs and switches, indicating fewer lapses in attention during paced breathing. Differences in behavioral metrics were present across tone cycle frequencies but not exclusive to IB, so breath frequency did not appear to have a specific effect. Pupil diameter oscillations in IB closely tracked the frequency of the instructed breathing, implicating LC-NA activity as being entrained by the breath intervention. 
We conclude that pacing respiratory frequency did stabilize attention, possibly through stabilizing fluctuations in LC-NA. |
Elena Allegretti; Giorgia D'Innocenzo; Moreno I. Coco In: Behavior Research Methods, vol. 57, no. 1, pp. 1–20, 2025. @article{Allegretti2025, The complex interplay between low- and high-level mechanisms governing our visual system can only be fully understood within ecologically valid naturalistic contexts. For this reason, in recent years, substantial efforts have been devoted to equipping the scientific community with datasets of realistic images normed on semantic or spatial features. Here, we introduce VISIONS, an extensive database of 1136 naturalistic scenes normed on a wide range of perceptual and conceptual norms by 185 English speakers across three levels of granularity: isolated object, whole scene, and object-in-scene. Each naturalistic scene contains a critical object systematically manipulated and normed regarding its semantic consistency (e.g., a toothbrush vs. a flashlight in a bathroom) and spatial position (i.e., left, right). Normative data are also available for low- (i.e., clarity, visual complexity) and high-level (i.e., name agreement, confidence, familiarity, prototypicality, manipulability) features of the critical object and its embedding scene context. Eye-tracking data during a free-viewing task further confirms the experimental validity of our manipulations while theoretically demonstrating that object semantics is acquired in extra-foveal vision and used to guide early overt attention. To our knowledge, VISIONS is the first database exhaustively covering norms about integrating objects in scenes and providing several perceptual and conceptual norms of the two as independently taken. We expect VISIONS to become an invaluable image dataset to examine and answer timely questions above and beyond vision science, where a diversity of perceptual, attentive, mnemonic, or linguistic processes could be explored as they develop, age, or become neuropathological. |
Maryam A. Aljassmi; Kayleigh L. Warrington; Victoria A. Mcgowan; Fang Xie; Kevin B. Paterson Parafoveal preview benefit effects in vertical alphabetic reading Journal Article In: Language, Cognition and Neuroscience, pp. 1–10, 2025. @article{Aljassmi2025, The present study examines the extent to which the cognitive processes underlying reading can adapt to accommodate changes in text orientation. For readers of English, processing times are slowed substantially when reading text in the non-conventional vertical direction, but little is known about the processes underlying this slowdown. Accordingly, participants read English text presented in the conventional horizontal orientation, or rotated 90° clockwise to create a vertical orientation. Lexical processing was explored with word frequency effects and parafoveal processing was measured through parafoveal preview benefit. Reading times were longer, and word frequency effects were larger for vertical, compared with horizontally presented text, in line with findings for reading in unfamiliar formats. Crucially, while clear preview benefit effects were observed for horizontal reading, these effects were entirely absent during vertical reading. These results provide novel insight into perceptual flexibility in foveal and parafoveal processing during reading. |
Blair Aitken; Luke A. Downey; Serah Rose; Brooke Manning; Thomas R. Arkell; Brook Shiferaw; Amie C. Hayley In: Human Psychopharmacology: Clinical and Experimental, vol. 40, pp. 1–8, 2025. @article{Aitken2025, Objective: To examine the effect of a low dose (10 mg) of methylphenidate on cognitive performance, visuospatial working memory (VSWM) and gaze behaviour capabilities in healthy adults. Methods: This randomised, double‐blind, placebo‐controlled and crossover study examined the effects of 10 mg methylphenidate on cognitive performance, VSWM and gaze behaviour. Fixation duration and rate, gaze transition entropy, and stationary gaze entropy were used to quantify visual scanning efficiency in 25 healthy adults (36% female, mean ± SD age = 33.5 ± 7.8 years |
Maurits Adam; Birgit Elsner; Norbert Zmyj Perspective matters in goal-predictive gaze shifts during action observation: Results from 6-, 9-, and 12-month-olds and adults Journal Article In: Journal of Experimental Child Psychology, vol. 249, pp. 1–13, 2025. @article{Adam2025, Research on goal-predictive gaze shifts in infancy so far has mostly focused on the effect of infants' experience with observed actions or the effect of agency cues that the observed agent displays. However, the perspective from which an action is presented to the infants (egocentric vs. allocentric) has received only little attention from researchers despite the fact that the natural observation of own actions is always linked to an egocentric perspective, whereas the observation of others' actions is often linked to an allocentric perspective. The current study investigated the timing of 6-, 9-, and 12-month-olds' goal-predictive gaze behavior, as well as that of adults, during the observation of simple human grasping actions that were presented from either an egocentric or allocentric perspective (within-participants design). The results showed that at 6 and 9 months of age, the infants predicted the action goal only when observing the action from the egocentric perspective. The 12-month-olds and adults, in contrast, predicted the action in both perspectives. The results therefore are in line with accounts proposing an advantage of egocentric versus allocentric processing of social stimuli, at least early in development. This study is among the first to show this egocentric bias already during the first year of life. |
Khaled H. A. Abdel-Latif; Thomas Koelewijn; Deniz Başkent; Hartmut Meister Assessment of speech processing and listening effort associated with speech-on-speech masking using the visual world paradigm and pupillometry Journal Article In: Trends in hearing, vol. 29, pp. 1–13, 2025. @article{AbdelLatif2025, Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. 
The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort. |
Mohamad Abbass; Benjamin Corrigan; Renée Johnston; Roberto Gulli; Adam Sachs; Jonathan C. Lau; Julio Martinez-Trujillo Prefrontal cortex neuronal ensembles dynamically encode task features during associative memory and virtual navigation Journal Article In: Cell Reports, vol. 44, no. 1, pp. 1–23, 2025. @article{Abbass2025, Neuronal populations expand their information-encoding capacity using mixed selective neurons. This is particularly prominent in association areas such as the lateral prefrontal cortex (LPFC), which integrate information from multiple sensory systems. However, during conditions that approximate natural behaviors, it is unclear how LPFC neuronal ensembles process space- and time-varying information about task features. Here, we show that, during a virtual reality task with naturalistic elements that requires associative memory, individual neurons and neuronal ensembles in the primate LPFC dynamically mix unconstrained features of the task, such as eye movements, with task-related visual features. Neurons in dorsal regions show more selectivity for space and eye movements, while ventral regions show more selectivity for visual features, representing them in a separate subspace. In summary, LPFC neurons exhibit dynamic and mixed selectivity for unconstrained and constrained task elements, and neural ensembles can separate task features in different subspaces. |
2024 |
Carolin Zsigo; Ellen Greimel; Regine Primbs; Jürgen Bartling; Gerd Schulte-Körne; Lisa Feldmann Frontal alpha asymmetry during emotion regulation in adults with lifetime major depression Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 24, no. 3, pp. 552–566, 2024. @article{Zsigo2024, Emotion regulation (ER) often is impaired in current or remitted major depression (MD), although the extent of the deficits is not fully understood. Recent studies suggest that frontal alpha asymmetry (FAA) could be a promising electrophysiological measure to investigate ER. The purpose of this study was to investigate ER differences between participants with lifetime major depression (lifetime MD) and healthy controls (HC) for the first time in an experimental task by using FAA. We compared lifetime MD (n = 34) and HC (n = 25) participants aged 18–24 years in (a) an active ER condition, in which participants were instructed to reappraise negative images and (b) a condition in which they attended to the images while an EEG was recorded. We also report FAA results from an independent sample of adolescents with current MD (n = 36) and HC adolescents (n = 38). In the main sample, both groups were able to decrease self-reported negative affect in response to negative images through ER, without significant group differences. We found no differences between groups or conditions in FAA, which was replicated within the independent adolescent sample. The lifetime MD group also reported less adaptive ER in daily life and higher difficulty of ER during the task. The lack of differences between in self-reported affect and FAA between lifetime MD and HC groups in the active ER task indicates that lifetime MD participants show no impairments when instructed to apply an adaptive ER strategy. Implications for interventional aspects are discussed. |
Carolin Zsigo; Lisa Feldmann; Frans Oort; Charlotte Piechaczek; Jürgen Bartling; Martin Schulte-Rüther; Christian Wachinger; Gerd Schulte-Körne; Ellen Greimel Emotion regulation training for adolescents with major depression: Results from a randomized controlled trial Journal Article In: Emotion, vol. 24, no. 4, pp. 975–991, 2024. @article{Zsigo2024a, Difficulties in emotion regulation (ER) are thought to contribute to the development and maintenance of major depression (MD) in adolescents. In healthy adults, a task-based training of ER has previously proven effective to reduce stress, but no such studies are available for MD. It is also unclear whether findings can be generalized onto adolescent populations. The final sample consisted of n = 70 adolescents with MD, who were randomized to a task-based ER training (n = 36) or a control training (n = 34). Across four sessions, the ER group was trained to downregulate negative affect to negative images via reappraisal, while the control group was instructed to attend to the images. Rumination, stress-, and affect-related measures were assessed as primary outcomes; behavioral and neurophysiological responses (late positive potential, LPP) as secondary outcomes. The trial was preregistered at clinicaltrials.gov (NCT03957850). While there was no significant differential effect of the ER training on primary outcomes, we found small to moderate effects on rumination in the ER group, but not the control group. During reappraisal (compared to attend), the ER group showed an unexpected increase of the LPP during the first, but not during later training sessions. Although replication in large, multicenter trials is needed, our findings on effect sizes suggest that ER training might be promising to decrease rumination in adolescent MD. The LPP increase at the first session may represent cognitive effort, which was successfully reduced over the sessions. Future studies should research whether training effects transfer to daily life and are durable over a longer time period. |
Inbal Ziv; Inbar Avni; Ilan Dinstein; Gal Meiri; Yoram S. Bonneh Oculomotor randomness is higher in autistic children and increases with the severity of symptoms Journal Article In: Autism Research, vol. 17, no. 2, pp. 249–265, 2024. @article{Ziv2024, A variety of studies have suggested that at least some children with autism spectrum disorder (ASD) view the world differently. Differences in gaze patterns as measured by eye tracking have been demonstrated during visual exploration of images and natural viewing of movies with social content. Here we analyzed the temporal randomness of saccades and blinks during natural viewing of movies, inspired by a recent measure of “randomness” applied to micro-movements of the hand and head in ASD (Torres et al., 2013; Torres & Denisova, 2016). We analyzed a large eye-tracking dataset of 189 ASD and 41 typically developing (TD) children (1–11 years old) who watched three movie clips with social content, each repeated twice. We found that oculomotor measures of randomness, obtained from gamma parameters of inter-saccade intervals (ISI) and blink duration distributions, were significantly higher in the ASD group compared with the TD group and were correlated with the ADOS comparison score, reflecting increased “randomness” in more severe cases. Moreover, these measures of randomness decreased with age, as well as with higher cognitive scores in both groups and were consistent across repeated viewing of each movie clip. Highly “random” eye movements in ASD children could be associated with high “neural variability” or noise, poor sensory-motor control, or weak engagement with the movies. These findings could contribute to the future development of oculomotor biomarkers as part of an integrative diagnostic tool for ASD. |
Artyom Zinchenko; Markus Conci; Hermann J. Müller; Thomas Geyer Environmental regularities mitigate attentional misguidance in contextual cueing of visual search Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 5, pp. 699–711, 2024. @article{Zinchenko2024, Visual search is faster when a fixed target location is paired with a spatially invariant (vs. randomly changing) distractor configuration, thus indicating that repeated contexts are learned, thereby guiding attention to the target (contextual cueing [CC]). Evidence for memory-guided attention has also been revealed with electrophysiological (electroencephalographic [EEG]) recordings, starting with an enhanced early posterior negativity (N1pc), which signals a preattentive bias toward the target, and, subsequently, attentional and postselective components, such as the posterior contralateral negativity (PCN) and contralateral delay activity (CDA), respectively. Despite effective learning, relearning of previously acquired contexts is inflexible: The CC benefits disappear when the target is relocated to a new position within an otherwise invariant context and corresponding EEG correlates are diminished. The present study tested whether global statistical properties that induce predictions going beyond the immediate invariant layout can facilitate contextual relearning. Global statistical regularities were implemented by presenting repeated and nonrepeated displays in separate streaks (mini blocks) of trials in the relocation phase, with individual displays being presented in a fixed and thus predictable order. Our results revealed a significant CC effect (and an associated modulation of the N1pc, PCN, and CDA components) during initial learning. Critically, the global statistical regularities in the relocation phase also resulted in a reliable CC effect, thus revealing effective relearning with predictive streaks. Moreover, this relearning was reflected in an enhanced PCN amplitude for repeated relative to nonrepeated contexts. Temporally ordered contexts may thus adapt memory-based guidance of attention, particularly the allocation of covert attention in the visual display. |
Juliane T. Zimmermann; T. Mark Ellison; Francesco Cangemi; Simon Wehrle; Kai Vogeley; Martine Grice Lookers and listeners on the autism spectrum: The roles of gaze duration and pitch height in inferring mental states Journal Article In: Frontiers in Communication, vol. 9, pp. 1–17, 2024. @article{Zimmermann2024a, Although mentalizing abilities in autistic adults without intelligence deficits are similar to those of control participants in tasks relying on verbal information, they are dissimilar in tasks relying on non-verbal information. The current study aims to investigate mentalizing behavior in autism in a paradigm involving two important nonverbal means to communicate mental states: eye gaze and speech intonation. In an eye-tracking experiment, participants with ASD and a control group watched videos showing a virtual character gazing at objects while an utterance was presented auditorily. We varied the virtual character's gaze duration toward the object (600 or 1800 ms) and the height of the pitch peak on the accented syllable of the word denoting the object. Pitch height on the accented syllable was varied by 45 Hz, leading to high or low prosodic emphasis. Participants were asked to rate the importance of the given object for the virtual character. At the end of the experiment, we assessed how well participants recognized the objects they were presented with in a recognition task. Both longer gaze duration and higher pitch height increased the importance ratings of the object for the virtual character overall. Compared to the control group, ratings of the autistic group were lower for short gaze, but higher when gaze was long but pitch was low. Regardless of an ASD diagnosis, participants clustered into three behaviorally different subgroups, representing individuals whose ratings were influenced (1) predominantly by gaze duration, (2) predominantly by pitch height, or (3) by neither, accordingly labelled “Lookers,” “Listeners” and “Neithers” in our study. “Lookers” spent more time fixating the virtual character's eye region than “Listeners,” while both “Listeners” and “Neithers” spent more time fixating the object than “Lookers.” Object recognition was independent of the virtual character's gaze duration towards the object and pitch height. It was also independent of an ASD diagnosis. Our results show that gaze duration and intonation are effectively used by autistic persons for inferring the importance of an object for a virtual character. Notably, compared to the control group, autistic participants were influenced more strongly by gaze duration than by pitch height. |
Eckart Zimmermann Compression of time in double-step saccades Journal Article In: Journal of Neurophysiology, vol. 132, no. 1, pp. 61–67, 2024. @article{Zimmermann2024, Temporal intervals appear compressed at the time of saccades. Here, I asked if saccadic compression of time is related to motor planning or to saccade execution. To dissociate saccade motor planning from its execution, I used the double-step paradigm, in which subjects have to perform two horizontal saccades successively. At various times around the saccade sequence, I presented two large horizontal bars, which marked an interval lasting 100 ms. After 700 ms, a second temporal interval was presented, varying in duration across trials. Subjects were required to judge which interval appeared shorter. I found that during the first saccades in the double-step paradigm, temporal intervals were compressed. Maximum temporal compression coincided with saccade onset. Around the time of the second saccade, I found temporal compression as well, however, the time of maximum compression preceded saccade onset by about 70 ms. I compared the magnitude and time of temporal compression between double-step saccades and amplitude-matched single saccades, which I measured separately. Although I found no difference in time compression magnitude, the time when maximum compression occurred differed significantly. I conclude that the temporal shift of time compression in double-step saccades demonstrates the influence of saccade motor planning on time perception. NEW & NOTEWORTHY Visually defined temporal intervals appear compressed at the time of saccades. Here, I tested time perception during double-step saccades dissociating saccade planning from execution. Although around the time of the first saccade, peak compression was found at saccade onset, compression around the time of the second saccade peaked 70 ms before saccade onset. The results suggest that saccade motor planning influences time perception. |