EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2024 (with some early 2025s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2024 |
Jinli Xiong; Xianmin Gong; Quan Yang; Shufei Yin Age-differential role of gaze reinstatement in recognition memory for negative visual stimuli Journal Article In: The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, vol. 79, no. 5, pp. 1–9, 2024. @article{Xiong2024, Objectives: Although research has shown that the replay of encoding-specific gaze patterns during retrieval, known as gaze reinstatement, facilitates memory retrieval, little is known about whether it differentially associates with the negativity preference in memory (defined as enhanced memory for negative stimuli relative to neutral stimuli in this study) among younger and older adults. The present study aims to address this research gap. Methods: A total of 33 older adults (16 women; aged 58–69 years |
Will Xiao; Saloni Sharma; Gabriel Kreiman; Margaret S. Livingstone Feature-selective responses in macaque visual cortex follow eye movements during natural vision Journal Article In: Nature Neuroscience, vol. 27, no. 6, pp. 1157–1166, 2024. @article{Xiao2024a, In natural vision, primates actively move their eyes several times per second via saccades. It remains unclear whether, during this active looking, visual neurons exhibit classical retinotopic properties, anticipate gaze shifts or mirror the stable quality of perception, especially in complex natural scenes. Here, we let 13 monkeys freely view thousands of natural images across 4.6 million fixations, recorded 883 h of neuronal responses in six areas spanning primary visual to anterior inferior temporal cortex and analyzed spatial, temporal and featural selectivity in these responses. Face neurons tracked their receptive field contents, indicated by category-selective responses. Self-consistency analysis showed that general feature-selective responses also followed eye movements and remained gaze-dependent over seconds of viewing the same image. Computational models of feature-selective responses located retinotopic receptive fields during free viewing. We found limited evidence for feature-selective predictive remapping and no viewing-history integration. Thus, ventral visual neurons represent the world in a predominantly eye-centered reference frame during natural vision. |
Naiqi G. Xiao; Hila Ghersin; Natasha D. Dombrowski; Alexandra M. Boldin; Lauren L. Emberson Infants' top-down perceptual modulation is specific to own-race faces Journal Article In: Journal of Experimental Child Psychology, vol. 242, pp. 1–17, 2024. @article{Xiao2024, Recent studies have revealed the influence of higher-level cognitive systems in modulating perceptual processing (top-down perceptual modulation) in infancy. However, more research is needed to understand how top-down processes in infant perception contribute to early perceptual development. To this end, this study examined infants' top-down perception of own- and other-race faces to reveal whether top-down modulation is linked to the emergence of perceptual specialization. Infants first learned an association between a sound and faces, with the race of the faces manipulated between groups (own race vs. other race). We then tested infants' face perception across various levels of perceptual difficulty (manipulated by presentation duration) and indexed top-down perception by the change in perception when infants heard the sound previously associated with the face (predictive sound) versus an irrelevant sound. Infants exhibited top-down face perception for own-race faces (Experiment 1). However, we present new evidence that infants did not show evidence of top-down modulation for other-race faces (Experiment 2), suggesting an experience-based specificity of this capacity with more effective top-down modulation in familiar perceptual contexts. In addition, we ruled out the possibility that this face race effect was due to differences in infants' associative learning of the sound and faces between the two groups. This work has important implications for understanding the mechanisms supporting perceptual development and how they relate to top-down perception in infancy. |
Tiansheng Xia; Yingqi Yan; Jiayue Guo Color in web-advertising: The effect of color hue contrast on web satisfaction and advertising memory Journal Article In: Current Psychology, vol. 43, no. 16, pp. 14645–14658, 2024. @article{Xia2024b, There has been a growth in e-commerce, presenting consumers with varied forms of advertising. A key goal of web advertising is to leave a lasting impression on the user, and web satisfaction is an important measure of the quality and usability of a web page after an ad is placed on it. This experiment manipulated participants' purpose in web browsing (free browsing versus goal oriented) and the color combination of the web background and the vertical-ad background (high or low hue contrast) to predict users' satisfaction with the web page and the degree of ad recall. The psychological mechanisms of this effect were also explored using an eye-tracking device to record and analyze eye movements. The participants were 120 university students, 64.2% of whom were female and 35.8% of whom were male. During free browsing, participants could simulate the daily use of a browser to browse the web and were given 120 s to do so, and in the task-oriented browsing condition, participants were told in advance that they had to summarize the headlines of each news item one at a time within 120 s. The results showed that, in the free-viewing task, the hue contrast between the ad–web background colors negatively affected web satisfaction and ad memory whereas there was no significant difference in this effect in the goal-oriented task. Furthermore, in the free-viewing task, the level of attentional intrusion mediated the effect of ad–web hue contrast on the degree of ad recall; color harmony mediated the effect of hue contrast on the user's evaluation of web satisfaction. These results can act as a new reference for web design research and marketing practice. |
Jordana S. Wynn; Daniel L. Schacter Eye movements reinstate remembered locations during episodic simulation Journal Article In: Cognition, vol. 248, pp. 1–6, 2024. @article{Wynn2024, Imagining the future, like recalling the past, relies on the ability to retrieve and imagine a spatial context. Research suggests that eye movements support this process by reactivating spatial contextual details from memory, a process termed gaze reinstatement. While gaze reinstatement has been linked to successful memory retrieval, it remains unclear whether it supports the related process of future simulation. In the present study, we recorded both eye movements and audio while participants described familiar locations from memory and subsequently imagined future events occurring in those locations while either freely moving their eyes or maintaining central fixation. Restricting viewing during simulation significantly reduced self-reported vividness ratings, supporting a critical role for eye movements in simulation. When viewing was unrestricted, participants spontaneously reinstated gaze patterns specific to the simulated location, replicating findings of gaze reinstatement during memory retrieval. Finally, gaze-based location reinstatement was predictive of simulation success, indexed by the number of internal (episodic) details produced, with both measures peaking early and co-varying over time. Together, these findings suggest that the same oculomotor processes that support episodic memory retrieval – that is, gaze-based reinstatement of spatial context – also support episodic simulation. |
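Gaze reinstatement measures of the kind described above generally quantify the spatial similarity between encoding-phase and retrieval- or simulation-phase fixations. As a rough illustration only (not the authors' pipeline; the screen size, smoothing kernel, and similarity metric below are arbitrary assumptions), one could correlate smoothed fixation density maps from the two phases:

```python
# Hypothetical sketch: quantify gaze "reinstatement" as the spatial correlation
# between smoothed fixation density maps from two viewing phases.
# Screen size, smoothing sigma, and fixation coordinates are example values.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density(fix_xy, screen=(1024, 768), sigma=30):
    """Return a smoothed fixation density map for (x, y) fixation coordinates."""
    w, h = screen
    density = np.zeros((h, w))
    for x, y in fix_xy:
        if 0 <= x < w and 0 <= y < h:
            density[int(y), int(x)] += 1
    return gaussian_filter(density, sigma=sigma)

def gaze_reinstatement(encoding_fix, retrieval_fix, screen=(1024, 768)):
    """Pearson correlation between encoding- and retrieval-phase density maps."""
    a = fixation_density(encoding_fix, screen).ravel()
    b = fixation_density(retrieval_fix, screen).ravel()
    return np.corrcoef(a, b)[0, 1]

# Toy usage with fabricated fixations
enc = [(200, 300), (220, 310), (600, 400)]
ret = [(210, 305), (590, 390), (100, 700)]
print(gaze_reinstatement(enc, ret))
```

Higher scores indicate that the simulation-phase gaze revisited the locations viewed during encoding; permutation against mismatched trials is the usual baseline.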
Nicholas J. Wyche; Mark Edwards; Stephanie C. Goodhew An updating-based working memory load alters the dynamics of eye movements but not their spatial extent during free viewing of natural scenes Journal Article In: Attention, Perception, & Psychophysics, vol. 86, no. 2, pp. 503–524, 2024. @article{Wyche2024, The relationship between spatial deployments of attention and working memory load is an important topic of study, with clear implications for real-world tasks such as driving. Previous research has generally shown that attentional breadth broadens under higher load, while exploratory eye-movement behaviour also appears to change with increasing load. However, relatively little research has compared the effects of working memory load on different kinds of spatial deployment, especially in conditions that require updating of the contents of working memory rather than simple retrieval. The present study undertook such a comparison by measuring participants' attentional breadth (via an undirected Navon task) and their exploratory eye-movement behaviour (a free-viewing recall task) under low and high updating working memory loads. While spatial aspects of task performance (attentional breadth, and peripheral extent of image exploration in the free-viewing task) were unaffected by the load manipulation, the exploratory dynamics of the free-viewing task (including fixation durations and scan-path lengths) changed under increasing load. These findings suggest that temporal dynamics, rather than the spatial extent of exploration, are the primary mechanism affected by working memory load during the spatial deployment of attention. Further, individual differences in exploratory behaviour were observed on the free-viewing task: all metrics were highly correlated across working memory load blocks. These findings suggest a need for further investigation of individual differences in eye-movement behaviour; potential factors associated with these individual differences, including working memory capacity and persistence versus flexibility orientations, are discussed. |
Hao Wu; Yuding Zhang; Qiong Luo; Zhengzhou Zhu The magnitude representations of fractions of Chinese students: Evidence from behavioral experiment and eye-tracking Journal Article In: Current Psychology, vol. 43, no. 5, pp. 4113–4128, 2024. @article{Wu2024d, Early knowledge of fractions can largely predict later mathematical performance, and a comprehensive and in-depth understanding of fractions is fundamental to learning more advanced mathematics. The study aimed to explore the influencing factors and age characteristics of magnitude representations of fractions by a fraction comparison task, using subjects' eye-movement measures as direct evidence and the results of linear regression analyses as indirect evidence. The results showed that the number of digits of fractions' components and types of fraction pairs jointly influence the magnitude representations of fractions. For one-digit fraction pairs with and without common components, componential representation is favored; for two-digit fraction pairs with common components, componential representation is preferred, while for two-digit fraction pairs without common components, holistic representation is selected. The representation styles are consistent across university students, junior high school students and primary school students, and there are significant age differences in representation levels, with university students being more flexible in their use of representation strategies of fractions than the other two age groups, and junior high school students performing at the same level as primary school students. These results suggest that not only Chinese university students, but also Chinese primary and junior high school students can select and adapt representation strategies of fractions according to the characteristics and complexity of fraction processing tasks. The eye-movement technique can largely compensate for the shortcomings of the regression analysis paradigm and better reveal the critical cognitive processes involved in the processing of fractions. |
Di Wu; Yan Zhu; Yifan Wang; Na Liu; Pan Zhang Transcranial direct current stimulation of the prefrontal and visual cortices diversely affects early and late perceptual learning Journal Article In: Brain and Behavior, vol. 14, no. 7, pp. 1–13, 2024. @article{Wu2024a, Background: Research has shown that visual perceptual learning (VPL) is related to modifying neural activity in higher level decision-making regions. However, the causal roles of the prefrontal and visual cortexes in VPL are still unclear. Here, we investigated how anodal transcranial direct current stimulation (tDCS) of the prefrontal and visual cortices modulates VPL in the early and later phases and the role of multiple brain regions. Methods: Perceptual learning on the coherent motion direction identification task included early and later stages. After early training, participants needed to continuously train to reach a plateau; once the plateau was reached, participants entered a later stage. Sixty participants were randomly divided into five groups. Regardless of the training at the early and later stages, four groups received multitarget tDCS over the right dorsolateral prefrontal cortex (rDLPFC) and right middle temporal area (rMT), single-target tDCS over the rDLPFC, and single-target tDCS over the rMT or sham stimulation, and one group was stimulated at the ipsilateral brain region (i.e., left MT). Results: Compared with sham stimulation, multitarget and two single-target tDCS over the rDLPFC or rMT improved posttest performance and accelerated learning during the early period. However, multitarget tDCS and two single-target tDCS led to equivalent benefits for VPL. Additionally, these beneficial effects were absent when anodal tDCS was applied to the ipsilateral brain region. For the later period, the above facilitating effects on VPL induced by multitarget or single-target tDCS disappeared. Conclusions: This study suggested the causal role of the prefrontal and visual cortices in visual motion perceptual learning by anodal tDCS but failed to find greater beneficial effects by simultaneously stimulating the prefrontal and visual cortices. Future research should investigate the functional associations between multiple brain regions to further promote VPL. |
Chenjing Wu; Hongyan Zhu; Yameng Zhang; Wei Zhang; Xianyou He Sensitivity to moral goodness under different aesthetic contexts Journal Article In: Ethics and Behavior, vol. 34, no. 4, pp. 279–293, 2024. @article{Wu2024c, Does context influence our appreciation of beauty? To answer this question, two experiments were conducted to determine the effect of contextual aesthetics on the recognition of moral behavior. Experiment 1 demonstrated that individuals in a high-aesthetic context recognized moral behavior more quickly than those in a low-aesthetic context, whereas individuals in a low-aesthetic context recognized immoral behavior more quickly than those in a high-aesthetic context. For behavior with unclear information, individuals showed higher recognition rates for moral behavior in a high-aesthetic context and for immoral behavior in a low-aesthetic context. Experiment 2 revealed that individual fixation counts were smaller under the conditions of high-aesthetic context and moral behavior than under the conditions of low-aesthetic context and moral behavior, indicating a correlation between low-aesthetic context and immoral behavior. This study shows that a high-aesthetic context facilitates the recognition of moral behavior, which has implications for moral education. |
Jaeger Wongtrakun; Shou-Han Zhou; Mark A. Bellgrove; Trevor T. J. Chong; James P. Coxon The effect of congruent versus incongruent distractor positioning on electrophysiological signals during perceptual decision-making Journal Article In: The Journal of Neuroscience, vol. 44, no. 45, pp. 1–9, 2024. @article{Wongtrakun2024, Key event-related potentials (ERPs) of perceptual decision-making such as centroparietal positivity (CPP) elucidate how evidence is accumulated toward a given choice. Furthermore, this accumulation can be impacted by visual target selection signals such as the N2 contralateral (N2c). How these underlying neural mechanisms of perceptual decision-making are influenced by the spatial congruence of distractors relative to target stimuli remains unclear. Here, we used electroencephalography (EEG) in humans of both sexes to investigate the effect of distractor spatial congruency (same vs different hemifield relative to targets) on perceptual decision-making. We confirmed that responses for perceptual decisions were slower for spatially incongruent versus congruent distractors of high salience. Similarly, markers of target selection (N2c peak amplitude) and evidence accumulation (CPP slope) were found to be lower when distractors were spatially incongruent versus congruent. To evaluate the effects of congruency further, we applied drift diffusion modeling to participant responses, which showed that larger amplitudes of both ERPs were correlated with shorter nondecision times when considering the effect of congruency. The modeling also suggested that congruency's effect on behavior occurred prior to and during evidence accumulation when considering the effects of the N2c peak and CPP slope. These findings point to spatially incongruent distractors, relative to congruent distractors, influencing decisions as early as the initial sensory processing phase and then continuing to exert an effect as evidence is accumulated throughout the decision-making process. Overall, our findings highlight how key electrophysiological signals of perceptual decision-making are influenced by the spatial congruence of target and distractor. |
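The abstract above mentions drift diffusion modeling of behavioural responses but does not specify the fitting procedure. As a loose, self-contained illustration of the general idea (a simpler stand-in, not the model used in the paper), the closed-form EZ-diffusion approximation of Wagenmakers, van der Maas & Grasman (2007) recovers drift rate, boundary separation, and nondecision time from accuracy and correct-RT summary statistics:

```python
# EZ-diffusion approximation: recovers drift rate v, boundary separation a,
# and nondecision time Ter from proportion correct, variance, and mean of
# correct RTs (in seconds). Illustrative only; not the paper's fitting method.
import numpy as np

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    # Edge correction: the equations are undefined at accuracy of 0, 0.5, or 1.
    pc = float(np.clip(prop_correct, 1e-4, 1 - 1e-4))
    if abs(pc - 0.5) < 1e-4:
        pc += 1e-4
    L = np.log(pc / (1 - pc))                        # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / rt_var
    v = np.sign(pc - 0.5) * s * x**0.25              # drift rate
    a = s**2 * L / v                                 # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
    ter = rt_mean - mdt                              # nondecision time
    return v, a, ter

# Toy numbers: 80% correct, correct-RT variance 0.11 s^2, mean correct RT 0.72 s
print(ez_diffusion(0.80, 0.11, 0.72))
```

In such a framework, a congruency effect confined to nondecision time versus one on drift rate maps onto the "before versus during evidence accumulation" distinction discussed in the abstract.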
Raymond Ka Wong; Janahan Selvanayagam; Kevin Johnston; Stefan Everling Functional specialization and distributed processing across marmoset lateral prefrontal subregions Journal Article In: Cerebral Cortex, vol. 34, no. 10, pp. 1–15, 2024. @article{Wong2024, A prominent aspect of primate lateral prefrontal cortex organization is its division into several cytoarchitecturally distinct subregions. Neurophysiological investigations in macaques have provided evidence for the functional specialization of these subregions, but an understanding of the relative representational topography of sensory, social, and cognitive processes within them remains elusive. One explanatory factor is that evidence for functional specialization has been compiled largely from a patchwork of findings across studies, in many animals, and with considerable variation in stimulus sets and tasks. Here, we addressed this by leveraging the common marmoset (Callithrix jacchus) to carry out large-scale neurophysiological mapping of the lateral prefrontal cortex using high-density microelectrode arrays, and a diverse suite of test stimuli and tasks, including faces, marmoset calls, and a spatial working memory task. Task-modulated units and units responsive to visual and auditory stimuli were distributed throughout the lateral prefrontal cortex, while those with saccade-related activity or face-selective responses were restricted to 8aV, 8aD, 10, 46V, and 47. Neurons with contralateral visual receptive fields were limited to areas 8aV and 8aD. These data reveal a mixed pattern of functional specialization in the lateral prefrontal cortex, in which responses to some stimuli and tasks are distributed broadly across lateral prefrontal cortex subregions, while others are more limited in their representation. |
Hanna E. Willis; Bradley Caron; Matthew R. Cavanaugh; Lucy Starling; Sara Ajina; Franco Pestilli; Marco Tamietto; Krystel R. Huxlin; Kate E. Watkins; Holly Bridge Rehabilitating homonymous visual field deficits: White matter markers of recovery — stage 2 registered report Journal Article In: Brain Communications, vol. 6, no. 5, pp. 1–16, 2024. @article{Willis2024, Damage to the primary visual cortex or its afferent white matter tracts results in loss of vision in the contralateral visual field that can present as homonymous visual field deficits. Evidence suggests that visual training in the blind field can partially reverse blindness at trained locations. However, the efficacy of visual training is highly variable across participants, and the reasons for this are poorly understood. It is likely that variance in residual neural circuitry following the insult may underlie the variation among patients. Many stroke survivors with visual field deficits retain residual visual processing in their blind field despite a lack of awareness. Previous research indicates that intact structural and functional connections between the dorsal lateral geniculate nucleus and the human extrastriate visual motion-processing area hMT+ are necessary for blindsight to occur. We therefore hypothesized that changes in this white matter pathway may underlie improvements resulting from motion discrimination training. Eighteen stroke survivors with long-standing, unilateral, homonymous field defects from retro-geniculate brain lesions completed 6 months of visual training at home. This involved performing daily sessions of a motion discrimination task, at two non-overlapping locations in the blind field, at least 5 days per week. Motion discrimination and integration thresholds, Humphrey perimetry and structural and diffusion-weighted MRI were collected pre- and post-training. Changes in fractional anisotropy (FA) were analysed in visual tracts connecting the ipsilesional dorsal lateral geniculate nucleus and hMT+, and the ipsilesional dorsal lateral geniculate nucleus and primary visual cortex. The (non-visual) tract connecting the ventral posterior lateral nucleus of the thalamus and the primary somatosensory cortex was analysed as a control. Changes in white matter integrity were correlated with improvements in motion discrimination and Humphrey perimetry. We found that the magnitude of behavioural improvement was not directly related to changes in FA in the pathway between the dorsal lateral geniculate nucleus and hMT+ or dorsal lateral geniculate nucleus and primary visual cortex. Baseline FA in either tract also failed to predict improvements in training. However, an exploratory analysis showed a significant increase in FA in the distal part of the tract connecting the dorsal lateral geniculate nucleus and hMT+, suggesting that 6 months of visual training in chronic, retro-geniculate strokes may enhance white matter microstructural integrity of residual geniculo-extrastriate pathways. |
Jonathon Whitlock; Ryan Hubbard; Huiyu Ding; Lili Sahakyan Trial-level fluctuations in pupil dilation at encoding reflect strength of relational binding Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 2, pp. 212–229, 2024. @article{Whitlock2024, Eye-tracking methodologies have revealed that eye movements and pupil dilations are influenced by our previous experiences. Dynamic fluctuations in pupil size during learning reflect in part the formation of memories for learned information, while viewing behavior during memory testing is influenced by memory retrieval and drawn to previously learned associations. However, no study to date has linked fluctuations in pupil dilation at encoding to the magnitude of viewing behavior at test. The current investigation involved monitoring eye movements both in single item recognition and relational recognition tasks. In the item task, all faces were presented with the same background scene and memory for faces was subsequently tested, whereas in the relational task each face was presented with its own unique background scene and memory for the face-scene association was subsequently tested. Pupil size changes during encoding predicted the magnitude of preferential viewing during test, as well as future recognition accuracy. These effects emerged only in the relational task, but not in the item task, and were replicated in an additional experiment in which stimulus luminance was more tightly controlled. A follow-up experiment and additional analyses ruled out differences in orienting instructions or number of fixations to the encoding display as explanations of the observed effects. The results shed light on the links between pupil dilation, memory encoding, and eye movement patterns during recognition and suggest that trial-level fluctuations in pupil dilation during encoding reflect relational binding of items to their context rather than general memory formation or strength. |
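Trial-level pupil analyses of this kind typically rest on a generic preprocessing step: baseline-correcting each trial's pupil trace and summarizing dilation in a post-stimulus window. A minimal sketch of that step follows; the sampling rate and window lengths are arbitrary assumptions, not the parameters of the study above.

```python
# Generic trial-level pupil preprocessing sketch: subtractive baseline
# correction plus a mean-dilation summary per trial. All parameters are
# illustrative assumptions.
import numpy as np

def trial_pupil_dilation(trace, fs=500, baseline_s=0.2, window_s=(0.5, 2.0)):
    """trace: 1D pupil-size samples for one trial; stimulus onset at baseline end."""
    n_base = int(baseline_s * fs)
    baseline = np.nanmean(trace[:n_base])             # pre-stimulus baseline
    corrected = trace - baseline                      # subtractive correction
    i0, i1 = (int((baseline_s + t) * fs) for t in window_s)
    return np.nanmean(corrected[i0:i1])               # mean dilation in window

# Toy usage: simulate a slow dilation riding on noise
rng = np.random.default_rng(0)
t = np.arange(0, 2.5, 1 / 500)
trace = 3.0 + 0.2 * np.clip(t - 0.2, 0, None) + rng.normal(0, 0.01, t.size)
print(trial_pupil_dilation(trace))
```

The per-trial dilation values produced this way are the kind of predictor that can then be regressed against later viewing or recognition measures.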
Will Whitham; Bradley Karstadt; Nicola C. Anderson; Walter F. Bischof; Steven J. Schapiro; Alan Kingstone; Richard Coss; Elina Birmingham; Jessica L. Yorzinski Predator gaze captures both human and chimpanzee attention Journal Article In: PLoS ONE, vol. 19, no. 11, pp. 1–23, 2024. @article{Whitham2024, Primates can rapidly detect potential predators and modify their behavior based on the level of risk. The gaze direction of predators is one feature that primates can use to assess risk levels: recognition of a predator's direct stare indicates to prey that it has been detected and the level of risk is relatively high. Predation has likely shaped visual attention in primates to quickly assess the level of risk but we know little about the constellation of low-level (e.g., contrast, color) and higher-order (e.g., category membership, perceived threat) visual features that primates use to do so. We therefore presented human and chimpanzee (Pan troglodytes) participants with photographs of potential predators (lions) and prey (impala) while we recorded their overt attention with an eye-tracker. The gaze of the predators and prey was either directed or averted. We found that both humans and chimpanzees visually fixated the eyes of predators more than those of prey. In addition, they directed the most attention toward the eyes of directed (rather than averted) predators. Humans, but not chimpanzees, gazed at the eyes of the predators and prey more than other features. Importantly, low-level visual features of the predators and prey did not provide a good explanation of the observed gaze patterns. |
Kayla M. Whearty; Ivan Ruiz; Anna R. Knippenberg; Gregory P. Strauss In: Neuropsychology, vol. 38, no. 5, pp. 475–485, 2024. @article{Whearty2024, Objective: The present study explored the hypothesis that anhedonia reflects an emotional memory impairment for pleasant stimuli, rather than diminished hedonic capacity in individuals with schizophrenia (SZ). Method: Participants included 30 SZ and 30 healthy controls (HCs) subjects who completed an eye-tracking emotion-induced memory trade-off task where contextually relevant pleasant, unpleasant, or neutral items were inserted into the foreground of neutral background scenes. Passive viewing and poststimulus elaboration blocks were administered to assess differential encoding mechanisms, and immediate and 1-week recognition testing phases were completed to assess the effects of delay interval. Participants also made self-reports of positive emotion, negative emotion, and arousal in response to the stimuli. Results: Results indicated that SZ experienced stimuli similarly to HC. Both groups demonstrated the typical emotion-induced memory trade-off during the passive viewing and poststimulus elaboration encoding blocks, as indicated by more hits for emotional than neutral items and fewer hits for backgrounds paired with emotional than neutral items. Eye-tracking data also indicated that both groups were more likely to fixate earlier and have longer dwell time on emotional than neutral items. At the 1-week delay, the emotion-induced memory trade-off was eliminated in both groups, and SZ showed fewer overall hits across valence conditions. Greater severity of anhedonia was specifically associated with impaired recognition for pleasant stimuli at the immediate recognition phase. Conclusions: Findings suggest that anhedonia in SZ is associated with emotional memory impairment, particularly a deficit in encoding positive stimuli. |
Emily R. Weichart; Layla Unger; Nicole King; Vladimir M. Sloutsky; Brandon M. Turner “The eyes are the window to the representation”: Linking gaze to memory precision and decision weights in object discrimination tasks Journal Article In: Psychological Review, vol. 131, no. 4, pp. 1045–1067, 2024. @article{Weichart2024, Humans selectively attend to task-relevant information in order to make accurate decisions. However, selective attention incurs consequences if the learning environment changes unexpectedly. This trade-off has been underscored by studies that compare learning behaviors between adults and young children: broad sampling during learning comes with a breadth of information in memory, often allowing children to notice details of the environment that are missed by their more selective adult counterparts. The current work extends the exemplar-similarity account of object discrimination to consider both the intentional and consequential aspects of selective attention when predicting choice. In a novel direct input approach, we used trial-level eyetracking data from training and test to replace the otherwise freely estimated attention dynamics of the model. We demonstrate that only a model imbued with gaze correlates of memory precision in addition to decision weights can accurately predict key behaviors associated with (a) selective attention to a relevant dimension, (b) distributed attention across dimensions, and (c) flexibly shifting strategies between tasks. Although humans engage in selective attention with the intention of being accurate in the moment, our findings suggest that its consequences on memory constrain the information that is available for making decisions in the future. |
Yipu Wei; Yingjia Wan; Michael K. Tanenhaus Spontaneous perspective-taking in real-time language comprehension: Evidence from eye-movements and grain of coordination Journal Article In: Scientific Reports, vol. 14, no. 1, pp. 1–10, 2024. @article{Wei2024a, Linguistic communication requires interlocutors to consider differences in each other's knowledge (perspective-taking). However, perspective-taking might either be spontaneous or strategic. We monitored listeners' eye movements in a referential communication task. A virtual speaker gave temporally ambiguous instructions with scalar adjectives (“big” in “big cubic block”). Scalar adjectives assume a contrasting object (a small cubic block). We manipulated whether the contrasting object (a small triangle) for a competitor object (a big triangle) was in common ground (visible to both speaker and listener) or was occluded so it was in the listener's privileged ground, in which case perspective-taking would allow earlier reference-resolution. We used a complex visual context with multiple objects, making strategic perspective-taking unlikely when all objects are in the listener's referential domain. A turn-taking, puzzle-solving task manipulated whether participants could anticipate a more restricted referential domain. Pieces were either confined to a small area (requiring fine-grained coordination) or distributed across spatially distinct regions (requiring only coarse-grained coordination). Results strongly supported spontaneous perspective-taking: Although comprehension was less time-locked in the coarse-grained condition, participants in both conditions used perspective information to identify the target referent earlier when the competitor contrast was in privileged ground, even when participants believed instructions were computer-generated. |
Wei Wei; Kangning Wang; Shuang Qiu; Huiguang He A MultiModal Vigilance (MMV) dataset during RSVP and SSVEP brain-computer interface tasks Journal Article In: Scientific Data, vol. 11, no. 1, pp. 1–14, 2024. @article{Wei2024, Vigilance represents an ability to sustain prolonged attention and plays a crucial role in ensuring the reliability and optimal performance of various tasks. In this report, we describe a MultiModal Vigilance (MMV) dataset comprising seven physiological signals acquired during two Brain-Computer Interface (BCI) tasks. The BCI tasks encompass a rapid serial visual presentation (RSVP)-based target image retrieval task and a steady-state visual evoked potential (SSVEP)-based cursor-control task. The MMV dataset includes four sessions of seven physiological signals for 18 subjects, which encompasses electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), photoplethysmogram (PPG), electrodermal activity (EDA), electromyogram (EMG), and eye movement. The MMV dataset provides data from four stages: 1) raw data, 2) pre-processed data, 3) trial data, and 4) feature data that can be directly used for vigilance estimation. We believe this dataset will support flexible reuse, meet the varied needs of researchers, and greatly contribute to advancing research on physiological signal-based vigilance estimation. |
Jelena M. Wehrli; Yanfang Xia; Aslan Abivardi; Birgit Kleim; Dominik R. Bach The impact of doxycycline on human contextual fear memory Journal Article In: Psychopharmacology, vol. 241, no. 5, pp. 1065–1077, 2024. @article{Wehrli2024, Rationale: Previous work identified an attenuating effect of the matrix metalloproteinase (MMP) inhibitor doxycycline on fear memory consolidation. This may present a new mechanistic approach for the prevention of trauma-related disorders. However, so far, this has only been unambiguously demonstrated in a cued delay fear conditioning paradigm, in which a simple geometric cue predicted a temporally overlapping aversive outcome. This form of learning is mainly amygdala dependent. Psychological trauma often involves the encoding of contextual cues, which putatively necessitates partly different neural circuits including the hippocampus. The role of MMP signalling in the underlying neural pathways in humans is unknown. Methods: Here, we investigated the effect of doxycycline on configural fear conditioning in a double-blind placebo-controlled randomised trial with 100 (50 females) healthy human participants. Results: Our results show that participants successfully learned and retained, after 1 week, the context-shock association in both groups. We find no group difference in fear memory retention in either of our pre-registered outcome measures, startle eye-blink responses and pupil dilation. Contrary to expectations, we identified elevated fear-potentiated startle in the doxycycline group early in the recall test, compared to the placebo group. Conclusion: Our results suggest that doxycycline does not substantially attenuate contextual fear memory. This might limit its potential for clinical application. |
Simon Weber; Thomas Christophel; Kai Görgen; Joram Soch; John-Dylan Haynes Working memory signals in early visual cortex are present in weak and strong imagers Journal Article In: Human Brain Mapping, vol. 45, no. 3, pp. 1–17, 2024. @article{Weber2024, It has been suggested that visual images are memorized across brief periods of time by vividly imagining them as if they were still there. In line with this, the contents of both working memory and visual imagery are known to be encoded already in early visual cortex. If these signals in early visual areas were indeed to reflect a combined imagery and memory code, one would predict them to be weaker for individuals with reduced visual imagery vividness. Here, we systematically investigated this question in two groups of participants. Strong and weak imagers were asked to remember images across brief delay periods. We were able to reliably reconstruct the memorized stimuli from early visual cortex during the delay. Importantly, in contrast to the prediction, the quality of reconstruction was equally accurate for both strong and weak imagers. The decodable information also closely reflected behavioral precision in both groups, suggesting it could contribute to behavioral performance, even in the extreme case of completely aphantasic individuals. Our data thus suggest that working memory signals in early visual cortex can be present even in the (near) absence of phenomenal imagery. |
Aline Wauters; Dimitri M. L. Van Ryckeghem; Melanie Noel; Kendra Mueri; Sabine Soltani; Tine Vervoort Parental narrative style moderates the relation between pain-related attention and memory biases in youth with chronic pain Journal Article In: Pain, vol. 165, pp. 126–137, 2024. @article{Wauters2024, Negatively biased pain memories robustly predict maladaptive pain outcomes in children. Both attention bias to pain and parental narrative style have been linked with the development of these negative biases, with previous studies indicating that how parents talk to their child about the pain might buffer the influence of children's attention bias to pain on the development of such negatively biased pain memories. This study investigated the moderating role of parental narrative style in the relation between pain-related attention and memory biases in a pediatric chronic pain sample who underwent a cold pressor task. Participants were 85 youth-parent dyads who reminisced about youth's painful event. Eye-tracking technology was used to assess youth's attention bias to pain information, whereas youth's pain-related memories were elicited 1 month later through telephone interview. Results indicated that a parental narrative style using less repetitive yes–no questions, more emotion words, and less fear words buffered the influence of high levels of youth's attention bias to pain in the development of negatively biased pain memories. Opposite effects were observed for youth with low levels of attention bias to pain. Current findings corroborate earlier results on parental reminiscing in the context of pain (memories) but stress the importance of matching narrative style with child characteristics, such as child attention bias to pain, in the development of negatively biased pain memories. Future avenues for parent–child reminiscing and clinical implications for pediatric chronic pain are discussed. |
Annie Warman; Allan Clark; George L. Malcolm; Maximillian Havekost; Stéphanie Rossit Is there a lower visual field advantage for object affordances? A registered report Journal Article In: Quarterly Journal of Experimental Psychology, vol. 77, no. 11, pp. 2151–2164, 2024. @article{Warman2024, It's been repeatedly shown that pictures of graspable objects can facilitate visual processing, even in the absence of reach-to-grasp actions, an effect often attributed to the concept of affordances. A classic demonstration of this is the handle compatibility effect, characterised by faster reaction times when the orientation of a graspable object's handle is compatible with the hand used to respond, even when the handle orientation is task-irrelevant. Nevertheless, it is debated whether the speeded reaction times are a result of affordances or spatial compatibility. First, we investigated whether we could replicate the handle compatibility effect while controlling for spatial compatibility. Participants (N = 68) responded with left- or right-handed keypresses to whether the object was upright or inverted and, in separate blocks, whether the object was red or green. We failed to replicate the handle compatibility effect, with no significant difference between compatible and incompatible conditions, in both tasks. Second, we investigated whether there is a lower visual field (VF) advantage for the handle compatibility effect in line with what has been found for hand actions. A further 68 participants responded to object orientation presented either in the upper or lower VF. A significant handle compatibility effect was observed in the lower VF, but not the upper VF. This suggests that there is a lower VF advantage for affordances, possibly as the lower VF is where our actions most frequently occur. However, future studies should explore the impact of eye movements on the handle compatibility effect and tool affordances. |
Zhenni Wang; Chen Zhang; Qihui Guo; Qing Fan; Lihui Wang Concurrent oculomotor hyperactivity and deficient anti-saccade performance in obsessive-compulsive disorder Journal Article In: Journal of Psychiatric Research, vol. 180, pp. 402–410, 2024. @article{Wang2024l, Existing studies have mainly focused on the inhibition of the task-interfering response to understand the inhibitory deficits of obsessive-compulsive disorder (OCD). However, recent studies suggested that inhibitory function is broadly involved in response preparation and implementation. It is yet unknown if the inhibition dysfunction in OCD extends beyond the task-interfering response to the general inhibitory function. Here we address this issue based on multidimensional eye-movement measurements, which can better capture the inhibitory deficits than manual responses. Thirty-one OCD patients and 32 healthy controls (HCs) completed the anti-saccade task where multidimensional eye-movement features were developed. Confirmatory factor analysis (CFA) suggested two components of inhibitory function that negatively correlated with each other: one component of oculomotor hyperactivity in generating oculomotor output which is characterized by early premature saccades, early cross rates and saccade number; the other component of task-specific oculomotor efficiency which is characterized by task accuracy, saccade latency, correction rate, and amplitude gain. Importantly, OCD showed both stronger oculomotor hyperactivity and deficient oculomotor efficiency than HCs, and the machine-learning-based classifications showed that the features of oculomotor hyperactivity had higher prediction accuracy than the features of oculomotor efficiency in distinguishing OCD from HCs. Our results suggested that OCD has concurrent deficits in oculomotor hyperactivity and oculomotor efficiency, which may originate from a common inhibitory dysfunction. |
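The classification step described above (distinguishing patients from controls using eye-movement features) can be illustrated generically with a cross-validated classifier over a participant-by-feature matrix. The feature names, classifier, and random data below are placeholders, not the authors' implementation:

```python
# Generic sketch of group classification from eye-movement features with
# cross-validation; random data and feature names are placeholders only.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Rows = participants; columns = e.g. premature-saccade rate, early cross rate,
# saccade number, anti-saccade accuracy, latency, correction rate, amplitude gain.
X = rng.normal(size=(63, 7))
y = np.array([1] * 31 + [0] * 32)        # 1 = patient group, 0 = control

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

Comparing such scores for the "hyperactivity" versus "efficiency" feature subsets is one straightforward way to ask which component carries more diagnostic information.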
Zhenni Wang; Radha Nila Meghanathan; Stefan Pollmann; Lihui Wang Common structure of saccades and microsaccades in visual perception Journal Article In: Journal of Vision, vol. 24, no. 4, pp. 1–13, 2024. @article{Wang2024k, We obtain large amounts of external information through our eyes, a process often considered analogous to picture mapping onto a camera lens. However, our eyes are never as still as a camera lens, with saccades occurring between fixations and microsaccades occurring within a fixation. Although saccades are agreed to be functional for information sampling in visual perception, it remains unknown if microsaccades have a similar function when eye movement is restricted. Here, we demonstrated that saccades and microsaccades share common spatiotemporal structures in viewing visual objects. Twenty-seven adults viewed faces and houses in free-viewing and fixation-controlled conditions. Both saccades and microsaccades showed distinctive spatiotemporal patterns between face and house viewing that could be discriminated by pattern classifications. The classifications based on saccades and microsaccades could also be mutually generalized. Importantly, individuals who showed more distinctive saccadic patterns between faces and houses also showed more distinctive microsaccadic patterns. Moreover, saccades and microsaccades showed a higher structure similarity for face viewing than house viewing and a common orienting preference for the eye region over the mouth region. These findings suggested a common oculomotor program that is used to optimize information sampling during visual object perception. |
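Microsaccades in fixation-controlled data are commonly detected with a velocity-threshold algorithm in the spirit of Engbert & Kliegl (2003). Whether the paper above used that exact procedure is not stated here, so treat the following as an illustrative sketch with assumed parameters rather than the authors' pipeline:

```python
# Illustrative velocity-threshold microsaccade detector (in the spirit of
# Engbert & Kliegl, 2003). `x`, `y` are gaze positions in degrees, `fs` in Hz.
import numpy as np

def detect_microsaccades(x, y, fs=1000, lam=6.0, min_samples=6):
    # Velocity via a 5-point moving-window derivative
    vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) * fs / 6.0
    vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) * fs / 6.0
    # Median-based velocity SD and elliptical threshold
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1
    # Group supra-threshold samples into candidate events of minimum duration
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start + 2, i + 2))  # +2 offsets the velocity window
            start = None
    return events

# Toy demo: 1 s of noisy fixation with one injected ~0.3-deg rightward shift
rng = np.random.default_rng(0)
x = rng.normal(0, 0.01, 1000); y = rng.normal(0, 0.01, 1000)
x[500:510] += np.linspace(0, 0.3, 10); x[510:] += 0.3
print(detect_microsaccades(x, y))
```

The detected event onsets, directions, and amplitudes are the raw material for the spatiotemporal pattern classifications the abstract describes.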
Yao Wang; Yue Jiang; Zhiming Hu; Constantin Ruhdorfer; Mihai Bâce; Andreas Bulling VisRecall++: Analysing and predicting visualisation recallability from gaze behaviour Journal Article In: Proceedings of the ACM on Human-Computer Interaction, vol. 8, pp. 1–18, 2024. @article{Wang2024j, Question answering has recently been proposed as a promising means to assess the recallability of information visualisations. However, prior works are yet to study the link between visually encoding a visualisation in memory and recall performance. To fill this gap, we propose VisRecall++ – a novel 40-participant recallability dataset that contains gaze data on 200 visualisations and 1,000 questions, including identifying the title and retrieving values. We measured recallability by asking participants questions after they observed the visualisation for 10 seconds. Our analyses reveal several insights, such as saccade amplitude, number of fixations, and fixation duration significantly differ between high and low recallability groups. Finally, we propose GazeRecallNet – a novel computational method to predict recallability from gaze behaviour that outperforms the state-of-the-art model RecallNet and three other baselines on this task. Taken together, our results shed light on assessing recallability from gaze behaviour and inform future work on recallability-based visualisation optimisation. |
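The gaze measures reported above to differ between high- and low-recallability groups (saccade amplitude, number of fixations, fixation duration) are standard summaries of a fixation sequence. A toy sketch of how such features might be computed from an already-parsed fixation list follows; the data structure is an assumption, not the VisRecall++ format:

```python
# Toy summary-feature extraction from a parsed fixation sequence.
# Each fixation is (start_ms, end_ms, x_px, y_px); this format is assumed.
import numpy as np

def gaze_features(fixations):
    fix = np.asarray(fixations, dtype=float)
    durations = fix[:, 1] - fix[:, 0]
    # Saccade amplitude approximated as distance between consecutive fixations
    amplitudes = np.hypot(np.diff(fix[:, 2]), np.diff(fix[:, 3]))
    return {
        "n_fixations": len(fix),
        "mean_fixation_duration_ms": durations.mean(),
        "mean_saccade_amplitude_px": amplitudes.mean() if len(amplitudes) else 0.0,
    }

print(gaze_features([(0, 180, 512, 300), (210, 420, 640, 360), (450, 700, 300, 500)]))
```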
Yang Wang; Jon D. Elhai; Christian Montag; Lei Zhang; Haibo Yang Attentional bias to social media stimuli is moderated by fear of missing out among problematic social media users Journal Article In: Journal of Behavioral Addictions, vol. 3, pp. 807–822, 2024. @article{Wang2024i, Background and aims: Previous evidence has indicated that problematic social media use (PSMU) is characterized by an attentional bias to social media icons (such as Facebook icons), but not to social webpages (such as Facebook webpages). These findings suggest that there may be other factors influencing attentional bias, such as fear of missing out (FoMO). However, it remains unclear how FoMO moderates attentional bias in PSMU. This study aims to investigate, through experimental methods, whether individuals with PSMU show attentional bias for stimuli associated with social media and how FoMO moderates attentional bias among them. Methods: Based on the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, this study explored mechanisms of attentional bias to social media icons (such as WeChat) related to PSMU and further examined the role of FoMO in this relationship. Specifically, attentional bias patterns to social media icons of 62 participants (31 PSMU and 31 control group) were explored during a dot-probe paradigm combined with eye-tracking in Experiment 1, and attentional bias patterns to social media icons of another 61 individuals with PSMU with different FoMO levels were explored during a dot-probe paradigm combined with eye-tracking in Experiment 2. Results: Results revealed that individuals with PSMU had an attentional bias toward social media icons, demonstrated by attentional maintenance, and such bias was negatively moderated by FoMO, demonstrated by attentional vigilance and maintenance in PSMU/high FoMO. Conclusion: These results suggest that attentional bias is a common mechanism associated with PSMU, and FoMO is a key factor in the development of PSMU. |
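Eye-tracking indices of attentional bias in dot-probe designs like the one above are typically dwell-time and first-fixation measures over a pair of areas of interest (AOIs). The sketch below computes two such indices under assumed AOI rectangles and an assumed fixation format; it is not the authors' analysis code:

```python
# Illustrative attentional-bias indices for a two-AOI (target vs. neutral) display.
# AOI rectangles and fixation format are assumptions for the example only.
import numpy as np

def in_aoi(x, y, aoi):
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def bias_indices(fixations, aoi_target, aoi_neutral):
    """fixations: list of (onset_ms, duration_ms, x, y) for one trial."""
    dwell_t = sum(d for t, d, x, y in fixations if in_aoi(x, y, aoi_target))
    dwell_n = sum(d for t, d, x, y in fixations if in_aoi(x, y, aoi_neutral))
    total = dwell_t + dwell_n
    dwell_bias = dwell_t / total if total else np.nan    # >0.5 = maintenance bias
    first_on_target = next((t for t, d, x, y in fixations
                            if in_aoi(x, y, aoi_target)), np.nan)  # vigilance index
    return {"dwell_time_bias": dwell_bias, "first_fixation_latency_ms": first_on_target}

trial = [(120, 200, 300, 400), (350, 260, 900, 400), (640, 180, 310, 410)]
print(bias_indices(trial, aoi_target=(200, 300, 400, 500),
                   aoi_neutral=(800, 300, 1000, 500)))
```

Dwell-time bias captures maintenance, while first-fixation latency toward the target AOI captures vigilance, matching the two components the abstract distinguishes.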
Xinrui Wang; Hui Jing Lu; Hanran Li; Lei Chang Childhood environmental unpredictability and experimentally primed uncertainty in relation to intuitive versus deliberate visual search Journal Article In: Current Psychology, vol. 43, no. 5, pp. 4737–4750, 2024. @article{Wang2024o, Visual search is an integral part of animal life. Two search strategies, intuitive vs. deliberate search, are adopted by almost all animals including humans to adapt to different extent of environmental uncertainty. In two eye-tracking experiments involving simple visual search (Study 1) and complex information search (Study 2), we used the evolutionary life history (LH) approach to investigate the interaction between childhood environmental unpredictability and primed concurrent uncertainty in enabling these two search strategies. The results indicate that when individuals with greater childhood unpredictability were exposed to uncertainty cues, they exhibited intuitive rather than deliberate visual search (i.e., fewer fixations, reduced dwell time, a larger saccade size, and fewer repetitive inspections relative to individuals with lower childhood unpredictability). We conclude that childhood environment is crucial in calibrating LH including visual and cognitive strategies to adaptively respond to current environmental conditions. |
Sinuo Wang; Yang He; Jie Hu; Jianan Xia; Ke Fang; Junna Yu; Yingying Wang Eye movement intervention facilitates concurrent perception and memory processing Journal Article In: Cerebral Cortex, vol. 34, no. 5, pp. 1–13, 2024. @article{Wang2024h, A widely used psychotherapeutic treatment for post-traumatic stress disorder (PTSD) involves performing bilateral eye movement (EM) during trauma memory retrieval. However, how this treatment, known as eye movement desensitization and reprocessing (EMDR), alleviates trauma-related symptoms is unclear. While conventional theories suggest that bilateral EM interferes with concurrently retrieved trauma memories by taxing the limited working memory resources, here, we propose that bilateral EM actually facilitates information processing. In two EEG experiments, we replicated the bilateral EM procedure of EMDR, having participants engage in continuous bilateral EM or receive bilateral sensory stimulation (BS) as a control while retrieving short- or long-term memory. During EM or BS, we presented bystander images or memory cues to probe neural representations of perceptual and memory information. Multivariate pattern analysis of the EEG signals revealed that bilateral EM enhanced neural representations of simultaneously processed perceptual and memory information. This enhancement was accompanied by heightened visual responses and increased neural excitability in the occipital region. Furthermore, bilateral EM increased information transmission from the occipital to the frontoparietal region, indicating facilitated information transition from low-level perceptual representation to high-level memory representation. These findings argue for theories that emphasize information facilitation rather than disruption in the EMDR treatment. |
Shengyuan Wang; Yanhua Lin; Xiaowei Ding Unmasking social attention: The key distinction between social and non-social attention emerges in disengagement, not engagement Journal Article In: Cognition, vol. 249, pp. 1–13, 2024. @article{Wang2024g, The debate surrounding whether social and non-social attention share the same mechanism has been contentious. While prior studies predominantly focused on engagement, we examined the potential disparity between social and non-social attention from the perspectives of both engagement and disengagement. We developed a two-stage attention-shifting paradigm to capture both attention engagement and disengagement. Combining results from five eye-tracking experiments, we found that the disengagement of social attention markedly outpaces that of non-social attention, while no significant discrepancy emerges in engagement. We uncovered that the faster disengagement of social attention came from its social nature by eliminating alternative explanations including broader fixation distribution width, reduced directional salience in the peripheral visual field, decreased cue-object categorical consistency, reduced perceived validity, and faster processing time. Our study supports the view that the distinction between social and non-social attention is rooted in attention disengagement, not engagement. |
Pengchao Wang; Wei Mu; Gege Zhan; Aiping Wang; Zuoting Song; Tao Fang; Xueze Zhang; Junkongshuai Wang; Lan Niu; Jianxiong Bin; Lihua Zhang; Jie Jia; Xiaoyang Kang Preference detection of the humanoid robot face based on EEG and eye movement Journal Article In: Neural Computing and Applications, vol. 36, no. 19, pp. 11603–11621, 2024. @article{Wang2024f, The face of a humanoid robot can affect the user experience, and the detection of face preference is particularly important. Preference detection belongs to a branch of emotion recognition that has received much attention from researchers. Most of the previous preference detection studies have been conducted based on a single modality. In this paper, we detect face preferences of humanoid robots based on electroencephalogram (EEG) signals and eye movement signals for single modality, canonical correlation analysis fusion modality, and bimodal deep autoencoder (BDAE) fusion modality, respectively. We validated the theory of frontal asymmetry by analyzing the preference patterns of EEG and found that participants had higher alpha wave energy for preference faces. In addition, hidden preferences extracted by EEG signals were better classified than preferences from participants' subjective feedback, and also, the classification performance of eye movement data was improved. Finally, experimental results showed that BDAE multimodal fusion using frontal alpha and beta power spectral densities and eye movement information as features performed best, with the highest average accuracy of 83.13% for the SVM and 71.09% for the KNN. |
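The canonical correlation analysis (CCA) fusion of EEG and eye-movement features mentioned above can be sketched generically with scikit-learn: project both feature sets onto correlated components, concatenate the projections, and classify. Dimensions, data, and classifier below are placeholders, not the study's configuration:

```python
# Generic CCA-based fusion of two feature modalities followed by SVM
# classification; random data stand in for EEG and eye-movement features.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_trials = 120
X_eeg = rng.normal(size=(n_trials, 30))   # e.g. frontal alpha/beta power features
X_eye = rng.normal(size=(n_trials, 12))   # e.g. fixation/saccade statistics
y = rng.integers(0, 2, size=n_trials)     # preference label (like / dislike)

cca = CCA(n_components=5)
Z_eeg, Z_eye = cca.fit_transform(X_eeg, X_eye)   # correlated projections
fused = np.hstack([Z_eeg, Z_eye])                # fusion by concatenation

# For brevity the CCA is fit on all trials; a rigorous pipeline would nest the
# CCA inside the cross-validation to avoid leakage.
scores = cross_val_score(SVC(kernel="rbf"), fused, y, cv=5)
print(f"Cross-validated accuracy on random data: {scores.mean():.2f}")
```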
Lei Wang; Xufeng Zhou; Jie Yang; Fu Zeng; Shuzhen Zuo; Makoto Kusunoki; Huimin Wang; Yong-di Zhou; Aihua Chen; Sze Chai Kwok Mixed coding of content-temporal detail by dorsomedial posterior parietal neurons Journal Article In: Journal of Neuroscience, vol. 44, no. 3, pp. 1–16, 2024. @article{Wang2024, The dorsomedial posterior parietal cortex (dmPPC) is part of a higher-cognition network implicated in elaborate processes underpinning memory formation, recollection, episode reconstruction, and temporal information processing. Neural coding for complex episodic processing is however under-documented. Here, we recorded extracellular neural activities from three male rhesus macaques (Macaca mulatta) and revealed a set of neural codes of “neuroethogram” in the primate parietal cortex. Analyzing neural responses in macaque dmPPC to naturalistic videos, we discovered several groups of neurons that are sensitive to different categories of ethogram items, low-level sensory features, and saccadic eye movement. We also discovered that the processing of category and feature information by these neurons is sustained by the accumulation of temporal information over a long timescale of up to 30 s, corroborating its reported long temporal receptive windows. We performed an additional behavioral experiment with two additional male rhesus macaques and found that saccade-related activities could not account for the mixed neuronal responses elicited by the video stimuli. We further observed that monkeys' scan paths and gaze consistency are modulated by video content. Taken altogether, these neural findings explain how dmPPC weaves fabrics of ongoing experiences together in real time. The high dimensionality of neural representations should motivate us to shift the focus of attention from pure selectivity neurons to mixed selectivity neurons, especially in increasingly complex naturalistic task designs. |
Kangning Wang; Wei Wei; Weibo Yi; Shuang Qiu; Huiguang He; Minpeng Xu; Dong Ming Contrastive fine-grained domain adaptation network for EEG-based vigilance estimation Journal Article In: Neural Networks, vol. 179, pp. 1–18, 2024. @article{Wang2024d, Vigilance state is crucial for the effective performance of users in brain-computer interface (BCI) systems. Most vigilance estimation methods rely on a large amount of labeled data to train a satisfactory model for the specific subject, which limits the practical application of the methods. This study aimed to build a reliable vigilance estimation method using a small amount of unlabeled calibration data. We conducted a vigilance experiment in the designed BCI-based cursor-control task. Electroencephalogram (EEG) signals of eighteen participants were recorded in two sessions on two different days. We then proposed a contrastive fine-grained domain adaptation network (CFGDAN) for vigilance estimation. Here, an adaptive graph convolution network (GCN) was built to project the EEG data of different domains into a common space. The fine-grained feature alignment mechanism was designed to weight and align the feature distributions across domains at the EEG channel level, and the contrastive information preservation module was developed to preserve the useful target-specific information during the feature alignment. The experimental results show that the proposed CFGDAN outperforms the compared methods on both our BCI vigilance dataset and the SEED-VIG dataset. Moreover, the visualization results demonstrate the efficacy of the designed feature alignment mechanisms. These results indicate the effectiveness of our method for vigilance estimation. Our study is helpful for reducing calibration efforts and promoting the practical application potential of vigilance estimation methods. |
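The fine-grained alignment mechanism in CFGDAN is specific to the paper above. A generic building block for aligning feature distributions across domains, shown purely to illustrate the domain-adaptation idea and not the authors' loss, is the (biased) RBF-kernel maximum mean discrepancy:

```python
# Biased RBF-kernel MMD^2 between source- and target-domain feature matrices:
# a generic distribution-alignment term used in many domain-adaptation methods.
# Illustration of the idea only, not the CFGDAN objective.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=1.0):
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))   # e.g. source-session EEG features
tgt = rng.normal(0.5, 1.0, size=(64, 16))   # e.g. unlabeled target-session features
print(mmd2(src, tgt, gamma=0.1))
```

Minimizing a term like this alongside the task loss encourages session-invariant features, which is the general goal the abstract describes at the EEG-channel level.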
Jiahui Wang Does working memory capacity influence learning from video and attentional processing of the instructor's visuals? Journal Article In: Behaviour & Information Technology, vol. 43, no. 1, pp. 95–109, 2024. @article{Wang2024c, Existing evidence suggests that learners with differences in attention and cognition might respond to the same media in different ways. The current study focused on one format of video design, instructor visibility, and explored the moderating effects of working memory capacity on learning from such video design and whether learners with high and low working memory capacity attended to the instructor's visuals differently. Participants watched a video either with or without the instructor's visuals on the screen, while their visual attention was recorded simultaneously. After the video, participants responded to a learning test that measured retention and transfer. Although the results did not show that working memory capacity moderated the instructor visibility effects on learning or influenced learners' visual attention to the instructor's visuals, the findings did indicate that working memory capacity was a positive predictor of retention performance regardless of the video design. Discussions and implications of the findings were provided. |
Jiahui Wang Mind wandering in videos that integrate instructor's visuals: An eye tracking study Journal Article In: Innovations in Education and Teaching International, vol. 61, no. 5, pp. 972–987, 2024. @article{Wang2024m, With an increasing number of videos integrating the instructor's visuals on screen, we know little about the impacts of this design on mind wandering. The study aims to investigate a) how instructor visibility impacts mind wandering; b) the relationship between mind wandering and retention performance; and c) how visual behaviour during video-watching influences mind wandering. Each participant watched a video with or without instructor visibility, while their visual behaviour was recorded by an eye tracker. Retention performance was measured at the completion of the video. Mind wandering was inferred via a global self-report measure and an objective eye-tracking measure. Both measures of mind wandering indicated that the instructor-visible video resulted in less mind wandering. Findings suggested that mind wandering impaired retention performance. Additionally, visual attention to the instructor was associated with less mind wandering. |
Andi Wang; Ana Pellicer-Sánchez Exploring L2 learners' processing of unknown words during subtitled viewing through self-reports Journal Article In: International Review of Applied Linguistics in Language Teaching, no. 2, pp. 1–30, 2024. @article{Wang2024a, Studies have shown the benefits of subtitled viewing for incidental vocabulary learning, but the effects of different subtitling types varied across studies. The effectiveness of different types of subtitled viewing could be related to how unknown vocabulary is processed during viewing. However, no studies have investigated L2 learners' processing of unknown words in viewing beyond exploring learners' attention allocation. The present research followed a qualitative approach to explore L2 learners' processing of unknown words during subtitled viewing under three conditions (i.e., captions, L1 subtitles, and bilingual subtitles) by tapping into learners' reported awareness of the unknown words and the vocabulary processing strategies used to engage with unknown words. According to stimulated recall data (elicited by eye-tracking data) from 45 intermediate-to-advanced-level Chinese learners of English, captions led to increased awareness of the unknown words. Moreover, the types of strategies learners used to cope with unknown vocabulary were determined by subtitling type. |
Kerri Walter; Michelle Freeman; Peter Bex Quantifying task-related gaze Journal Article In: Attention, Perception, & Psychophysics, vol. 86, no. 4, pp. 1318–1329, 2024. @article{Walter2024, Competing theories attempt to explain what guides eye movements when exploring natural scenes: bottom-up image salience and top-down semantic salience. In one study, we apply language-based analyses to quantify the well-known observation that task influences gaze in natural scenes. Subjects viewed ten scenes as if they were performing one of two tasks. We found that the semantic similarity between the task and the labels of objects in the scenes captured the task-dependence of gaze (t(39) = 13.083; p < 0.001). In another study, we examined whether image salience or semantic salience better predicts gaze during a search task, and if viewing strategies are affected by searching for targets of high or low semantic relevance to the scene. Subjects searched 100 scenes for a high- or low-relevance object. We found that image salience becomes a worse predictor of gaze across successive fixations, while semantic salience remains a consistent predictor (X2(1 |
Daniel Walper; Alexandra Bendixen; Sabine Grimm; Anna Schubö; Wolfgang Einhäuser Attention deployment in natural scenes: Higher-order scene statistics rather than semantics modulate the N2pc component Journal Article In: Journal of Vision, vol. 24, no. 6, pp. 1–28, 2024. @article{Walper2024, Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise and that this results from higher-order statistics rather than from semantics or layout. |
Sonja Walcher; Živa Korda; Christof Körner; Mathias Benedek How workload and availability of spatial reference shape eye movement coupling in visuospatial working memory Journal Article In: Cognition, vol. 249, pp. 1–16, 2024. @article{Walcher2024, Eyes are active in memory recall and visual imagination, yet our grasp of the underlying qualities and factors of these internally coupled eye movements is limited. To explore this, we studied 50 participants, examining how workload, spatial reference availability, and imagined movement direction influence internal coupling of eye movements. We designed a visuospatial working memory task in which participants mentally moved a black patch along a path within a matrix and each trial involved one step along this path (presented via speakers: up, down, left, or right). We varied workload by adjusting matrix size (3 × 3 vs. 5 × 5), manipulated availability of a spatial frame of reference by presenting either a blank screen (requiring participants to rely solely on their mental representation of the matrix) or spatial reference in the form of an empty matrix, and contrasted active task performance with two control conditions involving only active or passive listening. Our findings show that eye movements consistently matched the imagined movement of the patch in the matrix, not driven solely by auditory or semantic cues. While workload influenced pupil diameter, perceived demand, and performance, it had no observable impact on internal coupling. The availability of spatial reference enhanced coupling of eye movements, leading to more frequent and precise saccades that were more resilient against noise and bias. The absence of workload effects on coupled saccades in our study, in combination with the relatively high degree of coupling observed even in the invisible matrix condition, indicates that eye movements align with shifts in attention across both visually and internally represented information. This suggests that coupled eye movements are not merely strategic efforts to reduce workload, but rather a natural response to where attention is directed. |
Ana Vilotijević; Sebastiaan Mathôt Non-image-forming vision as measured through ipRGC-mediated pupil constriction is not modulated by covert visual attention Journal Article In: Cerebral Cortex, vol. 34, no. 3, pp. 1–9, 2024. @article{Vilotijevic2024, In brightness, the pupil constricts, while in darkness, the pupil dilates; this is known as the pupillary light response (PLR). The PLR is driven by all photoreceptors: rods and cones, which contribute to image-forming vision, and intrinsically photosensitive retinal ganglion cells (ipRGCs), which mainly contribute to non-image-forming vision. Rods and cones cause immediate pupil constriction upon light exposure, whereas ipRGCs cause sustained constriction throughout light exposure. Recent studies have shown that covert attention modulated the initial PLR; however, it remains unclear whether the same holds for the sustained PLR. We tested this by leveraging ipRGCs' responsiveness to blue light, causing the most prominent sustained constriction. While replicating previous studies by showing that pupils constricted more when either directly looking at, or covertly attending to, bright as compared to dim stimuli (with the same color), we also found that the pupil constricted more when directly looking at blue as compared to red stimuli (with the same luminosity). Crucially, however, in two high-powered studies (n = 60), we did not find any pupil-size difference when covertly attending to blue as compared to red stimuli. This suggests that ipRGC-mediated pupil constriction, and possibly non-image-forming vision more generally, is not modulated by covert attention. |
Pamela Villavicencio; Cristina Malla; Joan López-Moliner Prediction of time to contact under perceptual and contextual uncertainties Journal Article In: Journal of Vision, vol. 24, no. 6, pp. 1–18, 2024. @article{Villavicencio2024, Accurately estimating time to contact (TTC) is crucial for successful interactions with moving objects, yet it is challenging under conditions of sensory and contextual uncertainty, such as occlusion. In this study, participants engaged in a prediction motion task, monitoring a target that moved rightward and an occluder. The participants' task was to press a key when they predicted the target would be aligned with the occluder's right edge. We manipulated sensory uncertainty by varying the visible and occluded periods of the target, thereby modulating the time available to integrate sensory information and the duration over which motion must be extrapolated. Additionally, contextual uncertainty was manipulated by having a predictable and unpredictable condition, meaning the occluder either reliably indicated where the moving target would disappear or provided no such indication. Results showed differences in accuracy between the predictable and unpredictable occluder conditions, with different eye movement patterns in each case. Importantly, the ratio of the time the target was visible, which allows for the integration of sensory information, to the occlusion time, which determines perceptual uncertainty, was a key factor in determining performance. This ratio is central to our proposed model, which provides a robust framework for understanding and predicting human performance in dynamic environments with varying degrees of uncertainty. |
Simone Viganò; Rena Bayramova; Christian F. Doeller; Roberto Bottini Spontaneous eye movements reflect the representational geometries of conceptual spaces Journal Article In: Proceedings of the National Academy of Sciences, vol. 121, no. 17, pp. 1–10, 2024. @article{Vigano2024, Functional neuroimaging studies indicate that the human brain can represent concepts and their relational structure in memory using coding schemes typical of spatial navigation. However, whether we can read out the internal representational geometries of conceptual spaces solely from human behavior remains unclear. Here, we report that the relational structure between concepts in memory might be reflected in spontaneous eye movements during verbal fluency tasks: When we asked participants to randomly generate numbers, their eye movements correlated with distances along the left-to-right one-dimensional geometry of the number space (mental number line), while they scaled with distance along the ring-like two-dimensional geometry of the color space (color wheel) when they randomly generated color names. Moreover, when participants randomly produced animal names, eye movements correlated with low-dimensional similarity in word frequencies. These results suggest that the representational geometries used to internally organize conceptual spaces might be read out from gaze behavior. |
Inês S. Veríssimo; Zachary Nudelman; Christian N. L. Olivers Does crowding predict conjunction search? An individual differences approach Journal Article In: Vision Research, vol. 216, pp. 1–13, 2024. @article{Verissimo2024, Searching for objects in the visual environment is an integral part of human behavior. Most of the information used during such visual search comes from the periphery of our vision, and understanding the basic mechanisms of search therefore requires taking into account the inherent limitations of peripheral vision. Our previous work using an individual differences approach has shown that one of the major factors limiting peripheral vision (crowding) is predictive of single feature search, as reflected in response time and eye movement measures. Here we extended this work, by testing the relationship between crowding and visual search in a conjunction-search paradigm. Given that conjunction search involves more fine-grained discrimination and more serial behavior, we predicted it would be strongly affected by crowding. We tested sixty participants with regard to their sensitivity to both orientation and color-based crowding (as measured by critical spacing) and their efficiency in searching for a color/orientation conjunction (as indicated by manual response times and eye movements). While the correlations between the different crowding tasks were high, the correlations between the different crowding measures and search performance were relatively modest, and no higher than those previously observed for single-feature search. Instead, observers showed very strong color selectivity during search. The results suggest that conjunction search behavior relies more on top-down guidance (here by color) and is therefore relatively less determined by individual differences in sensory limitations as caused by crowding. |
Jennifer A. Veitch; Naomi J. Miller Effects of temporal light modulation on individuals sensitive to pattern glare Journal Article In: Leukos, vol. 20, no. 3, pp. 310–346, 2024. @article{Veitch2024, Solid-state lighting systems can vary widely in the degree of temporal light modulation (TLM) of their light output. TLM is known to have visual, cognitive, and behavioral effects but there are few recommendations for limits on the acceptable TLM in everyday lighting systems and there is little information concerning individual differences in sensitivity. This paper is a re-analysis of previously presented data, focusing on two subgroups in a larger sample: those scoring low or high on the Wilkins Pattern Glare Sensitivity (PGS) test, which is a validated test that identifies people at high risk of visual stress. The results show that the PGS groups differed in their sensitivity to TLM conditions, despite short exposures and a restricted field of view. |
Janne M. Veerbeek; Henrik Rühe; Beatrice Ottiger; Stephan Bohlhalter; Thomas Nyffeler; Dario Cazzoli Impact of neglect on the relationship between upper limb motor function and upper limb performance in the (hyper)acute poststroke phase Journal Article In: Neurorehabilitation and Neural Repair, vol. 39, no. 2, pp. 138–41, 2024. @article{Veerbeek2024, Visuospatial neglect (VSN) is a negative, strong, and independent predictor of poor outcome after stroke, and is associated with poorer upper limb (UL) motor recovery in terms of function or capacity (ie, in standardized, lab-based testing). Although the main aim of stroke rehabilitation is to re-establish optimal functioning in daily life, the impact of VSN on UL performance (ie, in unstructured, everyday environments) is largely unknown. In this proof of principle study, the impact of VSN on the strength of the association between UL motor function (Jamar Hand Dynamometer) and UL performance (Upper Limb Lucerne ICF-based Multidisciplinary Observation Scale) was investigated in 65 (hyper)acute first-ever stroke patients. In a moderator analysis, the interaction term was negative and significant, showing that VSN suppresses the use of UL motor function in daily life (ie, performance). This finding suggests that, when considering UL performance in the (hyper)acute phase after stroke, interventions aimed to reduce deficits in both UL motor function and visuospatial function should already be started in the acute stroke unit setting. |
Ondřej Vaníček; Lucie Krejčová; Martin Hůla; Kateřina Potyszová; Kateřina Klapilová; Klára Bártová Eye-tracking does not reveal early attention processing of sexual copulatory movement in heterosexual men and women Journal Article In: Scientific Reports, vol. 14, no. 1, pp. 1–8, 2024. @article{Vanicek2024, Men and women respond differently when presented with sexual stimuli. Men's reaction is gender-specific, and women's reaction is gender-nonspecific. This might be a result of differential cognitive processing of sexual cues, namely copulatory movement (CM), which is present in almost every dynamic erotic stimulus. A novel eye-tracking procedure was developed to assess the saliency of short film clips containing CM or non-CM sexual activities. Results from 29 gynephilic men and 31 androphilic women showed only small and nonsignificant effects in attention bias and no effects in attentional capture. Our results suggest that CM is not processed differently in men and women and, therefore, is not the reason behind gender-nonspecific sexual responses in women. |
Nele Vanbilsen; Valentina Pergher; Marc M. Van Hulle Effects of task-specific strategy on attentional control game training: Preliminary data from healthy adults Journal Article In: Current Psychology, vol. 43, no. 2, pp. 1864–1878, 2024. @article{Vanbilsen2024, Although recent studies have shown the beneficial effects of video game training, it is still unclear whether the strategy used plays an important role in enhancing performance in the trained cognitive ability and in promoting transfer to other cognitive domains. We investigated behaviourally the effect of strategy on the outcomes of visual attentional control game training and, both behaviourally and in terms of EEG-based event-related potentials (ERPs), its effect on other cognitive domains. We recruited 21 healthy adults, divided into three groups: a strategy-training group (STG) instructed to use a specific strategy, a non-strategy training group (NSTG) that self-developed their strategy, and a passive control group (PCG) that underwent only pre- and post-tests. Our results showed that the use of a specific strategy made the STG participants respond faster on the trained contrast level task, but not on the contour exercises task. Furthermore, both STG and NSTG showed pre- to post-test transfer; however, no significant differences were found between the groups for either behaviour or ERP responses. In conclusion, we believe these preliminary results provide evidence for the importance of strategy choice in cognitive training protocols. |
Elle van Heusden; Christian N. L. Olivers; Mieke Donk The effects of eccentricity on attentional capture Journal Article In: Attention, Perception, & Psychophysics, vol. 86, no. 2, pp. 422–438, 2024. @article{Heusden2024, Visual attention may be captured by an irrelevant yet salient distractor, thereby slowing search for a relevant target. This phenomenon has been widely studied using the additional singleton paradigm in which search items are typically all presented at one and the same eccentricity. Yet, differences in eccentricity may well bias the competition between target and distractor. Here we investigate how attentional capture is affected by the relative eccentricities of a target and a distractor. Participants searched for a shape-defined target in a grid of homogeneous nontargets of the same color. On 75% of trials, one of the nontarget items was replaced by a salient color-defined distractor. Crucially, target and distractor eccentricities were independently manipulated across three levels of eccentricity (i.e., near, middle, and far). Replicating previous work, we show that the presence of a distractor slows down search. Interestingly, capture as measured by manual reaction times was not affected by target and distractor eccentricity, whereas capture as measured by the eyes was: items close to fixation were more likely to be selected than items presented further away. Furthermore, the effects of target and distractor eccentricity were largely additive, suggesting that the competition between saliency- and relevance-driven selection was modulated by an independent eccentricity-based spatial component. Implications of the dissociation between manual and oculomotor responses are also discussed. |
Anouk van der Heide; Maaike Wessel; Danae Papadopetraki; Dirk E. M. Geurts; Teije H. Prooije; Frank Gommans; Bastiaan R. Bloem; Michiel F. Dirkx; Rick C. Helmich Propranolol reduces Parkinson's tremor and inhibits tremor-related activity in the motor cortex: A placebo-controlled crossover trial Journal Article In: Annals of Neurology, pp. 1–12, 2024. @article{Heide2024, Objective: Parkinson's disease (PD) resting tremor is thought to be initiated in the basal ganglia and amplified in the cerebello-thalamo-cortical circuit. Because stress worsens tremor, the noradrenergic system may play a role in amplifying tremor. We tested if and how propranolol, a non-selective beta-adrenergic receptor antagonist, reduces PD tremor and whether or not this effect is specific to stressful conditions. Methods: In a cross-over, double-blind intervention study, participants with PD resting tremor received propranolol (40 mg, single dose) or placebo (counter-balanced) on 2 different days. During both days, we assessed tremor severity (with accelerometry) and tremor-related brain activity (with functional magnetic resonance imaging), as well as heart rate and pupil diameter, while subjects performed a stressful cognitive load task that has been linked to the noradrenergic system. We tested for effects of drug (propranolol vs placebo) and stress (cognitive load vs rest) on tremor power and tremor-related brain activity. Results: We included 27 PD patients with prominent resting tremor. Tremor power significantly increased during cognitive load versus rest (F[1,19] = 13.8; p = 0.001; ηp² = 0.42) and decreased by propranolol versus placebo (F[1,19] = 6.4; p = 0.02; ηp² = 0.25), but there was no interaction. We observed task-related brain activity in a stress-sensitive cognitive control network and tremor power-related activity in the cerebello-thalamo-cortical circuit. Propranolol significantly reduced tremor-related activity in the motor cortex compared to placebo (F[1,21] = 5.3; p = 0.03; ηp² = 0.20), irrespective of cognitive load. Interpretation: Our findings indicate that propranolol has a general, context-independent, tremor-reducing effect that may be implemented at the level of the primary motor cortex. |
A. Van Den Kerchove; H. Si-Mohammed; M. M. Van Hulle; F. Cabestaing Correcting for ERP latency jitter improves gaze-independent BCI decoding Journal Article In: Journal of Neural Engineering, vol. 21, no. 4, pp. 1–15, 2024. @article{VanDenKerchove2024, Objective. Patients suffering from heavy paralysis or Locked-in-Syndrome can regain communication using a Brain-Computer Interface (BCI). Visual event-related potential (ERP) based BCI paradigms exploit visuospatial attention (VSA) to targets laid out on a screen. However, performance drops if the user does not direct their eye gaze at the intended target, harming the utility of this class of BCIs for patients suffering from eye motor deficits. We aim to create an ERP decoder that is less dependent on eye gaze. Approach. ERP component latency jitter plays a role in covert visuospatial attention (VSA) decoding. We introduce a novel decoder which compensates for these latency effects, termed Woody Classifier-based Latency Estimation (WCBLE). We carried out a BCI experiment recording ERP data in overt and covert visuospatial attention (VSA), and introduce a novel special case of covert VSA termed split VSA, simulating the experience of patients with severely impaired eye motor control. We evaluate WCBLE on this dataset and the BNCI2014-009 dataset, within and across VSA conditions to study the dependency on eye gaze and the variation thereof during the experiment. Main results. WCBLE outperforms state-of-the-art methods in the VSA conditions of interest in gaze-independent decoding, without reducing overt VSA performance. Results from across-condition evaluation show that WCBLE is more robust to varying VSA conditions throughout a BCI operation session. Significance. Together, these results point towards a pathway to achieving gaze independence through suited ERP decoding. Our proposed gaze-independent solution enhances decoding performance in those cases where performing overt VSA is not possible. |
Roman Vakhrushev; Arezoo Pooresmaeili Interaction of spatial attention and the associated reward value of audiovisual objects Journal Article In: Cortex, vol. 179, pp. 271–285, 2024. @article{Vakhrushev2024, Reward value and selective attention both enhance the representation of sensory stimuli at the earliest stages of processing. It is still debated whether and how reward-driven and attentional mechanisms interact to influence perception. Here we ask whether the interaction between reward value and selective attention depends on the sensory modality through which the reward information is conveyed. Human participants first learned the reward value of uni-modal visual and auditory stimuli during a conditioning phase. Subsequently, they performed a target detection task on bimodal stimuli containing a previously rewarded stimulus in one, both, or neither of the modalities. Additionally, participants were required to focus their attention on one side and only report targets on the attended side. Our results showed a strong modulation of visual and auditory event-related potentials (ERPs) by spatial attention. We found no main effect of reward value but importantly we found an interaction effect as the strength of attentional modulation of the ERPs was significantly affected by the reward value. When reward effects were examined separately with respect to each modality, auditory value-driven modulation of attention was found to dominate the ERP effects whereas visual reward value on its own led to no effect, likely due to its interference with the target processing. These results inspire a two-stage model where first the salience of a high reward stimulus is enhanced on a local priority map specific to each sensory modality, and at a second stage reward value and top-down attentional mechanisms are integrated across sensory modalities to affect perception. |
Hariklia Vagias; Michelle L. Byrne; Lyn Millist; Owen White; Meaghan Clough; Joanne Fielding Visuo-cognitive phenotypes in early multiple sclerosis: A multisystem model of visual processing Journal Article In: Journal of Clinical Medicine, vol. 13, no. 3, pp. 1–19, 2024. @article{Vagias2024, Background: Cognitive impairment can emerge in the earliest stages of multiple sclerosis (MS), with heterogeneity in cognitive deficits often hindering symptom identification and management. Sensory–motor dysfunction, such as visual processing impairment, is also common in early disease and can impact neuropsychological task performance in MS. However, cognitive phenotype research in MS does not currently consider the relationship between early cognitive changes and visual processing impairment. Objectives: This study explored the relationship between cognition and visual processing in early MS by adopting a three-system model of afferent sensory, central cognitive and efferent ocular motor visual processing to identify distinct visuo-cognitive phenotypes. Methods: Patients with clinically isolated syndrome and relapsing–remitting MS underwent neuro-ophthalmic, ocular motor and neuropsychological evaluation to assess each visual processing system. The factor structure of ocular motor variables was examined using exploratory factor analysis, and phenotypes were identified using latent profile analysis. Results: Analyses revealed three ocular-motor constructs (cognitive control, cognitive processing speed and basic visual processing) and four visuo-cognitive phenotypes (early visual changes, efferent-cognitive, cognitive control and afferent-processing speed). While the efferent-cognitive phenotype was present in significantly older patients than was the early visual changes phenotype, there were no other demographic differences between phenotypes. The efferent-cognitive and cognitive control phenotypes had poorer performance on the Symbol Digit Modalities Test compared to that of other phenotypes; however, no other differences in performance were detected. Conclusion: Our findings suggest that distinct visual processing deficits in early MS may differentially impact cognition, which is not captured using standard neuropsychological evaluation. Further research may facilitate improved symptom identification and intervention in early disease. |
Maiko Uesaki; Arnab Biswas; Hiroshi Ashida; Gerrit Maus Blue-yellow combination enhances perceived motion in Rotating Snakes illusion Journal Article In: i-Perception, vol. 15, no. 2, pp. 1–9, 2024. @article{Uesaki2024, The Rotating Snakes illusion is a visual illusion where a stationary image elicits a compelling sense of anomalous motion. There have been recurring albeit anecdotal claims that the perception of illusory motion is more salient when the image consists of patterns with the combination of blue and yellow; however, there is limited empirical evidence that supports those claims. In the present study, we aimed to assess whether the Rotating Snakes illusion is more salient in its blue-yellow variation, compared to red-green and greyscale variations when the luminance of corresponding elements within the patterns were equated. Using the cancellation method, we found that the velocity required to establish perceptual stationarity was indeed greater for the stimulus composed of patterns with a blue-yellow combination than the other two variants. Our findings provide, for the first time, empirical evidence that the presence of colour affects the magnitude of illusion in the Rotating Snakes illusion. |
Motoaki Uchimura; Hironori Kumano; Shigeru Kitazawa Neural transformation from retinotopic to background-centric coordinates in the macaque precuneus Journal Article In: The Journal of Neuroscience, vol. 44, no. 48, pp. 1–19, 2024. @article{Uchimura2024, Visual information is initially represented in retinotopic coordinates and later in craniotopic coordinates. Psychophysical evidence suggests that visual information is further represented in more general coordinates related to the external world; however, the neural basis of nonegocentric coordinates remains elusive. This study investigates the automatic transformation from egocentric to nonegocentric coordinates in the macaque precuneus (two males, one female), identified by a functional imaging study as a key area for nonegocentric representation. We found that 6.2% of neurons in the precuneus have receptive fields (RFs) anchored to the background rather than to the retina or the head, while 16% had traditional retinotopic RFs. Notably, these two types were not exclusive: many background-centric neurons initially encode a stimulus' position in retinotopic coordinates (up to ∼90 ms from the stimulus onset) but later shift to background coordinates, peaking at ∼150 ms. Regarding retinotopic information, the stimulus dominated the initial period, whereas the background dominated the later period. In the absence of a background, there is a dramatic surge in retinotopic information about the stimulus during the later phase, clearly delineating two distinct periods of retinotopic encoding: one focusing on the figure to be attended and another on the background. These findings suggest that the initial retinotopic information of the stimulus is combined with the background retinotopic information in a subsequent stage, yielding a more stable representation of the stimulus relative to the background through time-division multiplexing. |
Sandra Tyralla; Eckart Zimmermann Serial dependencies in motor targeting as a function of target appearance Journal Article In: Journal of Vision, vol. 24, no. 13, pp. 1–13, 2024. @article{Tyralla2024, In order to bring stimuli of interest into our central field of vision, we perform saccadic eye movements. After every saccade, the error between the predicted and actual landing position is monitored. In the laboratory, artificial post-saccadic errors are created by displacing the target during saccade execution. Previous research found that even a single post-saccadic error induces immediate amplitude changes to minimize that error. The saccadic amplitude adjustment could result from a recalibration of the saccade target representation. We asked if recalibration follows an integration scheme in which the impact magnitude of the previous post-saccadic target location depends on the certainty of the current target. We asked subjects to perform saccades to Gaussian blobs as targets, the visuospatial certainty of which we manipulated by changing its spatial constant. In separate sessions, either the pre-saccadic or post-saccadic target was uncertain. Additionally, we manipulated the contrast to further decrease certainty, changing the spatial constant mid-saccade. We found saccade-by-saccade amplitude reductions only with a currently uncertain target, a previously certain one, and a constant target contrast. We conclude that the features of the pre-saccadic target (i.e., size and contrast) determine the extent to which post-saccadic error shapes upcoming saccade amplitudes. |
Massimo Turatto; Matteo De Tommaso; Leonardo Chelazzi Learning to ignore visual onset distractors hinges on a configuration-dependent coordinates system Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 50, no. 10, pp. 971–988, 2024. @article{Turatto2024, Decrement of attentional capture elicited by visual onset distractors, consistent with habituation, has been extensively characterized over the past several years. However, the type of spatial frame of reference according to which such decrement occurs in the brain remains unknown. Here, four related experiments are reported to shed light on this issue. Observers were asked to discriminate the orientation of a tilted line while ignoring a salient but task-irrelevant visual onset that occurred on some trials. The experiments all involved an initial habituation phase, during which capture elicited by the onset distractor progressively decreased, as in prior studies. Importantly, in all experiments, the location of the target and the distractor remained fixed during this phase. After habituation was established, in a final test phase of the various experiments, the spatial arrangement of the target and the distractor was changed to test for the relative contribution to habituation of retinotopic, spatiotopic, and configuration-dependent visual representations. Experiment 1 indicated that spatiotopic representations contribute little, if at all, to the observed decrement in attentional capture. The results from Experiment 2 were compatible with the notion that such capture reduction occurs in either retinotopic- or configuration-specific representations. However, Experiment 3 ruled out the contribution of retinotopic representations, leaving configuration-specific representation as the sole viable interpretation. This conclusion was confirmed by the results of Experiments 4 and 5. In conclusion, visual onset distractors appear to be rejected at a level of the visual hierarchy where visual events are encoded in a configuration-specific or context-dependent manner. |
Marius Tröndle; Nicolas Langer Decomposing neurophysiological underpinnings of age-related decline in visual working memory Journal Article In: Neurobiology of Aging, vol. 139, pp. 30–43, 2024. @article{Troendle2024, Exploring the neural basis of age-related decline in working memory is vital in our aging society. Previous electroencephalographic studies suggested that the contralateral delay activity (CDA) may be insensitive to age-related decline in lateralized visual working memory (VWM) performance. Instead, recent evidence indicated that task-induced alpha power lateralization decreases in older age. However, the relationship between alpha power lateralization and age-related decline of VWM performance remains unknown, and recent studies have questioned the validity of these findings due to confounding factors of the aperiodic signal. Using a sample of 134 participants, we replicated the age-related decrease of alpha power lateralization after adjusting for the aperiodic signal. Critically, the link between task performance and alpha power lateralization was found only when correcting for aperiodic signal biases. Functionally, these findings suggest that age-related declines in VWM performance may be related to the decreased ability to prioritize relevant over irrelevant information. Conversely, CDA amplitudes were stable across age groups, suggesting a distinct neural mechanism possibly related to preserved VWM encoding or early maintenance. |
Ana María Triana; Juha Salmi; Nicholas Mark Edward Alexander Hayward; Jari Saramäki; Enrico Glerean 2024. @book{Triana2024, Our behavior and mental states are constantly shaped by our environment and experiences. However, little is known about the response of brain functional connectivity to environmental, physiological, and behavioral changes on different timescales, from days to months. This gives rise to an urgent need for longitudinal studies that collect high-frequency data. To this end, for a single subject, we collected 133 days of behavioral data with smartphones and wearables and performed 30 functional magnetic resonance imaging (fMRI) scans measuring attention, memory, resting state, and the effects of naturalistic stimuli. We find traces of past behavior and physiology in brain connectivity that extend up as far as 15 days. While sleep and physical activity relate to brain connectivity during cognitively demanding tasks, heart rate variability and respiration rate are more relevant for resting-state connectivity and movie-watching. This unique data set is openly accessible, offering an exceptional opportunity for further discoveries. Our results demonstrate that we should not study brain connectivity in isolation, but rather acknowledge its interdependence with the dynamics of the environment, changes in lifestyle, and short-term fluctuations such as transient illnesses or restless sleep. These results reflect a prolonged and sustained relationship between external factors and neural processes. Overall, precision mapping designs such as the one employed here can help to better understand intraindividual variability, which may explain some of the observed heterogeneity in fMRI findings. The integration of brain connectivity, physiology data and environmental cues will propel future environmental neuroscience research and support precision healthcare. |
Michael P. Trevarrow; Miranda J. Munoz; Yessenia M. Rivera; Rishabh Arora; Quentin H. Drane; Gian D. Pal; Leonard Verhagen Metman; Lisa C. Goelz; Daniel M. Corcos; Fabian J. David Medication improves velocity, reaction time, and movement time but not amplitude or error during memory-guided reaching in Parkinson's disease Journal Article In: Physiological Reports, vol. 12, no. 17, pp. 1–14, 2024. @article{Trevarrow2024, The motor impairments experienced by people with Parkinson's disease (PD) are exacerbated during memory-guided movements. Despite this, the effect of antiparkinson medication on memory-guided movements has not been elucidated. We evaluated the effect of antiparkinson medication on motor control during a memory-guided reaching task with short and long retention delays in participants with PD and compared performance to age-matched healthy control (HC) participants. Thirty-two participants with PD completed the motor section of the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS III) and performed a memory-guided reaching task with two retention delays (0.5 s and 5 s) while on and off medication. Thirteen HC participants completed the MDS-UPDRS III and performed the memory-guided reaching task. In the task, medication increased movement velocity, decreased movement time, and decreased reaction time toward what was seen in the HC. However, movement amplitude and reaching error were unaffected by medication. Shorter retention delays increased movement velocity and amplitude, decreased movement time, and decreased error, but increased reaction times in the participants with PD and HC. Together, these results imply that antiparkinson medication is more effective at altering the neurophysiological mechanisms controlling movement velocity and reaction time compared with other aspects of movement control. |
Caterina Trentin; Giulia Rinaldi; Magdalena A. Chorzcepa; Michaela A. Imhof; Heleen A. Slagter; Christian N. L. Olivers A certain future strengthens the past: knowing ahead how to act on an object prioritizes its visual working memory representation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–15, 2024. @article{Trentin2024a, Findings from recent studies indicate that planning an action toward an object strengthens its visual working memory (VWM) representation, emphasizing the importance of sensorimotor links in VWM. In the present study, we investigated to what extent such sensorimotor links are modulated by how well-defined an action plan is. In three eye-tracking experiments, we asked participants to memorize a visual stimulus for a subsequent memory test, whereby they performed a specific hand movement toward memory-matching probes. We manipulated action uncertainty so that in the defined action condition, participants knew before the memory delay what specific action they would have to perform at the memory test, while in the undefined action condition, they were informed about the specific action on the object in VWM only after the delay. Importantly, during the delay, participants were presented with a visual detection task, designed to measure any attentional biases toward the memorized object. Across the three experiments, we found moderate evidence that knowing in advance how to act on an object prioritized its mnemonic representation, as expressed in an increased attentional bias toward it. Our results support the idea that knowing what action to perform on an object strengthens its representation in VWM, and further highlight the importance of considering action in the study of VWM. |
Caterina Trentin; Chris Olivers; Heleen A. Slagter Action planning renders objects in working memory more attentionally salient Journal Article In: Journal of Cognitive Neuroscience, vol. 36, no. 10, pp. 2166–2183, 2024. @article{Trentin2024, A rapidly growing body of work suggests that visual working memory (VWM) is fundamentally action oriented. Consistent with this, we recently showed that attention is more strongly biased by VWM representations of objects when we plan to act on those objects in the future. Using EEG and eye tracking, here, we investigated neurophysiological correlates of the interactions between VWM and action. Participants (n = 36) memorized a shape for a subsequent VWM test. At test, a probe was presented along with a secondary object. In the action condition, participants gripped the actual probe if it matched the memorized shape, whereas in the control condition, they gripped the secondary object. Crucially, during the VWM delay, participants engaged in a visual selection task, in which they located a target as fast as possible. The memorized shape could either encircle the target (congruent trials) or a distractor (incongruent trials). Replicating previous findings, we found that eye gaze was biased toward the VWM-matching shape and, importantly, more so when the shape was directly associated with an action plan. Moreover, the ERP results revealed that during the selection task, future action-relevant VWM-matching shapes elicited (1) a stronger Ppc (posterior positivity contralateral), signaling greater attentional saliency; (2) an earlier PD (distractor positivity) component, suggesting faster suppression; (3) a larger inverse (i.e., positive) sustained posterior contralateral negativity in incongruent trials, consistent with stronger suppression of action-associated distractors; and (4) an enhanced response-locked positivity over left motor regions, possibly indicating enhanced inhibition of the response associated with the memorized item during the interim task. Overall, these results suggest that action planning renders objects in VWM more attentionally salient, supporting the notion of selection-for-action in working memory. |
Vít Třebický; Petr Tureček; Jitka Třebická Fialová; Žaneta Pátková; Dominika Grygarová; Jan Havlíček In: Evolution and Human Behavior, vol. 45, no. 6, pp. 1–11, 2024. @article{Trebicky2024, Facial and bodily features represent salient visual stimuli upon which people spontaneously attribute various fitness-relevant characteristics such as attractiveness or formidability. While existing evidence predominantly relies on sequential stimuli presentation tasks, real-world social comparisons often involve assessing two or multiple individuals. In studies using two-alternative forced-choice tasks, participants usually perform above chance in selecting the expected option. However, these tasks use dichotomized and artificially manipulated stimuli that lack generalizability in situations where the differences between individuals are less likely to be ‘clear-cut'. We tested whether the probability of selection would proportionally increase with increasing degrees of difference between the stimuli or whether there is a discrimination threshold if the stimuli are perceived as too similar. In two registered studies comprising online (N = 446) and onsite (N = 56) participants, we explored the influence of the degree of difference in attractiveness and formidability ratings between stimuli pairs on both the probability of selection and selection speed. Participants were presented with randomly selected pairs of men (30 pairs of faces, 30 pairs of bodies) and tasked with choosing the more attractive or formidable target. Applying Bayesian inference, our findings reveal a systematic impact of the degree of difference on both the selection probability and speed. As differences in attractiveness or formidability increased, both men and women exhibited a heightened propensity and speed in selecting the higher-scoring stimuli. Our study demonstrates that people discriminate even slight differences in attractiveness and formidability, indicating that cognitive processes underlying the perception of these characteristics had undergone natural selection for a high level of discrimination. |
Tobiasz Trawiński; Chuanli Zang; Simon P. Liversedge; Yao Ge; Ying Fu; Nick Donnelly The influence of culture on the viewing of Western and East Asian paintings Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, vol. 18, no. 2, pp. 121–142, 2024. @article{Trawinski2024, The influence of British and Chinese culture on the viewing of paintings from Western and East Asian traditions was explored in an old/new discrimination task. Accuracy data were considered alongside signal detection measures of sensitivity and bias. The results showed participant culture and painting tradition interacted but only with respect to response bias and not sensitivity. Eye movements were also recorded during encoding and discrimination. Paintings were split into regions of interest defined by faces, or the theme and context to analyze the eye movement data. With respect to the eye movement data, the results showed that a match between participant culture and painting tradition increased the viewing of faces in paintings at the expense of the viewing of other locations, an effect interpreted as a manifestation of the Other Race Effect on the viewing of paintings. There was, however, no evidence of broader influence of culture on the eye movements made to paintings as might be expected if culture influenced the allocation of attention more generally. Taken together, these findings suggest culture influences the viewing of paintings but only in response to challenges to the encoding of faces. |
Christof Elias Topfstedt; Luca Wollenberg; Thomas Schenk Training enables substantial decoupling of visual attention and saccade preparation Journal Article In: Vision Research, vol. 221, pp. 1–13, 2024. @article{Topfstedt2024, Visual attention is typically shifted toward the targets of upcoming saccadic eye movements. This observation is commonly interpreted in terms of an obligatory coupling between attentional selection and oculomotor programming. Here, we investigated whether this coupling is facilitated by a habitual expectation of spatial congruence between visual and motor targets. To this end, we conducted a dual-task (i.e., concurrent saccade task and visual discrimination task) experiment in which male and female participants were trained to either anticipate spatial congruence or incongruence between a saccade target and an attention probe stimulus. To assess training-induced effects of expectation on premotor attention allocation, participants subsequently completed a test phase in which the attention probe position was randomized. Results revealed that discrimination performance was systematically biased toward the expected attention probe position, irrespective of whether this position matched the saccade target or not. Overall, our findings demonstrate that visual attention can be substantially decoupled from ongoing oculomotor programming and suggest an important role of habitual expectations in the attention-action coupling. |
Ivan Tomić; Paul M. Bays A dynamic neural resource model bridges sensory and working memory Journal Article In: eLife, vol. 12, pp. 1–38, 2024. @article{Tomic2024a, Probing memory of a complex visual image within a few hundred milliseconds after its disappearance reveals significantly greater fidelity of recall than if the probe is delayed by as little as a second. Classically interpreted, the former taps into a detailed but rapidly decaying visual sensory or ‘iconic' memory (IM), while the latter relies on capacity-limited but comparatively stable visual working memory (VWM). While iconic decay and VWM capacity have been extensively studied independently, currently no single framework quantitatively accounts for the dynamics of memory fidelity over these time scales. Here, we extend a stationary neural population model of VWM with a temporal dimension, incorporating rapid sensory-driven accumulation of activity encoding each visual feature in memory, and a slower accumulation of internal error that causes memorized features to randomly drift over time. Instead of facilitating read-out from an independent sensory store, an early cue benefits recall by lifting the effective limit on VWM signal strength imposed when multiple items compete for representation, allowing memory for the cued item to be supplemented with information from the decaying sensory trace. Empirical measurements of human recall dynamics validate these predictions while excluding alternative model architectures. A key conclusion is that differences in capacity classically thought to distinguish IM and VWM are in fact contingent upon a single resource-limited WM store. |
Ivan Tomić; Dagmar Adamcová; Máté Fehér; Paul M. Bays Dissecting the components of error in analogue report tasks Journal Article In: Behavior Research Methods, vol. 56, pp. 8196–8213, 2024. @article{Tomic2024, Over the last two decades, the analogue report task has become a standard method for measuring the fidelity of visual representations across research domains including perception, attention, and memory. Despite its widespread use, there has been no methodical investigation of the different task parameters that might contribute to response variability. To address this gap, we conducted two experiments manipulating components of a typical analogue report test of memory for colour hue. We found that human response errors were independently affected by changes in storage and maintenance requirements of the task, demonstrated by a strong effect of set size even in the absence of a memory delay. In contrast, response variability remained unaffected by physical size of the colour wheel, implying negligible contribution of motor noise to task performance, or by its chroma radius, highlighting non-uniformity of the standard colour space. Comparing analogue report to a matched forced-choice task, we found variation in adjustment criterion made a limited contribution to analogue report variability, becoming meaningful only with low representational noise. Our findings validate the analogue report task as a robust measure of representational fidelity for most purposes, while also quantifying non-representational sources of noise that would limit its reliability in specialized settings. |
Daniel Toledano; Mor Sasi; Shlomit Yuval-Greenberg; Dominique Lamy On the timing of overt attention deployment: Eye-movement evidence for the priority accumulation framework Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 50, no. 5, pp. 431–450, 2024. @article{Toledano2024, Most visual-search theories assume that our attention is automatically allocated to the location with the highest priority at any given moment. The Priority Accumulation Framework (PAF) challenges this assumption. It suggests that the priority weight at each location accumulates across sequential events and that evidence for the presence of action-relevant information contributes to determining when attention is deployed to the location with the highest accumulated priority. Here, we tested these hypotheses for overt attention by recording first saccades in a free-viewing spatial-cueing task. We manipulated search difficulty (Experiments 1 and 2) and cue salience (Experiment 2). Standard theories posit that when oculomotor capture by the cue occurs, it is initiated before the search display appears; therefore, these theories predict that the cue's impact on the distribution of first saccades should be independent of search difficulty but influenced by the cue's saliency. By contrast, PAF posits that the cue can bias competition later, after processing of the search display has already started, and therefore predicts that such late impact should increase with both search difficulty and cue salience. The results fully supported PAF's predictions. Our account suggests a distinction between attentional capture and attentional-priority bias that resolves enduring inconsistencies in the attentional-capture literature. |
Zhenghe Tian; Jingwen Chen; Cong Zhang; Bin Min; Bo Xu; Liping Wang Mental programming of spatial sequences in working memory in the macaque frontal cortex Journal Article In: Science, vol. 385, no. 1437, pp. 1–1, 2024. @article{Tian2024a, Working memory (WM) refers to our ability to temporarily maintain and manipulate information, which is foundational to the organization of goal-directed behavior. Although the nature of WM maintenance has been the focus of WM research in the past decades, WM manipulation or volitional control is more complex and has received less attention. The control process is what makes WM distinct and sets it apart from short-term memory. Previous human imaging studies have shown that the frontal cortex was highly involved in WM control. However, the neural dynamics and computational mechanisms supporting the control are not understood. We aimed to characterize these neural computations in the frontal cortex of nonhuman primates. |
Yanying Tian; Min Hai; Yongchun Wang; Minmin Yan; Tingkang Zhang; Jingjing Zhao; Yonghui Wang Is the precedence of social re-orienting only inherent to the initiators? Journal Article In: Quarterly Journal of Experimental Psychology, pp. 1–14, 2024. @article{Tian2024, Previous research has revealed that initiators preferentially re-orient their attention towards responders with whom they have established joint attention (JA). However, it remains unclear whether this precedence of social re-orienting is inherent to initiators or applies equally to responders, and whether this social re-orienting is modulated by the social contexts in which JA is achieved. To address these issues, the present study adopted a modified virtual-reality paradigm to manipulate social roles (initiator vs. responder), social behaviours (JA vs. Non-JA), and social contexts (intentional vs. incidental). Results indicated that people, whether as initiators or responders, exhibited a similar prioritisation pattern of social re-orienting, and this was independent of the social contexts in which JA was achieved, revealing that the prioritisation of social re-orienting is an inherent social attentional mechanism in humans. It should be noted, however, that the distinct social cognitive systems engaged when individuals switched roles between initiator and responder were only driven during intentional (Experiment 1) rather than incidental (Experiment 2) JA. These findings provide potential insights for understanding the shared attention system and the integrated framework of attentional and mentalising processes. |
Jessica A. F. Thompson; Hannah Sheahan; Tsvetomira Dumbalska; Julian Sandbrink; Manuela Piazza; Christopher Summerfield Zero-shot counting with a dual-stream neural network model Journal Article In: Neuron, vol. 112, no. 24, pp. 4147–4158, 2024. @article{Thompson2024, To understand a visual scene, observers need to both recognize objects and encode relational structure. For example, a scene comprising three apples requires the observer to encode concepts of "apple" and "three." In the primate brain, these functions rely on dual (ventral and dorsal) processing streams. Object recognition in primates has been successfully modeled with deep neural networks, but how scene structure (including numerosity) is encoded remains poorly understood. Here, we built a deep learning model, based on the dual-stream architecture of the primate brain, which is able to count items "zero-shot"—even if the objects themselves are unfamiliar. Our dual-stream network forms spatial response fields and lognormal number codes that resemble those observed in the macaque posterior parietal cortex. The dual-stream network also makes successful predictions about human counting behavior. Our results provide evidence for an enactive theory of the role of the posterior parietal cortex in visual scene understanding. |
Nikita Thomas; Jennifer H. Acton; Jonathan T. Erichsen; Tony Redmond; Matt J. Dunn Reliability of gaze-contingent perimetry Journal Article In: Behavior Research Methods, vol. 56, no. 5, pp. 4883–4892, 2024. @article{Thomas2024a, Standard automated perimetry, a psychophysical task performed routinely in eyecare clinics, requires observers to maintain fixation for several minutes at a time in order to measure visual field sensitivity. Detection of visual field damage is confounded by eye movements, making the technique unreliable in poorly attentive individuals and those with pathologically unstable fixation, such as nystagmus. Microperimetry, which utilizes ‘partial gaze-contingency' (PGC), aims to counteract eye movements but only corrects for gaze position errors prior to each stimulus onset. Here, we present a novel method of visual field examination in which stimulus position is updated during presentation, which we refer to as ‘continuous gaze-contingency' (CGC). In the first part of this study, we present three case examples that demonstrate the ability of CGC to measure the edges of the physiological blind spot in infantile nystagmus with greater accuracy than PGC and standard ‘no gaze-contingency' (NoGC), as initial proof-of-concept for the utility of the paradigm in measurements of absolute scotomas in these individuals. The second part of this study focused on healthy observers, in which we demonstrate that CGC has the lowest stimulus positional error (gaze-contingent precision: CGC = ± 0.29° |
Elizabeth H. X. Thomas; Susan L. Rossell; Jessica B. Myles; Eric J. Tan; Erica Neill; Sean P. Carruthers; Philip J. Sumner; Kiymet Bozaoglu; Caroline Gurvich The relationship of schizotypy and saccade performance in patients with schizophrenia and non-clinical individuals Journal Article In: Journal of Individual Differences, vol. 45, no. 4, pp. 244–254, 2024. @article{Thomas2024, Deficits in saccade performance (i.e., rapid eye movements) are commonly observed in people with schizophrenia. Investigations of the schizotypy-saccade relationship have been exclusively explored in non-clinical individuals, with mixed findings. Of the three saccadic paradigms, research has predominantly focused on the antisaccade paradigm, while the relationship between schizotypy and prosaccade and memory-guided saccade performance remains underexplored. This study aimed to investigate the relationship between schizotypy and saccade performance across the three saccadic paradigms in both patients and non-clinical individuals. Sixty-two patients with schizophrenia/schizoaffective disorder and 148 non-clinical individuals completed the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) self-report questionnaire as a measure of schizotypy. All participants also completed a prosaccade, memory-guided saccade and antisaccade task. Canonical correlation analyses were conducted to examine the collective, multivariate relationship between the set of schizotypy variables and the sets of prosaccade, memory-guided saccade and antisaccade variables. Differences between patients and non-clinical groups were in line with previous research. In the non-clinical group, Cognitive Disorganisation was the highest contributing variable to prosaccade performance and prosaccade latency was the highest contributing variable to schizotypy. There was no significant relationship between schizotypy and memory-guided or antisaccade performance. No significant relationships between schizotypy and saccade performance were observed in the patient group. Our findings suggest a relationship between disorganized schizotypy and basic processing speed in non-clinical individuals. This relationship was not observed in patients, suggesting that sub-clinical saccade performance may not mirror impairments observed in schizophrenia. Our findings in the non-clinical group were inconsistent with previous studies. These used different schizotypy inventories, suggesting that schizotypy measures derived from different conceptual backgrounds may not be comparable. |
Jordy Thielen; Tessa M. Leeuwen; Simon J. Hazenberg; Anna Z. L. Wester; Floris P. Lange; Rob Lier Amodal completion across the brain: The impact of structure and knowledge Journal Article In: Journal of vision, vol. 24, no. 6, pp. 10, 2024. @article{Thielen2024, This study investigates the phenomenon of amodal completion within the context of naturalistic objects, employing a repetition suppression paradigm to disentangle the influence of structure and knowledge cues on how objects are completed. The research focuses on early visual cortex (EVC) and lateral occipital complex (LOC), shedding light on how these brain regions respond to different completion scenarios. In LOC, we observed suppressed responses to structure and knowledge-compatible stimuli, providing evidence that both cues influence neural processing in higher-level visual areas. However, in EVC, we did not find evidence for differential responses to completions compatible or incompatible with either structural or knowledge-based expectations. Together, our findings suggest that the interplay between structure and knowledge cues in amodal completion predominantly impacts higher-level visual processing, with less pronounced effects on the early visual cortex. This study contributes to our understanding of the complex mechanisms underlying visual perception and highlights the distinct roles played by different brain regions in amodal completion. |
Maria Theobald; Joseph Colantonio; Igor Bascandziev; Elizabeth Bonawitz; Garvin Brod Do reflection prompts promote children's conflict monitoring and revision of misconceptions? Journal Article In: Child Development, vol. 95, no. 4, pp. e253–e269, 2024. @article{Theobald2024, We tested whether reflection prompts enhance conflict monitoring and facilitate the revision of misconceptions. German children (N = 97 |
Antonia F. Ten Brink; Iris Heiner; H. Chris Dijkerman; Christoph Strauch Pupil dilation reveals the intensity of touch Journal Article In: Psychophysiology, vol. 61, no. 6, pp. 1–13, 2024. @article{TenBrink2024, Touch is important for many aspects of our daily activities. One of the most important tactile characteristics is its perceived intensity. However, quantifying the intensity of perceived tactile stimulation is not always possible using overt responses. Here, we show that pupil responses can objectively index the intensity of tactile stimulation in the absence of overt participant responses. In Experiment 1 (n = 32), we stimulated three reportedly differentially sensitive body locations (finger, forearm, and calf) with a single tap of a tactor while tracking pupil responses. Tactile stimulation resulted in greater pupil dilation than a baseline without stimulation. Furthermore, pupils dilated more for the more sensitive location (finger) than for the less sensitive location (forearm and calf). In Experiment 2 (n = 20) we extended these findings by manipulating the intensity of the stimulation with three different intensities, here a short vibration, always at the little finger. Again, pupils dilated more when being stimulated at higher intensities as compared to lower intensities. In summary, pupils dilated more for more sensitive parts of the body at constant stimulation intensity and for more intense stimulation at constant location. Taken together, the results show that the intensity of perceived tactile stimulation can be objectively measured with pupil responses – and that such responses are a versatile marker for touch research. Our findings may pave the way for previously impossible objective tests of tactile sensitivity, for example in minimally conscious state patients. |
Emily D. Taylor; Tobias Feldmann-Wüstefeld Reward-modulated attention deployment is driven by suppression, not attentional capture Journal Article In: NeuroImage, vol. 299, pp. 1–12, 2024. @article{Taylor2024, One driving factor for attention deployment towards a stimulus is its associated value due to previous experience and learning history. Previous visual search studies found that when looking for a target, distractors associated with higher reward produce more interference (e.g., longer response times). The present study investigated the neural mechanism of such value-driven attention deployment. Specifically, we were interested in which of the three attention sub-processes are responsible for the interference that was repeatedly observed behaviorally: enhancement of relevant information, attentional capture by irrelevant information, or suppression of irrelevant information. We replicated earlier findings showing longer response times and lower accuracy when a target competed with a high-reward compared to a low-reward distractor. We also found a spatial gradient of interference: behavioral performance dropped with increasing proximity to the target. This gradient was steeper for high- than low-reward distractors. Event-related potentials of the EEG signal showed the reason for the reward-induced attentional bias: High-reward distractors required more suppression than low-reward distractors as evident in larger Pd components. This effect was only found for distractors near targets, showing the additional filtering needs required for competing stimuli in close proximity. As a result, fewer attentional resources can be distributed to the target when it competes with a high-reward distractor, as evident in a smaller target-N2pc amplitude. The distractor-N2pc, indicative of attentional capture, was neither affected by distance nor reward, showing that attentional capture alone cannot explain interference by stimuli of high value. In sum our results show that the higher need for suppression of high-value stimuli contributes to reward-modulated attention deployment and increased suppression can prevent attentional capture of high-value stimuli. |
John M. Tauber; Scott L. Brincat; Emily P. Stephen; Jacob A. Donoghue; Leo Kozachkov; Emery N. Brown; Earl K. Miller Propofol-mediated unconsciousness disrupts progression of sensory signals through the cortical hierarchy Journal Article In: Journal of Cognitive Neuroscience, vol. 36, no. 2, pp. 394–413, 2024. @article{Tauber2024, A critical component of anesthesia is the loss of sensory perception. Propofol is the most widely used drug for general anesthesia, but the neural mechanisms of how and when it disrupts sensory processing are not fully understood. We analyzed local field potential and spiking recorded from Utah arrays in auditory cortex, associative cortex, and cognitive cortex of nonhuman primates before and during propofol-mediated unconsciousness. Sensory stimuli elicited robust and decodable stimulus responses and triggered periods of stimulus-related synchronization between brain areas in the local field potential of Awake animals. By contrast, propofol-mediated unconsciousness eliminated stimulus-related synchrony and drastically weakened stimulus responses and information in all brain areas except for auditory cortex, where responses and information persisted. However, we found stimuli occurring during spiking Up states triggered weaker spiking responses than in Awake animals in auditory cortex, and little or no spiking responses in higher order areas. These results suggest that propofol's effect on sensory processing is not just because of asynchronous Down states. Rather, both Down states and Up states reflect disrupted dynamics. |
Dilce Tanriverdi; Frans W. Cornelissen Rapid assessment of peripheral visual crowding Journal Article In: Frontiers in Neuroscience, vol. 18, pp. 1–14, 2024. @article{Tanriverdi2024, Visual crowding, the phenomenon in which the ability to distinguish objects is hindered in cluttered environments, has critical implications for various ophthalmic and neurological disorders. Traditional methods for assessing crowding involve time-consuming and attention-demanding psychophysical tasks, making routine examination challenging. This study sought to compare trial-based Alternative Forced-Choice (AFC) paradigms using either manual or eye movement responses and a continuous serial search paradigm employing eye movement responses to evaluate their efficiency in rapidly assessing peripheral crowding. In all paradigms, we manipulated the orientation of a central Gabor patch, which could be presented alone or surrounded by six Gabor patches. We measured participants' target orientation discrimination thresholds using adaptive psychophysics to assess crowding magnitude. Depending on the paradigm, participants either made saccadic eye movements to the target location or responded manually by pressing a key or moving a mouse. We compared these paradigms in terms of crowding magnitude, assessment time, and paradigm demand. Our results indicate that employing eye movement-based paradigms for assessing peripheral visual crowding yields results faster compared to paradigms that necessitate manual responses. Furthermore, when considering similar levels of confidence in the threshold measurements, both a novel serial search paradigm and an eye movement-based 6AFC paradigm proved to be the most efficient in assessing crowding magnitude. Additionally, crowding estimates obtained through either the continuous serial search or the 6AFC paradigms were consistently higher than those obtained using the 2AFC paradigms. Lastly, participants did not report a clear difference between paradigms in terms of their perceived demand. In conclusion, both the continuous serial search and the 6AFC eye movement response paradigms enable a fast assessment of visual crowding. These approaches may potentially facilitate future routine crowding assessment. However, the usability of these paradigms in specific patient populations and specific purposes should be assessed. |
Jacob C. Tanner; Joshua Faskowitz; Lisa Byrge; Daniel P. Kennedy; Olaf Sporns; Richard F. Betzel Synchronous high-amplitude co-fluctuations of functional brain networks during movie-watching Journal Article In: Imaging Neuroscience, vol. 1, pp. 1–21, 2024. @article{Tanner2024, Recent studies have shown that functional connectivity can be decomposed into its exact frame-wise contributions, revealing short-lived, infrequent, and high-amplitude time points referred to as “events.” Events contribute disproportionately to the time-averaged connectivity pattern, improve identifiability and brain-behavior associations, and differences in their expression have been linked to endogenous hormonal fluctuations and autism. Here, we explore the characteristics of events while subjects watch movies. Using two independently acquired imaging datasets in which participants passively watched movies, we find that events synchronize across individuals and, based on the level of synchronization, can be categorized into three distinct classes: those that synchronize at the boundaries between movies, those that synchronize during movies, and those that do not synchronize at all. We find that boundary events, compared to the other categories, exhibit greater amplitude, distinct co-fluctuation patterns, and temporal propagation. We show that underlying boundary events is a specific mode of co-fluctuation involving the activation of control and salience systems alongside the deactivation of visual systems. Events that synchronize during the movie, on the other hand, display a pattern of co-fluctuation that is time-locked to the movie stimulus. Finally, we found that subjects' time-varying brain networks are most similar to one another during these synchronous events. |
Hideki Tamura; Shigeki Nakauchi; Tetsuto Minami Glossiness perception and its pupillary response Journal Article In: Vision Research, vol. 219, pp. 1–10, 2024. @article{Tamura2024, Recent studies have revealed that pupillary response changes depend on perceptual factors such as subjective brightness caused by optical illusions and luminance. However, the manner in which the perceptual factor that is derived from the glossiness perception of object surfaces affects the pupillary response remains unclear. We investigated the relationship between the glossiness perception and pupillary response through a glossiness rating experiment that included recording the pupil diameter. We prepared general object images (original) and randomized images (shuffled) that comprised the same images with randomized small square regions as stimuli. The image features were controlled by matching the luminance histogram. The observers were asked to rate the perceived glossiness of the stimuli presented for 3,000 ms and the changes in their pupil diameters were recorded. Images with higher glossiness ratings constricted the pupil size more than those with lower glossiness ratings at the peak constriction of the pupillary responses during the stimulus duration. The linear mixed-effects model demonstrated that the glossiness rating, image category (original/shuffled), variance of the luminance histogram, and stimulus area were most effective in predicting the pupillary responses. These results suggest that the illusory brightness obtained by the image regions of high-glossiness objects, such as specular highlights, induce pupil constriction. |
Agnieszka Szarkowska; Valentina Ragni; Sonia Szkriba; Sharon Black; David Orrego-Carmona; Jan Louis Kruger In: PLoS ONE, vol. 19, no. 10, pp. 1–29, 2024. @article{Szarkowska2024a, Every day, millions of viewers worldwide engage with subtitled content, and an increasing number choose to watch without sound. In this mixed-methods study, we examine the impact of sound presence or absence on the viewing experience of both first-language (L1) and second-language (L2) viewers when they watch subtitled videos. We explore this novel phenomenon through comprehension and recall post-tests, self-reported cognitive load, immersion, and enjoyment measures, as well as gaze pattern analysis using eye tracking. We also investigate viewers' motivations for opting for audiovisual content without sound and explore how the absence of sound impacts their viewing experience, using in-depth, semi-structured interviews. Our goal is to ascertain whether these effects are consistent among L2 and L1 speakers from different language varieties. To achieve this, we tested L1-British English, L1-Australian English and L2-English (L1-Polish) language speakers (n = 168) while they watched English-language audiovisual material with English subtitles with and without sound. The findings show that when watching videos without sound, viewers experienced increased cognitive load, along with reduced comprehension, immersion and overall enjoyment. Examination of participants' gaze revealed that the absence of sound significantly affected the viewing experience, increasing the need for subtitles and thus increasing the viewers' propensity to process them more thoroughly. The absence of sound emerged as a global constraint that made reading more effortful. Triangulating data from multiple sources made it possible to tap into some of the metacognitive strategies employed by viewers to maintain comprehension in the absence of sound. We discuss the implications within the context of the growing trend of watching subtitled videos without sound, emphasising its potential impact on cognitive processes and the viewing experience. |
Yanliang Sun; Lixue Wang; Wenhao Yu; Xue Yang; Jiaru Song; Shouxin Li Mechanisms of visual working memory processing task-irrelevant information retrieved from visual long-term memory Journal Article In: Cognition, vol. 250, pp. 1–10, 2024. @article{Sun2024d, Visual working memory (VWM) can selectively filter task-irrelevant information from incoming visual stimuli. However, whether a similar filtering process applies to task-irrelevant information retrieved from visual long-term memory (VLTM) remains elusive. We assume a “resource-limited retrieval mechanism” in VWM in charge of the retrieval of irrelevant VLTM information. To make a comprehensive understanding of this mechanism, we conducted three experiments using both a VLTM learning task and a VWM task combined with pupillometry. The presence of a significant pupil light response (PLR) served as empirical evidence that VLTM information can indeed make its way into VWM. Notably, task-relevant VLTM information induced a sustained PLR, contrasting with the transient PLR observed for task-irrelevant VLTM information. Importantly, the transience of the PLR occurred under conditions of low VWM load, but this effect was absent under conditions of high load. Collectively, these results show that task-irrelevant VLTM information can enter VWM and then fade away only under conditions of low VWM load. This dynamic underscores the resource-limited retrieval mechanism within VWM, exerting control over the entry of VLTM information. |
Xinyi Sun; Guangdi Lin; Min Zhan; Yubin Zheng; Jianqiang Ye; Dimei Chen Effects of a microcomputer-based laboratory on the triple-representation of a preservice chemistry teacher: An eye-tracking design and evidence Journal Article In: Journal of Chemical Education, vol. 101, no. 3, pp. 858–867, 2024. @article{Sun2024c, Numerous academic studies in the field of chemical education utilize the conceptual framework of three distinct tiers that underlie the instruction and acquisition of chemical knowledge. This framework is commonly depicted in a chemistry triangle, with the vertices denoted as macroscopic, submicroscopic, and symbolic. The ability of students to effectively convert between various representations and develop triple representation thinking can be advantageous in comprehending chemical concepts and the underlying micro essence. However, studies have shown that students have certain difficulties with triple representation. A microcomputer-based laboratory provides students with intuitive material by dynamically presenting data images in real time, helping students transfer among triple representations. This study investigated the effect of a microcomputer-based laboratory on the triple representation of a chemistry lesson based on eye-tracking evidence. The experimental group (N = 14) completed the test by watching the experimental video of the microcomputer-based laboratory, while the control group (N = 13) watched the traditional experimental video. Eye-tracking was used to make real-time recordings of the experimental procedures carried out by each of the groups. By comparing the triple representation test results and eye-tracking indicators of the two groups, the results show that a microcomputer-based laboratory has a positive effect on the triple representation, and the experimental group performed significantly better when acquiring image information under the influence of the microcomputer-based laboratory. |
Rui Sun; Julia Fietz; Mira Erhart; Dorothee Poehlchen; Lara Henco; Tanja M. Brückl; BeCOME Team; Michael Czisch; Philipp G. Saemann; Victor I. Spoormaker Free-viewing gaze patterns reveal a mood-congruency bias in MDD during an affective fMRI/eye-tracking task Journal Article In: European Archives of Psychiatry and Clinical Neuroscience, vol. 274, pp. 559–571, 2024. @article{Sun2024f, Major depressive disorder (MDD) has been related to abnormal amygdala activity during emotional face processing. However, a recent large-scale study (n = 28,638) found no such correlation, which is probably due to the low precision of fMRI measurements. To address this issue, we used simultaneous fMRI and eye-tracking measurements during a commonly employed emotional face recognition task. Eye-tracking provides high-precision data, which can be used to enrich and potentially stabilize fMRI readouts. With the behavioral response, we additionally divided the active task period into a task-related and a free-viewing phase to explore the gaze patterns of MDD patients and healthy controls (HC) and compare their respective neural correlates. Our analysis showed that a mood-congruency attentional bias could be detected in MDD compared to healthy controls during the free-viewing phase but without parallel amygdala disruption. Moreover, the neural correlates of gaze patterns reflected more prefrontal fMRI activity in the free-viewing than the task-related phase. Taken together, spontaneous emotional processing in free viewing might lead to a more pronounced mood-congruency bias in MDD, which indicates that combined fMRI with eye-tracking measurement could be beneficial for our understanding of the underlying psychopathology of MDD in different emotional processing phases. Trial Registration: The BeCOME study is registered on ClinicalTrials.gov (NCT03984084) by the Max Planck Institute of Psychiatry in Munich, Germany. |
Qi Sun; Lin Zhe Zhan; Fan Huan You; Xiao Fei Dong Attention affects the perception of self-motion direction from optic flow Journal Article In: iScience, vol. 27, no. 4, pp. 1–12, 2024. @article{Sun2024b, Many studies have demonstrated that attention affects the perception of many visual features. However, previous studies show conflicting results regarding the effect of attention on the perception of self-motion direction (i.e., heading) from optic flow. To address this question, we conducted three behavioral experiments and found that estimation accuracies of large headings (>14°) decreased with attention load, discrimination thresholds of these headings increased with attention load, and heading estimates were systematically compressed toward the focus of attention. Therefore, the current study demonstrated that attention affected heading perception from optic flow, showing that the perception is both information-driven and cognitive. |
Cuicui Sun; Zhijin Zhou; David Cropley Cognitive processes in selecting humorous punchlines: A comparative study of humor and creativity Journal Article In: Thinking Skills and Creativity, vol. 52, pp. 1–10, 2024. @article{Sun2024a, Humor generation represents the application of creative cognition in spontaneous, real-life contexts. This study sought to explore the cognitive processes involved in humor generation, with a special focus on the selection of humorous punchlines, by comparing humor and creativity. Employing a daily dialogue Question-Answer paradigm, participants were presented with four types of alternative answers for each dialogue: humorous, novel (non-humorous), routine, and irrelevant. Utilizing eye-tracking technology, the study tracked participants' eye movement trajectories during the selection of humorous punchlines, with a focus on fixation durations on the four answer types at different time intervals. Fifty participants were randomly assigned to either the group tasked with selecting humorous answers or the group tasked with selecting novel answers. The findings indicated that the humor group initially spent more time fixating on novel answers than humorous ones when selecting humorous punchlines; however, in the later stages, fixation duration on humorous answers surpassed that on novel answers. This dynamic underscores a competitive relationship between these two types of associations, shedding light on the cognitive distinctions between humor and creativity. Conversely, the novel group consistently exhibited a preference for humorous answers throughout the punchline selection process. The preference for humorous semantics in the novel group underscored cognitive similarities between humor and creativity. This study sheds light on the cognitive processes involved in selecting humorous punchlines and provides valuable insights into the cognitive parallels and distinctions between humor and creativity. |
Yongqiang Su; Yixun Li; Hong Li Development and validation of the simplified Chinese Author Recognition Test: Evidence from eye movements of Chinese adults in Mainland China Journal Article In: Journal of Research in Reading, vol. 47, no. 1, pp. 20–44, 2024. @article{Su2024a, Background: It is well evidenced that individuals' levels of print exposure are significantly correlated with their reading ability across languages, and an author recognition test is commonly used to measure print exposure objectively. For the first time, the current work developed and validated a Simplified Chinese Author Recognition Test (SCART) and examined its role in explaining Chinese online reading. Methods: In Study 1, we constructed the SCART for readers of simplified Chinese and validated the test using data collected from 203 young adults in Mainland China. Participants were measured on the SCART and three self-report tasks about their reading experiences and habits. Study 2 recruited an additional 68 young adults in Mainland China and measured their print exposure (with the same tasks used in Study 1), reading-related cognitive ability (working memory, rapid automatic naming, Chinese character reading, and vocabulary knowledge), and Chinese online reading via an eye-tracking passage reading task. Results: Results of Study 1 support the high reliability and validity of the SCART. Results of Study 2 indicate that SCART scores significantly predicted participants' online reading processing while controlling for subjective reading experiences and habits, and reading-related cognitive abilities. Across two studies, we found converging evidence that the in-depth recognition of the authors (i.e., participants have read the books written by these authors) appears to be a better indicator of print exposure than the superficial recognition of the author names. Conclusions: Taken together, this work filled in the gap in the literature by providing an evidence-based, objective print exposure measure for simplified Chinese and contributes to a broader understanding of print exposure and online reading processing across different writing systems. |
Caizhen Su; Xingyu Liu; Xinru Gan; Hang Zeng Using synchronized eye movements to predict attention in online video learning Journal Article In: Education Sciences, vol. 14, no. 5, pp. 1–12, 2024. @article{Su2024, Concerns persist about attentional engagement in online learning. The inter-subject correlation of eye movements (ISC) has shown promise as an accessible and effective method for attention assessment in online learning. This study extends previous studies investigating ISC of eye movements in online learning by addressing two research questions. Firstly, can ISC predict students' attentional states at a finer level beyond a simple dichotomy of attention states (e.g., attending and distracted states)? Secondly, do learners' learning styles affect ISC's prediction rate of attention assessment in video learning? Previous studies have shown that learners of different learning styles have different eye movement patterns when viewing static materials. However, limited research has explored the impact of learning styles on viewing patterns in video learning. An eye tracking experiment with participants watching lecture videos demonstrated a connection between ISC and self-reported attention states at a finer level. We also demonstrated that learning styles did not significantly affect ISC's prediction rate of attention assessment in video learning, suggesting that ISC of eye movements can be effectively used without considering learners' learning styles. These findings contribute to the ongoing discourse on optimizing attention assessment in the evolving landscape of online education. |
Patrick Sturt; Nayoung Kwon Agreement attraction in comprehension: Do active dependencies and distractor position play a role? Journal Article In: Language, Cognition and Neuroscience, vol. 39, no. 3, pp. 279–301, 2024. @article{Sturt2024, Across four eye-tracking studies and one self-paced reading study, we test whether attraction in subject-verb agreement is affected by (a) the relative linear positions of target and distractor, and (b) the active dependency status of the distractor. We find an effect of relative position, with greater attraction in retro-active interference configurations, where the distractor is linearly closer to the critical verb (Subject…Distractor…V) than in pro-active interference where it is more distant (Distractor…Subject…V). However, within pro-active interference configurations, attraction was not affected by the active dependency status of the distractor: attraction effects were similarly small whether or not the distractor was waiting to complete an upcoming dependency at the critical verb, with Bayes Factor analyses showing evidence in favour of a null effect of active dependency status. We discuss these findings in terms of the decay of activation, and whether such decay is affected by maintenance of features in memory. |
Dawid Strzelczyk; Nicolas Langer Pre-stimulus activity mediates event-related theta synchronization and alpha desynchronization during memory formation in healthy aging Journal Article In: Imaging Neuroscience, vol. 2, pp. 1–22, 2024. @article{Strzelczyk2024, The capacity to learn is a key determinant for the quality of life, but is known to decline to varying degrees with age. However, despite mounting evidence of memory deficits in older age, the neural mechanisms contributing to successful or impeded memory remain unclear. Previous research has primarily focused on memory formation through remembered versus forgotten comparisons, lacking the ability to capture the incremental nature of learning. Moreover, previous electroencephalography (EEG) studies have primarily examined oscillatory brain activity during the encoding phase, such as event-related synchronization (ERS) of mid-frontal theta and desynchronization (ERD) of parietal alpha, while neglecting the potential influence of pre-stimulus activity. To address these limitations, we employed a sequence learning paradigm, where 113 young and 117 older participants learned a fixed sequence of visual locations through repeated observations (6,423 sequence repetitions, 55,944 stimuli). This paradigm enabled us to investigate mid-frontal theta ERS, parietal alpha ERD, and how they are affected by pre-stimulus activity during the incremental learning process. Behavioral results revealed that young subjects learned significantly faster than older subjects, in line with expected age-related cognitive decline. Successful incremental learning was directly linked to decreases of mid-frontal theta ERS and increases of parietal alpha ERD. Notably, these neurophysiological changes were less pronounced in older individuals, reflecting a slower rate of learning. Importantly, the mediation analysis revealed that in both age groups, mid-frontal pre-stimulus theta partially mediated the relationship between learning and mid-frontal theta ERS. Furthermore, the overall impact of learning on parietal alpha ERD was primarily driven by its positive influence on pre-stimulus alpha activity. Our findings offer new insights into the age-related differences in memory formation and highlight the importance of pre-stimulus activity in explaining post-stimulus responses during learning. |
Caleb Stone; Jason B. Mattingley; Stefan Bode; Dragan Rangelov Distinct neural markers of evidence accumulation index metacognitive processing before and after simple visual decisions Journal Article In: Cerebral Cortex, vol. 34, no. 5, pp. 1–11, 2024. @article{Stone2024, Perceptual decision-making is affected by uncertainty arising from the reliability of incoming sensory evidence (perceptual uncertainty) and the categorization of that evidence relative to a choice boundary (categorical uncertainty). Here, we investigated how these factors impact the temporal dynamics of evidence processing during decision-making and subsequent metacognitive judgments. Participants performed a motion discrimination task while electroencephalography was recorded. We manipulated perceptual uncertainty by varying motion coherence, and categorical uncertainty by varying the angular offset of motion signals relative to a criterion. After each trial, participants rated their desire to change their mind. High uncertainty impaired perceptual and metacognitive judgments and reduced the amplitude of the centro-parietal positivity, a neural marker of evidence accumulation. Coherence and offset affected the centro-parietal positivity at different time points, suggesting that perceptual and categorical uncertainty affect decision-making in sequential stages. Moreover, the centro-parietal positivity predicted participants' metacognitive judgments: larger predecisional centro-parietal positivity amplitude was associated with less desire to change one's mind, whereas larger postdecisional centro-parietal positivity amplitude was associated with greater desire to change one's mind, but only following errors. These findings reveal a dissociation between predecisional and postdecisional evidence processing, suggesting that the CPP tracks potentially distinct cognitive processes before and after a decision. |
Natalie A. Steinemann; Gabriel M. Stine; Eric M. Trautmann; Ariel Zylberberg; Daniel M. Wolpert; Michael N. Shadlen Direct observation of the neural computations underlying a single decision Journal Article In: eLife, vol. 12, pp. 1–29, 2024. @article{Steinemann2024, Neurobiological investigations of perceptual decision-making have furnished the first glimpse of a flexible cognitive process at the level of single neurons (Shadlen & Newsome, 1996; Shadlen & Kiani 2013). Neurons in the parietal and prefrontal cortex (Kim & Shadlen, 1999; Romo, Hernandez & Zainos, 2004; Hernandez, Zainos & Romo, 2002; Ding & Gold, 2012) are thought to represent the accumulation of noisy evidence, acquired over time, leading to a decision. Neural recordings averaged over many decisions have provided support for the deterministic rise in activity to a termination bound (Roitman & Shadlen, 2002). Critically, it is the unobserved stochastic component that is thought to confer variability in both choice and decision time (Gold & Shadlen, 2007). Here, we elucidate this stochastic, diffusion-like signal on individual decisions by recording simultaneously from hundreds of neurons in the lateral intraparietal cortex (LIP). We show that a small subset of these neurons, previously studied singly, represent a combination of deterministic drift and stochastic diffusion—the integral of noisy evidence—during perceptual decision making, and we provide direct support for the hypothesis that this diffusion signal is the quantity responsible for the variability in choice and reaction times. Neuronal state space and decoding analyses, applied to the whole population, also identify the drift diffusion signal. However, we show that the signal relies on the subset of neurons with response fields that overlap the choice targets. This parsimonious observation would escape detection by these powerful methods, absent a clear hypothesis. |
Noah J. Steinberg; Zvi N. Roth; J. Anthony Movshon; Elisha Merriam Brain representations of motion and position in the double-drift illusion Journal Article In: eLife, vol. 13, pp. 1–16, 2024. @article{Steinberg2024, In the ‘double-drift' illusion, local motion within a window moving in the periphery of the visual field alters the window's perceived path. The illusion is strong even when the eyes track a target whose motion matches the window so that the stimulus remains stable on the retina. This implies that the illusion involves the integration of retinal signals with non-retinal eye-movement signals. To identify where in the brain this integration occurs, we measured BOLD fMRI responses in visual cortex while subjects experienced the double-drift illusion. We then used a combination of univariate and multivariate decoding analyses to identify (1) which brain areas were sensitive to the illusion and (2) whether these brain areas contained information about the illusory stimulus trajectory. We identified a number of cortical areas that responded more strongly during the illusion than a control condition that was matched for low-level stimulus properties. Only in area hMT+ was it possible to decode the illusory trajectory. We additionally performed a number of important controls that rule out possible low-level confounds. Concurrent eye tracking confirmed that subjects accurately tracked the moving target; we were unable to decode the illusion trajectory using eye position measurements recorded during fMRI scanning, ruling out explanations based on differences in oculomotor behavior. Our results provide evidence for a perceptual representation in human visual cortex that incorporates extraretinal information. |
Yannik Stegmann; Janna Teigeler; Arash Mirifar; Andreas Keil; Matthias Gamer Electrocortical responses in anticipation of avoidable and inevitable threats: A multisite study Journal Article In: The Journal of Neuroscience, vol. 44, no. 42, pp. 1–12, 2024. @article{Stegmann2024, When faced with danger, human beings respond with a repertoire of defensive behaviors, including freezing and active avoidance. Previous research has revealed a pattern of physiological responses, characterized by heart rate bradycardia, reduced visual exploration, and heightened sympathetic arousal in reaction to avoidable threats, suggesting a state of attentive immobility in humans. However, the electrocortical underpinnings of these behaviors remain largely unexplored. To investigate the visuocortical components of attentive immobility, we recorded parieto-occipital alpha activity, along with eye-movements and autonomic responses, while participants awaited either an avoidable, inevitable, or no threat. To test the robustness and generalizability of our findings, we collected data from a total of 101 participants (76 females, 35 males) at two laboratories. Across sites, we observed an enhanced suppression of parieto-occipital alpha activity during avoidable threats, in contrast to inevitable or no threat trials, particularly towards the end of the trial that prompted avoidance responses. This response pattern coincided with heart rate bradycardia, centralization of gaze and increased sympathetic arousal. Furthermore, our findings expand on previous research by revealing that the amount of alpha suppression, along with centralization of gaze, and heart rate changes, predict the speed of motor responses. Collectively, these findings indicate that when individuals encounter avoidable threats, they enter a state of attentive immobility, which enhances perceptual processing and facilitates action preparation. This state appears to reflect freezing-like behavior in humans. Significance Statement In response to avoidable danger, organisms often exhibit freezing-like behavior. Recent research suggests that freezing is not merely a passive response but involves a state of attentive immobility aimed at enhancing threat avoidance and perception. However, the attentional mechanisms involved in response to avoidable threats at the level of the brain remain poorly understood. To address this gap, we employed EEG, eye-tracking, and measurements of autonomic activity. Our findings revealed a suppression of EEG alpha power, along with cardiac deceleration, reduced eye-movements, and heightened sympathetic activity during the anticipation of avoidable threats. Moreover, this response pattern was predictive of motor response times. These results underscore the significance of heightened perceptual processing during freezing-like states in humans. |
Vasilena Stefanova; Christoph Scheepers; Paul Wilson; Kostas A. Papageorgiou In: PLoS ONE, vol. 19, no. 5, pp. 1–14, 2024. @article{Stefanova2024, Narcissism is a part of the Dark Triad that consists also of the traits of Machiavellianism and psychopathy. Two main types of narcissism exist: grandiose and vulnerable narcissism. Being a Dark Triad trait, narcissism is typically associated with negative outcomes. However, recent research suggests that at least the grandiose type may be linked (directly or indirectly) to positive outcomes including lower levels of psychopathology, higher school grades in adolescents, deeper and more strategic learning in university students and higher cognitive performance in experimental settings. The current pre-registered, quasi-experimental study implemented eye-tracking to assess whether grandiose narcissism indirectly predicts cognitive performance through wider distribution of attention on the Raven's Progressive Matrices task. Fifty-four adults completed measures of the Dark Triad, self-esteem and psychopathology. Eight months to one year later, participants completed the Raven's, while their eye-movements were monitored during high stress conditions. When controlling for previous levels of psychopathology, grandiose narcissism predicted higher Raven's scores indirectly, through increased variability in the number of fixations across trials. These findings suggest that grandiose narcissism predicts higher cognitive performance, at least in experimental settings, and call for further research to understand the implications of this seemingly dark trait for performance across various settings. |
S. Tabitha Steendam; Nicoleta Prutean; Fleur Clybouw; Joshua O. Eayrs; Nanne Kukkonen; Wim Notebaert; Ruth M. Krebs; Jan R. Wiersema; C. Nico Boehler Compensating for the mobile menace with extra effort: A pupillometry investigation of the mere presence effect of smartphones Journal Article In: Biological Psychology, vol. 193, pp. 1–11, 2024. @article{Steendam2024, Previous research suggests that the mere presence of a smartphone can detrimentally affect performance. However, other studies failed to observe such detrimental effects. A limitation of existing studies is that no indexes of (potentially compensating) effort were included. Further, time-on-task effects have been unexplored. Here, we address these limitations by investigating the mere-presence effect of a smartphone on performance in two continuous-performance experiments (Experiment 1 using an n-back and a number judgement task at two difficulty levels, and Experiment 2 using a pure, challenging n-back task), measuring pupil size to assess invested effort, and taking into account time-on-task effects. Finally, contrary to previous studies that predominantly used between-subject designs, we utilized within-subject designs in both experiments. Contrary to expectations, Experiment 1 largely yielded no significant effects of smartphone presence on performance. Nonetheless, the presence of a smartphone triggered larger tonic pupil size in the more difficult task, and a more rapid decrease over time. Experiment 2 similarly failed to demonstrate smartphone effects on performance, but replicated the finding of larger tonic pupil size in the presence of a smartphone. In addition, tonic pupil size showed a slower decrease over time when a smartphone was present. In Experiment 2, we could furthermore look at phasic pupil size, which decreased over time in the absence of a phone but not in its presence. These findings suggest a complex relationship between smartphone presence, effort, and time-on-task, which does not necessarily express itself behaviorally, highlighting in particular the need to also explore potential contributions of (compensatory) effort. |
Jacob M. Stanley; Douglas H. Wedell Impact of choice set complexity on decoy effects Journal Article In: Journal of Behavioral Decision Making, vol. 37, no. 2, pp. 1–18, 2024. @article{Stanley2024, Studies of contextual choice typically use three option choice sets to evaluate how preference relations depend on the values of a third decoy option. However, often real-world decisions are made using choice sets with many more than three alternatives, such as in online shopping. Three experiments tested for attraction and compromise decoy effects in choice sets that varied the number and ordering of alternatives using a within-subjects preferential choice grocery shopping task. In Experiment 1, attraction and compromise effects were significantly reduced as alternatives increased from three to nine. Experiment 2 found significantly greater attraction effects in nine alternative choice sets ordered by attributes compared with a random ordering. Experiment 3 used eye tracking and found significant attraction effects in choice sets with 3, 9, and 15 alternatives, but the effect was reduced with increasing alternatives. Eye tracking revealed that participants engaged in more by-dimension comparisons as the number of alternatives increased, but, contrary to previous research, the proportion of by-alternative to by-dimension transitions was not linearly predictive of decoy effects. With increased alternatives, the proportion of the total information attended to decreased, leading to worse choice outcomes, and participants were more likely to engage in a lexicographic decision-making strategy. |
Patricia L. Stan; Matthew A. Smith Recent visual experience reshapes V4 neuronal activity and improves perceptual performance Journal Article In: The Journal of Neuroscience, vol. 44, no. 41, pp. 1–17, 2024. @article{Stan2024, Recent visual experience heavily influences our visual perception, but how this is mediated by the reshaping of neuronal activity to alter and improve perceptual discrimination remains unknown. We recorded from populations of neurons in visual cortical area V4 while two male rhesus macaque monkeys performed a natural image change detection task under different experience conditions. We found that maximizing the recent experience with a particular image led to an improvement in the ability to detect a change in that image. This improvement was associated with decreased neural responses to the image, consistent with neuronal changes previously seen in studies of adaptation and expectation. We found that the magnitude of behavioral improvement was correlated with the magnitude of response suppression. Furthermore, this suppression of activity led to an increase in signal separation, providing evidence that a reduction in activity can improve stimulus encoding. Within populations of neurons, greater recent experience was associated with decreased trial-to-trial shared variability, indicating that a reduction in variability is a key means by which experience influences perception. Taken together, the results of our study contribute to an understanding of how recent visual experience can shape our perception and behavior through modulating activity patterns in mid-level visual cortex. Significance Statement Our visual experience shapes our perception and behavior. This work identifies neural signatures of visual experience that directly link to behavioral performance, an area that has been elusive in past work. Our study represents a demonstration of how the activity of populations of neurons in the visual cortex, shaped by experience, can reflect an altered neural code that underlies behavior. |
Justine Staal; Jelmer Alsma; Jos Van der Geest; Sílvia Mamede; Els Jansen; Maarten A. Frens; Walter W. Van den Broek; Laura Zwaan Selective processing of clinical information related to correct and incorrect diagnoses: An eye-tracking experiment Journal Article In: Medical Education, pp. 1–10, 2024. @article{Staal2024, Introduction: Diagnostic errors are often attributed to erroneous selection and interpretation of patients' clinical information, due to either cognitive biases or knowledge deficits. However, whether the selection or processing of clinical information differs between correct and incorrect diagnoses in written clinical cases remains unclear. We hypothesised that residents would spend more time processing clinical information that was relevant to their final diagnosis, regardless of whether their diagnosis was correct. Methods: In this within-subjects eye-tracking experiment, 19 internal or emergency medicine residents diagnosed 12 written cases. Half the cases contained a correct diagnostic suggestion and the others an incorrect suggestion. We measured how often (i.e. number of fixations) and how long (i.e. dwell time) residents attended to clinical information relevant for either suggestion. Additionally, we measured confidence and time to diagnose in each case. Results: Residents looked longer and more often at clinical information relevant for the correct diagnostic suggestion if they received an incorrect suggestion and were able to revise this suggestion to the correct diagnosis (dwell time: M: 6.3 seconds, SD: 5.1 seconds; compared to an average of 4 seconds in other conditions; number of fixations: M: 25 fixations, SD: 20; compared to an average of 16–17 fixations). Accordingly, time to diagnose was longer in cases with an incorrect diagnostic suggestion (M: 86 seconds, SD: 47 seconds; compared to an average of 70 seconds in other conditions). Confidence (range: 64%–67%) did not differ depending on residents' accuracy or the diagnostic suggestion. Discussion: Selectivity in information processing was not directly associated with an increase in diagnostic errors but rather seemed related to recognising and revising a biased suggestion in favour of the correct diagnosis. This could indicate an important role for case-specific knowledge in avoiding biases and diagnostic errors. Future research should examine information processing for other types of clinical information. |
Connor Spiech; Anne Danielsen; Bruno Laeng; Tor Endestad Oscillatory attention in groove Journal Article In: Cortex, vol. 174, pp. 137–148, 2024. @article{Spiech2024, Attention is not constant but rather fluctuates over time and these attentional fluctuations may prioritize the processing of certain events over others. In music listening, the pleasurable urge to move to music (termed ‘groove' by music psychologists) offers a particularly convenient case study of oscillatory attention because it engenders synchronous and oscillatory movements which also vary predictably with stimulus complexity. In this study, we simultaneously recorded pupillometry and scalp electroencephalography (EEG) from participants while they listened to drumbeats of varying complexity that they rated in terms of groove afterwards. Using the intertrial phase coherence of the beat frequency, we found that while subjects were listening, their pupil activity became entrained to the beat of the drumbeats and this entrained attention persisted in the EEG even as subjects imagined the drumbeats continuing through subsequent silent periods. This entrainment in both the pupillometry and EEG worsened with increasing rhythmic complexity, indicating poorer sensory precision as the beat became more obscured. Additionally, sustained pupil dilations revealed the expected, inverted U-shaped relationship between rhythmic complexity and groove ratings. Taken together, this work bridges oscillatory attention to rhythmic complexity in relation to musical groove. |
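Intertrial phase coherence (ITPC) at a target frequency, the measure named in this abstract, is the length of the mean resultant vector of the per-trial phase angles: values near 1 indicate consistent phase across trials, values near 0 indicate random phase. The sketch below is a generic, minimal computation with assumed variable names (`signals`, `fs`, `freq`), not the authors' pipeline.

```python
import numpy as np

def itpc(signals: np.ndarray, fs: float, freq: float) -> float:
    """Intertrial phase coherence at `freq` (Hz).

    `signals` is a hypothetical (n_trials, n_samples) array (e.g. pupil
    size or one EEG channel per trial, sampled at `fs` Hz). The phase at
    the FFT bin closest to `freq` is taken from each trial; ITPC is the
    magnitude of the mean unit phase vector across trials.
    """
    n_trials, n_samples = signals.shape
    spectrum = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - freq)))        # bin nearest the beat frequency
    phases = np.angle(spectrum[:, k])
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Example: trials phase-locked to a 2 Hz beat give ITPC close to 1
fs, beat = 100.0, 2.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
trials = np.sin(2 * np.pi * beat * t) + 0.5 * rng.normal(size=(20, t.size))
print(itpc(trials, fs, beat))
```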
Blanca T. M. Spee; Jozsef Arato; Jan Mikuni; Ulrich S. Tran; Matthew Pelowski; Helmut Leder In: Frontiers in Psychology, vol. 15, pp. 1–15, 2024. @article{Spee2024, Introduction: Gestalt perception refers to the cognitive ability to perceive various elements as a unified whole. In our study, we delve deeper into the phenomenon of Gestalt recognition in visual cubist art, a transformative process culminating in what is often described as an Aha moment. This Aha moment signifies a sudden understanding of what is seen, merging seemingly disparate elements into a coherent meaningful picture. The onset of this Aha moment can vary, either appearing almost instantaneously, which is in line with theories of hedonic fluency, or manifesting after a period of time, supporting the concept of delayed but more in-depth meaningful insight. Methods: We employed pupillometry to measure cognitive and affective shifts during art interaction, analyzing both maximum pupil dilation and average dilation across the trial. The study consisted of two parts: in the first, 84 participants identified faces in cubist paintings under various conditions, with Aha moments and pupil dilation measured. In part 2, the same 84 participants assessed the artworks through ratings in a no-task free-viewing condition. Results: Results of part 1 indicate a distinctive pattern of pupil dilation, with maximum dilation occurring at both trial onset and end. Longer response times were observed for high-fluent, face-present stimuli, aligning with a delayed but accurate Aha moment through recognition. Additionally, the time of maximum pupil dilation, rather than average dilation, exhibited significant associations, being later for high-fluent, face-present stimuli and correct detections. In part 2, average dilation, rather than the time of maximum pupil dilation, emerged as the significant factor. Face stimuli and highly accessible art evoked stronger dilations, also reflecting high clearness and negative valence ratings. Discussion: The study underscores a complex relationship between the timing of recognition and the Aha moment, suggesting nuanced differences in emotional and cognitive responses during art viewing. Pupil dilation measures offer insight into these processes, especially for moments of recognition, though their application in evaluating emotional responses through artwork ratings warrants further exploration. |
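Maximum pupil dilation, its latency, and average dilation across a trial, the measures contrasted in this abstract, can be derived from a baseline-corrected pupil trace. The sketch below is illustrative only, with an assumed sampling rate, baseline window, and variable names, and does not reproduce the study's preprocessing.

```python
import numpy as np

def pupil_dilation_measures(trace: np.ndarray, fs: float, baseline_s: float = 0.5):
    """Maximum dilation, its latency, and mean dilation for one trial.

    `trace` is a hypothetical 1-D pupil-size time series sampled at `fs`
    Hz; the first `baseline_s` seconds serve as the pre-stimulus
    baseline subtracted before computing the measures.
    """
    n_base = int(baseline_s * fs)
    corrected = trace - trace[:n_base].mean()        # baseline-corrected dilation
    peak_idx = int(np.argmax(corrected))
    return {
        "max_dilation": float(corrected[peak_idx]),
        "time_of_max_s": peak_idx / fs,              # latency of the maximum dilation
        "mean_dilation": float(corrected[n_base:].mean()),
    }

# Example on a synthetic 4-second trial sampled at 500 Hz
fs = 500.0
t = np.arange(0, 4, 1 / fs)
trace = 3.0 + 0.4 * np.exp(-((t - 2.5) ** 2) / 0.2)  # dilation peaking ~2.5 s after onset
print(pupil_dilation_measures(trace, fs))
```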