EyeLink EEG/fNIRS/TMS Publications
All EyeLink EEG, fNIRS, and TMS research publications (with concurrent eye tracking) up until 2024, along with early 2025 publications, are listed below by year. You can search the publications using keywords such as P300, Gamma band, NIRS, etc. You can also search for individual author names. If we missed any EyeLink EEG, fNIRS, or TMS articles, please email us!
2023
Sangkyu Son; Joonsik Moon; Yee-Joon Kim; Min-Suk Kang; Joonyeol Lee Frontal-to-visual information flow explains predictive motion tracking Journal Article In: NeuroImage, vol. 269, pp. 1–11, 2023. @article{Son2023, Predictive tracking demonstrates our ability to maintain a line of vision on moving objects even when they temporarily disappear. Models of smooth pursuit eye movements posit that our brain achieves this ability by directly streamlining motor programming from continuously updated sensory motion information. To test this hypothesis, we obtained sensory motion representation from multivariate electroencephalogram activity while human participants covertly tracked a temporarily occluded moving stimulus with their eyes remaining stationary at the fixation point. The sensory motion representation of the occluded target evolves to its maximum strength at the expected timing of reappearance, suggesting a timely modulation of the internal model of the visual target. We further characterize the spatiotemporal dynamics of the task-relevant motion information by computing the phase gradients of slow oscillations. We discovered a predominant posterior-to-anterior phase gradient immediately after stimulus occlusion; however, at the expected timing of reappearance, the axis reverses the gradient, becoming anterior-to-posterior. The behavioral bias of smooth pursuit eye movements, which is a signature of the predictive process of the pursuit, was correlated with the posterior division of the gradient. These results suggest that the sensory motion area modulated by the prediction signal is involved in updating motor programming. |
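The phase-gradient analysis mentioned in this abstract can be prototyped in a few lines. The sketch below is illustrative only (simulated electrodes, assumed band limits and sampling rate, not the authors' code): it extracts instantaneous phase with a Hilbert transform and estimates the gradient along a posterior-to-anterior electrode line by regressing unwrapped phase on electrode position.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs, n_electrodes = 250, 10
t = np.arange(5 * fs) / fs
positions = np.linspace(0, 1, n_electrodes)        # 0 = posterior, 1 = anterior
rng = np.random.default_rng(0)

# Simulate a 4 Hz wave whose phase lags increasingly toward anterior sites
eeg = np.array([np.sin(2 * np.pi * 4 * t - 2.0 * pos) for pos in positions])
eeg += rng.normal(0, 0.2, eeg.shape)

sos = butter(4, [2, 6], btype="bandpass", fs=fs, output="sos")
phase = np.angle(hilbert(sosfiltfilt(sos, eeg, axis=1), axis=1))

# Gradient at one time point: slope of unwrapped phase vs. electrode position
sample = t.size // 2
slope = np.polyfit(positions, np.unwrap(phase[:, sample]), 1)[0]
print(f"Phase gradient: {slope:.2f} rad per unit distance (negative = posterior leads)")
```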
Dawid Strzelczyk; Simon P. Kelly; Nicolas Langer Neurophysiological markers of successful learning in healthy aging Journal Article In: GeroScience, vol. 45, no. 5, pp. 2873–2896, 2023. @article{Strzelczyk2023, The capacity to learn and memorize is a key determinant for the quality of life but is known to decline to varying degrees with age. However, neural correlates of memory formation and the critical features that determine the extent to which aging affects learning are still not well understood. By employing a visual sequence learning task, we were able to track the behavioral and neurophysiological markers of gradual learning over several repetitions, which is not possible in traditional approaches that utilize a remember vs. forgotten comparison. On a neurophysiological level, we focused on two learning-related centro-parietal event-related potential (ERP) components: the expectancy-driven P300 and memory-related broader positivity (BP). Our results revealed that although both age groups showed significant learning progress, young individuals learned faster and remembered more stimuli than older participants. Successful learning was directly linked to a decrease of P300 and BP amplitudes. However, young participants showed larger P300 amplitudes with a sharper decrease during the learning, even after correcting for an observed age-related longer P300 latency and increased P300 peak variability. Additionally, the P300 amplitude predicted learning success in both age groups and showed good test–retest reliability. On the other hand, the memory formation processes, reflected by the BP amplitude, revealed a similar level of engagement in both age groups. However, this engagement did not translate into the same learning progress in the older participants. We suggest that the slower and more variable timing of the stimulus identification process reflected in the P300 means that despite the older participants engaging the memory formation process, there is less time for it to translate the categorical stimulus location information into a solidified memory trace. The results highlight the important role of the P300 and BP as a neurophysiological marker of learning and may enable the development of preventive measures for cognitive decline. |
Stefanie Sturm; Jordi Costa-Faidella; Iria SanMiguel Neural signatures of memory gain through active exploration in an oculomotor-auditory learning task Journal Article In: Psychophysiology, vol. 60, no. 10, pp. 1–20, 2023. @article{Sturm2023, Active engagement improves learning and memory, and self- versus externally generated stimuli are processed differently: perceptual intensity and neural responses are attenuated. Whether the attenuation is linked to memory formation remains unclear. This study investigates whether active oculomotor control over auditory stimuli—controlling for movement and stimulus predictability—benefits associative learning, and studies the underlying neural mechanisms. Using EEG and eye tracking we explored the impact of control during learning on the processing and memory recall of arbitrary oculomotor-auditory associations. Participants (N = 23) learned associations through active exploration or passive observation, using a gaze-controlled interface to generate sounds. Our results show faster learning progress in the active condition. ERPs time-locked to the onset of sound stimuli showed that learning progress was linked to an attenuation of the P3a component. The detection of matching movement-sound pairs triggered a target-matching P3b. There was no general modulation of ERPs through active learning. However, we found continuous variation in the strength of the memory benefit across participants: some benefited more strongly from active control during learning than others. This was paralleled in the strength of the N1 attenuation effect for self-generated stimuli, which was correlated with memory gain in active learning. Our results show that control helps learning and memory and modulates sensory responses. Individual differences during sensory processing predict the strength of the memory benefit. Taken together, these results help to disentangle the effects of agency, unspecific motor-based neuromodulation, and predictability on ERP components and establish a link between self-generation effects and active learning memory gain. |
Binbin Sun; Bryan Wang; Zhen Wei; Zhe Feng; Zhi Liu Wu; Walid Yassin; William S. Stone; Yan Lin; Xue Jun Kong Identification of diagnostic markers for ASD: A restrictive interest analysis based on EEG combined with eye tracking Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–16, 2023. @article{Sun2023, Electroencephalography (EEG) functional connectivity (EFC) and eye tracking (ET) have been explored as objective screening methods for autism spectrum disorder (ASD), but no study has yet evaluated restricted and repetitive behavior (RRBs) simultaneously to infer early ASD diagnosis. Typically developing (TD) children (n = 27) and ASD (n = 32), age- and sex-matched, were evaluated with EFC and ET simultaneously, using the restricted interest stimulus paradigm. Network-based machine learning prediction (NBS-predict) was used to identify ASD. Correlations between EFC, ET, and Autism Diagnostic Observation Schedule-Second Edition (ADOS-2) were performed. The Area Under the Curve (AUC) of receiver-operating characteristics (ROC) was measured to evaluate the predictive performance. Under high restrictive interest stimuli (HRIS), ASD children have significantly higher α band connectivity and significantly more total fixation time (TFT)/pupil enlargement of ET relative to TD children (p = 0.04299). These biomarkers were not only significantly positively correlated with each other (R = 0.716 |
Maciej J. Szul; Sotirios Papadopoulos; Sanaz Alavizadeh; Sébastien Daligaut; Denis Schwartz; Jérémie Mattout; James J. Bonaiuto Diverse beta burst waveform motifs characterize movement-related cortical dynamics Journal Article In: Progress in Neurobiology, vol. 228, pp. 1–17, 2023. @article{Szul2023, Classical analyses of induced, frequency-specific neural activity typically average band-limited power over trials. More recently, it has become widely appreciated that in individual trials, beta band activity occurs as transient bursts rather than amplitude-modulated oscillations. Most studies of beta bursts treat them as unitary, and having a stereotyped waveform. However, we show there is a wide diversity of burst shapes. Using a biophysical model of burst generation, we demonstrate that waveform variability is predicted by variability in the synaptic drives that generate beta bursts. We then use a novel, adaptive burst detection algorithm to identify bursts from human MEG sensor data recorded during a joystick-based reaching task, and apply principal component analysis to burst waveforms to define a set of dimensions, or motifs, that best explain waveform variance. Finally, we show that bursts with a particular range of waveform motifs, ones not fully accounted for by the biophysical model, differentially contribute to movement-related beta dynamics. Sensorimotor beta bursts are therefore not homogeneous events and likely reflect distinct computational processes. |
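As a rough illustration of the waveform-motif idea in this abstract, the sketch below (simulated bursts; the authors' adaptive burst detection and biophysical modelling are not reproduced) applies PCA to a matrix of time-aligned burst waveforms, so the components act as motifs and the scores place each burst along them.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_bursts, n_samples = 500, 120                      # 120 samples ~ 0.2 s at 600 Hz (assumed)
t = np.linspace(-0.1, 0.1, n_samples)

# Simulate bursts: a ~20 Hz wavelet whose shape varies from burst to burst
base = np.sin(2 * np.pi * 20 * t) * np.exp(-(t / 0.04) ** 2)
skew = np.sin(2 * np.pi * 40 * t) * np.exp(-(t / 0.04) ** 2)
waveforms = (base[None, :] * rng.normal(1.0, 0.2, (n_bursts, 1))
             + skew[None, :] * rng.normal(0.0, 0.3, (n_bursts, 1))
             + rng.normal(0.0, 0.05, (n_bursts, n_samples)))

# PCA over waveforms: components are the "motifs", scores place each burst along them
pca = PCA(n_components=5)
scores = pca.fit_transform(waveforms)
print("Variance explained per motif:", np.round(pca.explained_variance_ratio_, 3))
```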
Travis N. Talcott; John E. Kiat; Steven J. Luck; Nicholas Gaspelin Is covert attention necessary for programming accurate saccades? Evidence from saccade-locked event-related potentials Journal Article In: Attention, Perception, & Psychophysics, pp. 1–19, 2023. @article{Talcott2023, For decades, researchers have assumed that shifts of covert attention mandatorily occur prior to eye movements to improve perceptual processing of objects before they are fixated. However, recent research suggests that the N2pc component—a neural measure of covert attentional allocation—does not always precede eye movements. The current study investigated whether the N2pc component mandatorily precedes eye movements and assessed its role in the accuracy of gaze control. In three experiments, participants searched for a letter of a specific color (e.g., red) and directed gaze to it as a response. Electroencephalograms and eye movements were coregistered to determine whether neural markers of covert attention preceded the initial shift of gaze. The results showed that the presaccadic N2pc only occurred under limited conditions: when there were many potential target locations and distractors. Crucially, there was no evidence that the presence or magnitude of the presaccadic N2pc was associated with improved eye movement accuracy in any of the experiments. Interestingly, ERP decoding analyses were able to classify the target location well before the eyes started to move, which likely reflects a presaccadic cognitive process that is distinct from the attentional process measured by the N2pc. Ultimately, we conclude that the covert attentional mechanism indexed by the N2pc is not necessary for precise gaze control. |
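The ERP decoding analysis referred to above is commonly implemented as time-resolved classification. The following sketch uses simulated data and scikit-learn (assumed channel counts and effect sizes; not the authors' code) to classify target side from the pattern of channel voltages at each time point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 32, 100
y = rng.integers(0, 2, n_trials)                    # two possible target sides
X = rng.normal(size=(n_trials, n_channels, n_times))
X[y == 1, :8, 40:] += 0.3                           # weak class-specific signal later in the epoch

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean() for t in range(n_times)
])
print("Peak decoding accuracy: %.2f at sample %d" % (accuracy.max(), accuracy.argmax()))
```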
Qawi K. Telesford; Eduardo Gonzalez-Moreira; Ting Xu; Yiwen Tian; Stanley J. Colcombe; Jessica Cloud; Brian E. Russ; Arnaud Falchier; Maximilian Nentwich; Jens Madsen; Lucas C. Parra; Charles E. Schroeder; Michael P. Milham; Alexandre R. Franco An open-access dataset of naturalistic viewing using simultaneous EEG-fMRI Journal Article In: Scientific Data, vol. 10, no. 1, pp. 1–13, 2023. @article{Telesford2023, In this work, we present a dataset that combines functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to use as a resource for understanding human brain function in these two imaging modalities. The dataset can also be used for optimizing preprocessing methods for simultaneously collected imaging data. The dataset includes simultaneously collected recordings from 22 individuals (ages: 23–51) across various visual and naturalistic stimuli. In addition, physiological, eye tracking, electrocardiography, and cognitive and behavioral data were collected along with this neuroimaging data. Visual tasks include a flickering checkerboard collected outside and inside the MRI scanner (EEG-only) and simultaneous EEG-fMRI recordings. Simultaneous recordings include rest, the visual paradigm Inscapes, and several short video movies representing naturalistic stimuli. Raw and preprocessed data are openly available to download. We present this dataset as part of an effort to provide open-access data to increase the opportunity for discoveries and understanding of the human brain and evaluate the correlation between electrical brain activity and blood oxygen level-dependent (BOLD) signals. |
Roman Vakhrushev; Felicia Pei-Hsin Cheng; Anne Schacht; Arezoo Pooresmaeili Differential effects of intra-modal and crossmodal reward value on perception: ERP evidence Journal Article In: PLoS ONE, vol. 18, pp. 1–25, 2023. @article{Vakhrushev2023, In natural environments objects comprise multiple features from the same or different sensory modalities but it is not known how perception of an object is affected by the value associations of its constituent parts. The present study compares intra- and cross-modal value-driven effects on behavioral and electrophysiological correlates of perception. Human participants first learned the reward associations of visual and auditory cues. Subsequently, they performed a visual discrimination task in the presence of previously rewarded, task irrelevant visual or auditory cues (intra- and cross-modal cues, respectively). During the conditioning phase, when reward associations were learned and reward cues were the target of the task, high value stimuli of both modalities enhanced the electrophysiological correlates of sensory processing in posterior electrodes. During the post-conditioning phase, when reward delivery was halted and previously rewarded stimuli were task-irrelevant, cross-modal value significantly enhanced the behavioral measures of visual sensitivity, whereas intra-modal value produced only an insignificant decrement. Analysis of the simultaneously recorded event-related potentials (ERPs) of posterior electrodes revealed similar findings. We found an early (90-120 ms) suppression of ERPs evoked by high-value, intra-modal stimuli. Cross-modal stimuli led to a later value-driven modulation, with an enhancement of response positivity for high- compared to low-value stimuli starting at the N1 window (180-250 ms) and extending to the P3 (300-600 ms) responses. These results indicate that sensory processing of a compound stimulus comprising a visual target and task-irrelevant visual or auditory cues is modulated by the reward value of both sensory modalities, but such modulations rely on distinct underlying mechanisms. |
Susanne M. Veen; Robert A. Perera; Laura Manning-Franke; Amma A. Agyemang; Karen Skop; Scott R. Sponheim; Elisabeth A. Wilde; Alexander Stamenkovic; James S. Thomas; William C. Walker Executive function and relation to static balance metrics in chronic mild TBI: A LIMBIC-CENC secondary analysis Journal Article In: Frontiers in Neurology, vol. 13, pp. 1–16, 2023. @article{Veen2023, Introduction: Among patients with traumatic brain injury (TBI), postural instability often persists chronically with negative consequences such as higher fall risk. One explanation may be reduced executive function (EF) required to effectively process, interpret, and combine sensory information. In other populations, a decline in higher cognitive functions is associated with a decline in walking and balance skills. Considering the link between EF decline and reduction in functional capacity, we investigated whether specific tests of executive function could predict balance function in a cohort of individuals with a history of chronic mild TBI (mTBI) and compared to individuals with a negative history of mTBI. Methods: Secondary analysis was performed on the local LIMBIC-CENC cohort (N = 338, 259 mTBI, mean 45 ± STD 10 age). Static balance was assessed with the sensory organization test (SOT). Hierarchical regression was used for each EF test outcome using the following blocks: (1) the number of TBIs sustained, age, and sex; (2) the separate Trail making test (TMT); (3) anti-saccade eye tracking items (error, latency, and accuracy); (4) Oddball distractor stimulus P300 and N200 at PZ and FZ response; and (5) Oddball target stimulus P300 and N200 at PZ and FZ response. Results: The full model with all predictors accounted for between 15.2% and 21.5% of the variability in the balance measures. The number of TBIs showed a negative association with the SOT2 score (p = 0.002). Additionally, longer times to complete TMT part B were shown to be related to a worse SOT1 score (p = 0.038). EEG distractors had the most influence on the SOT3 score (p = 0.019). Lastly, the SOT-composite and SOT5 scores were shown to be associated with longer inhibition latencies and errors (anti-saccade latency and error |
Dirk Moorselaar; Changrun Huang; Jan Theeuwes Electrophysiological indices of distractor processing in visual search are shaped by target expectations Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 6, pp. 1032–1044, 2023. @article{Moorselaar2023, Although in many cases salient stimuli capture attention involuntarily, it has been proposed recently that under certain conditions, the bottom–up signal generated by such stimuli can be proactively suppressed. In support of this signal suppression hypothesis, ERP studies have demonstrated that salient stimuli that do not capture attention elicit a distractor positivity (PD), a putative neural index of suppression. At the same time, it is becoming increasingly clear that regularities across preceding search episodes have a large influence on attentional selection. Yet to date, studies in support of the signal suppression hypothesis have largely ignored the role of selection history on the processing of distractors. The current study addressed this issue by examining how electrophysiological markers of attentional selection (N2pc) and suppression (PD) elicited by targets and distractors, respectively, were modulated when the search target randomly varied instead of being fixed across trials. Results showed that although target selection was unaffected by this manipulation, both in terms of manual response times, as well as in terms of the N2pc component, the PD component was reliably attenuated when the target features varied randomly across trials. This result demonstrates that the distractor PD, which is typically considered the marker of selective distractor processing, cannot unequivocally be attributed to suppression only, as it also, at least in part, reflects the upweighting of target features. |
Sarah Villard; Tyler K. Perrachione; Sung-Joo Lim; Ayesha Alam; Gerald Kidd In: The Journal of the Acoustical Society of America, vol. 154, no. 2, pp. 1152–1167, 2023. @article{Villard2023, The task of processing speech masked by concurrent speech/noise can pose a substantial challenge to listeners. However, performance on such tasks may not directly reflect the amount of listening effort they elicit. Changes in pupil size and neural oscillatory power in the alpha range (8–12 Hz) are prominent neurophysiological signals known to reflect listening effort; however, measurements obtained through these two approaches are rarely correlated, suggesting that they may respond differently depending on the specific cognitive demands (and, by extension, the specific type of effort) elicited by specific tasks. This study aimed to compare changes in pupil size and alpha power elicited by different types of auditory maskers (highly confusable intelligible speech maskers, speech-envelope-modulated speech-shaped noise, and unmodulated speech-shaped noise maskers) in young, normal-hearing listeners. Within each condition, the target-to-masker ratio was set at the participant's individually estimated 75% correct point on the psychometric function. The speech masking condition elicited a significantly greater increase in pupil size than either of the noise masking conditions, whereas the unmodulated noise masking condition elicited a significantly greater increase in alpha oscillatory power than the speech masking condition, suggesting that the effort needed to solve these respective tasks may have different neural origins. |
Vera A. Voigtlaender; Florian Sandhaeger; David J. Hawellek; Steffen R. Hage; Markus Siegel Neural representations of the content and production of human vocalization Journal Article In: Proceedings of the National Academy of Sciences, vol. 120, no. 23, pp. 1–9, 2023. @article{Voigtlaender2023, Speech, as the spoken form of language, is fundamental for human communication. The phenomenon of covert inner speech implies functional independence of speech content and motor production. However, it remains unclear how a flexible mapping between speech content and production is achieved on the neural level. To address this, we recorded magnetoencephalography in humans performing a rule-based vocalization task. On each trial, vocalization content (one of two vowels) and production form (overt or covert) were instructed independently. Using multivariate pattern analysis, we found robust neural information about vocalization content and production, mostly originating from speech areas of the left hemisphere. Production signals dynamically transformed upon presentation of the content cue, whereas content signals remained largely stable throughout the trial. In sum, our results show dissociable neural representations of vocalization content and production in the human brain and provide insights into the neural dynamics underlying human vocalization. |
Ria Vormbrock; Maximilian Bruchmann; Lucas Menne; Thomas Straube; Sebastian Schindler Testing stimulus exposure time as the critical factor of increased EPN and LPP amplitudes for fearful faces during perceptual distraction tasks Journal Article In: Cortex, vol. 160, pp. 9–23, 2023. @article{Vormbrock2023, Fearful facial expressions are prioritized across different information processing stages, as evident in early, intermediate, and late components of event-related brain potentials (ERPs). Recent studies showed that, in contrast to early N170 modulations, mid-latency (Early Posterior Negativity, EPN) and late (Late Positive Potential, LPP) emotional modulations depend on the attended perceptual feature. Nevertheless, several studies reported significant differences between emotional and neutral faces for the EPN or LPP components during distraction tasks. One cause for these conflicting findings might be that when faces are presented sufficiently long, participants attend to task-irrelevant features of the faces. In this registered report, we tested whether the presentation duration of faces is the critical factor for differences between reported emotional modulations during perceptual distraction tasks. To this end, 48 participants were required to discriminate the orientation of lines overlaid onto fearful or neutral faces, while face presentation varied (100 msec, 300 msec, 1,000 msec, 2,000 msec). While participants did not need to pay attention to the faces, we observed main effects of emotion for the EPN and LPP, but no interaction between emotion and presentation duration. Of note, unregistered exploratory tests per presentation duration showed no significant EPN and LPP emotion differences during short durations (100 and 300 msec) but significant differences with longer durations. While the presentation duration seems not to be a critical factor for EPN and LPP emotion effects, future studies are needed to investigate the role of threshold effects and the applied analytic designs to explain conflicting findings in the literature. |
Josefine Waldthaler; Alexander Sperlich; Aylin König; Charlotte Stüssel; Frank Bremmer; Lars Timmermann; David Pedrosa In: NeuroImage: Clinical, vol. 37, pp. 1–11, 2023. @article{Waldthaler2023, While deep brain stimulation (DBS) in the subthalamic nucleus (STN) improves motor functions in Parkinson's disease (PD), it may also increase impulsivity by interfering with the inhibition of reflexive responses. The aim of this study was to investigate if varying the pulse frequency of STN-DBS has a modulating effect on response inhibition and its neural correlates. For this purpose, 14 persons with PD repeated an antisaccade task in three stimulation settings (DBS off, high-frequency DBS (130 Hz), mid-frequency DBS (60 Hz)) in a randomized order, while eye movements and brain activity via high-density EEG were recorded. On a behavioral level, 130 Hz DBS stimulation had no effect on response inhibition measured as antisaccade error rate, while 60 Hz DBS induced a slight but significant reduction of directional errors compared with the DBS-off state and 130 Hz DBS. Further, stimulation with both frequencies decreased the onset latency of correct antisaccades, while increasing the latency of directional errors. Time-frequency domain analysis of the EEG data revealed that 60 Hz DBS was associated with an increase in preparatory theta power over a midfrontal region of interest compared with the off-DBS state which is generally regarded as a marker of increased cognitive control. While no significant differences in brain activity over mid- and lateral prefrontal regions of interest emerged between the 60 Hz and 130 Hz conditions, both stimulation frequencies were associated with a stronger midfrontal beta desynchronization during the mental preparation for correct antisaccades compared with DBS off-state which is discussed in the context of potentially enhanced proactive recruitment of the oculomotor network. Our preliminary findings suggest that mid-frequency STN-DBS may provide beneficial effects on response inhibition, while both 130 Hz- and 60 Hz STN-DBS may promote voluntary actions at the expense of slower reflexive responses. |
Kangning Wang; Shuang Qiu; Wei Wei; Weibo Yi; Huiguang He; Minpeng Xu; Tzyy Ping Jung; Dong Ming Investigating EEG-based cross-session and cross-task vigilance estimation in BCI systems Journal Article In: Journal of Neural Engineering, vol. 20, no. 5, pp. 1–18, 2023. @article{Wang2023g, Objective. The state of vigilance is crucial for effective performance in brain-computer interface (BCI) tasks, and therefore, it is essential to investigate vigilance levels in BCI tasks. Despite this, most studies have focused on vigilance levels in driving tasks rather than on BCI tasks, and the electroencephalogram (EEG) patterns of vigilance states in different BCI tasks remain unclear. This study aimed to identify similarities and differences in EEG patterns and performances of vigilance estimation in different BCI tasks and sessions. Approach. To achieve this, we built a steady-state visual evoked potential-based BCI system and a rapid serial visual presentation-based BCI system and recruited 18 participants to carry out four BCI experimental sessions over four days. Main results. Our findings demonstrate that specific neural patterns for high and low vigilance levels are relatively stable across sessions. Differential entropy features significantly differ between different vigilance levels in all frequency bands and between BCI tasks in the delta and theta frequency bands, with the theta frequency band features playing a critical role in vigilance estimation. Additionally, prefrontal, temporal, and occipital regions are more relevant to the vigilance state in BCI tasks. Our results suggest that cross-session vigilance estimation is more accurate than cross-task estimation. Significance. Our study clarifies the underlying mechanisms of vigilance state in two BCI tasks and provides a foundation for further research in vigilance estimation in BCI applications. |
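Differential entropy (DE) features of the kind used in this line of work are, under a Gaussian assumption, a simple function of band-limited signal variance. Below is a minimal sketch with simulated single-channel EEG and assumed band edges (not the authors' implementation).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250                                    # assumed sampling rate (Hz)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

rng = np.random.default_rng(2)
eeg = rng.normal(size=10 * fs)              # 10 s of simulated single-channel EEG

def differential_entropy(x, fs, low, high):
    # DE of a Gaussian band-limited signal: 0.5 * ln(2 * pi * e * variance)
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(filtered))

for name, (lo, hi) in bands.items():
    print(f"{name:6s} DE = {differential_entropy(eeg, fs, lo, hi):+.3f}")
```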
Kangning Wang; Shuang Qiu; Wei Wei; Yukun Zhang; Shengpei Wang; Huiguang He; Minpeng Xu; Tzyy Ping Jung; Dong Ming A multimodal approach to estimating vigilance in SSVEP-based BCI Journal Article In: Expert Systems with Applications, vol. 225, pp. 1–16, 2023. @article{Wang2023h, Brain-computer interface (BCI) is a communication system that allows a direct connection between the human brain and external devices, which is able to provide assistance and improve the quality of life for people with disabilities. Vigilance is an important cognitive state and plays an important role in human–computer interaction. In BCI tasks, the low-vigilance state of the BCI user would lead to the performance degradation. Therefore, it is desirable to develop an efficient method to estimate the vigilance state of BCI users. In this study, we built a 4-target BCI system based on steady-state visual evoked potential (SSVEP) for cursor control. Electroencephalogram (EEG) and electrooculogram (EOG) were recorded simultaneously from 18 subjects during a 90-min continuous cursor-control BCI task. We proposed a multimodal vigilance estimating network, named MVENet, to estimate the vigilance state of BCI users through the multimodal signals. In this architecture, a spatial-temporal convolution module with an attention mechanism was adopted to explore the temporal-spatial information of the EEG features, and a long short-term memory module was utilized to learn the temporal dependencies of EOG features. Moreover, a fusion mechanism was built to fuse the EEG representations and EOG representations effectively. Experimental results showed that the proposed network achieved a better performance than the compared methods. These results demonstrate the feasibility and effectiveness of our methods for estimating the vigilance state of BCI users. |
Taylor D. Webb; Matthew G. Wilson; Henrik Odéen; Jan Kubanek Sustained modulation of primate deep brain circuits with focused ultrasonic waves Journal Article In: Brain Stimulation, vol. 16, no. 3, pp. 798–805, 2023. @article{Webb2023, Background: Transcranial focused ultrasound has the potential to noninvasively modulate deep brain circuits and impart sustained, neuroplastic effects. Objective: Bring the approach closer to translations by demonstrating sustained modulation of deep brain circuits and choice behavior in task-performing non-human primates. Methods: Low-intensity transcranial ultrasound of 30 s in duration was delivered in a controlled manner into deep brain targets (left or right lateral geniculate nucleus; LGN) of non-human primates while the subjects decided whether a left or a right visual target appeared first. While the animals performed the task, we recorded intracranial EEG from occipital screws. The ultrasound was delivered into the deep brain targets daily for a period of more than 6 months. Results: The brief stimulation induced effects on choice behavior that persisted up to 15 minutes and were specific to the sonicated target. Stimulation of the left/right LGN increased the proportion of rightward/leftward choices. These effects were accompanied by an increase in gamma activity over visual cortex. The contralateral effect on choice behavior and the increase in gamma, compared to sham stimulation, suggest that the stimulation excited the target neural circuits. There were no detrimental effects on the animals' discrimination performance over the months-long course of the stimulation. Conclusion: This study demonstrates that brief, 30-s ultrasonic stimulation induces neuroplastic effects specifically in the target deep brain circuits, and that the stimulation can be applied daily without detrimental effects. These findings encourage repeated applications of transcranial ultrasound to malfunctioning deep brain circuits in humans with the goal of providing a durable therapeutic reset. |
Christian Wienke; Marcus Grueschow; Aiden Haghikia; Tino Zaehle In: Journal of Neuroscience, vol. 43, no. 36, pp. 6306–6319, 2023. @article{Wienke2023, Transcutaneous auricular vagus nerve stimulation (taVNS) has been proposed to activate the locus ceruleus-noradrenaline (LC-NA) system. However, previous studies failed to find consistent modulatory effects of taVNS on LC-NA biomarkers. Previous studies suggest that phasic taVNS may be capable of modulating LC-NA biomarkers such as pupil dilation and alpha oscillations. However, it is unclear whether these effects extend beyond pure sensory vagal nerve responses. Critically, the potential of the pupillary light reflex as an additional taVNS biomarker has not been explored so far. Here, we applied phasic active and sham taVNS in 29 subjects (16 female, 13 male) while they performed an emotional Stroop task (EST) and a passive pupil light reflex task (PLRT). We recorded pupil size and brain activity dynamics using a combined Magnetoencephalography (MEG) and pupillometry design. Our results show that phasic taVNS significantly increased pupil dilation and performance during the EST. During the PLRT, active taVNS reduced and delayed pupil constriction. In the MEG, taVNS increased frontal-midline theta and alpha power during the EST, whereas occipital alpha power was reduced during both the EST and PLRT. Our findings provide evidence that phasic taVNS systematically modulates behavioral, pupillary, and electrophysiological parameters of LC-NA activity during cognitive processing. Moreover, we demonstrate for the first time that the pupillary light reflex can be used as a simple and effective proxy of taVNS efficacy. These findings have important implications for the development of noninvasive neuromodulation interventions for various cognitive and clinical applications. |
Sobanawartiny Wijeakumar; Samuel H. Forbes; Vincent A. Magnotta; Sean Deoni; Kiara Jackson; Vinay P. Singh; Madhuri Tiwari; Aarti Kumar; John P. Spencer Stunting in infancy is associated with atypical activation of working memory and attention networks Journal Article In: Nature Human Behaviour, vol. 7, no. 12, pp. 2199–2211, 2023. @article{Wijeakumar2023, Stunting is associated with poor long-term cognitive, academic and economic outcomes, yet the mechanisms through which stunting impacts cognition in early development remain unknown. In a first-ever neuroimaging study conducted on infants from rural India, we demonstrate that stunting impacts a critical, early-developing cognitive system—visual working memory. Stunted infants showed poor visual working memory performance and were easily distractible. Poor performance was associated with reduced engagement of the left anterior intraparietal sulcus, a region involved in visual working memory maintenance and greater suppression in the right temporoparietal junction, a region involved in attentional shifting. When assessed one year later, stunted infants had lower problem-solving scores, while infants of normal height with greater left anterior intraparietal sulcus activation showed higher problem-solving scores. Finally, short-for-age infants with poor physical growth indices but good visual working memory performance showed more positive outcomes suggesting that intervention efforts should focus on improving working memory and reducing distractibility in infancy. |
Ying Joey Zhou; Aarti Ramchandran; Saskia Haegens Alpha oscillations protect working memory against distracters in a modality-specific way Journal Article In: NeuroImage, vol. 278, pp. 1–9, 2023. @article{Zhou2023d, Alpha oscillations are thought to be involved in suppressing distracting input in working-memory tasks. Yet, the spatial-temporal dynamics of such suppression remain unclear. Key questions are whether such suppression reflects a domain-general inattentiveness mechanism, or occurs in a stimulus- or modality-specific manner within cortical areas most responsive to the distracters; and whether the suppression is proactive (i.e., preparatory) or reactive. Here, we addressed these questions using a working-memory task where participants had to memorize an array of visually presented digits and reproduce one of them upon being probed. We manipulated the presence of distracters and the sensory modality in which distracters were presented during memory maintenance. Our results show that sensory areas most responsive to visual and auditory distracters exhibited stronger alpha power increase after visual and auditory distracter presentation respectively. These results suggest that alpha oscillations underlie distracter suppression in a reactive, modality-specific manner. |
Alexander Zhigalov; Ole Jensen Perceptual echoes as travelling waves may arise from two discrete neuronal sources Journal Article In: NeuroImage, vol. 272, pp. 1–9, 2023. @article{Zhigalov2023, Growing evidence suggests that travelling waves are functionally relevant for cognitive operations in the brain. Several electroencephalography (EEG) studies report on a perceptual alpha-echo, representing the brain response to a random visual flicker, propagating as a travelling wave across the cortical surface. In this study, we ask if the propagating activity of the alpha-echo is best explained by a set of discrete sources mixing at the sensor level rather than a cortical travelling wave. To this end, we presented participants with gratings modulated by random noise and simultaneously acquired the ongoing MEG. The perceptual alpha-echo was estimated using the temporal response function linking the visual input to the brain response. At the group level, we observed a spatial decay of the amplitude of the alpha-echo with respect to the sensor where the alpha-echo was the largest. Importantly, the propagation latencies consistently increased with the distance. Interestingly, the propagation of the alpha-echoes was predominantly centro-lateral, while EEG studies reported mainly posterior-frontal propagation. Moreover, the propagation speed of the alpha-echoes derived from the MEG data was around 10 m/s, which is higher compared to the 2 m/s reported in EEG studies. Using source modelling, we found an early component in the primary visual cortex and a phase-lagged late component in the parietal cortex, which may underlie the travelling alpha-echoes at the sensor level. We then simulated the alpha-echoes using realistic EEG and MEG forward models by placing two sources in the parietal and occipital cortices in accordance with our empirical findings. The two-source model could account for both the direction and speed of the observed alpha-echoes in the EEG and MEG data. Our results demonstrate that the propagation of the perceptual echoes observed in EEG and MEG data can be explained by two sources mixing at the scalp level equally well as by a cortical travelling wave. Importantly, these findings should not be directly extrapolated to intracortical recordings, where travelling waves gradually propagate at a sub-millimetre scale. |
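The temporal response function (TRF) linking a random flicker to the brain response, as described above, is typically estimated with regularised lagged regression. Here is a self-contained sketch on simulated data (assumed lag range and ridge parameter; not the authors' pipeline).

```python
import numpy as np

fs = 250
n_samples = 60 * fs
rng = np.random.default_rng(3)
stimulus = rng.normal(size=n_samples)               # random flicker (luminance) sequence

# Simulate a sensor whose response to the flicker is a damped ~10 Hz echo
lags = np.arange(0, int(0.5 * fs))                  # 0-500 ms lags (assumed)
t_lag = lags / fs
true_trf = np.sin(2 * np.pi * 10 * t_lag) * np.exp(-t_lag / 0.15)
sensor = np.convolve(stimulus, true_trf)[:n_samples] + rng.normal(0, 5, n_samples)

# Lagged design matrix: column k holds the stimulus shifted by k samples
X = np.zeros((n_samples, lags.size))
for k in lags:
    X[k:, k] = stimulus[:n_samples - k]

lam = 1e3                                           # ridge regularisation (assumed)
trf = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ sensor)
print("Estimated TRF peak latency: %.0f ms" % (1000 * t_lag[np.argmax(np.abs(trf))]))
```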
Bin Zhao; Gaoyan Zhang; Longbiao Wang; Jianwu Dang Multimodal evidence for predictive coding in sentence oral reading Journal Article In: Cerebral Cortex, vol. 33, no. 13, pp. 8620–8632, 2023. @article{Zhao2023, Sentence oral reading requires not only a coordinated effort in the visual, articulatory, and cognitive processes but also supposes a top-down influence from linguistic knowledge onto the visual-motor behavior. Despite a gradual recognition of a predictive coding effect in this process, there is currently a lack of a comprehensive demonstration regarding the time-varying brain dynamics that underlie the oral reading strategy. To address this, our study used a multimodal approach, combining real-time recording of electroencephalography, eye movements, and speech, with a comprehensive examination of regional, inter-regional, sub-network, and whole-brain responses. Our study identified the top-down predictive effect with a phrase-grouping phenomenon in the fixation interval and eye-voice span. This effect was associated with the delta and theta band synchronization in the prefrontal, anterior temporal, and inferior frontal lobes. We also observed early activation of the cognitive control network and its recurrent interactions with the visual-motor networks structurally at the phrase rate. Finally, our study emphasizes the importance of cross-frequency coupling as a promising neural realization of hierarchical sentence structuring and calls for further investigation. |
I. M. Dushyanthi Karunathilake; Jason L. Dunlap; Janani Perera; Alessandro Presacco; Lien Decruy; Samira Anderson; Stefanie E. Kuchinsky; Jonathan Z. Simon Effects of aging on cortical representations of continuous speech Journal Article In: Journal of Neurophysiology, vol. 129, no. 6, pp. 1359–1377, 2023. @article{Karunathilake2023, Understanding speech in a noisy environment is crucial in day-to-day interactions and yet becomes more challenging with age, even for healthy aging. Age-related changes in the neural mechanisms that enable speech-in-noise listening have been investigated previously; however, the extent to which age affects the timing and fidelity of encoding of target and interfering speech streams is not well understood. Using magnetoencephalography (MEG), we investigated how continuous speech is represented in auditory cortex in the presence of interfering speech in younger and older adults. Cortical representations were obtained from neural responses that time-locked to the speech envelopes with speech envelope reconstruction and temporal response functions (TRFs). TRFs showed three prominent peaks corresponding to auditory cortical processing stages: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). Older adults showed exaggerated speech envelope representations compared with younger adults. Temporal analysis revealed both that the age-related exaggeration starts as early as ∼50 ms and that older adults needed a substantially longer integration time window to achieve their better reconstruction of the speech envelope. As expected, with increased speech masking envelope reconstruction for the attended talker decreased and all three TRF peaks were delayed, with aging contributing additionally to the reduction. Interestingly, for older adults the late peak was delayed, suggesting that this late peak may receive contributions from multiple sources. Together these results suggest that there are several mechanisms at play compensating for age-related temporal processing deficits at several stages but which are not able to fully reestablish unimpaired speech perception. |
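Speech-envelope reconstruction of the kind mentioned here is usually a backward linear model: lagged multichannel neural data are regressed onto the attended envelope and scored by the correlation between reconstructed and actual envelopes. Below is a toy sketch on simulated data (assumed lags and regularisation; not the authors' code).

```python
import numpy as np
from numpy.linalg import solve

fs = 100
n_samples, n_channels, n_lags = 120 * fs, 20, 25     # 0-250 ms of lags (assumed)
rng = np.random.default_rng(4)

envelope = np.abs(rng.normal(size=n_samples))        # stand-in speech envelope
meg = rng.normal(0, 1, (n_samples, n_channels))
meg[:, :5] += 0.5 * envelope[:, None]                # a few channels track the envelope

def lagged(data, n_lags):
    # Stack time-shifted copies of every channel as regressors
    cols = [np.roll(data, k, axis=0) for k in range(n_lags)]
    X = np.concatenate(cols, axis=1)
    X[:n_lags] = 0                                    # zero out wrap-around rows
    return X

half = n_samples // 2
X_train, X_test = lagged(meg[:half], n_lags), lagged(meg[half:], n_lags)
y_train, y_test = envelope[:half], envelope[half:]

lam = 1e2                                             # ridge regularisation (assumed)
w = solve(X_train.T @ X_train + lam * np.eye(X_train.shape[1]), X_train.T @ y_train)
recon = X_test @ w
print("Reconstruction accuracy (Pearson r): %.2f" % np.corrcoef(recon, y_test)[0, 1])
```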
Anastasia Kerr-German; A. Caglar Tas; Aaron T. Buss A multi-method approach to addressing the toddler data desert in attention research Journal Article In: Cognitive Development, vol. 65, pp. 1–14, 2023. @article{KerrGerman2023, Visual attention skills undergo robust developmental change during infancy and continue to co-develop with other cognitive processes in early childhood. Despite this, there is a general disconnect between measures of the earliest foundations of attention during infancy and later development of attention in relation to executive functioning during the toddler years. To examine associations between these different measures of attention, the current study administered an oculomotor task (infant orienting with attention, IOWA) and a manual response (Flanker) task with a group of toddlers. We collected simultaneous neural recordings (using functional near-infrared spectroscopy), eye-tracking, and behavioral responses in 2.5- and 3.5-year-olds to examine the neural and behavioral associations between these skills. Results revealed that oculomotor facilitation in the IOWA task was negatively associated with accuracy on neutral trials in the Flanker task. Second, conflict scores between the two tasks were positively associated. At the neural level, however, the tasks showed distinct patterns of activation. Left frontal cortex was engaged during the Flanker task whereas right frontal and parietal cortex was engaged during the IOWA task. Activation during the IOWA task differed based on how well children could control oculomotor behavior during the task. Children with high levels of stimulus reactivity activated parietal cortex more strongly, but children with more controlled oculomotor behavior activated frontal cortex more strongly. |
Nathalie Klein Selle; Kristina Suchotzki; Yoni Pertzov; Matthias Gamer Orienting versus inhibition: The theory behind the ocular-based Concealed Information Test Journal Article In: Psychophysiology, vol. 60, no. 3, pp. 1–13, 2023. @article{KleinSelle2023, When trying to conceal one's knowledge, various ocular changes occur. However, which cognitive mechanisms drive these changes? Do orienting or inhibition—two processes previously associated with autonomic changes—play a role? To answer this question, we used a Concealed Information Test (CIT) in which participants were either motivated to conceal (orienting + inhibition) or reveal (orienting only) their knowledge. While pupil size increased in both motivational conditions, the fixation and blink CIT effects were confined to the conceal condition. These results were mirrored in autonomic changes, with skin conductance increasing in both conditions while heart rate decreased solely under motivation to conceal. Thus, different cognitive mechanisms seem to drive ocular responses. Pupil size appears to be linked to the orienting of attention (akin to skin conductance changes), while fixations and blinks rather seem to reflect arousal inhibition (comparable to heart rate changes). This knowledge strengthens CIT theory and illuminates the relationship between ocular and autonomic activity. |
Emily J. Knight; Edward G. Freedman; Evan J. Myers; Alaina S. Berruti; Leona A. Oakes; Cody Zhewei Cao; Sophie Molholm; John J. Foxe Severely attenuated visual feedback processing in children on the autism spectrum Journal Article In: Journal of Neuroscience, vol. 43, no. 13, pp. 2424–2438, 2023. @article{Knight2023, Individuals on the autism spectrum often exhibit atypicality in their sensory perception, but the neural underpinnings of these perceptual differences remain incompletely understood. One proposed mechanism is an imbalance in higher-order feedback re-entrant inputs to early sensory cortices during sensory perception, leading to increased propensity to focus on local object features over global context. We explored this theory by measuring visual evoked potentials during contour integration as considerable work has revealed that these processes are largely driven by feedback inputs from higher-order ventral visual stream regions. We tested the hypothesis that autistic individuals would have attenuated evoked responses to illusory contours compared with neurotypical controls. Electrophysiology was acquired while 29 autistic and 31 neurotypical children (7-17 years old, inclusive of both males and females) passively viewed a random series of Kanizsa figure stimuli, each consisting of four inducers that were aligned either at random rotational angles or such that contour integration would form an illusory square. Autistic children demonstrated attenuated automatic contour integration over lateral occipital regions relative to neurotypical controls. The data are discussed in terms of the role of predictive feedback processes on perception of global stimulus features and the notion that weakened “priors” may play a role in the visual processing anomalies seen in autism. |
Svetlana Kovalenko; Anton Mamonov; Vladislav Kuznetsov; Alexandr Bulygin; Irina Shoshina; Ivan Brak; Alexey Kashevnik OperatorEYEVP: Operator dataset for fatigue detection based on eye movements, heart rate data, and video information Journal Article In: Sensors, vol. 23, no. 13, pp. 1–35, 2023. @article{Kovalenko2023, Detection of fatigue is extremely important in the development of different kinds of preventive systems (such as driver monitoring or operator monitoring for accident prevention). The presence of fatigue for this task should be determined with physiological and objective behavioral indicators. To develop an effective model of fatigue detection, it is important to record a dataset with people in a state of fatigue as well as in a normal state. We carried out data collection using an eye tracker, a video camera, a stage camera, and a heart rate monitor to record different kinds of signals for analysis. In our proposed dataset, 10 participants took part in the experiment and recorded data 3 times a day for 8 days. They performed different types of activity (choice reaction time, reading, a Landolt rings correction test, playing Tetris), imitating everyday tasks. Our dataset is useful for studying fatigue and finding indicators of its manifestation. We analyzed publicly available datasets to find the best one for this task. Each of them contains eye movement data and other types of data. We evaluated each of them to determine their suitability for fatigue studies, but none of them fully fit the fatigue detection task. We evaluated the recorded dataset by calculating the correspondences between eye-tracking data and CRT (choice reaction time) that show the presence of fatigue. |
Frauke Kraus; Sarah Tune; Jonas Obleser; Björn Herrmann Neural α oscillations and pupil size differentially index cognitive demand under competing audiovisual task conditions Journal Article In: Journal of Neuroscience, vol. 43, no. 23, pp. 4352–4364, 2023. @article{Kraus2023a, Cognitive demand is thought to modulate two often used, but rarely combined, measures: pupil size and neural α (8–12 Hz) oscillatory power. However, it is unclear whether these two measures capture cognitive demand in a similar way under complex audiovisual-task conditions. Here we recorded pupil size and neural α power (using electroencephalography), while human participants of both sexes concurrently performed a visual multiple object-tracking task and an auditory gap detection task. Difficulties of the two tasks were manipulated independent of each other. Participants' performance decreased in accuracy and speed with increasing cognitive demand. Pupil size increased with increasing difficulty for both the auditory and the visual task. In contrast, α power showed diverging neural dynamics: parietal α power decreased with increasing difficulty in the visual task, but not with increasing difficulty in the auditory task. Furthermore, independent of task difficulty, within-participant trial-by-trial fluctuations in pupil size were negatively correlated with α power. Difficulty-induced changes in pupil size and α power, however, did not correlate, which is consistent with their different cognitive-demand sensitivities. Overall, the current study demonstrates that the dynamics of the neurophysiological indices of cognitive demand and associated effort are multifaceted and potentially modality-dependent under complex audiovisual-task conditions. |
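The within-participant, trial-by-trial relation between pupil size and alpha power reported above reduces to a simple correlation across trials. A minimal illustration with simulated trial values (all numbers hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_trials = 240
arousal = rng.normal(size=n_trials)                              # latent trial-wise state
pupil = 3.5 + 0.4 * arousal + rng.normal(0, 0.3, n_trials)       # pupil diameter (mm)
alpha_power = 10 - 1.5 * arousal + rng.normal(0, 1.0, n_trials)  # parietal alpha power (a.u.)

rho, p = spearmanr(pupil, alpha_power)
print(f"Pupil vs. alpha power: rho = {rho:.2f}, p = {p:.3g}")
```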
Wupadrasta Santosh Kumar; Supratim Ray Healthy ageing and cognitive impairment alter EEG functional connectivity in distinct frequency bands Journal Article In: European Journal of Neuroscience, vol. 58, no. 6, pp. 3432–3449, 2023. @article{Kumar2023, Functional connectivity (FC) indicates the interdependencies between brain signals recorded from spatially distinct locations in different frequency bands, which is modulated by cognitive tasks and is known to change with ageing and cognitive disorders. Recently, the power of narrow-band gamma oscillations induced by visual gratings have been shown to reduce with both healthy ageing and in subjects with mild cognitive impairment (MCI). However, the impact of ageing/MCI on stimulus-induced gamma FC has not been well studied. We recorded electroencephalogram (EEG) from a large cohort (N = 229) of elderly subjects (>49 years) while they viewed large cartesian gratings to induce gamma oscillations and studied changes in alpha and gamma FC with healthy ageing (N = 218) and MCI (N = 11). Surprisingly, we found distinct differences across age and MCI groups in power and FC. With healthy ageing, alpha power did not change but FC decreased significantly. MCI reduced gamma but not alpha FC significantly compared with age and gender matched controls, even when power was matched between the two groups. Overall, our results suggest distinct effects of ageing and disease on EEG power and FC, suggesting different mechanisms underlying ageing and cognitive disorders. |
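Band-limited EEG functional connectivity, as studied here, can be computed with several metrics; the sketch below uses the phase-locking value (PLV) in the alpha band on simulated channels purely to illustrate the general computation (the paper's exact FC measure may differ).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs, n_channels, n_samples = 250, 8, 10 * 250
rng = np.random.default_rng(6)
shared = rng.normal(size=n_samples)
eeg = 0.5 * shared + rng.normal(size=(n_channels, n_samples))    # weakly coupled channels

sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")  # alpha band
phase = np.angle(hilbert(sosfiltfilt(sos, eeg, axis=1), axis=1))

# PLV: magnitude of the mean phase difference vector for every channel pair
plv = np.abs(np.exp(1j * (phase[:, None, :] - phase[None, :, :])).mean(axis=-1))
print("Mean alpha PLV across channel pairs: %.2f"
      % plv[np.triu_indices(n_channels, k=1)].mean())
```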
Baiwei Liu; Anna C. Nobre; Freek Ede Microsaccades transiently lateralise EEG alpha activity Journal Article In: Progress in Neurobiology, vol. 224, pp. 1–9, 2023. @article{Liu2023, The lateralisation of 8–12 Hz alpha activity is a canonical signature of human spatial cognition that is typically studied under strict fixation requirements. Yet, even during attempted fixation, the brain produces small involuntary eye movements known as microsaccades. Here we report how spontaneous microsaccades – made in the absence of incentives to look elsewhere – can themselves drive transient lateralisation of EEG alpha power according to microsaccade direction. This transient lateralisation of posterior alpha power occurs similarly following start and return microsaccades and is, at least for start microsaccades, driven by increased alpha power ipsilateral to microsaccade direction. This reveals new links between spontaneous microsaccades and human electrophysiological brain activity. It highlights how microsaccades are an important factor to consider in studies relating alpha activity – including spontaneous fluctuations in alpha activity – to spatial cognition, such as studies on visual attention, anticipation, and working memory. |
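The microsaccade-locked alpha lateralisation described above boils down to comparing posterior alpha power ipsilateral versus contralateral to microsaccade direction. A toy computation with simulated values (not the authors' analysis):

```python
import numpy as np

rng = np.random.default_rng(7)
n_saccades = 300
direction = rng.choice(["left", "right"], n_saccades)            # microsaccade direction

# Simulated post-saccade alpha power at left/right posterior sensors
alpha_left = 10 + (direction == "left") * 1.0 + rng.normal(0, 1, n_saccades)
alpha_right = 10 + (direction == "right") * 1.0 + rng.normal(0, 1, n_saccades)

ipsi = np.where(direction == "left", alpha_left, alpha_right)
contra = np.where(direction == "left", alpha_right, alpha_left)
lat_index = (ipsi - contra) / (ipsi + contra)
print("Mean lateralisation index: %.3f (positive = ipsilateral increase)"
      % lat_index.mean())
```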
Junlian Luo; Thérèse Collins The representational similarity between visual perception and recent perceptual history Journal Article In: Journal of Neuroscience, vol. 43, no. 20, pp. 3658–3665, 2023. @article{Luo2023b, From moment to moment, the visual properties of objects in the world fluctuate because of external factors like ambient lighting, occlusion and eye movements, and internal (proximal) noise. Despite this variability in the incoming information, our perception is stable. Serial dependence, the behavioral attraction of current perceptual responses toward previously seen stimuli, may reveal a mechanism underlying stability: a spatiotemporally tuned operator that smooths over spurious fluctuations. The current study examined the neural underpinnings of serial dependence by recording the electroencephalographic (EEG) brain response of female and male human observers to prototypical objects (faces, cars, and houses) and morphs that mixed properties of two prototypes. Behavior was biased toward previously seen objects. Representational similarity analysis (RSA) revealed that responses evoked by visual objects contained information about the previous stimulus. The trace of previous representations in the response to the current object occurred immediately on object appearance, suggesting that serial dependence arises from a brain state or set that precedes processing of new input. However, the brain response to current visual objects was not representationally similar to the trace they leave on subsequent object representations. These results reveal that while past stimulus history influences current representations, this influence does not imply a shared neural code between the previous trial (memory) and the current trial (perception). |
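Representational similarity analysis (RSA), as named in this abstract, compares a neural dissimilarity matrix with a model dissimilarity matrix. A compact, hypothetical example with simulated condition-average patterns and a morph-level model:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n_conditions, n_features = 12, 64                     # e.g. morph levels x channels (assumed)
morph_level = np.linspace(0, 1, n_conditions)

# Simulated condition-average response patterns varying smoothly with morph level
patterns = (np.outer(morph_level, rng.normal(size=n_features))
            + rng.normal(0, 0.3, (n_conditions, n_features)))

neural_rdm = pdist(patterns, metric="correlation")    # 1 - Pearson r between patterns
model_rdm = pdist(morph_level[:, None], metric="euclidean")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"Neural-model RDM correlation: rho = {rho:.2f}, p = {p:.3g}")
```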
Jiayu Mao; Shuang Qiu; Wei Wei; Huiguang He Cross-modal guiding and reweighting network for multi-modal RSVP-based target detection Journal Article In: Neural Networks, vol. 161, pp. 65–82, 2023. @article{Mao2023, Rapid Serial Visual Presentation (RSVP) based Brain–Computer Interface (BCI) facilitates the high-throughput detection of rare target images by detecting evoked event-related potentials (ERPs). At present, the decoding accuracy of the RSVP-based BCI system limits its practical applications. This study introduces eye movements (gaze and pupil information), referred to as EYE modality, as another useful source of information to combine with EEG-based BCI and forms a novel target detection system to detect target images in RSVP tasks. We performed an RSVP experiment, recorded the EEG signals and eye movements simultaneously during a target detection task, and constructed a multi-modal dataset including 20 subjects. Also, we proposed a cross-modal guiding and fusion network to fully utilize EEG and EYE modalities and fuse them for better RSVP decoding performance. In this network, a two-branch backbone was built to extract features from these two modalities. A Cross-Modal Feature Guiding (CMFG) module was proposed to guide EYE modality features to complement the EEG modality for better feature extraction. A Multi-scale Multi-modal Reweighting (MMR) module was proposed to enhance the multi-modal features by exploring intra- and inter-modal interactions. And, a Dual Activation Fusion (DAF) was proposed to modulate the enhanced multi-modal features for effective fusion. Our proposed network achieved a balanced accuracy of 88.00% (±2.29) on the collected dataset. The ablation studies and visualizations revealed the effectiveness of the proposed modules. This work implies the effectiveness of introducing the EYE modality in RSVP tasks. And, our proposed network is a promising method for RSVP decoding and further improves the performance of RSVP-based target detection systems. |
Sebastiaan Mathôt; Hermine Berberyan; Philipp Büchel; Veera Ruuskanen; Ana Vilotijević; Wouter Kruijne Effects of pupil size as manipulated through ipRGC activation on visual processing Journal Article In: NeuroImage, vol. 283, pp. 1–13, 2023. @article{Mathot2023, The size of the eyes' pupils determines how much light enters the eye and also how well this light is focused. Through this route, pupil size shapes the earliest stages of visual processing. Yet causal effects of pupil size on vision are poorly understood and rarely studied. Here we introduce a new way to manipulate pupil size, which relies on activation of intrinsically photosensitive retinal ganglion cells (ipRGCs) to induce sustained pupil constriction. We report the effects of both experimentally induced and spontaneous changes in pupil size on visual processing as measured through EEG. We compare these to the effects of stimulus intensity and covert visual attention, because previous studies have shown that these factors all have comparable effects on some common measures of early visual processing, such as detection performance and steady-state visual evoked potentials; yet it is still unclear whether these are superficial similarities, or rather whether they reflect similar underlying processes. Using a mix of neural-network decoding, ERP analyses, and time-frequency analyses, we find that induced pupil size, spontaneous pupil size, stimulus intensity, and covert visual attention all affect EEG responses, mainly over occipital and parietal electrodes, but—crucially—that they do so in qualitatively different ways. Induced and spontaneous pupil-size changes mainly modulate activity patterns (but not overall power or intertrial coherence) in the high-frequency beta range; this may reflect an effect of pupil size on oculomotor activity and/or visual processing. In addition, spontaneous (but not induced) pupil size tends to correlate positively with intertrial coherence in the alpha band; this may reflect a non-causal relationship, mediated by arousal. Taken together, our findings suggest that pupil size has qualitatively different effects on visual processing from stimulus intensity and covert visual attention. This shows that pupil size as manipulated through ipRGC activation strongly affects visual processing, and provides concrete starting points for further study of this important yet understudied earliest stage of visual processing. |
Tamas Minarik; Barbara Berger; Ole Jensen Optimal parameters for rapid (invisible) frequency tagging using MEG Journal Article In: NeuroImage, vol. 281, pp. 1–16, 2023. @article{Minarik2023, Frequency tagging has been demonstrated to be a useful tool for identifying representational-specific neuronal activity in the auditory and visual domains. However, the slow flicker (<30 Hz) applied in conventional frequency tagging studies is highly visible and might entrain endogenous neuronal oscillations. Hence, stimulation at faster frequencies that is much less visible and does not interfere with endogenous brain oscillatory activity is a promising new tool. In this study, we set out to examine the optimal stimulation parameters of rapid frequency tagging (RFT/RIFT) with magnetoencephalography (MEG) by quantifying the effects of stimulation frequency, size and position of the flickering patch. Rapid frequency tagging using flickers above 50 Hz results in almost invisible stimulation which does not interfere with slower endogenous oscillations; however, the signal is weaker than with tagging at slower frequencies, so certainty about the optimal stimulation parameters is crucial. The results presented here, which examine the frequency range between 60 Hz and 96 Hz, suggest that RFT induces brain responses with decreasing strength up to about 84 Hz. In addition, even at the smallest flicker patch (2°) focally presented RFT induces a significant and measurable oscillatory brain signal (steady state visual evoked potential/field, SSVEP/F) at the stimulation frequency (66 Hz); however, the elicited response increases with patch size. While focal RFT presentation elicits the strongest response, off-centre presentations generally elicit a measurable response only when presented below the horizontal midline. Importantly, the results also revealed considerable individual differences in the neuronal responses to RFT stimulation. Finally, we discuss the comparison of oscillatory measures (coherence and power) and sensor types (planar gradiometers and magnetometers) in order to achieve optimal outcomes. Based on these findings, we put forward concrete recommendations for using rapid frequency tagging in human cognitive neuroscience investigations. |
Xianliang Ge; Yunxian Pan; Sujie Wang; Linze Qian; Jingjia Yuan; Jie Xu; Nitish Thakor; Yu Sun Improving intention detection in single-trial classification through fusion of EEG and eye-tracker data Journal Article In: IEEE Transactions on Human-Machine Systems, vol. 53, no. 1, pp. 132–141, 2023. @article{Ge2023, Intention decoding is an indispensable procedure in hands-free human-computer interaction (HCI). A conventional eye-tracking system using single-modality fixation duration may issue commands that ignore users' real expectations. In the current study, an eye-brain hybrid brain-computer interface (BCI) interaction system was introduced for intention detection through fusion of multi-modal eye-track and ERP (a measurement derived from EEG) features. Eye-track and EEG data were recorded from 64 healthy participants as they performed a 40-min customized free search task of a fixed target icon among 25 icons. The corresponding eye-tracking fixation durations and ERPs were extracted. Five previously-validated LDA-based classifiers (including RLDA, SWLDA, BLDA, SKLDA, and STDA) and the widely-used CNN method were adopted to verify the efficacy of feature fusion from both offline and pseudo-online analysis, and the optimal approach was evaluated by modulating the training set and system response duration. Our study demonstrated that the input of multi-modal eye-track and ERP features achieved superior intention-detection performance in single-trial classification of the active search task. Compared with the single-modality ERP feature, this new strategy also yielded consistent accuracy across different classifiers. Moreover, in comparison with other classification methods, we found that SKLDA exhibited superior performance when fusing features in the offline test (ACC=0.8783 |
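The general logic of fusing eye-tracking and ERP features for single-trial target classification can be sketched as follows. This is a minimal illustration on hypothetical data using plain scikit-learn shrinkage LDA; it stands in for, but does not reproduce, the RLDA/SWLDA/BLDA/SKLDA/STDA variants or the recorded dataset described above.

```python
# Minimal sketch of fusing ERP and eye-tracking (fixation-duration) features
# for single-trial target/non-target classification; hypothetical data only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 400
erp_features = rng.normal(size=(n_trials, 32 * 10))    # channels x time samples, flattened
fix_duration = rng.normal(size=(n_trials, 1))          # fixation duration per trial
labels = rng.integers(0, 2, size=n_trials)             # 1 = target icon fixated

fused = np.hstack([erp_features, fix_duration])        # simple feature-level fusion
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # shrinkage LDA
acc = cross_val_score(clf, fused, labels, cv=5, scoring="balanced_accuracy")
print(f"Balanced accuracy: {acc.mean():.3f}")
```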
M. N. Hebart; O. Contier; L. Teichmann; A. H. Rockter; C. Y. Zheng; A. Kidder; A. Corriveau; M. Vaziri-Pashkam; C. I. Baker THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior Journal Article In: eLife, vol. 12, pp. 1–37, 2023. @article{Hebart2023, Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely-sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience. |
Lena Henke; Ashley G. Lewis; Lars Meyer Fast and slow rhythms of naturalistic reading revealed by combined eye-tracking and electroencephalography Journal Article In: Journal of Neuroscience, vol. 43, no. 24, pp. 4461–4469, 2023. @article{Henke2023, Neural oscillations are thought to support speech and language processing. They may not only inherit acoustic rhythms, but might also impose endogenous rhythms onto processing. In support of this, we here report that human (both male and female) eye movements during naturalistic reading exhibit rhythmic patterns that show frequency-selective coherence with the EEG, in the absence of any stimulation rhythm. Periodicity was observed in two distinct frequency bands: First, word-locked saccades at 4-5 Hz display coherence with whole-head theta-band activity. Second, fixation durations fluctuate rhythmically at ~1 Hz, in coherence with occipital delta-band activity. This latter effect was additionally phase-locked to sentence endings, suggesting a relationship with the formation of multi-word chunks. Together, eye movements during reading contain rhythmic patterns that occur in synchrony with oscillatory brain activity. This suggests that linguistic processing imposes preferred processing time scales onto reading, largely independent of actual physical rhythms in the stimulus. |
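Frequency-selective coherence between an eye-movement time series and the EEG, of the kind reported above, can be estimated with standard spectral tools. The sketch below uses synthetic signals sharing a ~4.5 Hz component (not the study's data) and scipy's Welch-based coherence estimate; all parameters are illustrative assumptions.

```python
# Minimal sketch of frequency-selective coherence between an eye-movement
# time series and an EEG channel, using scipy; hypothetical signals only.
import numpy as np
from scipy.signal import coherence

fs = 500.0                                  # sampling rate in Hz
t = np.arange(0, 120, 1 / fs)               # 2 minutes of data
rng = np.random.default_rng(2)

# Saccade-rate signal with a ~4.5 Hz rhythm plus noise
eye = np.sin(2 * np.pi * 4.5 * t) + rng.normal(scale=1.0, size=t.size)
# EEG with a coherent theta component plus noise
eeg = 0.5 * np.sin(2 * np.pi * 4.5 * t + 0.3) + rng.normal(scale=1.0, size=t.size)

f, cxy = coherence(eye, eeg, fs=fs, nperseg=int(4 * fs))   # 4-s windows
theta = (f >= 4) & (f <= 5)
print(f"Mean eye-EEG coherence 4-5 Hz: {cxy[theta].mean():.3f}")
```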
Nora Hollenstein; Marius Tröndle; Martyna Plomecka; Samuel Kiegeland; Yilmazcan Özyurt; Lena A. Jäger; Nicolas Langer The ZuCo benchmark on cross-subject reading task classification with EEG and eye-tracking data Journal Article In: Frontiers in Psychology, vol. 13, pp. 1–20, 2023. @article{Hollenstein2023, We present a new machine learning benchmark for reading task classification with the goal of advancing EEG and eye-tracking research at the intersection between computational language processing and cognitive neuroscience. The benchmark task consists of a cross-subject classification to distinguish between two reading paradigms: normal reading and task-specific reading. The data for the benchmark is based on the Zurich Cognitive Language Processing Corpus (ZuCo 2.0), which provides simultaneous eye-tracking and EEG signals from natural reading of English sentences. The training dataset is publicly available, and we present a newly recorded hidden test set. We provide multiple solid baseline methods for this task and discuss future improvements. We release our code and provide an easy-to-use interface to evaluate new approaches with an accompanying public leaderboard: www.zuco-benchmark.com. |
Jie Hu; Arkady Konovalov; Christian C. Ruff A unified neural account of contextual and individual differences in altruism Journal Article In: eLife, vol. 12, pp. 1–36, 2023. @article{Hu2023, Altruism is critical for cooperation and productivity in human societies but is known to vary strongly across contexts and individuals. The origin of these differences is largely unknown, but may in principle reflect variations in different neurocognitive processes that temporally unfold during altruistic decision making (ranging from initial perceptual processing via value computations to final integrative choice mechanisms). Here, we elucidate the neural origins of individual and contextual differences in altruism by examining altruistic choices in different inequality contexts with computational modeling and electroencephalography (EEG). Our results show that across all contexts and individuals, wealth distribution choices recruit a similar late decision process evident in model-predicted evidence accumulation signals over parietal regions. Contextual and individual differences in behavior related instead to initial processing of stimulus-locked inequality-related value information in centroparietal and centrofrontal sensors, as well as to gamma-band synchronization of these value-related signals with parietal response-locked evidence-accumulation signals. Our findings suggest separable biological bases for individual and contextual differences in altruism that relate to differences in the initial processing of choice-relevant information. |
Jinfeng Huang; Gaoyan Zhang; Jianwu Dang; Yu Chen; Shoko Miyamoto Semantic processing during continuous speech production: An analysis from eye movements and EEG Journal Article In: Frontiers in Human Neuroscience, vol. 17, pp. 1–13, 2023. @article{Huang2023c, Introduction: Speech production involves neurological planning and articulatory execution. How speakers prepare for articulation is a significant aspect of speech production research. Previous studies have focused on isolated words or short phrases to explore speech planning mechanisms linked to articulatory behaviors, including investigating the eye-voice span (EVS) during text reading. However, these experimental paradigms lack real-world speech process replication. Additionally, our understanding of the neurological dimension of speech planning remains limited. Methods: This study examines speech planning mechanisms during continuous speech production by analyzing behavioral (eye movement and speech) and neurophysiological (EEG) data within a continuous speech production task. The study specifically investigates the influence of semantic consistency on speech planning and the occurrence of “look ahead” behavior. Results: The outcomes reveal the pivotal role of semantic coherence in facilitating fluent speech production. Speakers access lexical representations and phonological information before initiating speech, emphasizing the significance of semantic processing in speech planning. Behaviorally, the EVS decreases progressively during continuous reading of regular sentences, with a slight increase for non-regular sentences. Moreover, eye movement pattern analysis identifies two distinct speech production modes, highlighting the importance of semantic comprehension and prediction in higher-level lexical processing. Neurologically, the dual pathway model of speech production is supported, indicating a dorsal information flow and frontal lobe involvement. The brain network linked to semantic understanding exhibits a negative correlation with semantic coherence, with significant activation during semantic incoherence and suppression in regular sentences. Discussion: The study's findings enhance comprehension of speech planning mechanisms and offer insights into the role of semantic coherence in continuous speech production. Furthermore, the research methodology establishes a valuable framework for future investigations in this domain. |
Christoph Huber-Huber; David Melcher Saccade execution increases the preview effect with faces: An EEG and eye-tracking coregistration study Journal Article In: Attention, Perception, & Psychophysics, pp. 1–17, 2023. @article{HuberHuber2023, Under naturalistic viewing conditions, humans conduct about three to four saccadic eye movements per second. These dynamics imply that in real life, humans rarely see something completely new; there is usually a preview of the upcoming foveal input from extrafoveal regions of the visual field. In line with results from the field of reading research, we have shown with EEG and eye-tracking coregistration that an extrafoveal preview also affects postsaccadic visual object processing and facilitates discrimination. Here, we ask whether this preview effect in the fixation-locked N170, and in manual responses to the postsaccadic target face (tilt discrimination), requires saccade execution. Participants performed a gaze-contingent experiment in which extrafoveal face images could change their orientation during a saccade directed to them. In a control block, participants maintained stable gaze throughout the experiment and the extrafoveal face reappeared foveally after a simulated saccade latency. Compared with this no-saccade condition, the neural and the behavioral preview effects were much larger in the saccade condition. We also found shorter first fixation durations after an invalid preview, which is in contrast to reading studies. We interpret the increased preview effect under saccade execution as the result of the additional sensorimotor processes that come with gaze behavior compared with visual perception under stable fixation. In addition, our findings call into question whether EEG studies with fixed gaze capture key properties and dynamics of active, natural vision. |
Roxane J. Itier; Amie J. Durston In: Scientific Reports, vol. 13, no. 1, pp. 1–15, 2023. @article{Itier2023, Decoding others' facial expressions is critical for social functioning. To clarify the neural correlates of expression perception depending on where we look on the face, three combined gaze-contingent ERP experiments were analyzed using robust mass-univariate statistics. Regardless of task, fixation location impacted face processing from 50 to 350 ms, maximally around 120 ms, reflecting retinotopic mapping around C2 and P1 components. Fixation location also impacted majorly the N170-P2 interval while weak effects were seen at the face-sensitive N170 peak. Results question the widespread assumption that faces are processed holistically into an indecomposable perceptual whole around the N170. Rather, face processing is a complex and view-dependent process that continues well beyond the N170. Expression and fixation location interacted weakly during the P1-N170 interval, supporting a role for the mouth and left eye in fearful and happy expression decoding. Expression effects were weakest at the N170 peak but strongest around P2, especially for fear, reflecting task-independent affective processing. Results suggest N170 reflects a transition between processes rather than the maximum of a holistic face processing stage. Focus on this peak should be replaced by data-driven analyses of the epoch using robust statistics to fully unravel the early visual processing of faces and their affective content. |
Fumiaki Iwane; Debadatta Dash; Roberto F. Salamanca-Giron; William Hayward; Marlene Bönstrup; Ethan R. Buch; Leonardo G. Cohen Combined low-frequency brain oscillatory activity and behavior predict future errors in human motor skill Journal Article In: Current Biology, vol. 33, no. 15, pp. 3145–3154, 2023. @article{Iwane2023, Human skills are composed of sequences of individual actions performed with utmost precision. When occasional errors occur, they may have serious consequences, for example, when pilots are manually landing a plane. In such cases, the ability to predict an error before it occurs would clearly be advantageous. Here, we asked whether it is possible to predict future errors in a keyboard procedural human motor skill. We report that prolonged keypress transition times (KTTs), reflecting slower speed, and anomalous delta-band oscillatory activity in cingulate-entorhinal-precuneus brain regions precede upcoming errors in skill. Combined anomalous low-frequency activity and prolonged KTTs predicted up to 70% of future errors. Decoding strength (posterior probability of error) increased progressively approaching the errors. We conclude that it is possible to predict future individual errors in skill sequential performance. |
Woojae Jeong; Seolmin Kim; JeongJun Park; Joonyeol Lee In: Communications Biology, vol. 6, no. 1, pp. 1–13, 2023. @article{Jeong2023, Humans integrate multiple sources of information for action-taking, using the reliability of each source to allocate weight to the data. This reliability-weighted information integration is a crucial property of Bayesian inference. In this study, participants were asked to perform a smooth pursuit eye movement task in which we independently manipulated the reliability of pursuit target motion and the direction-of-motion cue. Through an analysis of pursuit initiation and multivariate electroencephalography activity, we found neural and behavioral evidence of Bayesian information integration: more attraction toward the cue direction was generated when the target motion was weak and unreliable. Furthermore, using mathematical modeling, we found that the neural signature of Bayesian information integration had extra-retinal origins, although most of the multivariate electroencephalography activity patterns during pursuit were best correlated with the retinal velocity errors accumulated over time. Our results demonstrated neural implementation of Bayesian inference in human oculomotor behavior. |
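The reliability-weighted integration referred to above follows the standard Bayesian rule for combining two independent cues: each source is weighted by its precision (inverse variance), so an unreliable target motion pulls the estimate toward the reliable cue. A minimal numerical sketch with illustrative numbers, not values fitted to the study's data:

```python
# Minimal sketch of reliability-weighted (precision-weighted) cue integration,
# the Bayesian scheme referred to above; illustrative numbers only.
import numpy as np

# Direction estimates (deg) and their variances
target_motion_dir, target_motion_var = 10.0, 25.0   # weak, unreliable target motion
cue_dir, cue_var = 0.0, 4.0                          # reliable direction cue

w_motion = (1 / target_motion_var) / (1 / target_motion_var + 1 / cue_var)
w_cue = 1 - w_motion
integrated_dir = w_motion * target_motion_dir + w_cue * cue_dir
integrated_var = 1 / (1 / target_motion_var + 1 / cue_var)

print(f"Integrated direction: {integrated_dir:.2f} deg "
      f"(weights: motion {w_motion:.2f}, cue {w_cue:.2f}; variance {integrated_var:.2f})")
```

With these numbers the integrated estimate lands close to the cue direction, mirroring the behavioral attraction toward the cue when target motion is weak and unreliable.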
Philippa Anne Johnson; Tessel Blom; Simon Gaal; Daniel Feuerriegel; Stefan Bode; Hinze Hogendoorn Position representations of moving objects align with real-time position in the early visual response Journal Article In: eLife, vol. 12, pp. 1–21, 2023. @article{Johnson2023, When interacting with the dynamic world, the brain receives outdated sensory information, due to the time required for neural transmission and processing. In motion perception, the brain may overcome these fundamental delays through predictively encoding the position of moving objects using information from their past trajectories. In the present study, we evaluated this proposition using multivariate analysis of high temporal resolution electroencephalographic data. We tracked neural position representations of moving objects at different stages of visual processing, relative to the real-time position of the object. During early stimulus-evoked activity, position representations of moving objects were activated substantially earlier than the equivalent activity evoked by unpredictable flashes, aligning the earliest representations of moving stimuli with their real-time positions. These findings indicate that the predictability of straight trajectories enables full compensation for the neural delays accumulated early in stimulus processing, but that delays still accumulate across later stages of cortical processing. |
Michele Bevilacqua; Krystel R. Huxlin; Friedhelm C. Hummel; Estelle Raffin Pathway and directional specificity of Hebbian plasticity in the cortical visual motion processing network Journal Article In: iScience, vol. 26, no. 7, pp. 1–18, 2023. @article{Bevilacqua2023, Cortico-cortical paired associative stimulation (ccPAS), which repeatedly pairs single-pulse transcranial magnetic stimulation (TMS) over two distant brain regions, is thought to modulate synaptic plasticity. We explored its spatial selectivity (pathway and direction specificity) and its nature (oscillatory signature and perceptual consequences) when applied along the ascending (Forward) and descending (Backward) motion discrimination pathway. We found unspecific connectivity increases in bottom-up inputs in the low gamma band, probably reflecting visual task exposure. A clear distinction in information transfer occurred in the re-entrant alpha signals, which were only modulated by Backward-ccPAS, and predictive of visual improvements in healthy participants. These results suggest a causal involvement of the re-entrant MT-to-V1 low-frequency inputs in motion discrimination and integration in healthy participants. Modulating re-entrant input activity could provide single-subject prediction scenarios for visual recovery. Visual recovery might indeed partly rely on these residual inputs projecting to spared V1 neurons. |
Antonio Fernández; Nina M. Hanning; Marisa Carrasco Transcranial magnetic stimulation to frontal but not occipital cortex disrupts endogenous attention Journal Article In: Proceedings of the National Academy of Sciences, vol. 120, no. 10, pp. 1–10, 2023. @article{Fernandeza2023, Covert endogenous (voluntary) attention improves visual performance. Human neuroimaging studies suggest that the putative human homolog of macaque frontal eye fields (FEF+) is critical for this improvement, whereas early visual areas are not. Yet, correlational MRI methods do not manipulate brain function. We investigated whether rFEF+ or V1/V2 plays a causal role in endogenous attention. We used transcranial magnetic stimulation (TMS) to alter activity in the visual cortex or rFEF+ when observers performed an orientation discrimination task while attention was manipulated. On every trial, they received double-pulse TMS at a predetermined site (stimulated region) around V1/V2 or rFEF+. Two cortically magnified gratings were presented, one in the stimulated region (contralateral to the stimulated area) and another in the symmetric (ipsilateral) nonstimulated region. Grating contrast was varied to measure contrast response functions (CRFs) for all attention and stimulation combinations. In experiment 1, the CRFs were similar at the stimulated and nonstimulated regions, indicating that early visual areas do not modulate endogenous attention during stimulus presentation. In contrast, occipital TMS eliminates exogenous (involuntary) attention effects on performance [A. Fernández, M. Carrasco, Curr. Biol. 30, 4078–4084 (2020)]. In experiment 2, rFEF+ stimulation decreased the overall attentional effect; neither benefits at the attended location nor costs at the unattended location were significant. The frequency and directionality of microsaccades mimicked this pattern: Whereas occipital stimulation did not affect microsaccades, rFEF+ stimulation caused a higher microsaccade rate directed toward the stimulated hemifield. These results provide causal evidence of the role of this frontal region for endogenous attention. |
Nina M. Hanning; Antonio Fernández; Marisa Carrasco Dissociable roles of human frontal eye fields and early visual cortex in presaccadic attention Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–11, 2023. @article{Hanning2023, Shortly before saccadic eye movements, visual sensitivity at the saccade target is enhanced, at the expense of sensitivity elsewhere. Some behavioral and neural correlates of this presaccadic shift of attention resemble those of covert attention, deployed during fixation. Microstimulation in non-human primates has shown that presaccadic attention modulates perception via feedback from oculomotor to visual areas. This mechanism also seems plausible in humans, as both oculomotor and visual areas are active during saccade planning. We investigated this hypothesis by applying TMS to frontal or visual areas during saccade preparation. By simultaneously measuring perceptual performance, we show their causal and differential roles in contralateral presaccadic attention effects: Whereas rFEF+ stimulation enhanced sensitivity opposite the saccade target throughout saccade preparation, V1/V2 stimulation reduced sensitivity at the saccade target only shortly before saccade onset. These findings are consistent with presaccadic attention modulating perception through cortico-cortical feedback and further dissociate presaccadic and covert attention. |
Zahra Azizi; Reza Ebrahimpour Explaining integration of evidence separated by temporal gaps with frontoparietal circuit models Journal Article In: Neuroscience, vol. 509, pp. 74–95, 2023. @article{Azizi2023, Perceptual decisions rely on accumulating sensory evidence over time. However, the accumulation process is complicated in real life when evidence arrives from cues separated in time. Previous studies demonstrate that participants are able to integrate information from two separated cues to improve their performance regardless of the interval between the cues. However, there is no neural model that can account for accuracy and confidence in decisions when there is a time interval in the evidence. We used behavioral and EEG datasets from a visual choice task (random-dot motion) with temporally separated evidence to investigate three candidate distributed neural networks. We showed that decisions based on evidence accumulated from cues separated in time are best explained by the interplay of recurrent cortical dynamics of centro-parietal and frontal brain areas, with an uncertainty-monitoring module included in the model. |
Stefanie I. Becker; Zachary Hamblin-Frohman; Hongfeng Xia; Zeguo Qiu Tuning to non-veridical features in attention and perceptual decision-making: An EEG study Journal Article In: Neuropsychologia, vol. 188, pp. 1–10, 2023. @article{Becker2023b, When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than a target-coloured distractor. However, in the response accuracies, a slightly shifted, “optimal” distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features. |
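For reference, the N2pc used above is computed as the contralateral-minus-ipsilateral difference wave at posterior electrode pairs, relative to the side of the item of interest. A minimal sketch on hypothetical epoched data; the electrode names, sampling rate, and time window are illustrative assumptions, not the authors' analysis settings.

```python
# Minimal sketch of computing an N2pc as the contralateral-minus-ipsilateral
# difference at posterior electrodes (e.g., PO7/PO8); hypothetical epoched data.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_times = 200, 300                # e.g., 300 samples = 600 ms at 500 Hz
po7 = rng.normal(size=(n_trials, n_times))  # left posterior electrode
po8 = rng.normal(size=(n_trials, n_times))  # right posterior electrode
target_side = rng.choice(["left", "right"], size=n_trials)

contra = np.where(target_side[:, None] == "left", po8, po7)   # electrode opposite the item
ipsi = np.where(target_side[:, None] == "left", po7, po8)     # electrode on the same side
n2pc = (contra - ipsi).mean(axis=0)                           # trial-averaged difference wave

# Mean amplitude in a typical N2pc window (200-300 ms post-stimulus,
# assuming a 100 ms baseline at 500 Hz)
window = slice(150, 200)
print(f"Mean N2pc amplitude in window: {n2pc[window].mean():.3f} (a.u.)")
```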
Andrey R. Nikolaev; Benedikt V. Ehinger; Radha Nila Meghanathan; Cees Leeuwen Planning to revisit: Neural activity in refixation precursors Journal Article In: Journal of Vision, vol. 23, no. 7, pp. 1–19, 2023. @article{Nikolaev2023, Eye tracking studies suggest that refixations—fixations to locations previously visited—serve to recover information lost or missed during earlier exploration of a visual scene. These studies have largely ignored the role of precursor fixations—previous fixations on locations the eyes return to later. We consider the possibility that preparations to return later are already made during precursor fixations. This process would mark precursor fixations as a special category of fixations, that is, distinct in neural activity from other fixation categories such as refixations and fixations to locations visited only once. To capture the neural signals associated with fixation categories, we analyzed electroencephalograms (EEGs) and eye movements recorded simultaneously in a free-viewing contour search task. We developed a methodological pipeline involving regression-based deconvolution modeling, allowing our analyses to account for overlapping EEG responses owing to the saccade sequence and other oculomotor covariates. We found that precursor fixations were preceded by the largest saccades among the fixation categories. Independent of the effect of saccade length, EEG amplitude was enhanced in precursor fixations compared with the other fixation categories 200 to 400 ms after fixation onsets, most noticeably over the occipital areas. We concluded that precursor fixations play a pivotal role in visual perception, marking the continuous occurrence of transitions between exploratory and exploitative modes of eye movement in natural viewing behavior. |
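Regression-based deconvolution of overlapping fixation-related EEG responses, as used above, can be sketched with a time-expanded design matrix solved by least squares: each fixation category gets its own set of lagged predictors, and all are estimated jointly. The data, fixation categories, and lag window below are hypothetical placeholders; toolboxes such as unfold implement this in full, including spline-coded oculomotor covariates.

```python
# Minimal sketch of regression-based deconvolution for overlapping
# fixation-related EEG responses; hypothetical continuous data.
import numpy as np

fs = 100                                    # Hz
n_samples = 60 * fs                         # one minute of continuous EEG
n_lags = 50                                 # model 0-500 ms after fixation onset
rng = np.random.default_rng(4)

eeg = rng.normal(size=n_samples)            # one continuous EEG channel
onsets = {"precursor": rng.choice(n_samples - n_lags, 80, replace=False),
          "refixation": rng.choice(n_samples - n_lags, 80, replace=False)}

# Build the time-expanded design matrix: one column per (category, lag) pair
X = np.zeros((n_samples, 2 * n_lags))
for c, (cat, ons) in enumerate(onsets.items()):
    for lag in range(n_lags):
        X[ons + lag, c * n_lags + lag] = 1.0

# Joint least-squares solution disentangles the overlapping responses
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)
precursor_erp = betas[:n_lags]              # deconvolved response, precursor fixations
refixation_erp = betas[n_lags:]             # deconvolved response, refixations
print(precursor_erp.shape, refixation_erp.shape)
```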
M. P. Noonan; A. H. Von Lautz; Y. Bauer; C. Summerfield; M. S. Stokes Differential modulation of visual responses by distractor or target expectations Journal Article In: Attention, Perception, & Psychophysics, vol. 85, no. 3, pp. 845–862, 2023. @article{Noonan2023, Discriminating relevant from irrelevant information in a busy visual scene is supported by statistical regularities in the environment. However, it is unclear to what extent immediate stimulus repetitions and higher order expectations (whether a repetition is statistically probable or not) are supported by the same neural mechanisms. Moreover, it is also unclear whether target and distractor-related processing are mediated by the same or different underlying neural mechanisms. Using a speeded target discrimination task, the present study implicitly cued subjects to the location of the target or the distractor via manipulations in the underlying stimulus predictability. In separate studies, we collected EEG and MEG alongside behavioural data. Results showed that reaction times were reduced with increased expectations for both types of stimuli and that these effects were driven by expected repetitions in both cases. Despite the similar behavioural pattern across target and distractors, neurophysiological measures distinguished the two stimuli. Specifically, the amplitude of the P1 was modulated by stimulus relevance, being reduced for repeated distractors and increased for repeated targets. The P1 was not, however, modulated by higher order stimulus expectations. These expectations were instead reflected in modulations in ERP amplitude and theta power in frontocentral electrodes. Finally, we observed that a single repetition of a distractor was sufficient to reduce decodability of stimulus spatial location and was also accompanied by diminished representation of stimulus features. Our results highlight the unique mechanisms involved in distractor expectation and suppression and underline the importance of studying these processes distinctly from target-related attentional control. |
Stijn A. Nuiten; Jan Willem Gee; Jasper B. Zantvoord; Johannes J. Fahrenfort; Simon Gaal Catecholaminergic neuromodulation and selective attention jointly shape perceptual decision-making Journal Article In: eLife, vol. 12, pp. 1–26, 2023. @article{Nuiten2023, Perceptual decisions about sensory input are influenced by fluctuations in ongoing neural activity, most prominently driven by attention and neuromodulator systems. It is currently unknown if neuromodulator activity and attention differentially modulate perceptual decision-making and/or whether neuromodulatory systems in fact control attentional processes. To investigate the effects of two distinct neuromodulatory systems and spatial attention on perceptual decisions, we pharmacologically elevated cholinergic (through donepezil) and catecholaminergic (through atomoxetine) levels in humans performing a visuo-spatial attention task, while we measured electroencephalography (EEG). Both attention and catecholaminergic enhancement improved decision-making at the behavioral and algorithmic level, as reflected in increased perceptual sensitivity and the modulation of the drift rate parameter derived from drift diffusion modeling. Univariate analyses of EEG data time-locked to the attentional cue, the target stimulus, and the motor response further revealed that attention and catecholaminergic enhancement both modulated pre-stimulus cortical excitability, cue- and stimulus-evoked sensory activity, as well as parietal evidence accumulation signals. Interestingly, we observed both similar, unique, and interactive effects of attention and catecholaminergic neuromodulation on these behavioral, algorithmic, and neural markers of the decision-making process. Thereby, this study reveals an intricate relationship between attentional and catecholaminergic systems and advances our understanding about how these systems jointly shape various stages of perceptual decision-making. |
Yali Pan; Tzvetan Popov; Steven Frisson; Ole Jensen Saccades are locked to the phase of alpha oscillations during natural reading Journal Article In: PLoS Biology, vol. 21, no. 1, pp. 1–19, 2023. @article{Pan2023b, We saccade 3 to 5 times per second when reading. However, little is known about the neuronal mechanisms coordinating the oculomotor and visual system during such rapid processing. Here, we ask if brain oscillations play a role in the temporal coordination of the visuomotor integration. We simultaneously acquired MEG and eye-tracking data while participants read sentences silently. Every sentence was embedded with a target word of either high or low lexical frequency. Our key finding demonstrated that saccade onsets were locked to the phase of alpha oscillations (8 to 13 Hz), and in particular, for saccades towards low frequency words. Source modelling demonstrated that the alpha oscillations to which the saccades were locked were generated in the right visual motor cortex (BA 7). Our findings suggest that the alpha oscillations serve to time the processing between the oculomotor and visual systems during natural reading, and that this coordination becomes more pronounced for demanding words. |
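Testing whether saccade onsets are locked to alpha phase, as in the study above, typically involves extracting the instantaneous alpha phase at each saccade onset and measuring how concentrated those phases are. A minimal sketch with synthetic data; the filter settings and the resultant-vector-length measure are generic choices, not the authors' pipeline.

```python
# Minimal sketch of saccade-to-alpha phase locking: band-pass the signal,
# take the Hilbert phase at each saccade onset, and compute the resultant
# vector length. Hypothetical data only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
meg = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=t.size)  # 10 Hz + noise

# Alpha band-pass (8-13 Hz) and instantaneous phase
b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="bandpass")
alpha_phase = np.angle(hilbert(filtfilt(b, a, meg)))

saccade_onsets = rng.integers(0, t.size, size=200)     # sample indices of saccades
phases = alpha_phase[saccade_onsets]

# Phase-locking value (resultant vector length): 0 = uniform, 1 = perfect locking
plv = np.abs(np.mean(np.exp(1j * phases)))
print(f"Saccade-alpha phase locking: {plv:.3f}")
```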
Nadia Paraskevoudi; Iria SanMiguel Sensory suppression and increased neuromodulation during actions disrupt memory encoding of unpredictable self-initiated stimuli Journal Article In: Psychophysiology, vol. 60, no. 1, pp. 1–25, 2023. @article{Paraskevoudi2023, Actions modulate sensory processing by attenuating responses to self-generated compared to externally generated inputs, which is traditionally attributed to stimulus-specific motor predictions. Yet, suppression has also been found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing for self-generated stimuli raises the possibility of producing effects also on memory for these stimuli; however, evidence remains mixed as to the direction of the effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant, but unpredictable sounds, using a combination of self-generation and memory recognition tasks concurrently with EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants had to indicate which one had been presented before. We measured memory bias and memory performance by having sequences where either both or only one of the test sounds were presented at encoding, respectively. Results showed worse memory performance (but no differences in memory bias), attenuated responses, and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable sounds. |
Srividya Pattisapu; Supratim Ray Stimulus-induced narrow-band gamma oscillations in humans can be recorded using open-hardware low-cost EEG amplifier Journal Article In: PLoS ONE, vol. 18, pp. 1–19, 2023. @article{Pattisapu2023, Stimulus-induced narrow-band gamma oscillations (30–70 Hz) in human electro-encephalograph (EEG) have been linked to attentional and memory mechanisms and are abnormal in mental health conditions such as autism, schizophrenia and Alzheimer's Disease. However, since the absolute power in EEG decreases rapidly with increasing frequency following a "1/f" power law, and the gamma band includes the line noise frequency, these oscillations are highly susceptible to instrument noise. Previous studies that recorded stimulus-induced gamma oscillations used expensive research-grade EEG amplifiers to address this issue. While low-cost EEG amplifiers have become popular in Brain Computer Interface applications that mainly rely on low-frequency oscillations (< 30 Hz) or steady-state visually evoked potentials, whether they can also be used to measure stimulus-induced gamma oscillations is unknown. We recorded EEG signals using a low-cost, open-source amplifier (OpenBCI) and a traditional, research-grade amplifier (Brain Products GmbH), both connected to the OpenBCI cap, in male (N = 6) and female (N = 5) subjects (22–29 years) while they viewed full-screen static gratings that are known to induce two distinct gamma oscillations: slow and fast gamma, in a subset of subjects. While the EEG signals from OpenBCI were considerably noisier, we found that out of the seven subjects who showed a gamma response in Brain Products recordings, six showed a gamma response in OpenBCI as well. In spite of the noise in the OpenBCI setup, the spectral and temporal profiles of these responses in alpha (8–13 Hz) and gamma bands were highly correlated between OpenBCI and Brain Products recordings. These results suggest that low-cost amplifiers can potentially be used in stimulus-induced gamma response detection. |
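Stimulus-induced gamma of the kind discussed above is usually quantified as the change in band power from a pre-stimulus baseline to the stimulus period, which sidesteps the steep 1/f falloff by comparing like with like at each frequency. A minimal sketch on synthetic single-channel segments; the frequencies, durations, and Welch settings are illustrative assumptions, not the OpenBCI/Brain Products comparison itself.

```python
# Minimal sketch of quantifying stimulus-induced gamma power as the change in
# the power spectrum from baseline to stimulus period; hypothetical segments.
import numpy as np
from scipy.signal import welch

fs = 250.0
rng = np.random.default_rng(6)
baseline = rng.normal(size=int(2 * fs))                         # 2-s pre-stimulus EEG
t = np.arange(0, 2, 1 / fs)
stimulus = rng.normal(size=t.size) + 0.4 * np.sin(2 * np.pi * 45 * t)  # add 45 Hz gamma

f_b, p_b = welch(baseline, fs=fs, nperseg=int(fs))
f_s, p_s = welch(stimulus, fs=fs, nperseg=int(fs))

gamma = (f_b >= 30) & (f_b <= 70)
change_db = 10 * np.log10(p_s[gamma].mean() / p_b[gamma].mean())
print(f"Gamma (30-70 Hz) power change: {change_db:.2f} dB")
```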
Iván Plaza-Rosales; Enzo Brunetti; Rodrigo Montefusco-Siegmund; Samuel Madariaga; Rodrigo Hafelin; Daniela P. Ponce; María Isabel Behrens; Pedro E. Maldonado; Andrea Paula-Lima Visual-spatial processing impairment in the occipital-frontal connectivity network at early stages of Alzheimer's disease Journal Article In: Frontiers in Aging Neuroscience, vol. 15, pp. 1–14, 2023. @article{PlazaRosales2023, Introduction: Alzheimer's disease (AD) is the leading cause of dementia worldwide, but its pathophysiological phenomena are not fully elucidated. Many neurophysiological markers have been suggested to identify early cognitive impairments of AD. However, the diagnosis of this disease remains a challenge for specialists. In the present cross-sectional study, our objective was to evaluate the manifestations and mechanisms underlying visual-spatial deficits at the early stages of AD. Methods: We combined behavioral, electroencephalography (EEG), and eye movement recordings during the performance of a spatial navigation task (a virtual version of the Morris Water Maze adapted to humans). Participants (69–88 years old) with amnesic mild cognitive impairment–Clinical Dementia Rating scale (aMCI–CDR 0.5) were selected as probable early AD (eAD) by a neurologist specialized in dementia. All patients included in this study were evaluated at the CDR 0.5 stage but progressed to probable AD during clinical follow-up. An equal number of matching healthy controls (HCs) were evaluated while performing the navigation task. Data were collected at the Department of Neurology of the Clinical Hospital of the Universidad de Chile and the Department of Neuroscience of the Faculty of Universidad de Chile. Results: Participants with aMCI preceding AD (eAD) showed impaired spatial learning and their visual exploration differed from the control group. eAD group did not clearly prefer regions of interest that could guide solving the task, while controls did. The eAD group showed decreased visual occipital evoked potentials associated with eye fixations, recorded at occipital electrodes. They also showed an alteration of the spatial spread of activity to parietal and frontal regions at the end of the task. The control group presented marked occipital activity in the beta band (15–20 Hz) at early visual processing time. The eAD group showed a reduction in beta band functional connectivity in the prefrontal cortices reflecting poor planning of navigation strategies. Discussion: We found that EEG signals combined with visual-spatial navigation analysis, yielded early and specific features that may underlie the basis for understanding the loss of functional connectivity in AD. Still, our results are clinically promising for early diagnosis required to improve quality of life and decrease healthcare costs. |
Tzvetan Popov; Bart Gips; Nathan Weisz; Ole Jensen Brain areas associated with visual spatial attention display topographic organization during auditory spatial attention Journal Article In: Cerebral Cortex, vol. 33, no. 7, pp. 3478–3489, 2023. @article{Popov2023a, Spatially selective modulation of alpha power (8–14 Hz) is a robust finding in electrophysiological studies of visual attention, and has been recently generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible, when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention. |
Tzvetan Popov; Tobias Staudigl Cortico-ocular coupling in the service of episodic memory formation Journal Article In: Progress in Neurobiology, vol. 227, pp. 1–9, 2023. @article{Popov2023, Encoding of visual information is a necessary requirement for most types of episodic memories. In search for a neural signature of memory formation, amplitude modulation of neural activity has been repeatedly shown to correlate with and suggested to be functionally involved in successful memory encoding. We here report a complementary view on why and how brain activity relates to memory, indicating a functional role of cortico-ocular interactions for episodic memory formation. Recording simultaneous magnetoencephalography and eye tracking in 35 human participants, we demonstrate that gaze variability and amplitude modulations of alpha/beta oscillations (10–20 Hz) in visual cortex covary and predict subsequent memory performance between and within participants. Amplitude variation during pre-stimulus baseline was associated with gaze direction variability, echoing the co-variation observed during scene encoding. We conclude that encoding of visual information engages unison coupling between oculomotor and visual areas in the service of memory formation. |
Linze Qian; Xianliang Ge; Zhao Feng; Sujie Wang; Jingjia Yuan; Yunxian Pan; Hongqi Shi; Jie Xu; Yu Sun Brain network reorganization during visual search task revealed by a network analysis of fixation-related potential Journal Article In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 31, pp. 1219–1229, 2023. @article{Qian2023, Visual search is ubiquitous in daily life and has attracted substantial research interest over the past decades. Although accumulating evidence has suggested complex neurocognitive processes underlying visual search, the neural communication across the brain regions remains poorly understood. The present work aimed to fill this gap by investigating functional networks of fixation-related potential (FRP) during the visual search task. Multi-frequency electroencephalogram (EEG) networks were constructed from 70 university students (male/female = 35/35) using FRPs time-locked to target and non-target fixation onsets, which were determined by concurrent eye-tracking data. Then graph theoretical analysis (GTA) and a data-driven classification framework were employed to quantitatively reveal the divergent reorganization between target and non-target FRPs. We found distinct network architectures between target and non-target mainly in the delta and theta bands. More importantly, we achieved a classification accuracy of 92.74% for target and non-target discrimination using both global and nodal network features. In line with the results of GTA, we found that the integration corresponding to target and non-target FRPs significantly differed, while the nodal features contributing most to classification performance primarily resided in the occipital and parietal-temporal areas. Interestingly, we revealed that females exhibited significantly higher local efficiency in delta band when focusing on the search task. In summary, these results provide some of the first quantitative insights into the underlying brain interaction patterns during the visual search process. |
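The graph-theoretical step described above reduces a channel-by-channel connectivity matrix to network metrics such as global and local efficiency. A minimal sketch using a random placeholder connectivity matrix and networkx; the proportional-threshold choice is an illustrative assumption, not the study's FRP connectivity estimate.

```python
# Minimal sketch of the graph-theoretical step: threshold a channel-by-channel
# connectivity matrix and compute global and local efficiency with networkx.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n_channels = 32
conn = rng.uniform(size=(n_channels, n_channels))
conn = (conn + conn.T) / 2                      # symmetric connectivity matrix
np.fill_diagonal(conn, 0.0)

# Keep the strongest 20% of connections (proportional threshold)
threshold = np.quantile(conn[np.triu_indices(n_channels, k=1)], 0.80)
adjacency = (conn >= threshold).astype(int)

G = nx.from_numpy_array(adjacency)
print(f"Global efficiency: {nx.global_efficiency(G):.3f}")
print(f"Mean local efficiency: {nx.local_efficiency(G):.3f}")
```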
Nan Qin; Francesca Crespi; Alice Mado Proverbio; Gilles Pourtois A systematic exploration of attentional load effects on the C1 ERP component Journal Article In: Psychophysiology, pp. 1–30, 2023. @article{Qin2023, The C1 ERP component reflects the earliest visual processing in V1. However, it remains debated whether attentional load can influence it or not. We conducted two EEG experiments to investigate the effect of attentional load on the C1. Task difficulty was manipulated at fixation using an oddball detection task that was either easy (low load) or difficult (high load), while the distractor was presented in the upper visual field (UVF) to score the C1. In Experiment 1, we used a block design and the stimulus onset asynchrony (SOA) between the central stimulus and the peripheral distractor was either short or long. In Experiment 2, task difficulty was manipulated on a trial-by-trial basis using a visual cue, and the peripheral distractor was presented either before or after the central stimulus. The results showed that the C1 was larger in the high compared to the low load condition irrespective of SOA in Experiment 1. In Experiment 2, no significant load modulation of the C1 was observed. However, we found that the contingent negative variation (CNV) was larger in the low compared to the high load condition. Moreover, the C1 was larger when the peripheral distractor was presented after than before the central stimulus. Combined together, these results suggest that different top-down control processes can influence the initial feedforward stage of visual processing in V1 captured by the C1 ERP component. |
Zeguo Qiu; Stefanie I. Becker; Hongfeng Xia; Zachary Hamblin-Frohman; Alan J. Pegna Fixation-related electrical potentials during a free visual search task reveal the timing of visual awareness Journal Article In: iScience, vol. 26, no. 7, pp. 1–17, 2023. @article{Qiu2023, It has been repeatedly claimed that emotional faces readily capture attention, and that they may be processed without awareness. Yet some observations cast doubt on these assertions. Part of the problem may lie in the experimental paradigms employed. Here, we used a free viewing visual search task during electroencephalographic recordings, where participants searched for either fearful or neutral facial expressions among distractor expressions. Fixation-related potentials were computed for fearful and neutral targets and the response compared for stimuli consciously reported or not. We showed that awareness was associated with an electrophysiological negativity starting at around 110 ms, while emotional expressions were distinguished on the N170 and early posterior negativity only when stimuli were consciously reported. These results suggest that during unconstrained visual search, the earliest electrical correlate of awareness may emerge as early as 110 ms, and fixating at an emotional face without reporting it may not produce any unconscious processing. |
Zeguo Qiu; Dihua Wu; Benjamin J. Muehlebach Differential modulation on neural activity related to flankers during face processing: A visual crowding study Journal Article In: Neuroscience Letters, vol. 815, no. September, pp. 137496, 2023. @article{Qiu2023a, In this visual crowding study, we manipulated the perceivability of a central crowded face (a fearful or a neutral face) by varying the similarity between the central face and the surrounding flanker stimuli. We presented participants with pairs of visual clutters and recorded their electroencephalography during an emotion judgement task. In an upright flanker condition where both the central target face and flanker faces were upright faces, participants were less likely to report seeing the target face, and their P300 was weakened, compared to a scrambled flanker condition where scrambled face images were used as flankers. Additionally, at ∼ 120 ms post-stimulus, a posterior negativity was found for the upright compared to scrambled flanker condition, however only for fearful face targets. We concluded that early neural responses seem to be affected by the perceptual characteristics of both target and flanker stimuli whereas later-stage neural activity is associated with post-perceptual evaluation of the stimuli in this visual crowding paradigm. |
Maimu Alissa Rehbein; Thomas Kroker; Constantin Winker; Lena Ziehfreund; Anna Reschke; Jens Bölte; Miroslaw Wyczesany; Kati Roesmann; Ida Wessing; Markus Junghöfer Non-invasive stimulation reveals ventromedial prefrontal cortex function in reward prediction and reward processing Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–22, 2023. @article{Rehbein2023, Introduction: Studies suggest an involvement of the ventromedial prefrontal cortex (vmPFC) in reward prediction and processing, with reward-based learning relying on neural activity in response to unpredicted rewards or non-rewards (reward prediction error, RPE). Here, we investigated the causal role of the vmPFC in reward prediction, processing, and RPE signaling by transiently modulating vmPFC excitability using transcranial Direct Current Stimulation (tDCS). Methods: Participants received excitatory or inhibitory tDCS of the vmPFC before completing a gambling task, in which cues signaled varying reward probabilities and symbols provided feedback on monetary gain or loss. We collected self-reported and evaluative data on reward prediction and processing. In addition, cue-locked and feedback-locked neural activity via magnetoencephalography (MEG) and pupil diameter using eye-tracking were recorded. Results: Regarding reward prediction (cue-locked analysis), vmPFC excitation (versus inhibition) resulted in increased prefrontal activation preceding loss predictions, increased pupil dilations, and tentatively more optimistic reward predictions. Regarding reward processing (feedback-locked analysis), vmPFC excitation (versus inhibition) resulted in increased pleasantness, increased vmPFC activation, especially for unpredicted gains (i.e., gain RPEs), decreased perseveration in choice behavior after negative feedback, and increased pupil dilations. Discussion: Our results support the pivotal role of the vmPFC in reward prediction and processing. Furthermore, they suggest that transient vmPFC excitation via tDCS induces a positive bias into the reward system that leads to enhanced anticipation and appraisal of positive outcomes and improves reward-based learning, as indicated by greater behavioral flexibility after losses and unpredicted outcomes, which can be seen as an improved reaction to the received feedback. |
Florian Sandhaeger; Nina Omejc; Anna Antonia Pape; Markus Siegel Abstract perceptual choice signals during action-linked decisions in the human brain Journal Article In: PLoS Biology, vol. 21, no. 10, pp. 1–27, 2023. @article{Sandhaeger2023, Humans can make abstract choices independent of motor actions. However, in laboratory tasks, choices are typically reported with an associated action. Consequentially, knowledge about the neural representation of abstract choices is sparse, and choices are often thought to evolve as motor intentions. Here, we show that in the human brain, perceptual choices are represented in an abstract, motor-independent manner, even when they are directly linked to an action. We measured MEG signals while participants made choices with known or unknown motor response mapping. Using multivariate decoding, we quantified stimulus, perceptual choice, and motor response information with distinct cortical distributions. Choice representations were invariant to whether the response mapping was known during stimulus presentation, and they occupied a distinct representational space from motor signals. As expected from an internal decision variable, they were informed by the stimuli, and their strength predicted decision confidence and accuracy. Our results demonstrate abstract neural choice signals that generalize to action-linked decisions, suggesting a general role of an abstract choice stage in human decision-making. |
Bruno Bianchi; Rodrigo Loredo; María Fonseca; Julia Carden; Virginia Jaichenco; Titus Malsburg; Diego E. Shalom; Juan Kamienkowski Neural bases of predictions during natural reading of known statements: An electroencephalography and eye movements co-registration study Journal Article In: Neuroscience, vol. 519, pp. 131–146, 2023. @article{Bianchi2023, Predictions of incoming words performed during reading have an impact on how the reader moves their eyes and on the electrical brain potentials. Eye tracking (ET) experiments show that less predictable words are fixated for longer periods of time. Electroencephalography (EEG) experiments show that these words elicit a more negative potential around 400 ms (N400) after the word onset when reading one word at a time (foveated reading). Nevertheless, there was no N400 potential during the foveated reading of previously known sentences (memory-encoded), which suggests that the prediction of words from memory-encoded sentences is based on different mechanisms than predictions performed on common sentences. Here, we performed an ET-EEG co-registration experiment where participants read common and memory-encoded sentences. Our results show that the N400 potential disappears when the reader recognises the sentence. Furthermore, time–frequency analyses show a larger alpha lateralisation and a beta power increase for memory-encoded sentences. This suggests more distributed attention and an active maintenance of the cognitive set, in concordance with the predictive coding framework. |
Laura Brockhoff; Elisa Adriana Elias; Maximilian Bruchmann; Sebastian Schindler; Robert Moeck; Thomas Straube The effects of visual perceptual load on detection performance and event-related potentials to auditory stimuli Journal Article In: NeuroImage, vol. 273, pp. 1–9, 2023. @article{Brockhoff2023, Load Theory states that perceptual load prevents, or at least reduces, the processing of task-unrelated stimuli. This study systematically examined the detection and neural processing of auditory stimuli unrelated to a visual foreground task. The visual task was designed to create continuous perceptual load, alternated between low and high load, and contained performance feedback to motivate participants to focus on the visual task instead of the auditory stimuli presented in the background. The auditory stimuli varied in intensity, and participants signaled their subjective perception of these stimuli without receiving feedback. Depending on stimulus intensity, we observed load effects on detection performance and P3 amplitudes of the event-related potential (ERP). N1 amplitudes were unaffected by perceptual load, as tested by Bayesian statistics. Findings suggest that visual perceptual load affects the processing of auditory stimuli in a late time window, which is associated with a lower probability of reported awareness of these stimuli. |
Tom Bullock; Kamryn Pickett; Anabel Salimian; Caitlin Gregory; Mary H. MacLean; Barry Giesbrecht Eye movements disrupt EEG alpha-band coding of behaviorally relevant and irrelevant spatial locations held in working memory Journal Article In: Journal of Neurophysiology, vol. 129, no. 5, pp. 1191–1211, 2023. @article{Bullock2023a, Oscillations in the alpha frequency band (∼8-12 Hz) of the human electroencephalogram play an important role in supporting selective attention to visual items and maintaining their spatial locations in working memory (WM). Recent findings suggest that spatial information maintained in alpha is modulated by interruptions to continuous visual input, such that attention shifts, eye closure, and backward masking of the encoded item cause reconstructed representations of remembered locations to become degraded. Here, we investigated how another common visual disruption, eye movements, modulates reconstructions of behaviorally relevant and irrelevant item locations held in WM. Participants completed a delayed estimation task, where they encoded and recalled either the location or color of an object after a brief retention period. During retention, participants either fixated at the center or executed a sequence of eye movements. Electroencephalography (EEG) was recorded at the scalp and eye position was monitored with an eye tracker. Inverted encoding modeling (IEM) was applied to reconstruct location-selective responses across multiple frequency bands during encoding and retention. Location-selective responses were successfully reconstructed from alpha activity during retention where participants fixated at the center, but these reconstructions were disrupted during eye movements. Recall performance decreased during eye-movement conditions but remained largely intact, and further analyses revealed that under specific task conditions, it was possible to reconstruct retained location information from lower frequency bands (1-4 Hz) during eye movements. These results suggest that eye movements disrupt maintained spatial information in alpha in a manner consistent with other acute interruptions to continuous visual input, but this information may be represented in other frequency bands. NEW & NOTEWORTHY: Neural oscillations in the alpha frequency band support selective attention to visual items and maintenance of their spatial locations in human working memory. Here, we investigate how eye movements disrupt representations of item locations held in working memory. Although it was not possible to recover item locations from alpha during eye movements, retained location information could be recovered from select lower frequency bands. This suggests that during eye movements, stored spatial information may be represented in other frequencies. |
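For readers unfamiliar with inverted encoding modeling, the sketch below illustrates its general logic on synthetic data: estimate weights mapping a basis set of location "channels" onto electrode-level alpha power, then invert those weights on held-out trials to reconstruct channel responses. Array sizes, the cosine basis, and the train/test split are illustrative assumptions, not the authors' pipeline.

```python
# Minimal IEM sketch on synthetic alpha-band power (not the authors' code or data)
import numpy as np

rng = np.random.default_rng(0)
n_elec, n_trials, n_locs = 20, 360, 8          # electrodes, trials, stimulus locations
locs = rng.integers(0, n_locs, n_trials)       # trial-wise location labels (0..7)

# Basis set: half-wave-rectified cosines raised to a power, one per location channel
centers = np.arange(n_locs) * 2 * np.pi / n_locs
angles = locs * 2 * np.pi / n_locs
C = np.maximum(0, np.cos(angles[None, :] - centers[:, None])) ** 7   # (channels x trials)

# Synthetic alpha power: electrodes linearly mix the channel responses plus noise
mix = rng.normal(size=(n_elec, n_locs))
B = mix @ C + 0.5 * rng.normal(size=(n_elec, n_trials))              # (electrodes x trials)

# Split trials, estimate weights on the training half, invert them on the test half
train = np.arange(n_trials) % 2 == 0
test = ~train
W = B[:, train] @ C[:, train].T @ np.linalg.inv(C[:, train] @ C[:, train].T)  # (electrodes x channels)
C_hat = np.linalg.inv(W.T @ W) @ W.T @ B[:, test]                             # reconstructed channel responses

# Align reconstructions to the true location and average into a tuning profile
test_locs = locs[test]
aligned = np.stack([np.roll(C_hat[:, i], n_locs // 2 - test_locs[i]) for i in range(test_locs.size)])
print("reconstructed tuning profile:", aligned.mean(axis=0).round(2))
```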
Jon Burnsky; Franziska Kretzschmar; Erika Mayer; Adrian Staub In: Language, Cognition and Neuroscience, vol. 38, no. 6, pp. 821–842, 2023. @article{Burnsky2023, Two eye movement/EEG co-registration experiments investigated effects of predictability, visual contrast, and parafoveal preview in normal reading. Replicating previous studies, in Experiment 1 contrast and predictability additively influenced fixation durations, and in Experiment 2 invalid preview eliminated the predictability effect on early eye movement measures. In both experiments, predictability influenced the amplitude of the N400 component of the fixation-related potential. In Experiment 1, visual contrast did not influence the N400, and in Experiment 2, the effect of predictability on the N400 was larger with invalid preview, in opposition to the eye movement pattern. The N400 may reflect a late process of accessing conceptual representations while the duration of the eyes' fixation on a word is sensitive to the difficulty of perceptual encoding and early stages of word recognition. The effects of predictability on both fixation duration and the N400 suggest an influence of this variable at two distinct processing stages. |
Kahyun Choi; Sanghum Woo; Joonyeol Lee Motor-effector dependent modulation of sensory-motor processes identified by the multivariate pattern analysis of EEG activity Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–12, 2023. @article{Choi2023, Sensory information received through sensory organs is constantly modulated by numerous non-sensory factors. Recent studies have demonstrated that the state of action can modulate sensory representations in cortical areas. Similarly, sensory information can be modulated by the type of action used to report perception; however, systematic investigation of this issue is scarce. In this study, we examined whether sensorimotor processes represented in electroencephalography (EEG) activities vary depending on the type of effector behavior. Nineteen participants performed motion direction discrimination tasks in which visual inputs were the same, and only the effector behaviors for reporting perceived motion directions were different (smooth pursuit, saccadic eye movement, or button press). We used multivariate pattern analysis to compare the EEG activities for identical sensory inputs under different effector behaviors. The EEG activity patterns for the identical sensory stimulus before any motor action varied across the effector behavior conditions, and the choice of motor effectors modulated the neural direction discrimination differently. We suggest that the motor-effector dependent modulation of EEG direction discrimination might be caused by effector-specific motor planning or preparation signals because it did not have functional relevance to behavioral direction discriminability. |
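As a generic illustration of the kind of time-resolved multivariate pattern analysis used in studies like the one above, the sketch below decodes a two-class stimulus label from multichannel activity at each time point with cross-validation. The data are simulated and the plain linear discriminant classifier is an assumption; the paper's exact decoder, features, and preprocessing are not reproduced here.

```python
# Minimal sketch of time-resolved MVPA decoding on simulated EEG epochs
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_chan, n_times = 200, 32, 50
y = rng.integers(0, 2, n_trials)                       # two hypothetical motion directions
X = rng.normal(size=(n_trials, n_chan, n_times))
X[:, :5, 20:] += 0.8 * (2 * y - 1)[:, None, None]      # inject class signal after "stimulus onset"

# Decode separately at each time point (5-fold cross-validated accuracy)
acc = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy: %.2f at sample %d" % (acc.max(), acc.argmax()))
```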
Samson Chota; Surya Gayet; J. Leon Kenemans; Christian N. L. Olivers; Stefan Van der Stigchel A matter of availability: Sharper tuning for memorized than for perceived stimulus features Journal Article In: Cerebral Cortex, vol. 33, no. 12, pp. 7608–7618, 2023. @article{Chota2023, Our visual environment is relatively stable over time. An optimized visual system could capitalize on this by devoting less representational resources to objects that are physically present. The vividness of subjective experience, however, suggests that externally available (perceived) information is more strongly represented in neural signals than memorized information. To distinguish between these opposing predictions, we use EEG multivariate pattern analysis to quantify the representational strength of task-relevant features in anticipation of a change-detection task. Perceptual availability was manipulated between experimental blocks by either keeping the stimulus available on the screen during a 2-s delay period (perception) or removing it shortly after its initial presentation (memory). We find that task-relevant (attended) memorized features are more strongly represented than irrelevant (unattended) features. More importantly, we find that task-relevant features evoke significantly weaker representations when they are perceptually available compared with when they are unavailable. These findings demonstrate that, contrary to what subjective experience suggests, vividly perceived stimuli elicit weaker neural representations (in terms of detectable multivariate information) than the same stimuli maintained in visual working memory. We hypothesize that an efficient visual system spends little of its limited resources on the internal representation of information that is externally available anyway. |
Sebastian C. Coleman; Zelekha A. Seedat; Anna C. Whittaker; Agatha Lenartowicz; Karen J. Mullinger Beyond the beta rebound: Post-task responses in oscillatory activity follow cessation of working memory processes Journal Article In: NeuroImage, vol. 265, pp. 1–11, 2023. @article{Coleman2023, Post-task responses (PTRs) are transitionary responses occurring for several seconds between the end of a stimulus/task and a period of rest. The most well-studied of these are beta band (13–30 Hz) PTRs in motor networks following movement, often called post-movement beta rebounds, which have been shown to differ in patients with schizophrenia and autism. Previous studies have proposed that beta PTRs reflect inhibition of task-positive networks to enable a return to resting brain activity, scaling with cognitive demand and reflecting cortical self-regulation. It is unknown whether PTRs are a phenomenon of the motor system, or whether they are a more general self-modulatory property of cortex that occurs following cessation of higher cognitive processes as well as movement. To test this, we recorded magnetoencephalography (MEG) responses in 20 healthy participants to a working-memory task, known to recruit cortical networks associated with higher cognition. Our results revealed PTRs in the theta, alpha and beta bands across many regions of the brain, including the dorsal attention network (DAN) and lateral visual regions. These PTRs increased significantly (p < 0.05) in magnitude with working-memory load, an effect which is independent of oscillatory modulations occurring over the task period as well as those following individual stimuli. Furthermore, we showed that PTRs are functionally related to reaction times in left lateral visual (p < 0.05) and left parietal (p < 0.1) regions, while the oscillatory responses measured during the task period are not. Importantly, motor PTRs following button presses did not modulate with task condition, suggesting that PTRs in different networks are driven by different aspects of cognition. Our findings show that PTRs are not limited to motor networks but are widespread in regions which are recruited during the task. We provide evidence that PTRs have unique properties, scaling with cognitive load and correlating significantly with behaviour. Based on the evidence, we suggest that PTRs inhibit task-positive network activity to enable a transition to rest; however, further investigation is required to uncover their role in cognition and pathology. |
Laura Convertino; Daniel Bush; Fanfan Zheng; Rick A. Adams; Neil Burgess Reduced grid-like theta modulation in schizophrenia Journal Article In: Brain, vol. 146, no. 5, pp. 2191–2198, 2023. @article{Convertino2023, The hippocampal formation has been implicated in the pathophysiology of schizophrenia, with patients showing impairments in spatial and relational cognition, structural changes in entorhinal cortex and reduced theta coherence with medial prefrontal cortex. Both the entorhinal cortex and medial prefrontal cortex exhibit a 6-fold (or ‘hexadirectional') modulation of neural activity during virtual navigation that is indicative of grid cell populations and associated with accurate spatial navigation. Here, we examined whether these grid-like patterns are disrupted in schizophrenia. We asked 17 participants with diagnoses of schizophrenia and 23 controls (matched for age, sex and IQ) to perform a virtual reality spatial navigation task during magnetoencephalography. The control group showed stronger 4–10 Hz theta power during movement onset, as well as hexadirectional modulation of theta band oscillatory activity in the right entorhinal cortex whose directional stability across trials correlated with navigational accuracy. This hexadirectional modulation was absent in schizophrenia patients, with a significant difference between groups. These results suggest that impairments in spatial and relational cognition associated with schizophrenia may arise from disrupted grid firing patterns in entorhinal cortex. |
Anna Corriveau; Alexis Kidder; Lina Teichmann; Susan G. Wardle; Chris I. Baker Sustained neural representations of personally familiar people and places during cued recall Journal Article In: Cortex, vol. 158, pp. 71–82, 2023. @article{Corriveau2023, The recall and visualization of people and places from memory is an everyday occurrence, yet the neural mechanisms underpinning this phenomenon are not well understood. In particular, the temporal characteristics of the internal representations generated by active recall are unclear. Here, we used magnetoencephalography (MEG) and multivariate pattern analysis to measure the evolving neural representation of familiar places and people across the whole brain when human participants engage in active recall. To isolate self-generated imagined representations, we used a retro-cue paradigm in which participants were first presented with two possible labels before being cued to recall either the first or second item. We collected personalized labels for specific locations and people familiar to each participant. Importantly, no visual stimuli were presented during the recall period, and the retro-cue paradigm allowed the dissociation of responses associated with the labels from those corresponding to the self-generated representations. First, we found that following the retro-cue it took on average ~1000 ms for distinct neural representations of freely recalled people or places to develop. Second, we found distinct representations of personally familiar concepts throughout the 4 s recall period. Finally, we found that these representations were highly stable and generalizable across time. These results suggest that self-generated visualizations and recall of familiar places and people are subserved by a stable neural mechanism that operates relatively slowly when under conscious control. |
Gonçalo Cosme; Patrícia Arriaga; Pedro J. Rosa; Mitul A. Mehta; Diana Prata Temporal profile of intranasal oxytocin in the human autonomic nervous system at rest: An electrocardiography and pupillometry study Journal Article In: Journal of Psychopharmacology, vol. 37, no. 6, pp. 566–576, 2023. @article{Cosme2023, Background: Human social behavior is modulated by oxytocin (OT). Intranasal administration of OT (IN-OT) is a noninvasive route shown to elicit changes in the autonomic nervous system (ANS) activity; however, IN-OT's effect on the temporal profile of ANS activity at rest is yet to be described. Aims: We aimed to describe the temporal profile of IN-OT at six 10-min time windows from 15- to 100-min post-administration in 20 male participants at rest while continuously recording their pupillary activity in an eyes-open condition and cardiac activity in eyes-open and eyes-closed conditions. Methods: We used a double-blind, placebo-controlled, within-subjects design study where we extracted two proxies of parasympathetic nervous system (PNS) activity: high-frequency heart rate variability (HF-HRV) and pupillary unrest index (PUI); and a proxy of sympathetic nervous system activity: sample entropy of the pupillary unrest. Results: In the eyes-open condition, we found an effect of IN-OT on the proxies of PNS activity: decreased PUI in the three time windows post-administration spanning 65–100 min, and as an exploratory finding, an increased HF-HRV in the 80–85 min time window. Conclusions: We suggest there is a role of OT in PNS regulation that may be consistent with OT's currently theorized role in the facilitation of alertness and approach behavior. |
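For orientation, the sketch below shows one common recipe for the HF-HRV proxy mentioned above: resample the interbeat-interval series to an even grid and integrate Welch spectral power in the 0.15–0.40 Hz band. The R-peak series is simulated and the parameters (4 Hz resampling, 256-sample Welch segments) are generic assumptions, not the authors' exact pipeline.

```python
# Minimal HF-HRV sketch on a simulated interbeat-interval (RR) series
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
n_beats = 600
# ~70 bpm with a 0.25 Hz respiratory modulation plus jitter
rr = 0.86 + 0.03 * np.sin(2 * np.pi * 0.25 * np.arange(n_beats) * 0.86) + 0.01 * rng.normal(size=n_beats)
beat_times = np.cumsum(rr)

# Resample the unevenly sampled RR series onto an even 4 Hz grid
fs = 4.0
t = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_even = np.interp(t, beat_times, rr)

# Welch power spectrum, then integrate the 0.15-0.40 Hz (high-frequency) band
f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
hf_band = (f >= 0.15) & (f <= 0.40)
hf_power = np.sum(pxx[hf_band]) * (f[1] - f[0])        # s^2, often log-transformed before stats
print("HF-HRV power: %.2e s^2" % hf_power)
```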
Ingmar E. J. Vries; Moritz F. Wurm Predictive neural representations of naturalistic dynamic input Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–16, 2023. @article{Vries2023, Adaptive behavior such as social interaction requires our brain to predict unfolding external dynamics. While theories assume such dynamic prediction, empirical evidence is limited to static snapshots and indirect consequences of predictions. We present a dynamic extension to representational similarity analysis that uses temporally variable models to capture neural representations of unfolding events. We applied this approach to source-reconstructed magnetoencephalography (MEG) data of healthy human subjects and demonstrate both lagged and predictive neural representations of observed actions. Predictive representations exhibit a hierarchical pattern, such that high-level abstract stimulus features are predicted earlier in time, while low-level visual features are predicted closer in time to the actual sensory input. By quantifying the temporal forecast window of the brain, this approach allows investigating predictive processing of our dynamic world. It can be applied to other naturalistic stimuli (e.g., film, soundscapes, music, motor planning/execution, social interaction) and any biosignal with high temporal resolution. |
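The sketch below illustrates the bare logic of a time-lagged representational similarity analysis of the kind described above: correlate time-varying model dissimilarity matrices with neural dissimilarity matrices at every model/neural time pairing, so that off-diagonal peaks indicate lagged or predictive correspondence. Everything here is synthetic and heavily simplified relative to the published method.

```python
# Minimal sketch of lagged RSA between a time-varying model and simulated MEG patterns
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_cond, n_feat, n_sens, n_times = 12, 5, 30, 40

model_feats = rng.normal(size=(n_times, n_cond, n_feat))      # time-varying stimulus model
neural = rng.normal(size=(n_times, n_cond, n_sens))
neural[5:] += 1.5 * model_feats[:-5] @ rng.normal(size=(n_feat, n_sens))  # brain lags model by 5 samples

model_rdm = np.stack([pdist(model_feats[t]) for t in range(n_times)])     # (times x condition pairs)
neural_rdm = np.stack([pdist(neural[t]) for t in range(n_times)])

# Lag matrix: rank correlation of the neural RDM at time t_n with the model RDM at time t_m
lagged = np.array([[spearmanr(neural_rdm[tn], model_rdm[tm])[0]
                    for tm in range(n_times)] for tn in range(n_times)])
tn, tm = np.unravel_index(lagged.argmax(), lagged.shape)
print("peak model-neural correspondence: neural t=%d, model t=%d (lag %d samples)" % (tn, tm, tn - tm))
```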
Carola Dolci; C. Nico Boehler; Elisa Santandrea; Anneleen Dewulf; Suliann Ben-Hamed; Emiliano Macaluso; Leonardo Chelazzi; Einat Rashal Integrated effects of top-down attention and statistical learning during visual search: An EEG study Journal Article In: Attention, Perception, & Psychophysics, vol. 85, no. 6, pp. 1819–1833, 2023. @article{Dolci2023, The present study aims to investigate how the competition between visual elements is solved by top-down and/or statistical learning (SL) attentional control (AC) mechanisms when active together. We hypothesized that the "winner" element that will undergo further processing is selected either by one AC mechanism that prevails over the other, or by the joint activity of both mechanisms. To test these hypotheses, we conducted a visual search experiment that combined an endogenous cueing protocol (valid vs. neutral cue) and an imbalance of target frequency distribution across locations (high- vs. low-frequency location). The unique and combined effects of top-down control and SL mechanisms were measured on behaviour and amplitudes of three event-related potential (ERP) components (i.e., N2pc, P1, CNV) related to attentional processing. Our behavioural results showed better performance for validly cued targets and for targets in the high-frequency location. The two factors were found to interact, so that SL effects emerged only in the absence of top-down guidance. Whereas the CNV and P1 only displayed a main effect of cueing, for the N2pc we observed an interaction between cueing and SL, revealing a cueing effect for targets in the low-frequency condition, but not in the high-frequency condition. Thus, our data support the view that top-down control and SL work in a conjoint, integrated manner during target selection. In particular, SL mechanisms are reduced or even absent when a fully reliable top-down guidance of attention is at play. |
Dock H. Duncan; Dirk Moorselaar; Jan Theeuwes Pinging the brain to reveal the hidden attentional priority map using encephalography Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–13, 2023. @article{Duncan2023, Attention has been usefully thought of as organized in priority maps – putative maps of space where attentional priority is weighted across spatial regions in a winner-take-all competition for attentional deployment. Recent work has highlighted the influence of past experiences on the weighting of spatial priority – called selection history. Aside from being distinct from more well-studied, top-down forms of attentional enhancement, little is known about the neural substrates of history-mediated attentional priority. Using a task known to induce statistical learning of target distributions, in an EEG study we demonstrate that this otherwise invisible, latent attentional priority map can be visualized during the intertrial period using a ‘pinging' technique in conjunction with multivariate pattern analyses. Our findings not only offer a method of visualizing the history-mediated attentional priority map, but also shed light on the underlying mechanisms allowing our past experiences to influence future behavior. |
Kacie Dunham-Carr; Jacob I. Feldman; David M. Simon; Sarah R. Edmunds; Alexander Tu; Wayne Kuang; Julie G. Conrad; Pooja Santapuram; Mark T. Wallace; Tiffany G. Woynaroski The processing of audiovisual speech is linked with vocabulary in autistic and nonautistic children: An ERP study Journal Article In: Brain Sciences, vol. 13, no. 7, pp. 1–15, 2023. @article{DunhamCarr2023, Explaining individual differences in vocabulary in autism is critical, as understanding and using words to communicate are key predictors of long-term outcomes for autistic individuals. Differences in audiovisual speech processing may explain variability in vocabulary in autism. The efficiency of audiovisual speech processing can be indexed via amplitude suppression, wherein the amplitude of the event-related potential (ERP) is reduced at the P2 component in response to audiovisual speech compared to auditory-only speech. This study used electroencephalography (EEG) to measure P2 amplitudes in response to auditory-only and audiovisual speech and norm-referenced, standardized assessments to measure vocabulary in 25 autistic and 25 nonautistic children to determine whether amplitude suppression (a) differs or (b) explains variability in vocabulary in autistic and nonautistic children. A series of regression analyses evaluated associations between amplitude suppression and vocabulary scores. Both groups demonstrated P2 amplitude suppression, on average, in response to audiovisual speech relative to auditory-only speech. Between-group differences in mean amplitude suppression were nonsignificant. Individual differences in amplitude suppression were positively associated with expressive vocabulary through receptive vocabulary, as evidenced by a significant indirect effect observed across groups. The results suggest that efficiency of audiovisual speech processing may explain variance in vocabulary in autism. |
Ciara Egan; Joshua S. Payne; Manon W. Jones In: Neuropsychologia, vol. 184, pp. 1–8, 2023. @article{Egan2023, Readers with developmental dyslexia are known to be impaired in representing and accessing phonology, but their ability to process meaning is generally considered to be intact. However, neurocognitive studies show evidence of a subtle semantic processing deficit in dyslexic readers, relative to their typically-developing peers. Here, we compared dyslexic and typical adult readers on their ability to judge semantic congruency (congruent vs. incongruent) in short, two-word phrases, which were further manipulated for phonological relatedness (alliterating vs. non-alliterating); “dazzling-diamond”; “sparkling-diamond”; “dangerous-diamond”; and “creepy-diamond”. At the level of behavioural judgement, all readers were less accurate when evaluating incongruent alliterating items compared with incongruent non-alliterating items, suggesting that phonological patterning creates the illusion of semantic congruency (as per Egan et al., 2020). Dyslexic readers showed a similar propensity for this form-meaning relationship despite a phonological processing impairment as evidenced in the cognitive and literacy indicative assessments. Dyslexic readers also showed an overall reduction in the ability to accurately judge semantic congruency, suggestive of a subtle semantic impairment. Whilst no group differences emerged in the electrophysiological measures, our pupil dilation measurements revealed a global tendency for dyslexic readers to manifest a reduced attentional response to these word stimuli, compared with typical readers. Our results show a broad manifestation of neurocognitive differences in adult dyslexic and typical readers' processing of print, at the level of autonomic arousal as well as in higher level semantic judgements. |
Tahnée Engelen; Anne Buot; Julie Grèzes; Catherine Tallon-baudry Whose emotion is it? Perspective matters to understand brain-body interactions in emotions Journal Article In: NeuroImage, vol. 268, pp. 1–14, 2023. @article{Engelen2023, Feeling happy, or judging whether someone else is feeling happy are two distinct facets of emotions that nevertheless rely on similar physiological and neural activity. Differentiating between these two states, also called Self/Other distinction, is an essential aspect of empathy, but how exactly is it implemented? In non-emotional cognition, the transient neural response evoked at each heartbeat, or heartbeat evoked response (HER), indexes the self and signals Self/Other distinction. Here, using electroencephalography (n = 32), we probe whether HERs' role in Self/Other distinction extends also to emotion–a domain where brain-body interactions are particularly relevant. We asked participants to rate independently validated affective scenes, reporting either their own emotion (Self) or the emotion expressed by people in the scene (Other). During the visual cue indicating to adopt the Self or Other perspective, before the affective scene, HERs distinguished between the two conditions, in visual cortices as well as in the right frontal operculum. Physiological reactivity (facial electromyogram, skin conductance, heart rate) during affective scene co-varied as expected with valence and arousal ratings, but also with the Self- or Other-perspective adopted. Finally, HERs contributed to the subjective experience of valence in the Self condition, in addition to and independently from physiological reactivity. We thus show that HERs represent a trans-domain marker of Self/Other distinction, here specifically contributing to experienced valence. We propose that HERs represent a form of evidence related to the ‘I' part of the judgement ‘To which extent do I feel happy'. The ‘I' related evidence would be combined with the affective evidence collected during affective scene presentation, accounting at least partly for the difference between feeling an emotion and identifying it in someone else. |
Jamal Esmaily; Sajjad Zabbah; Reza Ebrahimpour; Bahador Bahrami Interpersonal alignment of neural evidence accumulation to social exchange of confidence Journal Article In: eLife, vol. 12, pp. 1–27, 2023. @article{Esmaily2023, Private, subjective beliefs about uncertainty have been found to have idiosyncratic computational and neural substrates; yet, humans share such beliefs seamlessly and cooperate successfully. Bringing together decision making under uncertainty and interpersonal alignment in communication, in a discovery plus pre-registered replication design, we examined the neuro-computational basis of the relationship between privately held and socially shared uncertainty. Examining confidence-speed-accuracy trade-off in uncertainty-ridden perceptual decisions under social vs isolated context, we found that shared (i.e. reported confidence) and subjective (inferred from pupillometry) uncertainty dynamically followed social information. An attractor neural network model incorporating social information as top-down additive input captured the observed behavior and demonstrated the emergence of social alignment in virtual dyadic simulations. Electroencephalography showed that social exchange of confidence modulated the neural signature of perceptual evidence accumulation in the central parietal cortex. Our findings offer a neural population model for interpersonal alignment of shared beliefs. |
Edward Ester; Rachel Weese Temporally dissociable mechanisms of spatial, feature, and motor selection during working memory–guided behavior Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 12, pp. 2014–2027, 2023. @article{Ester2023, Working memory (WM) is a capacity- and duration-limited system that forms a temporal bridge between fleeting sensory phenomena and possible actions. But how are the contents of WM used to guide behavior? A recent high-profile study reported evidence for simultaneous access to WM content and linked motor plans during WM-guided behavior, challenging serial models where task-relevant WM content is first selected and then mapped on to a task-relevant motor response. However, the task used in that study was not optimized to distinguish the selection of spatial versus nonspatial visual information stored in memory, nor to distinguish whether or how the chronometry of selecting nonspatial visual information stored in memory might differ from the selection of linked motor plans. Here, we revisited the chronometry of spatial, feature, and motor selection during WM-guided behavior using a task optimized to disentangle these processes. Concurrent EEG and eye position recordings revealed clear evidence for temporally dissociable spatial, feature, and motor selection during this task. Thus, our data reveal the existence of multiple WM selection mechanisms that belie conceptualizations of WM-guided behavior based on purely serial or parallel visuomotor processing. |
Jasper H. Fabius; Alessio Fracasso; Michele Deodato; David Melcher; Stefan Van der Stigchel Bilateral increase in MEG planar gradients prior to saccade onset Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–10, 2023. @article{Fabius2023, Every time we move our eyes, the retinal locations of objects change. To distinguish the changes caused by eye movements from actual external motion of the objects, the visual system is thought to anticipate the consequences of eye movements (saccades). Single neuron recordings have indeed demonstrated changes in receptive fields before saccade onset. Although some EEG studies with human participants have also demonstrated a pre-saccadic increased potential over the hemisphere that will process a stimulus after a saccade, results have been mixed. Here, we used magnetoencephalography to investigate the timing and lateralization of visually evoked planar gradients before saccade onset. We modelled the gradients from trials with both a saccade and a stimulus as the linear combination of the gradients from two conditions with either only a saccade or only a stimulus. We reasoned that any residual gradients in the condition with both a saccade and a stimulus must be uniquely linked to visually-evoked neural activity before a saccade. We observed a widespread increase in residual planar gradients. Interestingly, this increase was bilateral, showing activity both contralateral and ipsilateral to the stimulus, i.e. over the hemisphere that would process the stimulus after saccade offset. This pattern of results is consistent with predictive pre-saccadic changes involving both the current and the future receptive fields involved in processing an attended object, well before the start of the eye movement. The active, sensorimotor coupling of vision and the oculomotor system may underlie the seamless subjective experience of stable and continuous perception. |
Lisa Feldmann; Carolin Zsigo; Isabelle Mörtl; Jürgen Bartling; Christian Wachinger; Frans Oort; Gerd Schulte-Körne; Ellen Greimel Emotion regulation in adolescents with major depression – Evidence from a combined EEG and eye-tracking study Journal Article In: Journal of Affective Disorders, vol. 340, pp. 899–906, 2023. @article{Feldmann2023, Background: Adolescent major depression (MD) is characterized by deficits in emotion regulation (ER). Little is known about the neurophysiological correlates that are associated with these deficits. Moreover, the additional examination of visual attention during ER would allow a more in-depth understanding of ER deficits but has not yet been applied simultaneously. Methods: N = 33 adolescents with MD and n = 35 healthy controls (HCs) aged 12–18 years performed an ER task during which they either a) down-regulated their negative affective response to negative images via cognitive reappraisal or b) attended the images without changing their affective response. During the task, the Late Positive Potential (LPP), gaze fixations on emotional image aspects, and self-reported affective responses were collected simultaneously. Results: Compared to HCs, adolescents with MD demonstrated reduced ER success based on self-report but did not differ in LPP amplitudes. Participants in both groups showed increased amplitudes in the middle LPP window when they reappraised negative pictures compared to when they attended them. Only in the HC group, increased LPP amplitudes during reappraisal were paralleled by more positive affective responses. Limitation: The applied stimuli were part of picture databases and might therefore have limited self-relevance. Conclusions: Increased LPP amplitude during ER in both groups might be specific to adolescence and might suggest that ER at this age is challenging and requires a high amount of cognitive resources. These findings provide an important starting point for future interventional studies in youth MD. |
Oscar Ferrante; Alexander Zhigalov; Clayton Hickey; Ole Jensen Statistical learning of distractor suppression downregulates prestimulus neural excitability in early visual cortex Journal Article In: Journal of Neuroscience, vol. 43, no. 12, pp. 2190–2198, 2023. @article{Ferrante2023, Visual attention is highly influenced by past experiences. Recent behavioral research has shown that expectations about the spatial location of distractors within a search array are implicitly learned, with expected distractors becoming less interfering. Little is known about the neural mechanism supporting this form of statistical learning. Here, we used magnetoencephalography (MEG) to measure human brain activity to test whether proactive mechanisms are involved in the statistical learning of distractor locations. Specifically, we used a new technique called rapid invisible frequency tagging (RIFT) to assess neural excitability in early visual cortex during statistical learning of distractor suppression while concurrently investigating the modulation of posterior alpha band activity (8–12 Hz). Male and female human participants performed a visual search task in which a target was occasionally presented alongside a color-singleton distractor. Unbeknown to the participants, the distracting stimuli were presented with different probabilities across the two hemifields. RIFT analysis showed that early visual cortex exhibited reduced neural excitability in the prestimulus interval at retinotopic locations associated with higher distractor probabilities. In contrast, we did not find any evidence of expectation-driven distractor suppression in alpha band activity. These findings indicate that proactive mechanisms of attention are involved in predictive distractor suppression and that these mechanisms are associated with altered neural excitability in early visual cortex. Moreover, our findings indicate that RIFT and alpha band activity might subtend different and possibly independent attentional mechanisms. |
2022 |
Constanze Schmitt; Milosz Krala; Frank Bremmer Neural signatures of actively controlled self-motion and the subjective encoding of distance Journal Article In: eNeuro, vol. 9, no. 6, pp. 1–18, 2022. @article{Schmitt2022, Navigating through an environment requires knowledge about one's direction of self-motion (heading) and traveled distance. Behavioral studies showed that human participants can actively reproduce a previously observed travel distance purely based on visual information. Here, we employed electroencephalography (EEG) to investigate the underlying neural processes. We measured, in human observers, event-related potentials (ERPs) during visually simulated straight-forward self-motion across a ground plane. The participants' task was to reproduce (active condition) double the distance of a previously seen self-displacement (passive condition) using a gamepad. We recorded the trajectories of self-motion during the active condition and played them back to the participants in a third set of trials (replay condition). We analyzed EEG activity separately for four electrode clusters: frontal (F), central (C), parietal (P), and occipital (O). When aligned to self-motion onset or offset, response modulation of the ERPs was stronger, and several ERP components had different latencies in the passive as compared with the active condition. This result is in line with the concept of predictive coding, which implies modified neural activation for self-induced versus externally induced sensory stimulation. We aligned our data also to the times when subjects passed the (objective) single distance d_obj and the (subjective) single distance d_sub. Remarkably, wavelet-based time–frequency analyses revealed enhanced theta-band activation for F, P, and O-clusters shortly before passing d_sub. This enhanced activation could be indicative of a navigation-related representation of subjective distance. More generally, our study design allows the investigation of subjective perception without confounding neural activation caused by the required response action. |
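The sketch below shows one standard way to obtain the kind of wavelet-based theta-band power estimate referred to above: convolve the signal with complex Morlet wavelets and average power across 4–8 Hz. The EEG trace is simulated and the 7-cycle wavelets are an illustrative choice, not the authors' settings.

```python
# Minimal Morlet wavelet time-frequency sketch (theta band) on a simulated trace
import numpy as np

fs = 250.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(4)
eeg = rng.normal(size=t.size)
eeg[int(2 * fs):] += 2 * np.sin(2 * np.pi * 6 * t[int(2 * fs):])   # theta burst in the second half

def morlet_power(signal, freq, fs, n_cycles=7):
    """Power over time at one frequency via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))                 # unit-energy normalisation
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

theta_power = np.mean([morlet_power(eeg, f, fs) for f in np.arange(4, 9)], axis=0)
print("theta power, 1st vs 2nd half: %.2f vs %.2f"
      % (theta_power[:t.size // 2].mean(), theta_power[t.size // 2:].mean()))
```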
Poppy Sharp; Tjerk Gutteling; David Melcher; Clayton Hickey Spatial attention tunes temporal processing in early visual cortex by speeding and slowing alpha oscillations Journal Article In: Journal of Neuroscience, vol. 42, no. 41, pp. 7824–7832, 2022. @article{Sharp2022, The perception of dynamic visual stimuli relies on two apparently conflicting perceptual mechanisms: rapid visual input must sometimes be integrated into unitary percepts but at other times must be segregated or parsed into separate objects or events. Though they have opposite effects on our perceptual experience, the deployment of spatial attention benefits both operations. Little is known about the neural mechanisms underlying this impact of spatial attention on temporal perception. Here, we record magnetoencephalography (MEG) in male and female humans to demonstrate that the deployment of spatial attention for the purpose of segregating or integrating visual stimuli impacts prestimulus oscillatory activity in retinotopic visual brain areas where the attended location is represented. Alpha band oscillations contralateral to an attended location are therefore faster than ipsilateral oscillations when stimuli appearing at this location will need to be segregated, but slower in expectation of the need for integration, consistent with the idea that alpha frequency is linked to perceptual sampling rate. These results demonstrate a novel interaction between temporal visual processing and the allocation of attention in space. |
Elio Balestrieri; Niko A. Busch Spontaneous alpha-band oscillations bias subjective contrast perception Journal Article In: Journal of Neuroscience, pp. 1–31, 2022. @article{Balestrieri2022, Perceptual decisions depend both on the features of the incoming stimulus and on the ongoing brain activity at the moment the stimulus is received. Specifically, trial-to-trial fluctuations in cortical excitability have been linked to fluctuations in the amplitude of prestimulus α oscillations (~8-13 Hz), which in turn are associated with fluctuations in subjects' tendency to report the detection of a stimulus. It is currently unknown whether α oscillations bias postperceptual decision-making, or even bias subjective perception itself. To answer this question, we used a contrast discrimination task in which both male and female human subjects reported which of two gratings (one in each hemifield) was perceived as having a stronger contrast. Our EEG analysis showed that subjective contrast was reduced for the stimulus in the hemifield represented in the hemisphere with relatively stronger prestimulus α amplitude, reflecting reduced cortical excitability. Furthermore, the strength of this spontaneous hemispheric lateralization was strongly correlated with the magnitude of individual subjects' biases, suggesting that the spontaneous patterns of α lateralization play a role in explaining the intersubject variability in contrast perception. These results indicate that spontaneous fluctuations in cortical excitability, indicated by patterns of prestimulus α amplitude, affect perceptual decisions by altering the phenomenological perception of the visual world. |
Angela Radetz; Markus Siegel Spectral fingerprints of cortical neuromodulation Journal Article In: Journal of Neuroscience, vol. 42, no. 18, pp. 3836–3846, 2022. @article{Radetz2022, Pupil size has been established as a versatile marker of noradrenergic and cholinergic neuromodulation, which has profound effects on neuronal processing, cognition, and behavior. However, little is known about the cortical control and effects of pupil-linked neuromodulation. Here, we show that pupil dynamics are tightly coupled to temporally, spectrally, and spatially specific modulations of local and large-scale cortical population activity in the human brain. We quantified the dynamics of band-limited cortical population activity in resting human subjects using magnetoencephalography and investigated how neural dynamics were linked to simultaneously recorded pupil dynamics. Our results show that pupil-linked neuromodulation does not merely affect cortical population activity in a stereotypical fashion. Instead, we identified three frontal, precentral, and occipitoparietal networks, in which local population activity with distinct spectral profiles in the theta, beta, and alpha bands temporally preceded and followed changes in pupil size. Furthermore, we found that amplitude coupling at ~16 Hz in a large-scale frontoparietal network predicted pupil dynamics. Our results unravel network-specific spectral fingerprints of cortical neuromodulation in the human brain that likely reflect both the causes and effects of neuromodulation. |
Kumari Liza; Supratim Ray Local interactions between steady-state visually evoked potentials at nearby flickering frequencies Journal Article In: Journal of Neuroscience, vol. 42, no. 19, pp. 3965–3974, 2022. @article{Liza2022, Steady-state visually evoked potentials (SSVEPs) are widely used to index top-down cognitive processing in human electroencephalogram (EEG) studies. Typically, two stimuli flickering at different temporal frequencies (TFs) are presented, each producing a distinct response in the EEG at its flicker frequency. However, how SSVEP responses in EEGs are modulated in the presence of a competing flickering stimulus just because of sensory interactions is not well understood. We have previously shown in local field potentials (LFPs) recorded from awake monkeys that when two overlapping full-screen gratings are counterphased at different TFs, there is an asymmetric SSVEP response suppression, with greater suppression from lower TFs, which further depends on the relative orientations of the gratings (stronger suppression and asymmetry for parallel compared with orthogonal gratings). Here, we first confirmed these effects in both male and female human EEG recordings. Then, we mapped the response suppression of one stimulus (target) by a competing stimulus (mask) over a much wider range than the previous study. Surprisingly, we found that the suppression was not stronger at low frequencies in general, but systematically varied depending on the target TF, indicating local interactions between the two competing stimuli. These results were confirmed in both human EEG and monkey LFP and electrocorticogram (ECoG) data. Our results show that sensory interactions between multiple SSVEPs are more complex than shown previously and are influenced by both local and global factors, underscoring the need to cautiously interpret the results of studies involving SSVEP paradigms. SIGNIFICANCE STATEMENT: Steady-state visually evoked potentials (SSVEPs) are extensively used in human cognitive studies and brain-computer interfacing applications where multiple stimuli flickering at distinct frequencies are concurrently presented in the visual field. We recently characterized interactions between competing flickering stimuli in animal recordings and found that stimuli flickering slowly produce larger suppression. Here, we confirmed these in human EEGs, and further characterized the interactions by using a much wider range of target and competing (mask) frequencies in both human EEGs and invasive animal recordings. These revealed a new "local" component, whereby the suppression increased when competing stimuli flickered at nearby frequencies. Our results highlight the complexity of sensory interactions among multiple SSVEPs and underscore the need to cautiously interpret studies involving SSVEP paradigms. |
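For readers new to frequency tagging, the sketch below shows the basic measurement underlying SSVEP paradigms like the one above: read out spectral amplitude at each tag frequency from the Fourier transform of a trial. The single-channel trace and the two tag frequencies are simulated placeholders, not the stimuli or analysis of the paper.

```python
# Minimal sketch of SSVEP amplitude extraction at two tag frequencies
import numpy as np

fs, dur = 500.0, 4.0                       # sampling rate (Hz), trial duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(5)
eeg = (1.0 * np.sin(2 * np.pi * 12 * t)    # response driven by a 12 Hz stimulus
       + 0.6 * np.sin(2 * np.pi * 15 * t)  # response driven by a competing 15 Hz stimulus
       + rng.normal(size=t.size))          # broadband noise

# Single-sided amplitude spectrum and readout at each tag frequency
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for tag in (12.0, 15.0):
    amp = spectrum[np.argmin(np.abs(freqs - tag))]
    print("SSVEP amplitude at %.0f Hz: %.2f (a.u.)" % (tag, amp))
```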
Arno Libert; Arne Van Den Kerchove; Benjamin Wittevrongel; Marc M. Van Hulle Analytic beamformer transformation for transfer learning in motion-onset visual evoked potential decoding Journal Article In: Journal of Neural Engineering, vol. 19, pp. 1–16, 2022. @article{Libert2022, Objective. While decoders of electroencephalography-based event-related potentials (ERPs) are routinely tailored to the individual user to maximize performance, developing them on populations for individual usage has proven much more challenging. We propose the analytic beamformer transformation (ABT) to extract phase and/or magnitude information from spatiotemporal ERPs in response to motion-onset stimulation. Approach. We have tested ABT on 52 motion-onset visual evoked potential (mVEP) datasets from 26 healthy subjects and compared the classification accuracy of support vector machine (SVM), spatiotemporal beamformer (stBF) and stepwise linear discriminant analysis (SWLDA) when trained on individual subjects and on a population thereof. Main results. When using phase- and combined phase/magnitude information extracted by ABT, we show significant improvements in accuracy of population-trained classifiers applied to individual users (p < 0.001). We also show that 450 epochs are needed for a correct functioning of ABT, which corresponds to 2 min of paradigm stimulation. Significance. We have shown that ABT can be used to create population-trained mVEP classifiers using a limited number of epochs. We expect this to pertain to other ERPs or synchronous stimulation paradigms, allowing for a more effective, population-based training of visual BCIs. Finally, as ABT renders recordings across subjects more structurally invariant, it could be used for transfer learning purposes in view of plug-and-play BCI applications. |
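As a rough illustration of the subject-specific versus population-trained comparison at the heart of the study above, the sketch below contrasts within-subject cross-validation with leave-one-subject-out training on simulated ERP feature vectors. A plain linear discriminant classifier stands in for the paper's beamformer/SWLDA pipelines, and all data and dimensions are assumptions.

```python
# Minimal sketch: within-subject vs population (leave-one-subject-out) ERP decoding
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_subj, n_ep, n_feat = 10, 120, 40
subj_shift = rng.normal(0, 1.0, size=(n_subj, n_feat))       # subject-specific response topographies

X, y, subj = [], [], []
for s in range(n_subj):
    labels = rng.integers(0, 2, n_ep)                        # target vs non-target epochs
    ep = rng.normal(size=(n_ep, n_feat)) + labels[:, None] * (1.0 + subj_shift[s])
    X.append(ep); y.append(labels); subj.append(np.full(n_ep, s))
X, y, subj = np.concatenate(X), np.concatenate(y), np.concatenate(subj)

within = np.mean([cross_val_score(LinearDiscriminantAnalysis(), X[subj == s], y[subj == s], cv=5).mean()
                  for s in range(n_subj)])
loso = np.mean([LinearDiscriminantAnalysis().fit(X[subj != s], y[subj != s]).score(X[subj == s], y[subj == s])
                for s in range(n_subj)])
print("within-subject acc: %.2f | population (leave-one-subject-out) acc: %.2f" % (within, loso))
```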
Sharif I. Kronemer; Mark Aksen; Julia Z. Ding; Jun Hwan Ryu; Qilong Xin; Zhaoxiong Ding; Jacob S. Prince; Hunki Kwon; Aya Khalaf; Sarit Forman; David S. Jin; Kevin Wang; Kaylie Chen; Claire Hu; Akshar Agarwal; Erik Saberski; Syed Mohammad Adil Wafa; Owen P. Morgan; Jia Wu; Kate L. Christison-Lagay; Nicholas Hasulak; Martha Morrell; Alexandra Urban; R. Todd Constable; Michael Pitts; R. Mark Richardson; Michael J. Crowley; Hal Blumenfeld Human visual consciousness involves large scale cortical and subcortical networks independent of task report and eye movement activity Journal Article In: Nature Communications, vol. 13, pp. 1–17, 2022. @article{Kronemer2022, The full neural circuits of conscious perception remain unknown. Using a visual perception task, we directly recorded a subcortical thalamic awareness potential (TAP). We also developed a unique paradigm to classify perceived versus not perceived stimuli using eye measurements to remove confounding signals related to reporting on conscious experiences. Using fMRI, we discovered three major brain networks driving conscious visual perception independent of report: first, increases in signal detection regions in visual, fusiform cortex, and frontal eye fields; and in arousal/salience networks involving midbrain, thalamus, nucleus accumbens, anterior cingulate, and anterior insula; second, increases in frontoparietal attention and executive control networks and in the cerebellum; finally, decreases in the default mode network. These results were largely maintained after excluding eye movement-based fMRI changes. Our findings provide evidence that the neurophysiology of consciousness is complex even without overt report, involving multiple cortical and subcortical networks overlapping in space and time. |
Louisa Kulke; Lena Brümmer; Arezoo Pooresmaeili; Annekathrin Schacht Visual competition attenuates emotion effects during overt attention shifts Journal Article In: Psychophysiology, vol. 59, pp. 1–14, 2022. @article{Kulke2022, Numerous different objects are simultaneously visible in a person's visual field, competing for attention. This competition has been shown to affect eye-movements and early neural responses toward stimuli, while the role of a stimulus' emotional meaning for mechanisms of overt attention shifts under competition is unclear. The current study combined EEG and eye-tracking to investigate effects of competition and emotional content on overt shifts of attention to human face stimuli. Competition prolonged the latency of the P1 component and of saccades, while faces showing emotional expressions elicited an early posterior negativity (EPN). Remarkably, the emotion-related modulation of the EPN was attenuated when two stimuli were competing for attention compared to non-competition. In contrast, no interaction effects of emotional expression and competition were observed on other event-related potentials. This finding indicates that competition can decelerate attention shifts in general and also diminish the emotion-driven attention capture, measured through the smaller effects of emotional expression on EPN amplitude. Reduction of the brain's responsiveness to emotional content in the presence of distractors contradicts models that postulate fully automatic processing of emotions. |
Wupadrasta Santosh Kumar; Keerthana Manikandan; Dinavahi V. P. S. Murty; Ranjini Garani Ramesh; Simran Purokayastha; Mahendra Javali; Naren Prahalada Rao; Supratim Ray Stimulus-induced narrowband gamma oscillations are test–retest reliable in human EEG Journal Article In: Cerebral Cortex Communications, vol. 3, no. 1, pp. 1–15, 2022. @article{Kumar2022a, Visual stimulus-induced gamma oscillations in electroencephalogram (EEG) recordings have been recently shown to be compromised in subjects with preclinical Alzheimer's Disease (AD), suggesting that gamma could be an inexpensive biomarker for AD diagnosis provided its characteristics remain consistent across multiple recordings. Previous magnetoencephalography studies in young subjects have reported consistent gamma power over recordings separated by a few weeks to months. Here, we assessed the consistency of stimulus-induced slow (20–35 Hz) and fast gamma (36–66 Hz) oscillations in subjects (n = 40) (age: 50–88 years) in EEG recordings separated by a year, and tested the consistency in the magnitude of gamma power, its temporal evolution and spectral profile. Gamma had distinct spectral/temporal characteristics across subjects, which remained consistent across recordings (average intraclass correlation of ~0.7). Alpha (8–12 Hz) and steady-state visually evoked potentials were also reliable. We further tested how EEG features can be used to identify 2 recordings as belonging to the same versus different subjects and found high classifier performance (AUC of ~0.89), with temporal evolution of slow gamma and spectral profile being most informative. These results suggest that EEG gamma oscillations are reliable across sessions separated over long durations and can also be a potential tool for subject identification. |
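For context on the reliability statistic quoted above, the sketch below computes a two-way intraclass correlation, ICC(2,1), for a subjects-by-sessions matrix, which is one standard way to quantify test-retest consistency across recordings. The per-subject gamma power values are simulated, and this generic Shrout-and-Fleiss formula is not claimed to be the authors' exact variant.

```python
# Minimal sketch of test-retest reliability via ICC(2,1) on simulated session data
import numpy as np

def icc_2_1(x):
    """ICC(2,1), absolute agreement, for an (n_subjects x n_sessions) matrix."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)        # between-subject mean square
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)        # between-session mean square
    ss_err = np.sum((x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))                                # residual mean square
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(6)
trait = rng.normal(1.0, 0.5, 40)                                 # stable per-subject "gamma power"
sessions = np.column_stack([trait + 0.2 * rng.normal(size=40),   # recording 1
                            trait + 0.2 * rng.normal(size=40)])  # recording 2, a year later
print("ICC(2,1) = %.2f" % icc_2_1(sessions))
```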
Ilmari Kurki; Aapo Hyvärinen; Linda Henriksson Dynamics of retinotopic spatial attention revealed by multifocal MEG Journal Article In: NeuroImage, vol. 263, pp. 1–13, 2022. @article{Kurki2022, Visual focal attention is both fast and spatially localized, making it challenging to investigate using human neuroimaging paradigms. Here, we used a new multivariate multifocal mapping method with magnetoencephalography (MEG) to study how focal attention in visual space changes stimulus-evoked responses across the visual field. The observer's task was to detect a color change in the target location, or at the central fixation. Simultaneously, 24 regions in visual space were stimulated in parallel using an orthogonal, multifocal mapping stimulus sequence. First, we used univariate analysis to estimate stimulus-evoked responses in each channel. Then we applied multivariate pattern analysis to look for attentional effects on the responses. We found that attention to a target location causes two spatially and temporally separate effects. Initially, attentional modulation is brief, observed at around 60–130 ms post stimulus, and modulates responses not only at the target location but also in adjacent regions. A later modulation was observed from around 200 ms, which was specific to the location of the attentional target. The results support the idea that focal attention employs several processing stages and suggest that early attentional modulation is less spatially specific than late. |
Timo L. Kvamme; Mesud Sarmanlu; Christopher Bailey; Morten Overgaard In: Neuroscience, vol. 482, pp. 1–17, 2022. @article{Kvamme2022, Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study we used MEG-neurofeedback to train one group of subjects to increase left/right and another to increase right/left alpha power ratios in the parietal cortex. We tested for changes in audiovisual binding in a SIFI paradigm where flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group, not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash trials) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the relation between parietal alpha power (an index of attention) and its effect on audiovisual binding is dependent on the learned causal structure in the previous stimulus. The present results suggests that low alpha power biases observers towards audiovisual binding when they have learned that audiovisual signals originate from a common origin, consistent with a Bayesian causal inference account of multisensory perception. |
Timo L. Kvamme; Mesud Sarmanlu; Morten Overgaard Doubting the double-blind: Introducing a questionnaire for awareness of experimental purposes in neurofeedback studies Journal Article In: Consciousness and Cognition, vol. 104, pp. 1–13, 2022. @article{Kvamme2022a, Double-blinding subjects to the experiment's purpose is an important standard in neurofeedback studies. However, it is difficult to provide evidence that humans are entirely unaware of certain information. This study used insights from consciousness studies and neurophenomenology to develop a contingency awareness questionnaire for neurofeedback. We assessed whether participants had an awareness of experimental purposes to manipulate their attention and multisensory perception. A subset of subjects (5 out of 20) gained a degree of awareness of experimental purposes as evidenced by their correct guess about the purposes of the experiment to affect their attention and multisensory perceptions specific to their double-blinded group assignment. The results warrant replication before they are applied to clinical neurofeedback studies, given the considerable time taken to perform the questionnaire (∼25 min). We discuss the strengths and limitations of our contingency awareness questionnaire and the growing appeal of the double-blinded standard in clinical neurofeedback studies. |
Nan Li; Olaf Dimigen; Werner Sommer; Suiping Wang Parafoveal words can modulate sentence meaning: Electrophysiological evidence from an RSVP-with-flanker task Journal Article In: Psychophysiology, vol. 59, pp. 1–18, 2022. @article{Li2022d, During natural reading, readers can take up some visual information from not-yet-fixated words to the right of the current fixation and it is well-established that this parafoveal preview facilitates the subsequent foveal processing of the word. However, the extraction and integration of word meaning from parafoveal words and their possible influence on the semantic content of the sentence are controversial. In the current study, we recorded event-related potentials (ERPs) in the RSVP-with-flankers paradigm to test whether and how updates of sentential meaning, based only on parafoveal information, may influence the subsequent foveal processing. In Chinese sentences, the congruency of parafoveal and foveal target words with the sentence was orthogonally manipulated. In contrast to previous research, we also controlled for potentially confounding effects of parafoveal-to-foveal repetition priming (identity preview effects) on the N400. Crucially, we found that the classic effect of foveal congruency on the N400 component only appeared when the word in preview had been congruent with sentence meaning; in contrast, there was no N400 as a function of foveal incongruency when the preview word had also been incongruent. These results indicate that sentence meaning rapidly adapts to parafoveal preview, altering the semantic context for the subsequently fixated word. We also show that correct parafoveal preview generally attenuates the N400 once a word is fixated, regardless of congruency. Taken together, our findings underline the highly generative and adaptive framework of language comprehension. |
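A simplified sketch of the ERP measure behind such congruency comparisons: average the epochs per condition and take the mean amplitude in an N400 window (here 300–500 ms) at a centro-parietal channel. Epoch shapes, the channel index, and the simulated data are hypothetical and only illustrate the general measure, not this study's exact analysis.

```python
import numpy as np

fs = 500                                  # sampling rate (Hz), assumed
times = np.arange(-0.2, 0.8, 1 / fs)      # epoch from -200 to 800 ms
n400_win = (times >= 0.3) & (times <= 0.5)

def mean_n400(epochs, channel):
    """Mean 300-500 ms amplitude of the condition-average ERP at one channel.
    epochs: (n_trials, n_channels, n_times) baseline-corrected data in microvolts."""
    erp = epochs.mean(axis=0)             # average across trials
    return erp[channel, n400_win].mean()

# Hypothetical epochs for foveally congruent vs. incongruent target words
rng = np.random.default_rng(3)
congruent = rng.normal(0, 5, size=(80, 32, times.size))
incongruent = rng.normal(0, 5, size=(80, 32, times.size)) - 2 * n400_win  # more negative N400

cz = 12   # index of a centro-parietal channel, assumed
effect = mean_n400(incongruent, cz) - mean_n400(congruent, cz)
print(f"N400 congruency effect: {effect:.2f} uV")
```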
Baiwei Liu; Anna C. Nobre; Freek van Ede Functional but not obligatory link between microsaccades and neural modulation by covert spatial attention Journal Article In: Nature Communications, vol. 13, pp. 1–10, 2022. @article{Liu2022, Covert spatial attention is associated with spatial modulation of neural activity as well as with directional biases in fixational eye movements known as microsaccades. We studied how these two ‘fingerprints' of attention are interrelated in humans. We investigated spatial modulation of 8–12 Hz EEG alpha activity and microsaccades when attention is directed internally within the spatial layout of visual working memory. Consistent with a common origin, spatial modulations of alpha activity and microsaccades co-vary: alpha lateralisation is stronger in trials with microsaccades toward versus away from the memorised location of the to-be-attended item and occurs earlier in trials with earlier microsaccades toward this item. Critically, however, trials without attention-driven microsaccades nevertheless show clear spatial modulation of alpha activity – comparable to trials with attention-driven microsaccades. Thus, directional biases in microsaccades correlate with neural signatures of spatial attention, but they are not necessary for neural modulation by spatial attention to be manifest. |
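The co-variation reported here can be summarised with a simple lateralisation index, split by microsaccade direction. The sketch below assumes per-trial alpha power for channels contralateral and ipsilateral to the memorised item, plus a label for whether the first microsaccade went toward that item; all names and numbers are illustrative, not the authors' data or code.

```python
import numpy as np

def alpha_lateralisation(contra, ipsi):
    """Normalised alpha lateralisation: more negative = stronger contralateral suppression."""
    return (contra - ipsi) / (contra + ipsi)

# Hypothetical per-trial alpha power (8-12 Hz) and microsaccade direction labels
rng = np.random.default_rng(4)
n_trials = 300
contra = rng.gamma(shape=2.0, scale=1.0, size=n_trials) * 0.9   # suppressed contralaterally
ipsi = rng.gamma(shape=2.0, scale=1.0, size=n_trials)
toward = rng.random(n_trials) < 0.5   # did the first microsaccade go toward the memorised item?

li = alpha_lateralisation(contra, ipsi)
print("toward-item trials:   ", li[toward].mean())
print("away-from-item trials:", li[~toward].mean())
```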