EyeLink EEG / fNIRS / TMS Publications
All EyeLink EEG, fNIRS, and TMS research publications (with concurrent eye tracking) through 2022, along with early 2023 articles, are listed below by year. You can search the publications using keywords such as P300, Gamma band, NIRS, etc. You can also search by individual author name. If we missed any EyeLink EEG, fNIRS, or TMS articles, please email us!
Tzu-Yu Hsu; Jui-Tai Chen; Philip Tseng; Chin-An Wang
In: Biological Psychology, vol. 165, pp. 108202, 2021.
Microsaccades are a type of fixational eye movement that is modulated by various sensory and cognitive processes and that impacts our visual perception. Although studies in monkeys have demonstrated a functional role for the superior colliculus and frontal eye field (FEF) in controlling microsaccades, our understanding of the neural mechanisms underlying the generation of microsaccades is still limited. By applying continuous theta-burst stimulation (cTBS) over the right FEF and the vertex, we investigated the role of the FEF in generating human microsaccade responses evoked by salient stimuli or by changes in background luminance. We observed higher microsaccade rates prior to target appearance, and a larger rebound in microsaccade occurrence following salient stimuli, when disruptive cTBS was applied over the FEF compared to vertex stimulation. Moreover, the microsaccade direction modulation after changes in background luminance was disrupted with FEF stimulation. Together, our results constitute the first evidence of FEF modulation of human microsaccade responses.
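Microsaccade rates of the kind analyzed in this study are typically extracted offline from eye-position traces with a velocity-threshold algorithm in the style of Engbert & Kliegl (2003). A minimal Python sketch, with the function name, parameter defaults, and synthetic data all illustrative rather than taken from the authors' pipeline:

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_dur=3):
    """Velocity-threshold microsaccade detection (Engbert & Kliegl style).
    x, y: gaze position in degrees; fs: sampling rate (Hz);
    lam: threshold multiplier; min_dur: minimum duration in samples."""
    vx = np.gradient(x) * fs                      # horizontal velocity (deg/s)
    vy = np.gradient(y) * fs                      # vertical velocity (deg/s)
    # Median-based (robust) velocity spread estimate per axis
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    # Elliptic threshold combining both axes
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1
    # Collect runs of supra-threshold samples lasting at least min_dur
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_dur:
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_dur:
        events.append((start, len(above)))
    return events
```

On a synthetic trace of fixation noise with one small embedded gaze shift, the function returns a single (onset, offset) pair around the shift.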
Tzu-Yu Hsu; Yu-Fan Hsu; Hsin-Yi Wang; Chin-An Wang
In: European Journal of Neuroscience, vol. 54, no. 1, pp. 4283–4294, 2021.
The appearance of a salient stimulus evokes a series of orienting responses, including saccades and changes in pupil size, to prepare the body for appropriate action. The midbrain superior colliculus (SC), which receives critical control signals from the frontal eye field (FEF), is hypothesized to coordinate all components of orienting. It has recently been shown that the FEF, together with the SC, is also importantly involved in the control of pupil size, in addition to its well-documented role in eye movements. Although the role of the FEF in pupil size has been demonstrated in monkeys, its role in human pupil responses and in the coordination between pupil size and saccades remains to be established. By applying continuous theta-burst stimulation over the right FEF and vertex, we investigated the role of the FEF in human pupil and saccade responses evoked by a salient stimulus, and the coordination between pupil size and saccades. Our results showed that neither saccade reaction times (SRTs) nor pupil responses evoked by salient stimuli were modulated by FEF stimulation. In contrast, the correlation between pupil size and SRTs in the contralateral stimulus condition was diminished with FEF stimulation, but intact with vertex stimulation. Moreover, FEF stimulation effects on saccade and pupil responses associated with salient stimuli correlated across participants. This is the first transcranial magnetic stimulation (TMS) study on the pupil orienting response, and our findings suggest that the human FEF is involved in coordinating pupil size and saccades, but not in the control of pupil orienting responses.
Zhenlan Jin; Ruie Gou; Junjun Zhang; Ling Li
In: Journal of Vision, vol. 21, no. 3, pp. 1–10, 2021.
Close coupling between attention and smooth pursuit eye movements has been widely established, and the frontal eye field (FEF) is a "hub" region for attention and eye movements. The frontal pursuit area (FPA), a subregion of the FEF, is part of the neural circuit for pursuit. Here, we directly examined the role of the FPA in the interaction between pursuit and attention. To do so, we applied a dual-task paradigm in which an attention-demanding task was integrated into the pursuit target, and we disrupted the FPA using transcranial magnetic stimulation (TMS). In the study, participants were required to pursue a moving circle with a letter inside, which changed to another letter every 100 ms, and to report whether "H" (low attentional load) or one of "H," "S," or "L" (high attentional load) appeared during the trial. As expected, increasing the attentional load decreased the accuracy of letter detection. Importantly, FPA TMS had no effect on either the pursuit or the letter detection task in the low-load condition, whereas in the high-load condition it reduced the 200 to 320 ms pursuit gain but tended to increase letter detection accuracy. Moreover, individual's FPA TMS effect on pursuit gain
Björn Machner; Jonathan Imholz; Lara Braun; Philipp J. Koch; Tobias Bäumer; Thomas F. Münte; Christoph Helmchen; Andreas Sprenger
Resting-state functional connectivity in the attention networks is not altered by offline theta-burst stimulation of the posterior parietal cortex or the temporo-parietal junction as compared to a vertex control site Journal Article
In: Neuroimage: Reports, vol. 1, no. 2, pp. 100013, 2021.
Disruption of resting-state functional connectivity (RSFC) between core regions of the dorsal attention network (DAN), including the bilateral superior parietal lobule (SPL), and structural damage of the right-lateralized ventral attention network (VAN), including the temporo-parietal junction (TPJ), have been described as neural basis for hemispatial neglect. Pursuing a virtual lesion model, we aimed to perturbate the attention networks of 22 healthy subjects by applying continuous theta burst stimulation (cTBS) to the right SPL or TPJ. We first created network masks of the DAN and VAN based on RSFC analyses from a RS-fMRI baseline session and determined the SPL and TPJ stimulation site within the respective mask. We then performed RS-fMRI immediately after cTBS of the SPL, TPJ (active sites) or vertex (control site). RSFC between SPL/TPJ and whole brain as well as between predefined regions of interest (ROI) in the attention networks was analyzed in a within-subject design. Contrary to our hypothesis, seed-based RSFC did not differ between the four experimental conditions. The individual change in ROI-to-ROI RSFC from baseline to post-stimulation did also not differ between active (SPL, TPJ) and control (vertex) cTBS. In our study, a single session offline cTBS over the right SPL or TPJ could not alter RSFC in the attention networks as compared to a control stimulation, maybe because effects wore off too early. Future studies should consider a modified cTBS protocol, concurrent TMS-fMRI or transcranial direct current stimulation.
Lara Merken; Marco Davare; Peter Janssen; Maria C. Romero
In: Scientific Reports, vol. 11, pp. 4511, 2021.
The neural mechanisms underlying the effects of continuous Theta-Burst Stimulation (cTBS) in humans are poorly understood. Animal studies can clarify the effects of cTBS on individual neurons, but behavioral evidence is necessary to demonstrate the validity of the animal model. We investigated the behavioral effect of cTBS applied over parietal cortex in rhesus monkeys performing a visually-guided grasping task with two differently sized objects, which required either a power grip or a pad-to-side grip. We used Fitts' law, predicting shorter grasping times (GT) for large compared to small objects, to investigate cTBS effects on two different grip types. cTBS induced long-lasting object-specific and dose-dependent changes in GT that remained present for up to two hours. High-intensity cTBS increased GTs for a power grip, but shortened GTs for a pad-to-side grip. Thus, high-intensity stimulation strongly reduced the natural GT difference between objects (i.e. the Fitts' law effect). In contrast, low-intensity cTBS induced the opposite effects on GT. Modifying the coil orientation from the standard 45-degree to a 30-degree angle induced opposite cTBS effects on GT. These findings represent behavioral evidence for the validity of the nonhuman primate model to study the neural underpinnings of non-invasive brain stimulation.
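The Fitts' law prediction invoked above relates movement time to an index of difficulty that grows as targets get smaller relative to movement distance. A minimal illustration (the intercept and slope values below are placeholders, not constants fitted to this study's data):

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Index of difficulty in bits, Fitts' (1954) formulation: ID = log2(2D/W)."""
    return math.log2(2 * distance / width)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law: MT = a + b * ID, with empirically fitted constants a, b
    (the defaults here are illustrative only)."""
    return a + b * fitts_index_of_difficulty(distance, width)
```

For a fixed distance, a larger (easier) object yields a lower index of difficulty and hence a shorter predicted movement time, which is the baseline effect the cTBS manipulation reduced or reversed.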
Kentaro Miyamoto; Nadescha Trudel; Kevin Kamermans; Michele C. Lim; Alberto Lazari; Lennart Verhagen; Marco K. Wittmann; Matthew F. S. Rushworth
In: Neuron, vol. 109, no. 8, pp. 1396–1408, 2021.
More than one type of probability must be considered when making decisions. It is as necessary to know one's chance of performing choices correctly as it is to know the chances that desired outcomes will follow choices. We refer to these two choice contingencies as internal and external probability. Neural activity across many frontal and parietal areas reflected internal and external probabilities in a similar manner during decision-making. However, neural recording and manipulation approaches suggest that one area, the anterior lateral prefrontal cortex (alPFC), is highly specialized for making prospective, metacognitive judgments on the basis of internal probability; it is essential for knowing which decisions to tackle, given its assessment of how well they will be performed. Its activity predicted prospective metacognitive judgments, and individual variation in activity predicted individual variation in metacognitive judgments. Its disruption altered metacognitive judgments, leading participants to tackle perceptual decisions they were likely to fail.
Roberto F. Salamanca-Giron; Estelle Raffin; Sarah B. Zandvliet; Martin Seeber; Christoph M. Michel; Paul Sauseng; Krystel R. Huxlin; Friedhelm C. Hummel
In: NeuroImage, vol. 240, pp. 118299, 2021.
Visual motion discrimination involves reciprocal interactions in the alpha band between the primary visual cortex (V1) and mediotemporal areas (V5/MT). We investigated whether modulating alpha phase synchronization using individualized multisite transcranial alternating current stimulation (tACS) over V5 and V1 regions would improve motion discrimination. We tested 3 groups of healthy subjects with the following conditions: (1) individualized In-Phase V1alpha-V5alpha tACS (0° lag), (2) individualized Anti-Phase V1alpha-V5alpha tACS (180° lag) and (3) sham tACS. Motion discrimination and EEG activity were recorded before, during and after tACS. Performance significantly improved in the Anti-Phase group compared to the In-Phase group 10 and 30 min after stimulation. This result was explained by decreases in bottom-up alpha-V1 gamma-V5 phase-amplitude coupling. One possible explanation of these results is that Anti-Phase V1alpha-V5alpha tACS might impose an optimal phase lag between stimulation sites due to the inherent speed of wave propagation, hereby supporting optimized neuronal communication.
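The alpha-phase, gamma-amplitude coupling invoked to explain this result is commonly quantified with a mean-vector-length modulation index (Canolty et al., 2006). A sketch using Hilbert-based phase and amplitude extraction; the band edges and function names are illustrative, not the authors' analysis code:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(sig, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def pac_mvl(slow_sig, fast_sig, fs, phase_band=(8, 13), amp_band=(30, 80)):
    """Mean-vector-length phase-amplitude coupling:
    |mean(A_fast(t) * exp(i * phi_slow(t)))|."""
    phi = np.angle(hilbert(bandpass(slow_sig, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(fast_sig, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phi)))
```

A gamma signal whose envelope follows the alpha phase yields a clearly larger index than a constant-amplitude gamma signal.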
Omer Sharon; Firas Fahoum; Yuval Nir
In: Journal of Neuroscience, vol. 41, no. 2, pp. 320–330, 2021.
Vagus nerve stimulation (VNS) is widely used to treat drug-resistant epilepsy and depression. While the precise mechanisms mediating its long-term therapeutic effects are not fully resolved, they likely involve locus coeruleus (LC) stimulation via the nucleus of the solitary tract, which receives afferent vagal inputs. In rats, VNS elevates LC firing and forebrain noradrenaline levels, whereas LC lesions suppress VNS therapeutic efficacy. Noninvasive transcutaneous VNS (tVNS) uses electrical stimulation that targets the auricular branch of the vagus nerve at the cymba conchae of the ear. However, the extent to which tVNS mimics VNS remains unclear. Here, we investigated the short-term effects of tVNS in healthy human male volunteers (n = 24), using high-density EEG and pupillometry during visual fixation at rest. We compared short (3.4 s) trials of tVNS to sham electrical stimulation at the earlobe (far from the vagus nerve branch) to control for somatosensory stimulation. Although tVNS and sham stimulation did not differ in subjective intensity ratings, tVNS led to robust pupil dilation (peaking 4-5 s after trial onset) that was significantly higher than following sham stimulation. We further quantified, using parallel factor analysis, how tVNS modulates idle occipital alpha (8-13Hz) activity identified in each participant. We found greater attenuation of alpha oscillations by tVNS than by sham stimulation. This demonstrates that tVNS reliably induces pupillary and EEG markers of arousal beyond the effects of somatosensory stimulation, thus supporting the hypothesis that tVNS elevates noradrenaline and other arousal-promoting neuromodulatory signaling, and mimics invasive VNS.
Chloé Stengel; Marine Vernet; Julià L. Amengual; Antoni Valero-Cabré
In: Scientific Reports, vol. 11, pp. 3807, 2021.
Correlational evidence in non-human primates has reported increases of fronto-parietal high-beta (22–30 Hz) synchrony during the top-down allocation of visuo-spatial attention. But may inter-regional synchronization at this specific frequency band provide a causal mechanism by which top-down attentional processes facilitate conscious visual perception? To address this question, we analyzed electroencephalographic (EEG) signals from a group of healthy participants who performed a conscious visual detection task while we delivered brief (4 pulses) rhythmic (30 Hz) or random bursts of Transcranial Magnetic Stimulation (TMS) to the right Frontal Eye Field (FEF) prior to the onset of a lateralized target. We report increases of inter-regional synchronization in the high-beta band (25–35 Hz) between the electrode closest to the stimulated region (the right FEF) and right parietal EEG leads, and increases of local inter-trial coherence within the same frequency band over bilateral parietal EEG contacts, both driven by rhythmic but not random TMS patterns. Such increases were accompanied by improvements of conscious visual sensitivity for left visual targets in the rhythmic but not the random TMS condition. These outcomes suggest that high-beta inter-regional synchrony can be modulated non-invasively and that high-beta oscillatory activity across the right dorsal fronto-parietal network may contribute to the facilitation of conscious visual perception. Our work supports future applications of non-invasive brain stimulation to restore impaired visually-guided behaviors by operating on top-down attentional modulatory mechanisms.
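The inter-trial coherence measure reported here captures how consistently oscillatory phase aligns across trials at a given frequency. A minimal sketch using a single-frequency DFT per trial (an illustration of the measure, not the authors' pipeline):

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """ITC at one frequency: magnitude of the across-trial mean of unit
    phase vectors. trials: array (n_trials, n_samples). Returns [0, 1]."""
    n = trials.shape[1]
    t = np.arange(n) / fs
    basis = np.exp(-2j * np.pi * freq * t)    # DFT kernel at `freq`
    coeffs = trials @ basis                   # one complex amplitude per trial
    return np.abs(np.mean(coeffs / np.abs(coeffs)))
```

Trials phase-locked to stimulation onset give an ITC near 1; trials with random phase give a value near zero.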
Fosca Al Roumi; Sébastien Marti; Liping Wang; Marie Amalric; Stanislas Dehaene
In: Neuron, vol. 109, no. 16, pp. 2627–2639, 2021.
How does the human brain store sequences of spatial locations? We propose that each sequence is internally compressed using an abstract, language-like code that captures its numerical and geometrical regularities. We exposed participants to spatial sequences of fixed length but variable regularity while their brain activity was recorded using magneto-encephalography. Using multivariate decoders, each successive location could be decoded from brain signals, and upcoming locations were anticipated prior to their actual onset. Crucially, sequences with lower complexity, defined as the minimal description length provided by the formal language, led to lower error rates and to increased anticipations. Furthermore, neural codes specific to the numerical and geometrical primitives of the postulated language could be detected, both in isolation and within the sequences. These results suggest that the human brain detects sequence regularities at multiple nested levels and uses them to compress long sequences in working memory.
Thomas Andrillon; Angus Burns; Teigane Mackay; Jennifer Windt; Naotsugu Tsuchiya
Predicting lapses of attention with sleep-like slow waves Journal Article
In: Nature Communications, vol. 12, pp. 3657, 2021.
Attentional lapses occur commonly and are associated with mind wandering, where focus is turned to thoughts unrelated to ongoing tasks and environmental demands, or mind blanking, where the stream of consciousness itself comes to a halt. To understand the neural mechanisms underlying attentional lapses, we studied the behaviour, subjective experience and neural activity of healthy participants performing a task. Random interruptions prompted participants to indicate their mental states as task-focused, mind-wandering or mind-blanking. Using high-density electroencephalography, we report here that spatially and temporally localized slow waves, a pattern of neural activity characteristic of the transition toward sleep, accompany behavioural markers of lapses and preceded reports of mind wandering and mind blanking. The location of slow waves could distinguish between sluggish and impulsive behaviours, and between mind wandering and mind blanking. Our results suggest attentional lapses share a common physiological origin: the emergence of local sleep-like activity within the awake brain.
M. Antúnez; S. Mancini; J. A. Hernández-Cabrera; L. J. Hoversten; H. A. Barber; M. Carreiras
In: Brain and Language, vol. 214, pp. 104905, 2021.
During reading, we can process and integrate information from words allocated in the parafoveal region. However, whether we extract and process the meaning of parafoveal words is still under debate. Here, we obtained Fixation-Related Potentials in a Basque-Spanish bilingual sample during a Spanish reading task. By using the boundary paradigm, we presented different parafoveal previews that could be either Basque non-cognate translations or unrelated Basque words. We prove for the first time cross-linguistic semantic preview benefit effects in alphabetic languages, providing novel evidence of modulations in the N400 component. Our findings suggest that the meaning of parafoveal words is processed and integrated during reading and that such meaning is activated and shared across languages in bilingual readers.
Damiano Azzalini; Anne Buot; Stefano Palminteri; Catherine Tallon-Baudry
In: Journal of Neuroscience, vol. 41, no. 23, pp. 5102–5114, 2021.
Forrest Gump or The Matrix? Preference-based decisions are subjective and entail self-reflection. However, these self-related features are unaccounted for by known neural mechanisms of valuation and choice. Self-related processes have been linked to a basic interoceptive biological mechanism, the neural monitoring of heartbeats, in particular in ventromedial prefrontal cortex (vmPFC), a region also involved in value encoding. We thus hypothesized a functional coupling between the neural monitoring of heartbeats and the precision of value encoding in vmPFC. Human participants of both sexes were presented with pairs of movie titles. They indicated either which movie they preferred or performed a control objective visual discrimination that did not require self-reflection. Using magnetoencephalography, we measured heartbeat-evoked responses (HERs) before option presentation and confirmed that HERs in vmPFC were larger when preparing for the subjective, self-related task. We retrieved the expected cortical value network during choice with time-resolved statistical modeling. Crucially, we show that larger HERs before option presentation are followed by stronger value encoding during choice in vmPFC. This effect is independent of overall vmPFC baseline activity. The neural interaction between HERs and value encoding predicted preference-based choice consistency over time, accounting for both interindividual differences and trial-to-trial fluctuations within individuals. Neither cardiac activity nor arousal fluctuations could account for any of the effects. HERs did not interact with the encoding of perceptual evidence in the discrimination task. Our results show that the self-reflection underlying preference-based decisions involves HERs, and that HER integration to subjective value encoding in vmPFC contributes to preference stability.
Shlomit Beker; John J. Foxe; Sophie Molholm
In: Journal of Neurophysiology, vol. 126, no. 5, pp. 1783–1798, 2021.
Anticipating near-future events is fundamental to adaptive behavior, whereby neural processing of predictable stimuli is significantly facilitated relative to nonpredictable events. Neural oscillations appear to be a key anticipatory mechanism by which processing of upcoming stimuli is modified, and they often entrain to rhythmic environmental sequences. Clinical and anecdotal observations have led to the hypothesis that people with autism spectrum disorder (ASD) may have deficits in generating predictions, and as such, a candidate neural mechanism may be failure to adequately entrain neural activity to repetitive environmental patterns, to facilitate temporal predictions. We tested this hypothesis by interrogating temporal predictions and rhythmic entrainment using behavioral and electrophysiological approaches. We recorded high-density electroencephalography in children with ASD and typically developing (TD) age- and IQ-matched controls, while they reacted to an auditory target as quickly as possible. This auditory event was either preceded by predictive rhythmic visual cues or was not preceded by any cue. Both ASD and control groups presented comparable behavioral facilitation in response to the Cue versus No-Cue condition, challenging the hypothesis that children with ASD have deficits in generating temporal predictions. Analyses of the electrophysiological data, in contrast, revealed significantly reduced neural entrainment to the visual cues and altered anticipatory processes in the ASD group. This was the case despite intact stimulus-evoked visual responses. These results support intact behavioral temporal prediction in response to a cue in ASD, in the face of altered neural entrainment and anticipatory processes.
Chama Belkhiria; Vsevolod Peysakhovich
EOG metrics for cognitive workload detection Journal Article
In: Procedia Computer Science, vol. 192, pp. 1875–1884, 2021.
Increasing workload is a central notion in human factors research, as high workload can decrease performance and lead to accidents. Thus, it is crucial to understand the impact of different internal operator factors, including eye movements, memory, and audio-visual integration. Here, we explored the relationship between cognitive workload (low vs. high) and eye movements (saccades, fixations, and smooth pursuit). Task difficulty was manipulated through auditory noise, arithmetical counting, and working memory load. We estimated cognitive workload using EOG and EEG-based mental state monitoring. One novelty is the recording of the EOG around the ears (alternative EOG) as well as around the eyes (conventional EOG). The number of blinks and the saccade amplitude increased with task difficulty (p ≤ 0.05). We found significant correlations between EOG and EEG (theta/alpha ratio) and between the conventional and alternative EOG signals. The increase in cognitive load may disturb the coding and maintenance of related visual information. Alternative EOG metrics could be a valuable tool for detecting workload.
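The theta/alpha ratio used as an EEG workload index here is a simple band-power ratio. A sketch using Welch spectral estimation (the band limits follow common convention and the exact bands used by the authors may differ):

```python
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, lo, hi):
    """Average power spectral density in [lo, hi) Hz (Welch estimate)."""
    f, pxx = welch(sig, fs=fs, nperseg=min(len(sig), 2 * fs))
    mask = (f >= lo) & (f < hi)
    return pxx[mask].mean()

def theta_alpha_ratio(sig, fs):
    """Theta (4-8 Hz) over alpha (8-13 Hz) power -- a common workload index."""
    return band_power(sig, fs, 4, 8) / band_power(sig, fs, 8, 13)
```

A theta-dominated signal yields a ratio above 1, an alpha-dominated signal a ratio below 1.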
Amir H. Meghdadi; Barry Giesbrecht; Miguel P. Eckstein
In: Experimental Brain Research, vol. 239, no. 3, pp. 797–809, 2021.
The use of scene context is a powerful way by which biological organisms guide and facilitate visual search. Although many studies have shown enhancements of target-related electroencephalographic activity (EEG) with synthetic cues, there have been fewer studies demonstrating such enhancements during search with scene context and objects in real world scenes. Here, observers covertly searched for a target in images of real scenes while we used EEG to measure the steady state visual evoked response to objects flickering at different frequencies. The target appeared in its typical contextual location or out of context while we controlled for low-level properties of the image including target saliency against the background and retinal eccentricity. A pattern classifier using EEG activity at the relevant modulated frequencies showed target detection accuracy increased when the target was in a contextually appropriate location. A control condition for which observers searched the same images for a different target orthogonal to the contextual manipulation, resulted in no effects of scene context on classifier performance, confirming that image properties cannot explain the contextual modulations of neural activity. Pattern classifier decisions for individual images were also related to the aggregated observer behavioral decisions for individual images. Together, these findings demonstrate target-related neural responses are modulated by scene context during visual search with real world scenes and can be related to behavioral search decisions.
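Frequency tagging of the kind used here attributes detection to whichever tagged flicker frequency shows the strongest steady-state response. A minimal sketch scoring power at each tag frequency plus its second harmonic (illustrative only; the study used a trained pattern classifier on EEG activity rather than a simple argmax):

```python
import numpy as np

def ssvep_scores(eeg, fs, tag_freqs):
    """Power at each tagged flicker frequency (single-frequency DFT),
    summing fundamental and second harmonic. eeg: 1-D sample array."""
    n = len(eeg)
    t = np.arange(n) / fs
    scores = []
    for f in tag_freqs:
        p = 0.0
        for h in (f, 2 * f):  # fundamental + 2nd harmonic
            p += np.abs(np.dot(eeg, np.exp(-2j * np.pi * h * t))) ** 2
        scores.append(p)
    return np.array(scores)
```

The attended (or target-containing) object is then taken to be the one whose tag frequency has the highest score.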
Michael Christopher Melnychuk; Ian H. Robertson; Emanuele R. G. Plini; Paul M. Dockree
In: Brain Sciences, vol. 11, pp. 1324, 2021.
Yogic and meditative traditions have long held that the fluctuations of the breath and the mind are intimately related. While respiratory modulation of cortical activity and attentional switching are established, the extent to which electrophysiological markers of attention exhibit synchronization with respiration is unknown. To this end, we examined (1) frontal midline theta-beta ratio (TBR), an indicator of attentional control state known to correlate with mind wandering episodes and functional connectivity of the executive control network; (2) pupil diameter (PD), a known proxy measure of locus coeruleus (LC) noradrenergic activity; and (3) respiration for evidence of phase synchronization and information transfer (multivariate Granger causality) during quiet restful breathing. Our results indicate that both TBR and PD are simultaneously synchronized with the breath, suggesting an underlying oscillation of an attentionally relevant electrophysiological index that is phase-locked to the respiratory cycle which could have the potential to bias the attentional system into switching states. We highlight the LC's pivotal role as a coupling mechanism between respiration and TBR, and elaborate on its dual functions as both a chemosensitive respiratory nucleus and a pacemaker of the attentional system. We further suggest that an appreciation of the dynamics of this weakly coupled oscillatory system could help deepen our understanding of the traditional claim of a relationship between breathing and attention.
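The phase synchronization between respiration and the electrophysiological indices examined here is commonly quantified with a phase-locking value. A sketch, assuming both inputs have already been band-limited around the respiratory rhythm (roughly 0.1–0.3 Hz at rest):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(sig_a, sig_b):
    """PLV between two narrowband signals: |mean(exp(i*(phi_a - phi_b)))|.
    Values near 1 indicate a stable phase relationship; values near 0
    indicate independent phases."""
    dphi = np.angle(hilbert(sig_a)) - np.angle(hilbert(sig_b))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

Two rhythms at the same frequency with a fixed lag give a PLV near 1, whereas rhythms at different frequencies drift apart in phase and give a low PLV.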
Anna M. Monk; Daniel N. Barry; Vladimir Litvak; Gareth R. Barnes; Eleanor A. Maguire
In: eNeuro, vol. 8, no. 4, pp. 1–12, 2021.
Our lives unfold as sequences of events. We experience these events as seamless, although they are composed of individual images captured in between the interruptions imposed by eye blinks and saccades. Events typically involve visual imagery from the real world (scenes), and the hippocampus is frequently engaged in this context. It is unclear, however, whether the hippocampus would be similarly responsive to unfolding events that involve abstract imagery. Addressing this issue could provide insights into the nature of its contribution to event processing, with relevance for theories of hippocampal function. Consequently, during magnetoencephalography (MEG), we had female and male humans watch highly matched unfolding movie events composed of either scene image frames that reflected the real world, or frames depicting abstract patterns. We examined the evoked neuronal responses to each image frame along the time course of the movie events. Only one difference between the two conditions was evident, and that was during the viewing of the first image frame of events, detectable across frontotemporal sensors. Further probing of this difference using source reconstruction revealed greater engagement of a set of brain regions across parietal, frontal, premotor, and cerebellar cortices, with the largest change in broadband (1–30 Hz) power in the hippocampus during scene-based movie events. Hippocampal engagement during the first image frame of scene-based events could reflect its role in registering a recognizable context perhaps based on templates or schemas. The hippocampus, therefore, may help to set the scene for events very early on.
Anna M. Monk; Marshall A. Dalton; Gareth R. Barnes; Eleanor A. Maguire
In: Journal of Cognitive Neuroscience, vol. 33, no. 1, pp. 89–103, 2021.
The hippocampus and ventromedial prefrontal cortex (vmPFC) play key roles in numerous cognitive domains including mind-wandering, episodic memory and imagining the future. Perspectives differ on precisely how they support these diverse functions, but there is general agreement that it involves constructing representations comprised of numerous elements. Visual scenes have been deployed extensively in cognitive neuroscience because they are paradigmatic multi-element stimuli. However, it remains unclear whether scenes, rather than other types of multi-feature stimuli, preferentially engage hippocampus and vmPFC. Here we leveraged the high temporal resolution of magnetoencephalography to test participants as they gradually built scene imagery from three successive auditorily-presented object descriptions and an imagined 3D space. This was contrasted with constructing mental images of non-scene arrays that were composed of three objects and an imagined 2D space. The scene and array stimuli were, therefore, highly matched, and this paradigm permitted a closer examination of step-by-step mental construction than has been undertaken previously. We observed modulation of theta power in our two regions of interest -anterior hippocampus during the initial stage, and in vmPFC during the first two stages, of scene relative to array construction. Moreover, the scene-specific anterior hippocampal activity during the first construction stage was driven by the vmPFC, with mutual entrainment between the two brain regions thereafter. These findings suggest that hippocampal and vmPFC neural activity is especially tuned to scene representations during the earliest stage of their formation, with implications for theories of how these brain areas enable cognitive functions such as episodic memory.
Peter R. Murphy; Niklas Wilming; Diana C. Hernandez-Bocanegra; Genis Prat-Ortega; Tobias H. Donner
In: Nature Neuroscience, vol. 24, no. 7, pp. 987–997, 2021.
Many decisions under uncertainty entail the temporal accumulation of evidence that informs about the state of the environment. When environments are subject to hidden changes in their state, maximizing accuracy and reward requires non-linear accumulation of evidence. How this adaptive, non-linear computation is realized in the brain is unknown. We analyzed human behavior and cortical population activity (measured with magnetoencephalography) recorded during visual evidence accumulation in a changing environment. Behavior and decision-related activity in cortical regions involved in action planning exhibited hallmarks of adaptive evidence accumulation, which could also be implemented by a recurrent cortical microcircuit. Decision dynamics in action-encoding parietal and frontal regions were mirrored in a frequency-specific modulation of the state of the visual cortex that depended on pupil-linked arousal and the expected probability of change. These findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related feedback to the sensory cortex.
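The normative non-linear accumulation referred to here is often formalized as in Glaze et al. (2015), where the prior belief is discounted each sample according to the hazard rate of a hidden state change. A sketch of that update rule (an illustration of the general model class, not the authors' fitted implementation):

```python
import numpy as np

def glaze_accumulate(llrs, hazard):
    """Normative evidence accumulation in a changing environment
    (Glaze et al., 2015). Each step, the running belief L (log posterior
    odds) is non-linearly discounted by the hazard rate H before the new
    sample's log-likelihood ratio is added. llrs: per-sample LLRs."""
    L, H = 0.0, hazard
    for llr in llrs:
        # Non-linear discounting of the previous belief toward zero
        prior = (L
                 + np.log((1 - H) / H + np.exp(-L))
                 - np.log((1 - H) / H + np.exp(L)))
        L = prior + llr
    return L
```

With a near-zero hazard rate the rule approaches perfect linear accumulation; with H = 0.5 the prior is fully discarded and only the latest sample counts, which is why intermediate hazard rates produce the saturating, change-sensitive dynamics described above.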
Gaëlle Nicolas; Eric Castet; Adrien Rabier; Emmanuelle Kristensen; Michel Dojat; Anne Guérin-Dugué
Neural correlates of intra-saccadic motion perception Journal Article
In: Journal of Vision, vol. 21, no. 11, pp. 1–24, 2021.
Retinal motion of the visual scene is not consciously perceived during ocular saccades in normal everyday conditions. It has been suggested that extra-retinal signals actively suppress intra-saccadic motion perception to preserve stable perception of the visual world. However, using stimuli optimized to preferentially activate the M-pathway, Castet and Masson (2000) demonstrated that motion can be perceived during a saccade. Based on this psychophysical paradigm, we used electroencephalography and eye-tracking recordings to investigate the neural correlates related to the conscious perception of intra-saccadic motion. We demonstrated the effective involvement during saccades of the cortical areas V1-V2 and MT-V5, which convey motion information along the M-pathway. We also showed that individual motion perception was related to retinal temporal frequency.
J. A. Nij Bijvank; E. M. M. Strijbis; I. M. Nauta; S. D. Kulik; L. J. Balk; C. J. Stam; A. Hillebrand; J. J. G. Geurts; B. M. J. Uitdehaag; L. J. Rijn; A. Petzold; M. M. Schoonheim
In: NeuroImage: Clinical, vol. 32, pp. 102848, 2021.
Background: Impaired eye movements in multiple sclerosis (MS) are common and could represent a non-invasive and accurate measure of (dys)functioning of interconnected areas within the complex brain network. The aim of this study was to test whether altered saccadic eye movements are related to changes in functional connectivity (FC) in patients with MS. Methods: Cross-sectional eye movement (pro-saccades and anti-saccades) and magnetoencephalography (MEG) data from the Amsterdam MS cohort were included from 176 MS patients and 33 healthy controls. FC was calculated between all regions of the Brainnetome atlas in six conventional frequency bands. Cognitive function and disability were evaluated by previously validated measures. The relationships between saccadic parameters and both FC and clinical scores in MS patients were analysed using multivariate linear regression models. Results: In MS, pro- and anti-saccades were abnormal compared to healthy controls. A relationship of saccadic eye movements was found with FC of the oculomotor network, which was stronger for regional than global FC. In general, abnormal eye movements were related to higher delta and theta FC but lower beta FC. The strongest associations were found for pro-saccadic latency and FC of the precuneus (beta band β = -0.23
Hamideh Norouzi; Niloofar Tavakoli; Mohammad Reza Daliri
In: International Journal of Psychophysiology, vol. 166, pp. 61–70, 2021.
Working memory (WM) can be considered a limited-capacity system that holds information temporarily for processing. The aim of the present study was to establish whether eccentricity representation in WM could be decoded from electroencephalography (EEG) alpha-band oscillations in parietal cortex during the delay period of a memory-guided saccade (MGS) task. To this end, we recorded EEG and eye-tracking signals from 17 healthy volunteers in a variant version of the MGS task. We designed this modified MGS task for the first time to investigate the effect of placing stimuli at two different positions, a near (6°) and a far (12°) eccentricity, on saccade error as a behavioral parameter. Another goal of the study was to discern whether varying the stimulus location can alter behavioral and electroencephalographic data during this variant of the MGS task. Our findings demonstrate that saccade error for the near condition is significantly smaller than for the far condition. We observed an increase in alpha power in the parietal lobe in near vs. far conditions. In addition, the results indicate that the increase in alpha (8–12 Hz) power from fixation to memory was negatively correlated with saccade error. The novel approach of using simultaneous EEG/eye-tracking recording in the modified MGS task provided both behavioral and electroencephalographic analyses of oscillatory activity during this new version of the MGS task.
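The alpha-band (8–12 Hz) power measure central to the study above can be illustrated with a minimal pure-Python band-power sketch. This is an illustrative direct DFT, not the authors' analysis pipeline; in practice a Welch or multitaper estimator on real EEG would be used.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Total power in [f_lo, f_hi] Hz via a plain DFT.
    Illustrative and O(n^2); not an optimized spectral estimator."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

# A 10 Hz oscillation sampled at 250 Hz carries its power in the
# alpha (8-12 Hz) band rather than the beta (13-30 Hz) band.
fs = 250
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 12)
beta = band_power(sig, fs, 13, 30)
```

Comparing `alpha` against `beta` for a given delay-period epoch is, in spirit, the kind of band-limited contrast the study computes over parietal electrodes.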
John Orczyk; Charles E. Schroeder; Ilana Y. Abeles; Manuel Gomez-Ramirez; Pamela D. Butler; Yoshinao Kajikawa
Comparison of scalp ERP to faces in macaques and humans Journal Article
In: Frontiers in Systems Neuroscience, vol. 15, pp. 667611, 2021.
Face recognition is an essential activity of social living, common to many primate species. The underlying brain processes have been investigated using various techniques and compared between species. Functional imaging studies have shown face-selective cortical regions and their degree of correspondence across species. However, the temporal dynamics of face processing, particularly processing speed, likely differ between them. Across sensory modalities, activation of primary sensory cortices in macaque monkeys occurs at about 3/5 the latency of corresponding activation in humans, though this human–simian difference may diminish or disappear in higher cortical regions. We recorded scalp event-related potentials (ERPs) to the presentation of faces in macaques and estimated the peak latency of ERP components. Comparison of latencies between macaques (112 ms) and humans (192 ms) suggested that the 3:5 ratio may be preserved in higher cognitive regions of face processing between these species.
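The 3:5 latency ratio can be checked directly from the peak latencies quoted in the abstract above (a trivial worked example, not taken from the paper):

```python
# Reported face-ERP peak latencies from the abstract.
macaque_ms, human_ms = 112, 192
ratio = macaque_ms / human_ms
# 112/192 is about 0.583, close to the 3/5 (= 0.6) cross-species rule of thumb.
```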
Anastasia O. Ovchinnikova; Anatoly N. Vasilyev; Ivan P. Zubarev; Bogdan L. Kozyrskiy; Sergei L. Shishkin
In: Frontiers in Neuroscience, vol. 15, pp. 619591, 2021.
Gaze-based input is an efficient way of hands-free human-computer interaction. However, it suffers from the inability of gaze-based interfaces to discriminate voluntary and spontaneous gaze behaviors, which are overtly similar. Here, we demonstrate that voluntary eye fixations can be discriminated from spontaneous ones using short segments of magnetoencephalography (MEG) data measured immediately after the fixation onset. Recently proposed convolutional neural networks (CNNs), linear finite impulse response filters CNN (LF-CNN) and vector autoregressive CNN (VAR-CNN), were applied for binary classification of the MEG signals related to spontaneous and voluntary eye fixations collected in healthy participants (n = 25) who performed a game-like task by fixating on targets voluntarily for 500 ms or longer. Voluntary fixations were identified as those followed by a fixation in a special confirmatory area. Spontaneous vs. voluntary fixation-related single-trial 700 ms MEG segments were non-randomly classified in the majority of participants, with the group average cross-validated ROC AUC of 0.66 ± 0.07 for LF-CNN and 0.67 ± 0.07 for VAR-CNN (M ± SD). When the time interval, from which the MEG data were taken, was extended beyond the onset of the visual feedback, the group average classification performance increased up to 0.91. Analysis of spatial patterns contributing to classification did not reveal signs of significant eye movement impact on the classification results. We conclude that the classification of MEG signals has a certain potential to support gaze-based interfaces by avoiding false responses to spontaneous eye fixations on a single-trial basis. Current results for intention detection prior to gaze-based interface's feedback, however, are not sufficient for online single-trial eye fixation classification using MEG data alone, and further work is needed to find out if it could be used in practical applications.
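The cross-validated ROC AUC reported above (0.66–0.67 at the group level) is the probability that a randomly chosen voluntary-fixation trial receives a higher classifier score than a randomly chosen spontaneous one. A minimal rank-based sketch of that metric (illustrative only; the study's CNN classifiers are not reproduced here):

```python
def roc_auc(labels, scores):
    """Rank-based (Mann-Whitney) estimate of the ROC AUC:
    the probability that a randomly chosen positive trial is scored
    higher than a randomly chosen negative one. Ties count 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination of voluntary from spontaneous fixations; 1.0 to perfect separation.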
Yali Pan; Steven Frisson; Ole Jensen
Neural evidence for lexical parafoveal processing Journal Article
In: Nature Communications, vol. 12, pp. 5234, 2021.
In spite of the reduced visual acuity, parafoveal information plays an important role in natural reading. However, competing models on reading disagree on whether words are previewed parafoveally at the lexical level. We find neural evidence for lexical parafoveal processing by combining a rapid invisible frequency tagging (RIFT) approach with magnetoencephalography (MEG) and eye-tracking. In a silent reading task, target words are tagged (flickered) subliminally at 60 Hz. The tagging responses measured when fixating on the pre-target word reflect parafoveal processing of the target word. We observe stronger tagging responses during pre-target fixations when followed by low compared with high lexical frequency targets. Moreover, this lexical parafoveal processing is associated with individual reading speed. Our findings suggest that reading unfolds in the fovea and parafovea simultaneously to support fluent reading.
Hame Park; Christoph Kayser
In: Journal of Neuroscience, vol. 41, no. 5, pp. 1068–1079, 2021.
Our senses often receive conflicting multisensory information, which our brain reconciles by adaptive recalibration. A classic example is the ventriloquism aftereffect, which emerges following both cumulative (long-term) and trial-wise exposure to spatially discrepant multisensory stimuli. Despite the importance of such adaptive mechanisms for interacting with environments that change over multiple timescales, it remains debated whether the ventriloquism aftereffects observed following trial-wise and cumulative exposure arise from the same neurophysiological substrate. We address this question by probing electroencephalography recordings from healthy humans (both sexes) for processes predictive of the aftereffect biases following the exposure to spatially offset audiovisual stimuli. Our results support the hypothesis that discrepant multisensory evidence shapes aftereffects on distinct timescales via common neurophysiological processes reflecting sensory inference and memory in parietal-occipital regions, while the cumulative exposure to consistent discrepancies additionally recruits prefrontal processes. During the subsequent unisensory trial, both trial-wise and cumulative exposure bias the encoding of the acoustic information, but do so distinctly. Our results posit a central role of parietal regions in shaping multisensory spatial recalibration, suggest that frontal regions consolidate the behavioral bias for persistent multisensory discrepancies, but also show that the trial-wise and cumulative exposure bias sound position encoding via distinct neurophysiological processes.
Mohsen Parto Dezfouli; Saeideh Davoudi; Robert T. Knight; Mohammad Reza Daliri; Elizabeth L. Johnson
In: Cortex, vol. 138, pp. 113–126, 2021.
How does the human brain integrate spatial and temporal information into unified mnemonic representations? Building on classic theories of feature binding, we first define the oscillatory signatures of integrating 'where' and 'when' information in working memory (WM) and then investigate the role of prefrontal cortex (PFC) in spatiotemporal integration. Fourteen individuals with lateral PFC damage and 20 healthy controls completed a visuospatial WM task while electroencephalography (EEG) was recorded. On each trial, two shapes were presented sequentially in a top/bottom spatial orientation. We defined EEG signatures of spatiotemporal integration by comparing the maintenance of two possible where-when configurations: the first shape presented on top and the reverse. Frontal delta-theta (δθ; 2–7 Hz) activity, frontal-posterior δθ functional connectivity, lateral posterior event-related potentials, and mesial posterior alpha phase-to-gamma amplitude coupling dissociated the two configurations in controls. WM performance and frontal and mesial posterior signatures of spatiotemporal integration were diminished in PFC lesion patients, whereas lateral posterior signatures were intact. These findings reveal both PFC-dependent and independent substrates of spatiotemporal integration and link optimal performance to PFC.
Jairo Perez-Osorio; Abdulaziz Abubshait; Agnieszka Wykowska
In: Journal of Cognitive Neuroscience, vol. 34, no. 1, pp. 108–126, 2021.
Understanding others' nonverbal behavior is essential for social interaction, as it allows us, among other things, to infer mental states. While gaze communication, a well-established nonverbal social behavior, has shown its importance in inferring others' mental states, not much is known about the effects of irrelevant gaze signals on cognitive conflict markers during collaborative settings. Here, participants completed a categorization task where they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot “moved” the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head-cues would induce more errors (Study 1), would be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitude in electrophysiological markers of cognitive conflict (Study 3). Results of the three studies show more oculomotor interference as measured in error rates (Study 1), larger curvature in eye-tracking trajectories (Study 2), and higher amplitudes of the N2 event-related potential (ERP) of the EEG signals as well as higher Event-Related Spectral Perturbation (ERSP) amplitudes (Study 3) for incongruent trials compared to congruent trials. Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
Thomas Pfeffer; Adrian Ponce-Alvarez; Konstantinos Tsetsos; Thomas Meindertsma; Christoffer Julius Gahnström; Ruud Lucas Brink; Guido Nolte; Andreas Karl Engel; Gustavo Deco; Tobias Hinrich Donner
In: Science Advances, vol. 7, no. 29, pp. eabf5620, 2021.
Influential theories postulate distinct roles of catecholamines and acetylcholine in cognition and behavior. However, previous physiological work reported similar effects of these neuromodulators on the response properties (specifically, the gain) of individual cortical neurons. Here, we show a double dissociation between the effects of catecholamines and acetylcholine at the level of large-scale interactions between cortical areas in humans. A pharmacological boost of catecholamine levels increased cortex-wide interactions during a visual task, but not rest. An acetylcholine boost decreased interactions during rest, but not task. Cortical circuit modeling explained this dissociation by differential changes in two circuit properties: The local excitation-inhibition balance (more strongly increased by catecholamines) and intracortical transmission (more strongly reduced by acetylcholine). The inferred catecholaminergic mechanism also predicted noisier decision-making, which we confirmed for both perceptual and value-based choice behavior. Our work highlights specific circuit mechanisms for shaping cortical network interactions and behavioral variability by key neuromodulatory systems.
Ella Podvalny; Leana E. King; Biyu J. He
In: eLife, vol. 10, pp. e68265, 2021.
Arousal levels perpetually rise and fall spontaneously. How markers of arousal—pupil size and frequency content of brain activity—relate to each other and influence behavior in humans is poorly understood. We simultaneously monitored magnetoencephalography and pupil size in healthy volunteers at rest and during a visual perceptual decision-making task. Spontaneously varying pupil size correlates with power of brain activity in most frequency bands across large-scale resting-state cortical networks. Pupil size recorded at prestimulus baseline correlates with subsequent shifts in detection bias (c) and sensitivity (d'). When dissociated from pupil-linked state, prestimulus spectral power of resting state networks still predicts perceptual behavior. Fast spontaneous pupil constriction and dilation correlate with large-scale brain activity as well but not perceptual behavior. Our results illuminate the relation between central and peripheral arousal markers and their respective roles in human perceptual decision-making.
Hamed Rahimi-Nasrabadi; Jianzhong Jin; Reece Mazade; Carmen Pons; Sohrab Najafian; Jose-Manuel Alonso
Image luminance changes contrast sensitivity in visual cortex Journal Article
In: Cell Reports, vol. 34, no. 5, pp. 1–21, 2021.
Accurate measures of contrast sensitivity are important for evaluating visual disease progression and for navigation safety. Previous measures suggested that cortical contrast sensitivity was constant across widely different luminance ranges experienced indoors and outdoors. Against this notion, here, we show that luminance range changes contrast sensitivity in both cat and human cortex, and the changes are different for dark and light stimuli. As luminance range increases, contrast sensitivity increases more within cortical pathways signaling lights than those signaling darks. Conversely, when the luminance range is constant, light-dark differences in contrast sensitivity remain relatively constant even if background luminance changes. We show that a Naka-Rushton function modified to include luminance range and light-dark polarity accurately replicates both the statistics of light-dark features in natural scenes and the cortical responses to multiple combinations of contrast and luminance. We conclude that differences in light-dark contrast increase with luminance range and are largest in bright environments.
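The standard Naka-Rushton contrast-response function that the authors modified can be sketched as follows. The parameter values (`r_max`, `c50`, `n`) are illustrative, and the paper's luminance-range and light-dark-polarity extensions are not reproduced here.

```python
def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Standard Naka-Rushton contrast-response function:
    R(c) = r_max * c^n / (c^n + c50^n).
    c: stimulus contrast; c50: semi-saturation contrast (response
    reaches half of r_max at c = c50); n: response exponent.
    Parameter values are illustrative defaults, not fitted values."""
    return r_max * c ** n / (c ** n + c50 ** n)
```

The function rises monotonically with contrast and saturates toward `r_max`; fitting separate parameters for light and dark stimuli under different luminance ranges is, roughly, the kind of extension the study describes.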
Isabelle A. Rosenthal; Shridhar R. Singh; Katherine L. Hermann; Dimitrios Pantazis; Bevil R. Conway
Color space geometry uncovered with magnetoencephalography Journal Article
In: Current Biology, vol. 31, no. 3, pp. 515–526, 2021.
The geometry that describes the relationship among colors, and the neural mechanisms that support color vision, are unsettled. Here, we use multivariate analyses of measurements of brain activity obtained with magnetoencephalography to reverse-engineer a geometry of the neural representation of color space. The analyses depend upon determining similarity relationships among the spatial patterns of neural responses to different colors and assessing how these relationships change in time. We evaluate the approach by relating the results to universal patterns in color naming. Two prominent patterns of color naming could be accounted for by the decoding results: the greater precision in naming warm colors compared to cool colors evident by an interaction of hue and lightness, and the preeminence among colors of reddish hues. Additional experiments showed that classifiers trained on responses to color words could decode color from data obtained using colored stimuli, but only at relatively long delays after stimulus onset. These results provide evidence that perceptual representations can give rise to semantic representations, but not the reverse. Taken together, the results uncover a dynamic geometry that provides neural correlates for color appearance and generates new hypotheses about the structure of color space.
Giulia C. Salgari; Geoffrey F. Potts; Joseph Schmidt; Chi C. Chan; Christopher C. Spencer; Jeffrey S. Bedwell
In: Clinical Neurophysiology, vol. 132, no. 7, pp. 1526–1536, 2021.
Objectives: Negative psychiatric symptoms are often resistant to treatments, regardless of the disorder in which they appear. One model for a cause of negative symptoms is impairment in higher-order cognition. The current study examined how particular bottom-up and top-down mechanisms of selective attention relate to severity of negative symptoms across a transdiagnostic psychiatric sample. Methods: The sample consisted of 130 participants: 25 schizophrenia-spectrum disorders, 26 bipolar disorders, 18 unipolar depression, and 61 nonpsychiatric controls. The relationships between attentional event-related potentials following rare visual targets (i.e., N1, N2b, P2a, and P3b) and severity of the negative symptom domains of anhedonia, avolition, and blunted affect were evaluated using frequentist and Bayesian analyses. Results: P3b and N2b mean amplitudes were inversely related to the Positive and Negative Syndrome Scale-Negative Symptom Factor severity score across the entire sample. Subsequent regression analyses showed a significant negative transdiagnostic relationship between P3b amplitude and blunted affect severity. Conclusions: Results indicate that negative symptoms, and particularly blunted affect, may have a stronger association with deficits in top-down mechanisms of selective attention. Significance: This suggests that people with greater severity of blunted affect, independent of diagnosis, do not allocate sufficient cognitive resources when engaging in activities requiring selective attention.
Xin He; Weilin Liu; Nan Qin; Lili Lyu; Xue Dong; Min Bao
In: Psychophysiology, vol. 58, no. 12, pp. e13920, 2021.
Selective attention is essential when we face sensory inputs with distractions. In the past decades, Lavie's load theory of selective attention delineates a complete picture of distractor suppression under different attentional control loads. The present study was originally designed to explore how reward modulates the load effect of attentional selection. Unexpectedly, it revealed new findings under extended attentional load that was not involved in previous work. Participants were asked to complete a rewarded attentive visual tracking task while presented with irrelevant auditory oddball stimuli, with their behavioral performance, event-related potentials, and pupillary responses recorded. We found that although the behavioral performance and pupil sizes varied unidirectionally with the attentional load, the processing of distractors as reflected by the mismatch negativity (MMN) increased first and then decreased. In contrast to the prediction of Lavie's theory that attentional control fails to effectively suppress distractor processing under high attentional control load, our finding suggests that extremely high attentional control load may instead require suppression of distractor processing at a stage as early as possible. In addition, P3a, a positive-polarity response sometimes following the MMN, was not affected by the attentional load, but both N1 (a negative-polarity component peaking ~100 ms from sound onset) and P3a were weakened at higher reward, indicating that reward leads to attenuated early processing of distractors and thus suppresses attentional orienting towards them. These findings altogether complement Lavie's load theory of selective attention, presenting a more complex picture of how attentional load and reward affect selective attention.
Peter J. Hills; Martin R. Vasilev; Panarai Ford; Lucy Snell; Emma Whitworth; Tessa Parsons; Rebecca Morisson; Abigail Silveira; Bernhard Angele
In: Neuropsychologia, vol. 161, pp. 107989, 2021.
Since the characteristics and symptoms of both schizophrenia and schizotypy are manifested heterogeneously, it is possible that different endophenotypes and neurophysiological measures (sensory gating and smooth pursuit eye movement errors) represent different clusters of symptoms. Participants (N = 205) underwent a standard conditioned-pairing paradigm to establish their sensory gating ratio, a smooth-pursuit eye-movement task, a latent inhibition task, and completed the Schizotypal Personality Questionnaire. A Multidimensional Scaling analysis revealed that sensory gating was related to positive and disorganised dimensions of schizotypy. Latent inhibition and prepulse inhibition were not related to any dimension of schizotypy. Smooth pursuit eye movement error was unrelated to sensory gating and latent inhibition, but was related to negative dimensions of schizotypy. Our findings suggest that the symptom clusters associated with two main endophenotypes are largely independent. To fully understand symptomology and outcomes of schizotypal traits, the different subtypes of schizotypy (and potentially, schizophrenia) ought to be considered separately rather than together.
Christoph Huber-Huber; Julia Steininger; Markus Grüner; Ulrich Ansorge
In: Psychophysiology, vol. 58, no. 5, pp. e13787, 2021.
Visual attention and saccadic eye movements are linked in a tight, yet flexible fashion. In humans, this link is typically studied with dual-task setups. Participants are instructed to execute a saccade to some target location, while a discrimination target is flashed on a screen before the saccade can be made. Participants are also instructed to report a specific feature of this discrimination target at the trial end. Discrimination performance is usually better if the discrimination target occurred at the same location as the saccade target compared to when it occurred at a different location, which is explained by the mandatory shift of attention to the saccade target location before saccade onset. This pre-saccadic shift of attention presumably enhances the perception of the discrimination target if it occurred at the same, but not if it occurred at a different location. It is, however, known that a dual-task setup can alter the primary process under investigation. Here, we directly compared pre-saccadic attention in single-task versus dual-task setups using concurrent electroencephalography (EEG) and eye-tracking. Our results corroborate the idea of a pre-saccadic shift of attention. They, however, question that this shift leads to the same-position discrimination advantage. The relation of saccade and discrimination target position affected the EEG signal only after saccade onset. Our results, thus, favor an alternative explanation based on the role of saccades for the consolidation of sensory and short-term memory. We conclude that studies with dual-task setups arrived at a valid conclusion despite not measuring exactly what they intended to measure.
Anna Hudson; Amie J. Durston; Sarah D. McCrackin; Roxane J. Itier
In: Brain Topography, vol. 34, no. 6, pp. 813–833, 2021.
Facial expression processing is a critical component of social cognition yet, whether it is influenced by task demands at the neural level remains controversial. Past ERP studies have found mixed results with classic statistical analyses, known to increase both Type I and Type II errors, which Mass Univariate statistics (MUS) control better. However, MUS open-access toolboxes can use different fundamental statistics, which may lead to inconsistent results. Here, we compared the output of two MUS toolboxes, LIMO and FMUT, on the same data recorded during the processing of angry and happy facial expressions investigated under three tasks in a within-subjects design. Both toolboxes revealed main effects of emotion during the N170 timing and main effects of task during later time points typically associated with the LPP component. Neither toolbox yielded an interaction between the two factors at the group level, nor at the individual level in LIMO, confirming that the neural processing of these two face expressions is largely independent from task demands. Behavioural data revealed main effects of task on reaction time and accuracy, but no influence of expression or an interaction between the two. Expression processing and task demands are discussed in the context of the consistencies and discrepancies between the two toolboxes and existing literature.
Silvia L. Isabella; J. Allan Cheyne; Douglas Cheyne
In: Frontiers in Human Neuroscience, vol. 15, pp. 786035, 2021.
Cognitive control of action is associated with conscious effort and is hypothesised to be reflected by increased frontal theta activity. However, the functional role of these increases in theta power, and how they contribute to cognitive control remains unknown. We conducted an MEG study to test the hypothesis that frontal theta oscillations interact with sensorimotor signals in order to produce controlled behaviour, and that the strength of these interactions will vary with the amount of control required. We measured neuromagnetic activity in 16 healthy adults performing a response inhibition (Go/Switch) task, known from previous work to modulate cognitive control requirements using hidden patterns of Go and Switch cues. Learning was confirmed by reduced reaction times (RT) to patterned compared to random Switch cues. Concurrent measures of pupil diameter revealed changes in subjective cognitive effort with stimulus probability, even in the absence of measurable behavioural differences, revealing instances of covert variations in cognitive effort. Significant theta oscillations were found in five frontal brain regions, with theta power in the right middle frontal and right premotor cortices parametrically increasing with cognitive effort. Similar increases in oscillatory power were also observed in motor cortical gamma, suggesting an interaction. Right middle frontal and right precentral theta activity predicted changes in pupil diameter across all experimental conditions, demonstrating a close relationship between frontal theta increases and cognitive control. Although no theta-gamma cross-frequency coupling was found, long-range theta phase coherence among the five significant sources between bilateral middle frontal, right inferior frontal, and bilateral premotor areas was found, thus providing a mechanism for the relay of cognitive control between frontal and motor areas via theta signalling. 
Furthermore, this provides the first evidence for the sensitivity of frontal theta oscillations to implicit motor learning and its effects on cognitive load. More generally, these results suggest a possible mechanism by which this frontal theta network coordinates response preparation, inhibition, and execution.
Efthymia C. Kapnoula; Bob McMurray
In: Brain and Language, vol. 223, pp. 105031, 2021.
Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control; lexical inhibition; and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting a fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing.
Hamid Karimi-Rouzbahani; Alexandra Woolgar; Anina N. Rich
In: eLife, vol. 10, pp. e60563, 2021.
There are many monitoring environments, such as railway control, in which lapses of attention can have tragic consequences. Problematically, sustained monitoring for rare targets is difficult, with more misses and longer reaction times over time. What changes in the brain underpin these ‘vigilance decrements'? We designed a multiple-object monitoring (MOM) paradigm to examine how the neural representation of information varied with target frequency and time performing the task. Behavioural performance decreased over time for the rare target (monitoring) condition, but not for a frequent target (active) condition. This was mirrored in neural decoding using magnetoencephalography: over the course of the experiment, coding of critical information declined more during monitoring than during active conditions. We developed new analyses that can predict behavioural errors from the neural data more than a second before they occurred. This facilitates pre-empting behavioural errors due to lapses in attention and provides new insight into the neural correlates of vigilance decrements.
Julian Q. Kosciessa; Ulman Lindenberger; Douglas D. Garrett
In: Nature Communications, vol. 12, pp. 2430, 2021.
Knowledge about the relevance of environmental features can guide stimulus processing. However, it remains unclear how processing is adjusted when feature relevance is uncertain. We hypothesized that (a) heightened uncertainty would shift cortical networks from a rhythmic, selective processing-oriented state toward an asynchronous (“excited”) state that boosts sensitivity to all stimulus features, and that (b) the thalamus provides a subcortical nexus for such uncertainty-related shifts. Here, we had young adults attend to varying numbers of task-relevant features during EEG and fMRI acquisition to test these hypotheses. Behavioral modeling and electrophysiological signatures revealed that greater uncertainty lowered the rate of evidence accumulation for individual stimulus features, shifted the cortex from a rhythmic to an asynchronous/excited regime, and heightened neuromodulatory arousal. Crucially, this unified constellation of within-person effects was dominantly reflected in the uncertainty-driven upregulation of thalamic activity. We argue that neuromodulatory processes involving the thalamus play a central role in how the brain modulates neural excitability in the face of momentary uncertainty.
James E. Kragel; Stephan Schuele; Stephen VanHaerents; Joshua M. Rosenow; Joel L. Voss
In: Science Advances, vol. 7, no. 25, pp. eabf7144, 2021.
Although the human hippocampus is necessary for long-term memory, controversial findings suggest that it may also support short-term memory in the service of guiding effective behaviors during learning. We tested the counterintuitive theory that the hippocampus contributes to long-term memory through remarkably short-term processing, as reflected in eye movements during scene encoding. While viewing scenes for the first time, short-term retrieval operative within the episode over only hundreds of milliseconds was indicated by a specific eye-movement pattern, which was effective in that it enhanced spatiotemporal memory formation. This viewing pattern was predicted by hippocampal theta oscillations recorded from depth electrodes and by shifts toward top-down influence of hippocampal theta on activity within visual perception and attention networks. The hippocampus thus supports short-term memory processing that coordinates behavior in the service of effective spatiotemporal learning.
Wouter Kruijne; Christian N. L. Olivers; Hedderik van Rijn
In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1230–1252, 2021.
Human time perception is malleable and subject to many biases. For example, it has repeatedly been shown that stimuli that are physically intense or that are unexpected seem to last longer. Two competing hypotheses have been proposed to account for such biases: One states that these temporal illusions are the result of increased levels of arousal that speed up neural clock dynamics, whereas the alternative “magnitude coding” account states that the magnitude of sensory responses causally modulates perceived durations. Common experimental paradigms used to study temporal biases cannot dissociate between these accounts, as arousal and sensory magnitude covary and modulate each other. Here, we present two temporal discrimination experiments where two flashing stimuli demarcated the start and end of a to-be-timed interval. These stimuli could be either in the same or a different location, which led to different sensory responses because of neural repetition suppression. Crucially, changes and repetitions were fully predictable, which allowed us to explore effects of sensory response magnitude without changes in arousal or surprise. Intervals with changing markers were perceived as lasting longer than those with repeating markers. We measured EEG (Experiment 1) and pupil size (Experiment 2) and found that temporal perception was related to changes in ERPs (P2) and pupil constriction, both of which have been related to responses in the sensory cortex. Conversely, correlates of surprise and arousal (P3 amplitude and pupil dilation) were unaffected by stimulus repetitions and changes. These results demonstrate, for the first time, that sensory magnitude affects time perception even under constant levels of arousal.
Louisa Kulke; Lena Brümmer; Arezoo Pooresmaeili; Annekathrin Schacht
In: Psychophysiology, vol. 58, no. 8, pp. e13838, 2021.
In everyday life, faces with emotional expressions quickly attract attention and eye movements. To study the neural mechanisms of such emotion-driven attention by means of event-related brain potentials (ERPs), tasks that employ covert shifts of attention are commonly used, in which participants need to inhibit natural eye movements towards stimuli. It remains, however, unclear how shifts of attention to emotional faces with and without eye movements differ from each other. The current preregistered study aimed to investigate neural differences between covert and overt emotion-driven attention. We combined eye tracking with measurements of ERPs to compare shifts of attention to faces with happy, angry, or neutral expressions when eye movements were either executed (go conditions) or withheld (no-go conditions). Happy and angry faces led to larger EPN amplitudes, shorter latencies of the P1 component, and faster saccades, suggesting that emotional expressions significantly affected shifts of attention. Several ERPs (N170, EPN, LPC) were augmented in amplitude when attention was shifted with an eye movement, indicating an enhanced neural processing of faces if eye movements had to be executed together with a reallocation of attention. However, the modulation of ERPs by facial expressions did not differ between the go and no-go conditions, suggesting that emotional content enhances both covert and overt shifts of attention. In summary, our results indicate that overt and covert attention shifts differ but are comparably affected by emotional content.
Seungji Lee; Doyoung Lee; Hyunjae Gil; Ian Oakley; Yang Seok Cho; Sung-Phil Kim
In: Brain Sciences, vol. 11, no. 2, pp. 1–15, 2021.
Searching for familiar faces in a crowd may involve stimulus-driven attention by emotional significance, together with goal-directed attention due to task-relevant needs. The present study investigated the effect of familiarity on attentional processes by exploring eye fixation-related potentials (EFRPs) and eye gazes when humans searched for, among other distracting faces, either an acquaintance's face or a newly-learned face. Task performance and gaze behavior were indistinguishable for identifying either face. However, from the EFRP analysis, after a P300 component for successful search of target faces, we found greater deflections of right parietal late positive potentials in response to newly-learned faces than acquaintance's faces, indicating more involvement of goal-directed attention in processing newly-learned faces. In addition, we found greater occipital negativity elicited by acquaintance's faces, reflecting emotional responses to significant stimuli. These results may suggest that finding a familiar face in the crowd would involve lower goal-directed attention and elicit more emotional responses.
Cai S. Longman; Heike Elchlepp; Stephen Monsell; Aureliu Lavric
In: Neuropsychologia, vol. 160, pp. 107984, 2021.
Among the issues examined by studies of cognitive control in multitasking is whether processes underlying performance in the different tasks occur serially or in parallel. Here we ask a similar question about processes that pro-actively control task-set. In task-switching experiments, several indices of task-set preparation have been extensively documented, including anticipatory orientation of gaze to the task-relevant location (an unambiguous marker of reorientation of attention), and a positive polarity brain potential over the posterior cortex (whose functional significance is less well understood). We examine whether these markers of preparation occur in parallel or serially, and in what order. On each trial a cue required participants to make a semantic classification of one of three digits presented simultaneously, with the location of each digit consistently associated with one of three classification tasks (e.g., if the task was odd/even, the digit at the top of the display was relevant). The EEG positivity emerged following, and appeared time-locked to, the anticipatory fixation on the task-relevant location, which might suggest serial organisation. However, the fixation-locked positivity was not better defined than the cue-locked positivity; in fact, for the trials with the earliest fixations the positivity was better time-locked to the cue onset. This is more consistent with (re)orientation of spatial attention occurring in parallel with, but slightly before, the reconfiguration of other task-set components indexed by the EEG positivity.
Sara LoTemplio; Jack Silcox; Kara D. Federmeier; Brennan R. Payne
In: Psychophysiology, vol. 58, no. 4, pp. e13758, 2021.
Although the P3b component of the event-related brain potential is one of the most widely studied components, its underlying generators are not currently well understood. Recent theories have suggested that the P3b is triggered by phasic activation of the locus-coeruleus norepinephrine (LC-NE) system, an important control center implicated in facilitating optimal task-relevant behavior. Previous research has reported strong correlations between pupil dilation and LC activity, suggesting that pupil diameter is a useful indicator for ongoing LC-NE activity. Given the strong relationship between LC activity and pupil dilation, if the P3b is driven by phasic LC activity, there should be a robust trial-to-trial relationship with the phasic pupillary dilation response (PDR). However, previous work examining relationships between concurrently recorded pupillary and P3b responses has not supported this. One possibility is that the relationship between the measures might be carried primarily by either inter-individual (i.e., between-participant) or intra-individual (i.e., within-participant) contributions to coupling, and prior work has not systematically delineated these relationships. Doing so in the current study, we do not find evidence for either inter-individual or intra-individual relationships between the PDR and P3b responses. However, baseline pupil dilation did predict the P3b. Interestingly, both the PDR and P3b independently predicted inter-individual and intra-individual variability in decision response time. Implications for the LC-P3b hypothesis are discussed.
Sarah D. McCrackin; Roxane J. Itier
In: Cortex, vol. 143, pp. 205–222, 2021.
Looking at someone's eyes is thought to be important for affective theory of mind (aTOM), our ability to infer their emotional state. However, it is unknown whether an individual's gaze direction influences our aTOM judgements and what the time course of this influence might be. We presented participants with sentences describing individuals in positive, negative or neutral scenarios, followed by direct or averted gaze neutral face pictures of those individuals. Participants made aTOM judgements about each person's mental state, including their affective valence and arousal, and we investigated whether the face gaze direction impacted those judgements. Participants rated that gazers were feeling more positive when they displayed direct gaze as opposed to averted gaze, and that they were feeling more aroused during negative contexts when gaze was averted as opposed to direct. Event-related potentials associated with face perception and affective processing were examined using mass-univariate analyses to track the time-course of this eye-gaze and affective processing interaction at a neural level. Both positive and negative trials were differentiated from neutral trials at many stages of processing. This included the early N200 and EPN components, believed to reflect automatic activation of emotion areas and attentional selection, respectively. This also included the later P300 and LPP components, thought to reflect elaborative cognitive appraisal of emotional content. Critically, sentence valence and gaze direction interacted over these later components, which may reflect the incorporation of eye-gaze in the cognitive evaluation of another's emotional state. The results suggest that gaze perception directly impacts aTOM processes, and that altered eye-gaze processing in clinical populations may contribute to associated aTOM impairments.
Sarah D. McCrackin; Roxane J. Itier
In: NeuroImage, vol. 226, pp. 117605, 2021.
Looking at the eyes informs us about the thoughts and emotions of those around us, and impacts our own emotional state. However, it is unknown how perceiving direct and averted gaze impacts our ability to share the gazer's positive and negative emotions, abilities referred to as positive and negative affective empathy. We presented 44 participants with contextual sentences describing positive, negative and neutral events happening to other people (e.g. “Her newborn was saved/killed/fed yesterday afternoon.”). These were designed to elicit positive, negative, or little to no empathy, and were followed by direct or averted gaze images of the individuals described. Participants rated their affective empathy for the individual and their own emotional valence on each trial. Event-related potentials time-locked to face-onset and associated with empathy and emotional processing were recorded to investigate whether they were modulated by gaze direction. Relative to averted gaze, direct gaze was associated with increased positive valence in the positive and neutral conditions and with increased positive empathy ratings. A similar pattern was found at the neural level, using robust mass-univariate statistics. The N100, thought to reflect an automatic activation of emotion areas, was modulated by gaze in the affective empathy conditions, with opposite effect directions in positive and negative conditions. The P200, an ERP component sensitive to positive stimuli, was modulated by gaze direction only in the positive empathy condition. Positive and negative trials were processed similarly at the early N200 processing stage, but later diverged, with only negative trials modulating the EPN, P300 and LPP components. These results suggest that positive and negative affective empathy are associated with distinct time-courses, and that perceived gaze direction uniquely modulates positive empathy, highlighting the importance of studying empathy with face stimuli.
Sebastian Schindler; Clara Tirloni; Maximilian Bruchmann; Thomas Straube
In: Biological Psychology, vol. 161, pp. 108056, 2021.
High perceptual load is thought to impair even the early stages of visual processing of task-irrelevant visual stimuli. However, recent studies showed no effects of perceptual load on early ERPs in response to task-irrelevant emotional faces. In this preregistered EEG study (N = 40), we investigated the effects of continuous perceptual load on ERPs to fearful and neutral task-irrelevant faces and their phase-scrambled versions. Perceptual load did not modulate face or emotion effects for the P1 or N170. In contrast, greater face-scramble and fearful-neutral differentiation was found during low as compared to high load for the Early Posterior Negativity (EPN). Further, face-independent P1, but face-dependent N170 emotional modulations were observed. Taken together, our findings show that P1 and N170 face and emotional modulations are highly resistant to load manipulations, indicating a high degree of automaticity during this processing stage, whereas the EPN might represent a bottleneck in visual information processing.
Constanze Schmitt; Jakob C. B. Schwenk; Adrian Schütz; Jan Churan; André Kaminiarz; Frank Bremmer
In: Progress in Neurobiology, vol. 205, pp. 102117, 2021.
The visually-based control of self-motion is a challenging task, requiring – if needed – immediate adjustments to keep on track. Accordingly, it would appear advantageous if the processing of self-motion direction (heading) was predictive, thereby accelerating the encoding of unexpected changes, and unimpaired by attentional load. We tested this hypothesis by recording EEG in humans and macaque monkeys with similar experimental protocols. Subjects viewed a random dot pattern simulating self-motion across a ground plane in an oddball EEG paradigm. Standard and deviant trials differed only in their simulated heading direction (forward-left vs. forward-right). Event-related potentials (ERPs) were compared in order to test for the occurrence of a visual mismatch negativity (vMMN), a component that reflects preattentive and likely also predictive processing of sensory stimuli. Analysis of the ERPs revealed signatures of a prediction mismatch for deviant stimuli in both humans and monkeys. In humans, an MMN was observed starting 110 ms after self-motion onset. In monkeys, peak response amplitudes following deviant stimuli were enhanced relative to standards as early as 100 ms after self-motion onset. We consider our results strong evidence for a preattentive processing of visual self-motion information in humans and monkeys, allowing for ultrafast adjustments of their heading direction.
Jack W. Silcox; Brennan R. Payne
In: Cortex, vol. 142, pp. 296–316, 2021.
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words (“The prisoners were planning their escape/party”) or were low-constraint sentences with unexpected sentence-final words (“All day she thought about the party”). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
Rodolfo Solís-Vivanco; Ole Jensen; Mathilde Bonnefond
In: Human Brain Mapping, vol. 42, no. 6, pp. 1699–1713, 2021.
Detection of unexpected, yet relevant events is essential in daily life. fMRI studies have revealed the involvement of the ventral attention network (VAN), including the temporo-parietal junction (TPJ), in such processes. In this MEG study with 34 participants (17 women), we used a bimodal (visual/auditory) attention task to determine the neuronal dynamics associated with suppression of the activity of the VAN during top-down attention and its recruitment when information from the unattended sensory modality is involuntarily integrated. We observed an anticipatory power increase of alpha/beta oscillations (12–20 Hz, previously associated with functional inhibition) in the VAN following a cue indicating the modality to attend. Stronger VAN power increases were associated with better task performance, suggesting that the VAN suppression prevents shifting attention to distractors. Moreover, the TPJ was synchronized with the frontal eye field in that frequency band, indicating that the dorsal attention network (DAN) might participate in such suppression. Furthermore, we found a 12–20 Hz power decrease and enhanced synchronization, in both the VAN and DAN, when information between sensory modalities was congruent, suggesting an involvement of these networks when attention is involuntarily enhanced due to multisensory integration. Our results show that effective multimodal attentional allocation includes the modulation of the VAN and DAN through upper-alpha/beta oscillations. Altogether these results indicate that the suppressing role of alpha/beta oscillations might operate beyond sensory regions.
Jemaine E. Stacey; Mark Crook-Rumsey; Alexander Sumich; Christina J. Howard; Trevor Crawford; Kinneret Livne; Sabrina Lenzoni; Stephen Badham
In: Neuropsychologia, vol. 157, pp. 107887, 2021.
Prior research has focused on EEG differences across age or EEG differences across cognitive tasks/eye tracking. There are few studies linking age differences in EEG to age differences in behavioural performance, which is necessary to establish how neuroactivity corresponds to successful and impaired ageing. Eighty-six healthy participants completed a battery of cognitive tests and eye-tracking measures. Resting state EEG (n = 75, 31 young, 44 older adults) was measured for delta, theta, alpha and beta power as well as for alpha peak frequency. Age deficits in cognition were aligned with the literature, showing working memory and inhibitory deficits along with an older adult advantage in vocabulary. Older adults showed poorer eye movement accuracy and response times, but we did not replicate literature showing a greater age deficit for antisaccades than for prosaccades. We replicated EEG literature showing lower alpha peak frequency in older adults but not literature showing lower alpha power. Older adults also showed higher beta power and less parietal alpha power asymmetry than young adults. Interaction effects showed that better prosaccade performance was related to lower beta power in young adults but not in older adults. Performance at the Trail Making Test part B (measuring task switching and inhibition) was improved for older adults with higher resting state delta power but did not depend on delta power for young adults. It is argued that individuals with higher slow-wave resting EEG may be more resilient to age deficits in tasks that utilise cross-cortical processing.
Benjamin J. Stauch; Alina Peter; Heike Schuler; Pascal Fries
In: eLife, vol. 10, pp. e68240, 2021.
Under natural conditions, the visual system often sees a given input repeatedly. This provides an opportunity to optimize processing of the repeated stimuli. Stimulus repetition has been shown to strongly modulate neuronal gamma-band synchronization, yet crucial questions remained open. Here we used magnetoencephalography in 30 human subjects and found that gamma decreases across ≈10 repetitions and then increases across further repetitions, revealing plastic changes of the activated neuronal circuits. Crucially, increases induced by one stimulus did not affect responses to other stimuli, demonstrating stimulus specificity. Changes partially persisted when the inducing stimulus was repeated after 25 minutes of intervening stimuli. They were strongest in early visual cortex and increased interareal feedforward influences. Our results suggest that early visual cortex gamma synchronization enables adaptive neuronal processing of recurring stimuli. These and previously reported changes might be due to an interaction of oscillatory dynamics with established synaptic plasticity mechanisms.
David W. Sutterer; Andrew J. Coia; Vincent Sun; Steven K. Shevell; Edward Awh
In: Psychophysiology, vol. 58, no. 4, pp. e13779, 2021.
A long-standing question in the field of vision research is whether scalp-recorded EEG activity contains sufficient information to identify stimulus chromaticity. Recent multivariate work suggests that it is possible to decode which chromaticity an observer is viewing from the multielectrode pattern of EEG activity. There is debate, however, about whether the claimed effects of stimulus chromaticity on visual evoked potentials (VEPs) are instead caused by unequal stimulus luminances, which are achromatic differences. Here, we tested whether stimulus chromaticity could be decoded when potential confounds with luminance were minimized by (1) equating chromatic stimuli in luminance using heterochromatic flicker photometry for each observer and (2) independently varying the chromaticity and luminance of target stimuli, enabling us to test whether the pattern for a given chromaticity generalized across wide variations in luminance. We also tested whether luminance variations can be decoded from the topography of voltage across the scalp. In Experiment 1, we presented two chromaticities (appearing red and green) at three luminance levels during separate trials. In Experiment 2, we presented four chromaticities (appearing red, orange, yellow, and green) at two luminance levels. Using a pattern classifier and the multielectrode pattern of EEG activity, we were able to accurately decode the chromaticity and luminance level of each stimulus. Furthermore, we were able to decode stimulus chromaticity when we trained the classifier on chromaticities presented at one luminance level and tested at a different luminance level. Thus, EEG topography contains robust information regarding stimulus chromaticity, despite large variations in stimulus luminance.
David W. Sutterer; Sean M. Polyn; Geoffrey F. Woodman
In: Journal of Neurophysiology, vol. 125, no. 3, pp. 957–971, 2021.
Covert spatial attention is thought to facilitate the maintenance of locations in working memory, and EEG alpha-band activity (8–12 Hz) is proposed to track the focus of covert attention. Recent work has shown that multivariate patterns of alpha-band activity track the polar angle of remembered locations relative to fixation. However, a defining feature of covert spatial attention is that it facilitates processing in a specific region of the visual field, and prior work has not determined whether patterns of alpha-band activity track the two-dimensional (2-D) coordinates of remembered stimuli within a visual hemifield or are instead maximally sensitive to the polar angle of remembered locations around fixation. Here, we used a lateralized spatial estimation task, in which observers remembered the location of one or two target dots presented to one side of fixation, to test this question. By applying a linear discriminant classifier to the topography of alpha-band activity, we found that we were able to decode the location of remembered stimuli. Critically, model comparison revealed that the pattern of classifier choices observed across remembered positions was best explained by a model assuming that alpha-band activity tracks the 2-D coordinates of remembered locations rather than a model assuming that alpha-band activity tracks the polar angle of remembered locations relative to fixation. These results support the hypothesis that this alpha-band activity is involved in the spotlight of attention, and arises from mid- to lower-level visual areas involved in maintaining spatial locations in working memory. NEW & NOTEWORTHY A substantial body of work has shown that patterns of EEG alpha-band activity track the angular coordinates of attended and remembered stimuli around fixation, but whether these patterns track the two-dimensional coordinates of stimuli presented within a visual hemifield remains an open question. Here, we demonstrate that alpha-band activity tracks the two-dimensional coordinates of remembered stimuli within a hemifield, showing that alpha-band activity reflects a spotlight of attention focused on locations maintained in working memory.
Yu Takagi; Laurence Tudor Hunt; Mark W. Woolrich; Timothy E. J. Behrens; Miriam C. Klein-Flügge
In: eLife, vol. 10, pp. 1–27, 2021.
Choices rely on a transformation of sensory inputs into motor responses. Using invasive single neuron recordings, the evolution of a choice process has been tracked by projecting population neural responses into state spaces. Here, we develop an approach that allows us to recover similar trajectories on a millisecond timescale in non-invasive human recordings. We selectively suppress activity related to three task-axes, relevant and irrelevant sensory inputs and response direction, in magnetoencephalography data acquired during context-dependent choices. Recordings from premotor cortex show a progression from processing sensory input to processing the response. In contrast to previous macaque recordings, information related to choice-irrelevant features is represented more weakly than choice-relevant sensory information. To test whether this mechanistic difference between species is caused by extensive over-training common in non-human primate studies, we trained humans on >20,000 trials of the task. Choice-irrelevant features were still weaker than relevant features in premotor cortex after over-training.
Travis N. Talcott; Nicholas Gaspelin
Eye movements are not mandatorily preceded by the N2pc component
In: Psychophysiology, vol. 58, no. 6, pp. e13821, 2021.
Researchers typically distinguish between two mechanisms of attentional selection in vision: overt and covert attention. A commonplace assumption is that overt eye movements are automatically preceded by shifts of covert attention during visual search. Although the N2pc component is a putative index of covert attentional orienting, little is currently known about its relationship with overt eye movements. This is because most previous studies of the N2pc component prohibit overt eye movements. The current study assessed this relationship by concurrently measuring covert attention (via the N2pc) and overt eye movements (via eye tracking). Participants searched displays for a lateralized target stimulus and were allowed to generate overt eye movements during the search. We then assessed whether overt eye movements were preceded by the N2pc component. The results indicated that saccades were preceded by an N2pc component, but only when participants were required to carefully inspect the target stimulus before initiating the eye movement. When participants were allowed to make naturalistic eye movements in service of visual search, there was no evidence of an N2pc component before eye movements. These findings suggest that the N2pc component does not always precede overt eye movements during visual search. Implications for understanding the relationship between covert and overt attention are discussed.
Wieske van Zoest; Christoph Huber-Huber; Matthew D. Weaver; Clayton Hickey
In: Journal of Neuroscience, vol. 41, no. 33, pp. 7120–7135, 2021.
Our visual environment is complicated, and our cognitive capacity is limited. As a result, we must strategically ignore some stimuli to prioritize others. Common sense suggests that foreknowledge of distractor characteristics, like location or color, might help us ignore these objects. But empirical studies have provided mixed evidence, often showing that knowing about a distractor before it appears counterintuitively leads to its attentional selection. What has looked like strategic distractor suppression in the past is now commonly explained as a product of prior experience and implicit statistical learning, and the long-standing notion that distractor suppression is reflected in alpha-band oscillatory brain activity has been challenged by results appearing to link alpha to target resolution. Can we strategically, proactively suppress distractors? And, if so, does this involve alpha? Here, we use the concurrent recording of human EEG and eye movements in optimized experimental designs to identify behavior and brain activity associated with proactive distractor suppression. Results from three experiments show that knowing about distractors before they appear causes a reduction in electrophysiological indices of covert attentional selection of these objects and a reduction in the overt deployment of the eyes to the location of the objects. This control is established before the distractor appears and is predicted by the power of cue-elicited alpha activity over the visual cortex. Foreknowledge of distractor characteristics therefore leads to improved selective control, and alpha oscillations in visual cortex reflect the implementation of this strategic, proactive mechanism.
Aurélien Weiss; Valérian Chambon; Junseok K. Lee; Jan Drugowitsch; Valentin Wyart
In: Nature Communications, vol. 12, pp. 2228, 2021.
Making accurate decisions in uncertain environments requires identifying the generative cause of sensory cues, but also the expected outcomes of possible actions. Although both cognitive processes can be formalized as Bayesian inference, they are commonly studied using different experimental frameworks, making their formal comparison difficult. Here, by framing a reversal learning task either as cue-based or outcome-based inference, we found that humans perceive the same volatile environment as more stable when inferring its hidden state by interaction with uncertain outcomes than by observation of equally uncertain cues. Multivariate patterns of magnetoencephalographic (MEG) activity reflected this behavioral difference in the neural interaction between inferred beliefs and incoming evidence, an effect originating from associative regions in the temporal lobe. Together, these findings indicate that the degree of control over the sampling of volatile environments shapes human learning and decision-making under uncertainty.
Bo Yao; Jason R. Taylor; Briony Banks; Sonja A. Kotz
In: NeuroImage, vol. 239, pp. 118313, 2021.
Growing evidence shows that theta-band (4–7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: “This dress is lovely!”) elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250–500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
Anne Buot; Damiano Azzalini; Maximilien Chaumon; Catherine Tallon-Baudry
Does stroke volume influence heartbeat evoked responses? Journal Article
In: Biological Psychology, vol. 165, pp. 108165, 2021.
We know surprisingly little about how heartbeat-evoked responses (HERs) vary with cardiac parameters. Here, we measured both stroke volume, or volume of blood ejected at each heartbeat, with impedance cardiography, and HER amplitude with magneto-encephalography, in 21 male and female participants at rest with eyes open. We observed that HER co-fluctuates with stroke volume on a beat-to-beat basis, but only when no correction for cardiac artifact was performed. This highlights the importance of an ICA correction tailored to the cardiac artifact. We also observed that easy-to-measure cardiac parameters (interbeat intervals, ECG amplitude) are sensitive to stroke volume fluctuations and can be used as proxies when stroke volume measurements are not available. Finally, interindividual differences in stroke volume were reflected in MEG data, but whether this effect is locked to heartbeats is unclear. Altogether, our results question assumptions on the link between stroke volume and HERs.
Christoforos Christoforou; Argyro Fella; Paavo H. T. Leppänen; George K. Georgiou; Timothy C. Papadopoulos
In: Clinical Neurophysiology, vol. 132, no. 11, pp. 2798–2807, 2021.
Objective: We combined electroencephalography (EEG) and eye-tracking recordings to examine the underlying factors elicited during the serial Rapid-Automatized Naming (RAN) task that may differentiate between children with dyslexia (DYS) and chronological age controls (CAC). Methods: Thirty children with DYS and 30 CAC (mean age = 9.79 years; age range 7.6–12.1 years) performed a set of serial RAN tasks. We extracted fixation-related potentials (FRPs) under phonologically similar (rime-confound) or visually similar (resembling lowercase letters) and dissimilar (non-confounding and discrete uppercase letters, respectively) control tasks. Results: Results revealed significant differences in FRP amplitudes between DYS and CAC groups under the phonologically similar and phonologically non-confounding conditions. No differences were observed in the case of the visual conditions. Moreover, regression analysis showed that the average amplitude of the extracted components significantly predicted RAN performance. Conclusion: FRPs capture neural components during the serial RAN task informative of differences between DYS and CAC and establish a relationship between neurocognitive processes during serial RAN and dyslexia. Significance: We suggest our approach as a methodological model for the concurrent analysis of neurophysiological and eye-gaze data to decipher the role of RAN in reading.
Edan Daniel; Ilan Dinstein
In: Journal of Neurophysiology, vol. 125, no. 4, pp. 1111–1120, 2021.
Remarkable trial-by-trial variability is apparent in cortical responses to repeating stimulus presentations. This neural variability across trials is relatively high before stimulus presentation and then reduced (i.e., quenched) ∼0.2 s after stimulus presentation. Individual subjects exhibit different magnitudes of variability quenching, and previous work from our lab has revealed that individuals with larger variability quenching exhibit lower (i.e., better) perceptual thresholds in a contrast discrimination task. Here, we examined whether similar findings were also apparent in a motion detection task, which is processed by distinct neural populations in the visual system. We recorded EEG data from 35 adult subjects as they detected the direction of coherent motion in random dot kinematograms. The results demonstrated that individual magnitudes of variability quenching were significantly correlated with coherent motion thresholds, particularly when presenting stimuli with low dot densities, where coherent motion was more difficult to detect. These findings provide consistent support for the hypothesis that larger magnitudes of neural variability quenching are associated with better perceptual abilities in multiple visual domain tasks. NEW & NOTEWORTHY The current study demonstrates that better visual perception abilities in a motion discrimination task are associated with larger quenching of neural variability. In line with previous studies and signal detection theory principles, these findings support the hypothesis that cortical sensory neurons increase reproducibility to enhance detection and discrimination of sensory stimuli.
Jonathan Daume; Peng Wang; Alexander Maye; Dan Zhang; Andreas K. Engel
In: NeuroImage, vol. 224, pp. 117376, 2021.
The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context.
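The inter-trial phase consistency (ITPC) measure used in this study can be computed as the length of the mean resultant vector of per-trial phase angles. A minimal sketch with synthetic phases (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase consistency: magnitude of the mean of unit
    phase vectors across trials. 1 = perfectly phase-locked trials;
    values near 0 = phases uniformly scattered across trials."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# Toy data: phase angle (radians) of one frequency band on each trial.
aligned = np.full(100, 0.5)                                  # phase-locked
scattered = np.linspace(0, 2 * np.pi, 100, endpoint=False)   # uniform

print(itpc(aligned))    # ≈ 1.0
print(itpc(scattered))  # ≈ 0.0
```

Because ITPC is amplitude-normalized, it can rise without any power increase, which is how the study dissociates phase adjustment from stimulus-driven evoked activity.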
Saeideh Davoudi; Mohsen Parto Dezfouli; Robert T. Knight; Mohammad Reza Daliri; Elizabeth L. Johnson
In: Journal of Cognitive Neuroscience, vol. 33, no. 9, pp. 1798–1810, 2021.
How does the human brain prioritize different visual representations in working memory (WM)? Here, we define the oscillatory mechanisms supporting selection of “where” and “when” features from visual WM storage and investigate the role of pFC in feature selection. Fourteen individuals with lateral pFC damage and 20 healthy controls performed a visuospatial WM task while EEG was recorded. On each trial, two shapes were presented sequentially in a top/bottom spatial orientation. A retro-cue presented mid-delay prompted which of the two shapes had been in either the top/bottom spatial position or first/second temporal position. We found that cross-frequency coupling between parieto-occipital alpha (α; 8–12 Hz) oscillations and topographically distributed gamma (γ; 30–50 Hz) activity tracked selection of the distinct cued feature in controls. This signature of feature selection was disrupted in patients with pFC lesions, despite intact α–γ coupling independent of feature selection. These findings reveal a pFC-dependent parieto-occipital α–γ mechanism for the rapid selection of visual WM representations.
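Alpha–gamma cross-frequency coupling of the kind reported here is often quantified with a mean-vector-length measure: gamma amplitude weighting unit vectors at the concurrent alpha phase. A toy sketch with synthetic data (the function name, sample counts, and coupling shape are our illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def pac_mvl(phase, amp):
    """Mean-vector-length phase-amplitude coupling: high-frequency
    amplitude used to weight unit vectors at the concurrent
    low-frequency phase. Large values mean gamma power clusters at a
    preferred alpha phase; ~0 means no preferred phase."""
    return np.abs(np.mean(amp * np.exp(1j * phase)))

phase = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)  # alpha phase
coupled = 1 + np.cos(phase)       # gamma amplitude tied to alpha phase
uncoupled = np.ones_like(phase)   # gamma amplitude flat across phases

print(pac_mvl(phase, coupled))    # ≈ 0.5
print(pac_mvl(phase, uncoupled))  # ≈ 0.0
```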
Jan Willem De Gee; Camile M. C. Correa; Matthew Weaver; Tobias H. Donner; Simon Van Gaal
In: Cerebral Cortex, vol. 31, no. 7, pp. 3565–3578, 2021.
Central to human and animal cognition is the ability to learn from feedback in order to optimize future rewards. Such a learning signal might be encoded and broadcasted by the brain's arousal systems, including the noradrenergic locus coeruleus. Pupil responses and the positive slow wave component of event-related potentials reflect rapid changes in the arousal level of the brain. Here, we ask whether and how these variables may reflect surprise: the mismatch between one's expectation about being correct and the outcome of a decision, when expectations fluctuate due to internal factors (e.g., engagement). We show that during an elementary decision task in the face of uncertainty both physiological markers of phasic arousal reflect surprise. We further show that pupil responses and the slow-wave event-related potential are unrelated to each other and that prediction error computations depend on feedback awareness. These results further advance our understanding of the role of central arousal systems in decision-making under uncertainty.
Megan T. Debettencourt; Stephanie D. Williams; Edward K. Vogel; Edward Awh
In: Journal of Cognitive Neuroscience, vol. 33, no. 10, pp. 2132–2148, 2021.
Our attention is critically important for what we remember. Prior measures of the relationship between attention and memory, however, have largely treated “attention” as a monolith. Here, across three experiments, we provide evidence for two dissociable aspects of attention that influence encoding into long-term memory. Using spatial cues together with a sensitive continuous report procedure, we find that long-term memory response error is affected by both trial-by-trial fluctuations of sustained attention and prioritization via covert spatial attention. Furthermore, using multivariate analyses of EEG, we track both sustained attention and spatial attention before stimulus onset. Intriguingly, even during moments of low sustained attention, there is no decline in the representation of the spatially attended location, showing that these two aspects of attention have robust but independent effects on long-term memory encoding. Finally, sustained and spatial attention predicted distinct variance in long-term memory performance across individuals. That is, the relationship between attention and long-term memory suggests a composite model, wherein distinct attentional subcomponents influence encoding into long-term memory. These results point toward a taxonomy of the distinct attentional processes that constrain our memories.
Federica Degno; Otto Loberg; Simon P. Liversedge
In: Collabra: Psychology, vol. 7, no. 1, pp. 1–28, 2021.
A growing number of studies are using co-registration of eye movement (EM) and fixation-related potential (FRP) measures to investigate reading. However, the number of co-registration experiments remains small when compared to the number of studies in the literature conducted with EMs and event-related potentials (ERPs) alone. One reason for this is the complexity of the experimental design and data analyses. The present paper is designed to support researchers who might have expertise in conducting reading experiments with EM or ERP techniques and are wishing to take their first steps towards co-registration research. The objective of this paper is threefold. First, to provide an overview of the issues that such researchers would face. Second, to provide a critical overview of the methodological approaches available to date to deal with these issues. Third, to offer an example pipeline and a full set of scripts for data preprocessing that may be adopted and adapted for one's own needs. The data preprocessing steps are based on EM data parsing via Data Viewer (SR Research), and the provided scripts are written in Matlab and R. Ultimately, with this paper we hope to encourage other researchers to run co-registration experiments to study reading and human cognition more generally.
Gisella K. Diaz; Edward K. Vogel; Edward Awh
In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1354–1364, 2021.
Multiple neural signals have been found to track the number of items stored in working memory (WM). These signals include oscillatory activity in the alpha band and slow-wave components in human EEG, both of which vary with storage loads and predict individual differences in WM capacity. However, recent evidence suggests that these two signals play distinct roles in spatial attention and item-based storage in WM. Here, we examine the hypothesis that sustained negative voltage deflections over parieto-occipital electrodes reflect the number of individuated items in WM, whereas oscillatory activity in the alpha frequency band (8–12 Hz) within the same electrodes tracks the attended positions in the visual display. We measured EEG activity while participants stored the orientation of visual elements that were either grouped by collinearity or not. This grouping manipulation altered the number of individuated items perceived while holding constant the number of locations occupied by visual stimuli. The negative slow wave tracked the number of items stored and was reduced in amplitude in the grouped condition. By contrast, oscillatory activity in the alpha frequency band tracked the number of positions occupied by the memoranda and was unaffected by perceptual grouping. Perceptual grouping, then, reduced the number of individuated representations stored in WM as reflected by the negative slow wave, whereas the location of each element was actively maintained as indicated by alpha power. These findings contribute to the emerging idea that distinct classes of EEG signals work in concert to successfully maintain online representations in WM.
Marcos Domic-Siede; Martín Irani; Joaquín Valdés; Marcela Perrone-Bertolotti; Tomás Ossandón
In: NeuroImage, vol. 226, pp. 117557, 2021.
Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FMθ) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-Cingulate Cortex (MCC) and the left Anterior Cingulate Cortex (ACC) were correlated with an increase in the time needed for elaborating plans. On the other hand, left Frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans.
Linda Drijvers; Ole Jensen; Eelke Spaak
In: Human Brain Mapping, vol. 42, no. 4, pp. 1138–1152, 2021.
During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
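The logic of the intermodulation analysis — that only a nonlinear (multiplicative) interaction of the two tagged signals produces power at the difference frequency — can be sketched with synthetic signals. The 61/68 Hz tags come from the abstract; the sampling rate, signal duration, and modulation depths below are our illustrative assumptions:

```python
import numpy as np

fs = 1440                     # Hz; chosen here to mirror the projector rate
t = np.arange(0, 2, 1 / fs)   # 2 s of signal -> 0.5 Hz frequency resolution
f_aud, f_vis = 61.0, 68.0     # tagging frequencies from the study

aud = np.sin(2 * np.pi * f_aud * t)
vis = np.sin(2 * np.pi * f_vis * t)

# A linear mixture contains power only at the two tagged frequencies;
# a multiplicative interaction adds intermodulation components at
# f_vis - f_aud = 7 Hz and f_vis + f_aud = 129 Hz.
linear = aud + vis
nonlinear = (1 + 0.5 * aud) * (1 + 0.5 * vis)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_7hz = np.argmin(np.abs(freqs - (f_vis - f_aud)))  # the 7 Hz bin

amp_linear = np.abs(np.fft.rfft(linear))[bin_7hz]
amp_nonlinear = np.abs(np.fft.rfft(nonlinear))[bin_7hz]
print(amp_linear, amp_nonlinear)  # near zero vs. clearly above zero
```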
Stefan Dürschmid; Andre Maric; Marcel S. Kehl; Robert T Knight; Hermann Hinrichs; Hans-Jochen Heinze
In: Journal of Neuroscience, vol. 41, pp. 1727–1737, 2021.
Impulsive decisions arise from preferring smaller but sooner rewards compared to larger but later rewards. How neural activity and attention to choice alternatives contribute to reward decisions during temporal discounting is not clear. Here we probed (i) attention to and (ii) neural representation of delay and reward information in humans (both sexes) engaged in choices. We studied behavioral and frequency-specific dynamics supporting impulsive decisions on a fine-grained temporal scale using eye tracking and magnetoencephalographic (MEG) recordings. In one condition, participants decided for themselves; in a second, prosocial condition, they pretended to decide for their best friend, which required perspective taking. Hence, the conditions varied in whether the chosen value accrued to oneself or to another person. Stronger impulsivity was reliably found across three independent groups for prosocial decisions. Eye tracking revealed a systematic shift of attention from the delay to the reward information, and differences in eye tracking between conditions predicted differences in discounting. High frequency activity (HFA: 175–250 Hz) distributed over right fronto-temporal sensors correlated with delay and reward information in consecutive temporal intervals for high-value decisions for oneself but not the friend. Collectively, the results imply that the HFA recorded over fronto-temporal MEG sensors plays a critical role in choice option integration.
Amie J. Durston; Roxane J. Itier
In: Brain Research, vol. 1765, pp. 147505, 2021.
Most ERP studies on facial expressions of emotion have yielded inconsistent results regarding the time course of emotion effects and their possible modulation by task demands. Most studies have used classical statistical methods with a high likelihood of type I and type II errors, a risk that Mass Univariate statistics can limit. FMUT and LIMO are currently the only two available toolboxes for Mass Univariate analysis of ERP data and use different fundamental statistics. Yet, no direct comparison of their output has been performed on the same dataset. Given the current push to transition to robust statistics to increase results replicability, here we compared the output of these toolboxes on data previously analyzed using classic approaches (Itier & Neath-Tavares, 2017). The early (0–352 ms) processing of fearful, happy, and neutral faces was investigated under three tasks in a within-subject design that also controlled gaze fixation location. Both toolboxes revealed main effects of emotion and task but neither yielded an interaction between the two, confirming the early processing of fear and happy expressions is largely independent of task demands. Both toolboxes found virtually no difference between neutral and happy expressions, while fearful (compared to neutral and happy) expressions modulated the N170 and EPN but elicited maximum effects after the N170 peak, around 190 ms. Similarities and differences in the spatial and temporal extent of these effects are discussed in comparison to the published classical analysis and the rest of the ERP literature.
In: Psychophysiology, vol. 58, no. 1, pp. e13683, 2021.
The change detection task is a widely used paradigm to examine visual working memory processes. Participants memorize a set of items and then, try to detect changes in the set after a retention period. The negative slow wave (NSW) and contralateral delay activity (CDA) are event-related potentials in the EEG signal that are commonly used in change detection tasks to track working memory load, as both increase with the number of items maintained in working memory (set size). While the CDA was argued to more purely reflect the memory-specific neural activity than the NSW, it also requires a lateralized design and attention shifts prior to memoranda onset, imposing more restrictions on the task than the NSW. The present study proposes a novel change detection task in which both CDA and NSW can be measured at the same time. Memory items were presented bilaterally, but their distribution in the left and right hemifield varied, inducing a target imbalance or “net load.” NSW increased with set size, whereas CDA increased with net load. In addition, a multivariate linear classifier was able to decode the set size and net load from the EEG signal. CDA, NSW, and decoding accuracy predicted an individual's working memory capacity. In line with the notion of a bilateral advantage in working memory, accuracy, and CDA data suggest that participants tended to encode items relatively balanced. In sum, this novel change detection task offers a basis to make use of converging neural measures of working memory in a comprehensive paradigm.
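The distinction between the two load measures in this design — overall set size (tracked by the NSW) and hemifield imbalance, or "net load" (tracked by the CDA) — reduces to simple arithmetic on the display. A sketch; the function names and example displays are ours, not the study's materials:

```python
def set_size(n_left, n_right):
    """Total number of memory items across both hemifields,
    the load measure the NSW is described as tracking."""
    return n_left + n_right

def net_load(n_left, n_right):
    """Hemifield imbalance (lateralized load), the measure the
    CDA is described as tracking."""
    return abs(n_left - n_right)

# Two displays with the same set size but different net load:
print(set_size(2, 2), net_load(2, 2))  # 4 0  (balanced display)
print(set_size(4, 0), net_load(4, 0))  # 4 4  (fully lateralized)
```

Varying the two quantities independently is what lets the design measure NSW and CDA in the same task.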
Tobias Feldmann-Wüstefeld; Marina Weinberger; Edward Awh
Spatially guided distractor suppression during visual search Journal Article
In: Journal of Neuroscience, vol. 41, no. 14, pp. 3180–3191, 2021.
Past work has demonstrated that active suppression of salient distractors is a critical part of visual selection. Evidence for goal-driven suppression includes below-baseline visual encoding at the position of salient distractors (Gaspelin and Luck, 2018) and neural signals such as the distractor positivity (Pd) that track how many distractors are presented in a given hemifield (Feldmann-Wüstefeld and Vogel, 2019). One basic question regarding distractor suppression is whether it is inherently spatial or nonspatial in character. Indeed, past work has shown that distractors evoke both spatial (Theeuwes, 1992) and nonspatial forms of interference (Folk and Remington, 1998), motivating a direct examination of whether space is integral to goal-driven distractor suppression. Here, we use behavioral and EEG data from adult humans (male and female) to provide clear evidence for a spatial gradient of suppression surrounding salient singleton distractors. Replicating past work, both reaction time and neural indices of target selection improved monotonically as the distance between target and distractor increased. Importantly, these target selection effects were paralleled by a monotonic decline in the amplitude of the Pd, an electrophysiological index of distractor suppression. Moreover, multivariate analyses revealed spatially selective activity in the alpha band that tracked the position of the target and, critically, revealed suppressed activity at spatial channels centered on distractor positions. Thus, goal-driven selection of relevant over irrelevant information benefits from a spatial gradient of suppression surrounding salient distractors.
Joshua J. Foster; William Thyer; Janna W. Wennberg; Edward Awh
In: Journal of Neuroscience, vol. 41, no. 8, pp. 1802–1815, 2021.
Covert spatial attention has a variety of effects on the responses of individual neurons. However, relatively little is known about the net effect of these changes on sensory population codes, even though perception ultimately depends on population activity. Here, we measured the EEG in human observers (male and female), and isolated stimulus-evoked activity that was phase-locked to the onset of attended and ignored visual stimuli. Using an encoding model, we reconstructed spatially selective population tuning functions from the pattern of stimulus-evoked activity across the scalp. Our EEG-based approach allowed us to measure very early visually evoked responses occurring ∼100 ms after stimulus onset. In Experiment 1, we found that covert attention increased the amplitude of spatially tuned population responses at this early stage of sensory processing. In Experiment 2, we parametrically varied stimulus contrast to test how this effect scaled with stimulus contrast. We found that the effect of attention on the amplitude of spatially tuned responses increased with stimulus contrast, and was well described by an increase in response gain (i.e., a multiplicative scaling of the population response). Together, our results show that attention increases the gain of spatial population codes during the first wave of visual processing.
Wendel M. Friedl; Andreas Keil
In: Journal of Neuroscience, vol. 41, no. 26, pp. 5723–5733, 2021.
Processing capabilities for many low-level visual features are experientially malleable, aiding sighted organisms in adapting to dynamic environments. Explicit instructions to attend a specific visual field location influence retinotopic visuocortical activity, amplifying responses to stimuli appearing at cued spatial positions. It remains undetermined both how such prioritization affects surrounding nonprioritized locations, and if a given retinotopic spatial position can attain enhanced cortical representation through experience rather than instruction. The current report examined visuocortical response changes as human observers (N = 51, 19 male) learned, through differential classical conditioning, to associate specific screen locations with aversive outcomes. Using dense-array EEG and pupillometry, we tested the preregistered hypotheses of either sharpening or generalization around an aversively associated location following a single conditioning session. Competing hypotheses tested whether mean response changes would take the form of a Gaussian (generalization) or difference-of-Gaussian (sharpening) distribution over spatial positions, peaking at the viewing location paired with a noxious noise. Occipital 15 Hz steady-state visual evoked potential responses were selectively heightened when viewing aversively paired locations and displayed a nonlinear, difference-of-Gaussian profile across neighboring locations, consistent with suppressive surround modulation of nonprioritized positions. Measures of alpha-band (8–12 Hz) activity were differentially altered in anterior versus posterior locations, while pupil diameter exhibited selectively heightened responses to noise-paired locations but did not evince differences across the nonpaired locations.
These results indicate that visuocortical spatial representations are sharpened in response to location-specific aversive conditioning, while top-down influences indexed by alpha-power reduction exhibit posterior generalization and anterior sharpening.
R. Frömer; H. Lin; C. K. Dean Wolf; M. Inzlicht; A. Shenhav
In: Nature Communications, vol. 12, pp. 1030, 2021.
The amount of mental effort we invest in a task is influenced by the reward we can expect if we perform that task well. However, some of the rewards that have the greatest potential for driving these efforts are partly determined by factors beyond one's control. In such cases, effort has more limited efficacy for obtaining rewards. According to the Expected Value of Control theory, people integrate information about the expected reward and efficacy of task performance to determine the expected value of control, and then adjust their control allocation (i.e., mental effort) accordingly. Here we test this theory's key behavioral and neural predictions. We show that participants invest more cognitive control when this control is more rewarding and more efficacious, and that these incentive components separately modulate EEG signatures of incentive evaluation and proactive control allocation. Our findings support the prediction that people combine expectations of reward and efficacy to determine how much effort to invest.
Jordan Garrett; Tom Bullock; Barry Giesbrecht
In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1271–1286, 2021.
Recent studies have reported enhanced visual responses during acute bouts of physical exercise, suggesting that sensory systems may become more sensitive during active exploration of the environment. This raises the possibility that exercise may also modulate brain activity associated with other cognitive functions, like visual working memory, that rely on patterns of activity that persist beyond the initial sensory evoked response. Here, we investigated whether the neural coding of an object location held in memory is modulated by an acute bout of aerobic exercise. Participants performed a spatial change detection task while seated on a stationary bike at rest and during low-intensity cycling (∼50 watts/50 RPM). Brain activity was measured with EEG. An inverted encoding modeling technique was employed to estimate location-selective channel response functions from topographical patterns of alpha-band (8–12 Hz) activity. There was strong evidence of robust spatially selective responses during stimulus presentation and retention periods both at rest and during exercise. During retention, the spatial selectivity of these responses decreased in the exercise condition relative to rest. A temporal generalization analysis indicated that models trained on one time period could be used to reconstruct the remembered locations at other time periods; however, generalization was degraded during exercise. Together, these results demonstrate that it is possible to reconstruct the contents of working memory at rest and during exercise, but that exercise can result in degraded responses, which contrasts with the enhancements observed in early sensory processing.
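The inverted encoding model step can be sketched as a two-stage least-squares problem — estimate electrode weights from channel responses, then invert those weights to reconstruct channel responses from observed patterns. All dimensions, the one-hot basis set, and the use of the same data for training and inversion below are simplifying assumptions for illustration, not the study's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elec, n_chan, n_trials = 20, 8, 200  # illustrative dimensions

# Simulated ground truth: each trial activates one location channel.
locations = rng.integers(0, n_chan, n_trials)
C = np.eye(n_chan)[locations].T                 # (n_chan, n_trials)
W = rng.normal(size=(n_elec, n_chan))           # electrode weight matrix
B = W @ C + 0.1 * rng.normal(size=(n_elec, n_trials))  # "EEG" patterns

# Training: solve B = W_hat @ C for the weights by least squares.
W_hat = B @ C.T @ np.linalg.inv(C @ C.T)
# Inversion: reconstruct channel responses from the observed patterns.
C_hat = np.linalg.pinv(W_hat) @ B

# Read out the reconstructed location on each trial.
accuracy = np.mean(C_hat.argmax(axis=0) == locations)
print(accuracy)  # close to 1 for this low-noise simulation
```

In practice, training and reconstruction are done on separate data partitions; sharing them here just keeps the sketch short.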
Nicole Hakim; Edward Awh; Edward K. Vogel; Monica D. Rosenberg
In: Current Biology, vol. 31, no. 22, pp. 4998–5008, 2021.
Human brains share a broadly similar functional organization with consequential individual variation. This duality in brain function has primarily been observed when using techniques that consider the spatial organization of the brain, such as MRI. Here, we ask whether these common and unique signals of cognition are also present in temporally sensitive but spatially insensitive neural signals. To address this question, we compiled electroencephalogram (EEG) data from individuals of both sexes while they performed multiple working memory tasks at two different data-collection sites (n = 171 and 165). Results revealed that trial-averaged EEG activity exhibited inter-electrode correlations that were stable within individuals and unique across individuals. Furthermore, models based on these inter-electrode correlations generalized across datasets to predict participants' working memory capacity and general fluid intelligence. Thus, inter-electrode correlation patterns measured with EEG provide a signature of working memory and fluid intelligence in humans and a new framework for characterizing individual differences in cognitive abilities.
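The inter-electrode correlation "fingerprint" described here amounts to the unique upper triangle of an electrode-by-electrode correlation matrix computed on trial-averaged activity. A minimal sketch; the array shapes, function name, and toy waveforms are illustrative assumptions:

```python
import numpy as np

def inter_electrode_corr(erp):
    """erp: (n_electrodes, n_samples) trial-averaged EEG for one
    subject. Returns the unique electrode-pair Pearson correlations,
    usable as a per-subject feature vector for prediction models."""
    r = np.corrcoef(erp)                       # electrode x electrode
    iu = np.triu_indices(r.shape[0], k=1)      # unique pairs only
    return r[iu]

t = np.linspace(0, 1, 500, endpoint=False)
erp = np.vstack([np.sin(2 * np.pi * t),        # electrode 1
                 np.sin(2 * np.pi * t) * 2.0,  # scaled copy -> r = 1
                 np.cos(2 * np.pi * t)])       # orthogonal  -> r = 0

print(inter_electrode_corr(erp))  # ≈ [1, 0, 0] for pairs (1,2), (1,3), (2,3)
```

Stacking these vectors across subjects gives the features from which capacity and fluid intelligence are predicted.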
Nicole Hakim; Tobias Feldmann-Wüstefeld; Edward Awh; Edward K. Vogel
In: Cerebral Cortex, vol. 31, no. 7, pp. 3323–3337, 2021.
Visual working memory (WM) must maintain relevant information, despite the constant influx of both relevant and irrelevant information. Attentional control mechanisms help determine which of this new information gets access to our capacity-limited WM system. Previous work has treated attentional control as a monolithic process: either distractors capture attention or they are suppressed. Here, we provide evidence that attentional capture may instead be broken down into at least two distinct subcomponent processes: (1) spatial capture, which refers to when spatial attention shifts towards the location of irrelevant stimuli, and (2) item-based capture, which refers to when item-based WM representations of irrelevant stimuli are formed. To dissociate these two subcomponent processes of attentional capture, we utilized a series of electroencephalography components that track WM maintenance (contralateral delay activity), suppression (distractor positivity), item individuation (N2pc), and spatial attention (lateralized alpha power). We show that new, relevant information (i.e., a task-relevant distractor) triggers both spatial and item-based capture. Irrelevant distractors, however, only trigger spatial capture, from which ongoing WM representations can recover more easily. This fractionation of attentional capture into distinct subcomponent processes provides a refined framework for understanding how distracting stimuli affect attention and WM.
Louisa Bogaerts; Craig G. Richter; Ayelet N. Landau; Ram Frost
Beta-band activity is a signature of statistical learning
In: Journal of Neuroscience, vol. 40, no. 39, pp. 7523–7530, 2020.
Through statistical learning (SL), cognitive systems may discover the underlying regularities in the environment. Testing human adults (n = 35, 21 females), we document, in the context of a classical visual SL task, divergent rhythmic EEG activity in the interstimulus delay periods within patterns versus between patterns (i.e., pattern transitions). Our findings reveal increased oscillatory activity in the beta band (∼20 Hz) at triplet transitions that indexes learning: It emerges with increased pattern repetitions; and importantly, it is highly correlated with behavioral learning outcomes. These findings hold the promise of converging on an online measure of learning regularities and provide important theoretical insights regarding the mechanisms of SL and prediction.
Mathieu Bourguignon; Martijn Baart; Efthymia C. Kapnoula; Nicola Molinaro
In: Journal of Neuroscience, vol. 40, no. 5, pp. 1053–1065, 2020.
Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ∼70 ms in the left hemisphere, compared with ∼20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1–8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing.
Méadhbh B. Brosnan; Kristina Sabaroedin; Tim Silk; Sila Genc; Daniel P. Newman; Gerard M. Loughnane; Alex Fornito; Redmond G. O'Connell; Mark A. Bellgrove
In: Nature Human Behaviour, vol. 4, no. 8, pp. 844–855, 2020.
Animal neurophysiological studies have identified neural signals within dorsal frontoparietal areas that trace a perceptual decision by accumulating sensory evidence over time and trigger action upon reaching a threshold. Although analogous accumulation-to-bound signals are identifiable on extracranial human electroencephalography, their cortical origins remain unknown. Here neural metrics of human evidence accumulation, predictive of the speed of perceptual reports, were isolated using electroencephalography and related to dorsal frontoparietal network (dFPN) connectivity using diffusion and resting-state functional magnetic resonance imaging. The build-up rate of evidence accumulation mediated the relationship between the white matter macrostructure of dFPN pathways and the efficiency of perceptual reports. This association between steeper build-up rates of evidence accumulation and the dFPN was recapitulated in the resting-state networks. Stronger connectivity between dFPN regions is thus associated with faster evidence accumulation and speeded perceptual decisions. Our findings identify an integrated network for perceptual decisions that may be targeted for neurorehabilitation in cognitive disorders.
Maximilian Bruchmann; Sebastian Schindler; Thomas Straube
In: Psychophysiology, vol. 57, no. 9, pp. e13597, 2020.
Prioritized processing of fearful compared to neutral faces is reflected in behavioral advantages such as lower detection thresholds, but also in enhanced early and late event-related potentials (ERPs). Behavioral advantages have recently been associated with the spatial frequency spectrum of fearful faces, better fitting the human contrast sensitivity function than the spectrum of neutral faces. However, it is unclear whether and to which extent early and late ERP differences are due to low-level spatial frequency spectrum information or high-level representations of the facial expression. In this pre-registered EEG study (N = 38), the effects of fearful-specific spatial frequencies on ERPs were investigated by presenting faces with fearful and neutral expressions whose spatial frequency spectra were manipulated so as to contain either the average power spectra of neutral, fearful, or both expressions combined. We found an enlarged N170 to fearful versus neutral faces, not interacting with spatial frequency. Interactions of emotional expression and spatial frequencies were observed for the P1 and Early Posterior Negativity (EPN). For both components, larger emotion differences were observed when the spectrum contained neutral as opposed to fearful frequencies. Importantly, for the EPN, fearful and neutral expressions did not differ anymore when inserting fearful frequencies into neutral expressions, whereas typical emotion differences were found when faces contained average or neutral frequencies. Our findings show that N170 emotional modulations are unaffected by expression-specific spatial frequencies. However, expression-specific spatial frequencies alter early and mid-latency ERPs. Most notably, the EPN to neutral expressions is boosted by adding fearful spectra, but not vice versa.
Antimo Buonocore; Olaf Dimigen; David Melcher
In: Journal of Neuroscience, vol. 40, no. 11, pp. 2305–2313, 2020.
Humans actively sample their environment with saccadic eye movements to bring relevant information into high-acuity foveal vision. Despite being lower in resolution, peripheral information is also available before each saccade. How the pre-saccadic extrafoveal preview of a visual object influences its post-saccadic processing is still an unanswered question. The current study investigated this question by simultaneously recording behavior and fixation-related brain potentials while human subjects made saccades to face stimuli. We manipulated the relationship between pre-saccadic "previews" and post-saccadic images to explicitly isolate the influences of the former. Subjects performed a gender discrimination task on a newly foveated face under three preview conditions: scrambled face, incongruent face (different identity from the foveated face), and congruent face (same identity). As expected, reaction times were faster after a congruent-face preview compared with a scrambled-face preview. Importantly, intact face previews (either incongruent or congruent) resulted in a massive reduction of post-saccadic neural responses. Specifically, we analyzed the classic face-selective N170 component at occipitotemporal electroencephalogram electrodes, which was still present in our experiments with active looking. However, the post-saccadic N170 was strongly attenuated following intact-face previews compared with the scrambled condition. This large and long-lasting decrease in evoked activity is consistent with a trans-saccadic mechanism of prediction that influences category-specific neural processing at the start of a new fixation. These findings constrain theories of visual stability and show that the extrafoveal preview methodology can be a useful tool to investigate its underlying mechanisms.
Simon Majed Ceh; Sonja Annerer-Walcher; Christof Körner; Christian Rominger; Silvia Erika Kober; Andreas Fink; Mathias Benedek
In: Brain and Behavior, vol. 10, no. 10, pp. 1–14, 2020.
Introduction: Many goal-directed and spontaneous everyday activities (e.g., planning, mind wandering) rely on an internal focus of attention. Internally directed cognition (IDC) was shown to differ from externally directed cognition in a range of neurophysiological indicators such as electroencephalogram (EEG) alpha activity and eye behavior. Methods: In this EEG–eye-tracking coregistration study, we investigated effects of attention direction on EEG alpha activity and various relevant eye parameters. We used an established paradigm to manipulate internal attention demands in the visual domain within tasks by means of conditional stimulus masking. Results: Consistent with previous research, IDC involved relatively higher EEG alpha activity (lower alpha desynchronization) at posterior cortical sites. Moreover, IDC was characterized by greater pupil diameter (PD), fewer microsaccades, fixations, and saccades. These findings show that internal versus external cognition is associated with robust differences in several indicators at the neural and perceptual level. In a second line of analysis, we explored the intrinsic temporal covariation between EEG alpha activity and eye parameters during rest. This analysis revealed a positive correlation of EEG alpha power with PD especially in bilateral parieto-occipital regions. Conclusion: Together, these findings suggest that EEG alpha activity and PD represent time-sensitive indicators of internal attention demands, which may be involved in a neurophysiological gating mechanism serving to shield internal cognition from irrelevant sensory information.
Peter De Lissa; Roberto Caldara; Victoria Nicholls; Sebastien Miellet
In: PLoS ONE, vol. 15, no. 8, pp. e0236967, 2020.
Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the coregistration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a Steady State Visually Evoked Potentials (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target at the 30 Hz frequency band were significantly, but transiently, lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are only briefly reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as the perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may discretely reduce visual processing of objects even when they are directly being tracked with the eyes.
Andrea Desantis; Adrien Chan-Hon-Tong; Thérèse Collins; Hinze Hogendoorn; Patrick Cavanagh
In: Frontiers in Human Neuroscience, vol. 14, pp. 570419, 2020.
Attention can be oriented in space covertly without the need of eye movements. We used multivariate pattern classification analyses (MVPA) to investigate whether the time course of the deployment of covert spatial attention leading up to the observer's perceptual decision can be decoded from both EEG alpha power and raw activity traces. Decoding attention from these signals can help determine whether raw EEG signals and alpha power reflect the same or distinct features of attentional selection. Using a classical cueing task, we showed that the orientation of covert spatial attention can be decoded by both signals. However, raw activity and alpha power may reflect different features of spatial attention, with alpha power more associated with the orientation of covert attention in space and raw activity with the influence of attention on perceptual processes.
Elisa C. Dias; Abraham C. Van Voorhis; Filipe Braga; Julianne Todd; Javier Lopez-Calderon; Antigona Martinez; Daniel C. Javitt
In: Cerebral Cortex, vol. 30, no. 5, pp. 2823–2833, 2020.
During normal visual behavior, individuals scan the environment through a series of saccades and fixations. At each fixation, the phase of ongoing rhythmic neural oscillations is reset, thereby increasing efficiency of subsequent visual processing. This phase-reset is reflected in the generation of a fixation-related potential (FRP). Here, we evaluate the integrity of theta phase-reset/FRP generation during a Guided Visual Search task in schizophrenia. Subjects performed serial and parallel versions of the task. An initial study (15 healthy controls (HC)/15 schizophrenia patients (SCZ)) investigated behavioral performance parametrically across stimulus features and set-sizes. A subsequent study (25-HC/25-SCZ) evaluated integrity of search-related FRP generation relative to search performance and evaluated visual span size as an index of parafoveal processing. Search times were significantly increased for patients versus controls across all conditions. Furthermore, significant deficits were observed for fixation-related theta phase-reset across conditions, which fully predicted reduced visual span and impaired search performance and correlated with impaired visual components of neurocognitive processing. By contrast, overall search strategy was similar between groups. Deficits in theta phase-reset mechanisms are increasingly documented across sensory modalities in schizophrenia. Here, we demonstrate that deficits in fixation-related theta phase-reset during naturalistic visual processing underlie impaired efficiency of early visual function in schizophrenia.
Nadine Dijkstra; Luca Ambrogioni; Diego Vidaurre; Marcel van Gerven
In: eLife, vol. 9, pp. 1–19, 2020.
After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery.
Troy Dildine; Elizabeth Necka; Lauren Yvette Atlas
In: Scientific Reports, vol. 10, pp. 21373, 2020.
Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals' association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain.
Ciara Egan; Filipe Cristino; Joshua S. Payne; Guillaume Thierry; Manon W. Jones
In: Cortex, vol. 124, pp. 111–118, 2020.
In linguistics, the relationship between phonological word form and meaning is mostly considered arbitrary. Why, then, do literary authors traditionally craft sound relationships between words? We set out to characterise how dynamic interactions between word form and meaning may account for this literary practice. Here, we show that alliteration influences both meaning integration and attentional engagement during reading. We presented participants with adjective-noun phrases, having manipulated semantic relatedness (congruent, incongruent) and form repetition (alliterating, non-alliterating) orthogonally, as in “dazzling-diamond”; “sparkling-diamond”; “dangerous-diamond”; and “creepy-diamond”. Using simultaneous recording of event-related brain potentials and pupil dilation (PD), we establish that, whilst semantic incongruency increased N400 amplitude as expected, it reduced PD, an index of attentional engagement. Second, alliteration affected semantic evaluation of word pairs, since it reduced N400 amplitude even in the case of unrelated items (e.g., “dangerous-diamond”). Third, alliteration specifically boosted attentional engagement for related words (e.g., “dazzling-diamond”), as shown by a sustained negative correlation between N400 amplitudes and PD change after the window of lexical integration. Thus, alliteration strategically arouses attention during reading and when comprehension is challenged, phonological information helps readers link concepts beyond the level of literal semantics. Overall, our findings provide a tentative mechanism for the empowering effect of sound repetition in literary constructs.
Thomas Geyer; Franziska Günther; Hermann J. Müller; Jim Kacian; Heinrich René Liesefeld; Stella Pierides
In: Journal of Eye Movement Research, vol. 13, no. 2, pp. 1–29, 2020.
The current study, set within the larger enterprise of Neuro-Cognitive Poetics, was designed to examine how readers deal with the 'cut' (a more or less sharp semantic-conceptual break) in normative, three-line English-language haiku poems (ELH). Readers were presented with three-line haiku that consisted of two (seemingly) disparate parts, a (two-line) 'phrase' image and a one-line 'fragment' image, in order to determine how they process the conceptual gap between these images when constructing the poem's meaning, as reflected in their patterns of reading eye movements. In addition to replicating the basic 'cut effect', i.e., the extended fixation dwell time on the fragment line relative to the other lines, the present study examined (a) how this effect is influenced by whether the cut is purely implicit or explicitly marked by punctuation, and (b) whether the effect pattern could be delineated against a control condition of 'uncut', one-image haiku. For 'cut' vs. 'uncut' haiku, the results revealed the distribution of fixations across the poems to be modulated by the position of the cut (after line 1 vs. after line 2), the presence vs. absence of a cut marker, and the semantic-conceptual distance between the two images (context-action vs. juxtaposition haiku). These formal-structural and conceptual-semantic properties were associated with systematic changes in how individual poem lines were scanned at first reading and then (selectively) re-sampled in second- and third-pass reading to construct and check global meaning. No such effects were found for one-image (control) haiku. We attribute this pattern to the operation of different meaning resolution processes during the comprehension of two-image haiku, which are invoked by both form- and meaning-related features of the poems.
Maximilian F. A. Hauser; Stefanie Heba; Tobias Schmidt-Wilcke; Martin Tegenthoff; Denise Manahan-Vaughan
In: Human Brain Mapping, vol. 41, no. 5, pp. 1153–1166, 2020.
In addition to its role in visuospatial navigation and the generation of spatial representations, in recent years, the hippocampus has been proposed to support perceptual processes. This is especially the case where high-resolution details, in the form of fine-grained relationships between features such as angles between components of a visual scene, are involved. An unresolved question is how, in the visual domain, perspective changes are differentiated from allocentric changes to these perceived feature relationships, both of which may be argued to involve the hippocampus. We conducted functional magnetic resonance imaging of the brain response (corroborated through separate event-related potential source-localization) in a passive visuospatial oddball paradigm to examine to what extent the hippocampus and other brain regions process changes in perspective, or configuration of abstract, three-dimensional structures. We observed activation of the left superior parietal cortex during perspective shifts, and right anterior hippocampus in configuration changes. Strikingly, we also found the cerebellum to differentiate between the two, in a way that appeared tightly coupled to hippocampal processing. These results point toward a relationship between the cerebellum and the hippocampus that occurs during perception of changes in visuospatial information that has previously only been reported with regard to visuospatial navigation.
Simone G. Heideman; Andrew J. Quinn; Mark W. Woolrich; Freek van Ede; Anna C. Nobre
In: Progress in Neurobiology, vol. 184, pp. 101731, 2020.
An emerging perspective describes beta-band (15–28 Hz) activity as consisting of short-lived high-amplitude events that only appear sustained in conventional measures of trial-average power. This has important implications for characterising abnormalities observed in beta-band activity in disorders like Parkinson's disease. Measuring parameters associated with beta-event dynamics may yield more sensitive measures, provide more selective diagnostic neural markers, and provide greater mechanistic insight into the breakdown of brain dynamics in this disease. Here, we used magnetoencephalography in eighteen Parkinson's disease participants off dopaminergic medication and eighteen healthy control participants to investigate beta-event dynamics during timed movement preparation. We used the Hidden Markov Model to classify event dynamics in a data-driven manner and derived three parameters of beta events: (1) beta-state amplitude, (2) beta-state lifetime, and (3) beta-state interval time. Of these, changes in beta-state interval time explained the overall decreases in beta power during timed movement preparation and uniquely captured the impairment in such preparation in patients with Parkinson's disease. Thus, the increased granularity of the Hidden Markov Model analysis (compared with conventional analysis of power) provides increased sensitivity and suggests a possible reason for impairments of timed movement preparation in Parkinson's disease.
James E. Hoffman; Minwoo Kim; Matt Taylor; Kelsey Holiday
In: Cortex, vol. 122, pp. 140–158, 2020.
The present research used behavioral and event-related brain potentials (ERP) measures to determine whether emotional capture is automatic in the emotion-induced blindness (EIB) paradigm. The first experiment varied the priority of performing two concurrent tasks: identifying a negative or neutral picture appearing in a rapid serial visual presentation (RSVP) stream of pictures and multiple object tracking (MOT). Results showed that increased attention to the MOT task resulted in decreased accuracy for identifying both negative and neutral target pictures accompanied by decreases in the amplitude of the P3b component. In contrast, the early posterior negativity (EPN) component elicited by negative pictures was unaffected by variations in attention. Similarly, there was a decrement in MOT performance for dual-task versus single-task conditions but no effect of picture type (negative vs neutral) on MOT accuracy, which isn't consistent with automatic emotional capture of attention. However, the MOT task might simply be insensitive to brief interruptions of attention. The second experiment used a more sensitive reaction time (RT) measure to examine this possibility. Results showed that RT to discriminate a gap appearing in a tracked object was delayed by the simultaneous appearance of to-be-ignored distractor pictures even though MOT performance was once again unaffected by the distractor. Importantly, the RT delay was the same for both negative and neutral distractors suggesting that capture was driven by physical salience rather than emotional salience of the distractors. Despite this lack of emotional capture, the EPN component, which is thought to reflect emotional capture, was still present. We suggest that the EPN doesn't reflect capture but rather downstream effects of attention, including object recognition. These results show that capture by emotional pictures in EIB can be suppressed when attention is engaged in another difficult task. The results have important implications for understanding capture effects in EIB.