EyeLink EEG / fNIRS / TMS Publications
All EyeLink EEG, fNIRS, and TMS research publications (with concurrent eye tracking) through 2023 (including some early 2024 papers) are listed below by year. You can search the publications using keywords such as P300, Gamma band, NIRS, etc. You can also search for individual author names. If we missed any EyeLink EEG, fNIRS, or TMS articles, please email us!
2021 |
Peter R. Murphy; Niklas Wilming; Diana C. Hernandez-Bocanegra; Genis Prat-Ortega; Tobias H. Donner Adaptive circuit dynamics across human cortex during evidence accumulation in changing environments Journal Article In: Nature Neuroscience, vol. 24, no. 7, pp. 987–997, 2021. @article{Murphy2021, Many decisions under uncertainty entail the temporal accumulation of evidence that informs about the state of the environment. When environments are subject to hidden changes in their state, maximizing accuracy and reward requires non-linear accumulation of evidence. How this adaptive, non-linear computation is realized in the brain is unknown. We analyzed human behavior and cortical population activity (measured with magnetoencephalography) recorded during visual evidence accumulation in a changing environment. Behavior and decision-related activity in cortical regions involved in action planning exhibited hallmarks of adaptive evidence accumulation, which could also be implemented by a recurrent cortical microcircuit. Decision dynamics in action-encoding parietal and frontal regions were mirrored in a frequency-specific modulation of the state of the visual cortex that depended on pupil-linked arousal and the expected probability of change. These findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related feedback to the sensory cortex. |
Aurélien Weiss; Valérian Chambon; Junseok K. Lee; Jan Drugowitsch; Valentin Wyart Interacting with volatile environments stabilizes hidden-state inference and its brain signatures Journal Article In: Nature Communications, vol. 12, pp. 2228, 2021. @article{Weiss2021, Making accurate decisions in uncertain environments requires identifying the generative cause of sensory cues, but also the expected outcomes of possible actions. Although both cognitive processes can be formalized as Bayesian inference, they are commonly studied using different experimental frameworks, making their formal comparison difficult. Here, by framing a reversal learning task either as cue-based or outcome-based inference, we found that humans perceive the same volatile environment as more stable when inferring its hidden state by interaction with uncertain outcomes than by observation of equally uncertain cues. Multivariate patterns of magnetoencephalographic (MEG) activity reflected this behavioral difference in the neural interaction between inferred beliefs and incoming evidence, an effect originating from associative regions in the temporal lobe. Together, these findings indicate that the degree of control over the sampling of volatile environments shapes human learning and decision-making under uncertainty. |
Wieske van Zoest; Christoph Huber-Huber; Matthew D. Weaver; Clayton Hickey Strategic distractor suppression improves selective control in human vision Journal Article In: Journal of Neuroscience, vol. 41, no. 33, pp. 7120–7135, 2021. @article{Zoest2021, Our visual environment is complicated, and our cognitive capacity is limited. As a result, we must strategically ignore some stimuli to prioritize others. Common sense suggests that foreknowledge of distractor characteristics, like location or color, might help us ignore these objects. But empirical studies have provided mixed evidence, often showing that knowing about a distractor before it appears counterintuitively leads to its attentional selection. What has looked like strategic distractor suppression in the past is now commonly explained as a product of prior experience and implicit statistical learning, and the long-standing notion that distractor suppression is reflected in alpha-band oscillatory brain activity has been challenged by results appearing to link alpha to target resolution. Can we strategically, proactively suppress distractors? And, if so, does this involve alpha? Here, we use the concurrent recording of human EEG and eye movements in optimized experimental designs to identify behavior and brain activity associated with proactive distractor suppression. Results from three experiments show that knowing about distractors before they appear causes a reduction in electrophysiological indices of covert attentional selection of these objects and a reduction in the overt deployment of the eyes to the location of the objects. This control is established before the distractor appears and is predicted by the power of cue-elicited alpha activity over the visual cortex. Foreknowledge of distractor characteristics therefore leads to improved selective control, and alpha oscillations in visual cortex reflect the implementation of this strategic, proactive mechanism. |
Bo Yao; Jason R. Taylor; Briony Banks; Sonja A. Kotz Reading direct speech quotes increases theta phase-locking: Evidence for cortical tracking of inner speech? Journal Article In: NeuroImage, vol. 239, pp. 118313, 2021. @article{Yao2021a, Growing evidence shows that theta-band (4–7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: “This dress is lovely!”) elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250–500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech. |
Anna Hudson; Amie J. Durston; Sarah D. McCrackin; Roxane J. Itier In: Brain Topography, vol. 34, no. 6, pp. 813–833, 2021. @article{Hudson2021, Facial expression processing is a critical component of social cognition, yet whether it is influenced by task demands at the neural level remains controversial. Past ERP studies have found mixed results with classic statistical analyses, known to increase both Type I and Type II errors, which Mass Univariate statistics (MUS) control better. However, MUS open-access toolboxes can use different fundamental statistics, which may lead to inconsistent results. Here, we compared the output of two MUS toolboxes, LIMO and FMUT, on the same data recorded during the processing of angry and happy facial expressions investigated under three tasks in a within-subjects design. Both toolboxes revealed main effects of emotion during the N170 timing and main effects of task during later time points typically associated with the LPP component. Neither toolbox yielded an interaction between the two factors at the group level, nor at the individual level in LIMO, confirming that the neural processing of these two face expressions is largely independent from task demands. Behavioural data revealed main effects of task on reaction time and accuracy, but no influence of expression or an interaction between the two. Expression processing and task demands are discussed in the context of the consistencies and discrepancies between the two toolboxes and existing literature. |
Silvia L. Isabella; J. Allan Cheyne; Douglas Cheyne Inhibitory control in the absence of awareness: Interactions between frontal and motor cortex oscillations mediate implicitly learned responses Journal Article In: Frontiers in Human Neuroscience, vol. 15, pp. 786035, 2021. @article{Isabella2021, Cognitive control of action is associated with conscious effort and is hypothesised to be reflected by increased frontal theta activity. However, the functional role of these increases in theta power, and how they contribute to cognitive control remains unknown. We conducted an MEG study to test the hypothesis that frontal theta oscillations interact with sensorimotor signals in order to produce controlled behaviour, and that the strength of these interactions will vary with the amount of control required. We measured neuromagnetic activity in 16 healthy adults performing a response inhibition (Go/Switch) task, known from previous work to modulate cognitive control requirements using hidden patterns of Go and Switch cues. Learning was confirmed by reduced reaction times (RT) to patterned compared to random Switch cues. Concurrent measures of pupil diameter revealed changes in subjective cognitive effort with stimulus probability, even in the absence of measurable behavioural differences, revealing instances of covert variations in cognitive effort. Significant theta oscillations were found in five frontal brain regions, with theta power in the right middle frontal and right premotor cortices parametrically increasing with cognitive effort. Similar increases in oscillatory power were also observed in motor cortical gamma, suggesting an interaction. Right middle frontal and right precentral theta activity predicted changes in pupil diameter across all experimental conditions, demonstrating a close relationship between frontal theta increases and cognitive control. 
Although no theta-gamma cross-frequency coupling was found, long-range theta phase coherence was observed among the five significant sources between bilateral middle frontal, right inferior frontal, and bilateral premotor areas, thus providing a mechanism for the relay of cognitive control between frontal and motor areas via theta signalling. Furthermore, this provides the first evidence for the sensitivity of frontal theta oscillations to implicit motor learning and its effects on cognitive load. More generally, these results present a possible mechanism for this frontal theta network to coordinate response preparation, inhibition, and execution. |
Efthymia C. Kapnoula; Bob McMurray In: Brain and Language, vol. 223, pp. 105031, 2021. @article{Kapnoula2021, Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control; lexical inhibition; and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting a fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing. |
Hamid Karimi-Rouzbahani; Alexandra Woolgar; Anina N. Rich Neural signatures of vigilance decrements predict behavioural errors before they occur Journal Article In: eLife, vol. 10, pp. e60563, 2021. @article{KarimiRouzbahani2021, There are many monitoring environments, such as railway control, in which lapses of attention can have tragic consequences. Problematically, sustained monitoring for rare targets is difficult, with more misses and longer reaction times over time. What changes in the brain underpin these ‘vigilance decrements'? We designed a multiple-object monitoring (MOM) paradigm to examine how the neural representation of information varied with target frequency and time performing the task. Behavioural performance decreased over time for the rare target (monitoring) condition, but not for a frequent target (active) condition. This was mirrored in neural decoding using magnetoencephalography: coding of critical information declined more during monitoring versus active conditions over the course of the experiment. We developed new analyses that can predict behavioural errors from the neural data more than a second before they occurred. This facilitates pre-empting behavioural errors due to lapses in attention and provides new insight into the neural correlates of vigilance decrements. |
Julian Q. Kosciessa; Ulman Lindenberger; Douglas D. Garrett Thalamocortical excitability modulation guides human perception under uncertainty Journal Article In: Nature Communications, vol. 12, pp. 2430, 2021. @article{Kosciessa2021, Knowledge about the relevance of environmental features can guide stimulus processing. However, it remains unclear how processing is adjusted when feature relevance is uncertain. We hypothesized that (a) heightened uncertainty would shift cortical networks from a rhythmic, selective processing-oriented state toward an asynchronous (“excited”) state that boosts sensitivity to all stimulus features, and that (b) the thalamus provides a subcortical nexus for such uncertainty-related shifts. Here, we had young adults attend to varying numbers of task-relevant features during EEG and fMRI acquisition to test these hypotheses. Behavioral modeling and electrophysiological signatures revealed that greater uncertainty lowered the rate of evidence accumulation for individual stimulus features, shifted the cortex from a rhythmic to an asynchronous/excited regime, and heightened neuromodulatory arousal. Crucially, this unified constellation of within-person effects was dominantly reflected in the uncertainty-driven upregulation of thalamic activity. We argue that neuromodulatory processes involving the thalamus play a central role in how the brain modulates neural excitability in the face of momentary uncertainty. |
James E. Kragel; Stephan Schuele; Stephen VanHaerents; Joshua M. Rosenow; Joel L. Voss Rapid coordination of effective learning by the human hippocampus Journal Article In: Science Advances, vol. 7, no. 25, pp. eabf7144, 2021. @article{Kragel2021, Although the human hippocampus is necessary for long-term memory, controversial findings suggest that it may also support short-term memory in the service of guiding effective behaviors during learning. We tested the counterintuitive theory that the hippocampus contributes to long-term memory through remarkably short-term processing, as reflected in eye movements during scene encoding. While viewing scenes for the first time, short-term retrieval operative within the episode over only hundreds of milliseconds was indicated by a specific eye-movement pattern, which was effective in that it enhanced spatiotemporal memory formation. This viewing pattern was predicted by hippocampal theta oscillations recorded from depth electrodes and by shifts toward top-down influence of hippocampal theta on activity within visual perception and attention networks. The hippocampus thus supports short-term memory processing that coordinates behavior in the service of effective spatiotemporal learning. |
Wouter Kruijne; Christian N. L. Olivers; Hedderik van Rijn Neural repetition suppression modulates time perception: Evidence from electrophysiology and pupillometry Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1230–1252, 2021. @article{Kruijne2021, Human time perception is malleable and subject to many biases. For example, it has repeatedly been shown that stimuli that are physically intense or that are unexpected seem to last longer. Two competing hypotheses have been proposed to account for such biases: One states that these temporal illusions are the result of increased levels of arousal that speeds up neural clock dynamics, whereas the alternative “magnitude coding” account states that the magnitude of sensory responses causally modulates perceived durations. Common experimental paradigms used to study temporal biases cannot dissociate between these accounts, as arousal and sensory magnitude covary and modulate each other. Here, we present two temporal discrimination experiments where two flashing stimuli demarcated the start and end of a to-be-timed interval. These stimuli could be either in the same or a different location, which led to different sensory responses because of neural repetition suppression. Crucially, changes and repetitions were fully predictable, which allowed us to explore effects of sensory response magnitude without changes in arousal or surprise. Intervals with changing markers were perceived as lasting longer than those with repeating markers. We measured EEG (Experiment 1) and pupil size (Experiment 2) and found that temporal perception was related to changes in ERPs (P2) and pupil constriction, both of which have been related to responses in the sensory cortex. Conversely, correlates of surprise and arousal (P3 amplitude and pupil dilation) were unaffected by stimulus repetitions and changes. 
These results demonstrate, for the first time, that sensory magnitude affects time perception even under constant levels of arousal. |
Louisa Kulke; Lena Brümmer; Arezoo Pooresmaeili; Annekathrin Schacht Overt and covert attention shifts to emotional faces: Combining EEG, eye tracking, and a go/no-go paradigm Journal Article In: Psychophysiology, vol. 58, no. 8, pp. e13838, 2021. @article{Kulke2021, In everyday life, faces with emotional expressions quickly attract attention and eye movements. To study the neural mechanisms of such emotion-driven attention by means of event-related brain potentials (ERPs), tasks that employ covert shifts of attention are commonly used, in which participants need to inhibit natural eye movements towards stimuli. It remains, however, unclear how shifts of attention to emotional faces with and without eye movements differ from each other. The current preregistered study aimed to investigate neural differences between covert and overt emotion-driven attention. We combined eye tracking with measurements of ERPs to compare shifts of attention to faces with happy, angry, or neutral expressions when eye movements were either executed (go conditions) or withheld (no-go conditions). Happy and angry faces led to larger EPN amplitudes, shorter latencies of the P1 component, and faster saccades, suggesting that emotional expressions significantly affected shifts of attention. Several ERPs (N170, EPN, LPC) were augmented in amplitude when attention was shifted with an eye movement, indicating an enhanced neural processing of faces if eye movements had to be executed together with a reallocation of attention. However, the modulation of ERPs by facial expressions did not differ between the go and no-go conditions, suggesting that emotional content enhances both covert and overt shifts of attention. In summary, our results indicate that overt and covert attention shifts differ but are comparably affected by emotional content. |
Seungji Lee; Doyoung Lee; Hyunjae Gil; Ian Oakley; Yang Seok Cho; Sung-Phil Kim Eye fixation-related potentials during visual search on acquaintance and newly-learned faces Journal Article In: Brain Sciences, vol. 11, no. 2, pp. 1–15, 2021. @article{Lee2021b, Searching familiar faces in the crowd may involve stimulus-driven attention by emotional significance, together with goal-directed attention due to task-relevant needs. The present study investigated the effect of familiarity on attentional processes by exploring eye fixation-related potentials (EFRPs) and eye gazes when humans searched for, among other distracting faces, either an acquaintance's face or a newly-learned face. Task performance and gaze behavior were indistinguishable for identifying either face. However, from the EFRP analysis, after a P300 component for successful search of target faces, we found greater deflections of right parietal late positive potentials in response to newly-learned faces than acquaintance's faces, indicating more involvement of goal-directed attention in processing newly-learned faces. In addition, we found greater occipital negativity elicited by acquaintance's faces, reflecting emotional responses to significant stimuli. These results may suggest that finding a familiar face in the crowd would involve lower goal-directed attention and elicit more emotional responses. |
Cai S. Longman; Heike Elchlepp; Stephen Monsell; Aureliu Lavric Serial or parallel proactive control of components of task-set? A task-switching investigation with concurrent EEG and eye-tracking Journal Article In: Neuropsychologia, vol. 160, pp. 107984, 2021. @article{Longman2021, Among the issues examined by studies of cognitive control in multitasking is whether processes underlying performance in the different tasks occur serially or in parallel. Here we ask a similar question about processes that pro-actively control task-set. In task-switching experiments, several indices of task-set preparation have been extensively documented, including anticipatory orientation of gaze to the task-relevant location (an unambiguous marker of reorientation of attention), and a positive polarity brain potential over the posterior cortex (whose functional significance is less well understood). We examine whether these markers of preparation occur in parallel or serially, and in what order. On each trial a cue required participants to make a semantic classification of one of three digits presented simultaneously, with the location of each digit consistently associated with one of three classification tasks (e.g., if the task was odd/even, the digit at the top of the display was relevant). The EEG positivity emerged following, and appeared time-locked to, the anticipatory fixation on the task-relevant location, which might suggest serial organisation. However, the fixation-locked positivity was not better defined than the cue-locked positivity; in fact, for the trials with the earliest fixations the positivity was better time-locked to the cue onset. This is more consistent with (re)orientation of spatial attention occurring in parallel with, but slightly before, the reconfiguration of other task-set components indexed by the EEG positivity. |
Sara LoTemplio; Jack Silcox; Kara D. Federmeier; Brennan R. Payne Inter- and intra-individual coupling between pupillary, electrophysiological, and behavioral responses in a visual oddball task Journal Article In: Psychophysiology, vol. 58, no. 4, pp. e13758, 2021. @article{LoTemplio2021, Although the P3b component of the event-related brain potential is one of the most widely studied components, its underlying generators are not currently well understood. Recent theories have suggested that the P3b is triggered by phasic activation of the locus-coeruleus norepinephrine (LC-NE) system, an important control center implicated in facilitating optimal task-relevant behavior. Previous research has reported strong correlations between pupil dilation and LC activity, suggesting that pupil diameter is a useful indicator for ongoing LC-NE activity. Given the strong relationship between LC activity and pupil dilation, if the P3b is driven by phasic LC activity, there should be a robust trial-to-trial relationship with the phasic pupillary dilation response (PDR). However, previous work examining relationships between concurrently recorded pupillary and P3b responses has not supported this. One possibility is that the relationship between the measures might be carried primarily by either inter-individual (i.e., between-participant) or intra-individual (i.e., within-participant) contributions to coupling, and prior work has not systematically delineated these relationships. Doing so in the current study, we do not find evidence for either inter-individual or intra-individual relationships between the PDR and P3b responses. However, baseline pupil dilation did predict the P3b. Interestingly, both the PDR and P3b independently predicted inter-individual and intra-individual variability in decision response time. Implications for the LC-P3b hypothesis are discussed. |
Sarah D. McCrackin; Roxane J. Itier I can see it in your eyes: Perceived gaze direction impacts ERP and behavioural measures of affective theory of mind Journal Article In: Cortex, vol. 143, pp. 205–222, 2021. @article{McCrackin2021, Looking at someone's eyes is thought to be important for affective theory of mind (aTOM), our ability to infer their emotional state. However, it is unknown whether an individual's gaze direction influences our aTOM judgements and what the time course of this influence might be. We presented participants with sentences describing individuals in positive, negative or neutral scenarios, followed by direct or averted gaze neutral face pictures of those individuals. Participants made aTOM judgements about each person's mental state, including their affective valence and arousal, and we investigated whether the face gaze direction impacted those judgements. Participants rated that gazers were feeling more positive when they displayed direct gaze as opposed to averted gaze, and that they were feeling more aroused during negative contexts when gaze was averted as opposed to direct. Event-related potentials associated with face perception and affective processing were examined using mass-univariate analyses to track the time-course of this eye-gaze and affective processing interaction at a neural level. Both positive and negative trials were differentiated from neutral trials at many stages of processing. This included the early N200 and EPN components, believed to reflect automatic activation of emotion areas and attentional selection, respectively. This also included the later P300 and LPP components, thought to reflect elaborative cognitive appraisal of emotional content. Critically, sentence valence and gaze direction interacted over these later components, which may reflect the incorporation of eye-gaze in the cognitive evaluation of another's emotional state. 
The results suggest that gaze perception directly impacts aTOM processes, and that altered eye-gaze processing in clinical populations may contribute to associated aTOM impairments. |
Sarah D. McCrackin; Roxane J. Itier Feeling through another's eyes: Perceived gaze direction impacts ERP and behavioural measures of positive and negative affective empathy Journal Article In: NeuroImage, vol. 226, pp. 117605, 2021. @article{McCrackin2021a, Looking at the eyes informs us about the thoughts and emotions of those around us, and impacts our own emotional state. However, it is unknown how perceiving direct and averted gaze impacts our ability to share the gazer's positive and negative emotions, abilities referred to as positive and negative affective empathy. We presented 44 participants with contextual sentences describing positive, negative and neutral events happening to other people (e.g. “Her newborn was saved/killed/fed yesterday afternoon.”). These were designed to elicit positive, negative, or little to no empathy, and were followed by direct or averted gaze images of the individuals described. Participants rated their affective empathy for the individual and their own emotional valence on each trial. Event-related potentials time-locked to face-onset and associated with empathy and emotional processing were recorded to investigate whether they were modulated by gaze direction. Relative to averted gaze, direct gaze was associated with increased positive valence in the positive and neutral conditions and with increased positive empathy ratings. A similar pattern was found at the neural level, using robust mass-univariate statistics. The N100, thought to reflect an automatic activation of emotion areas, was modulated by gaze in the affective empathy conditions, with opposite effect directions in positive and negative conditions. The P200, an ERP component sensitive to positive stimuli, was modulated by gaze direction only in the positive empathy condition. Positive and negative trials were processed similarly at the early N200 processing stage, but later diverged, with only negative trials modulating the EPN, P300 and LPP components. 
These results suggest that positive and negative affective empathy are associated with distinct time-courses, and that perceived gaze direction uniquely modulates positive empathy, highlighting the importance of studying empathy with face stimuli. |
Amir H. Meghdadi; Barry Giesbrecht; Miguel P. Eckstein EEG signatures of contextual influences on visual search with real scenes Journal Article In: Experimental Brain Research, vol. 239, no. 3, pp. 797–809, 2021. @article{Meghdadi2021, The use of scene context is a powerful way by which biological organisms guide and facilitate visual search. Although many studies have shown enhancements of target-related electroencephalographic activity (EEG) with synthetic cues, there have been fewer studies demonstrating such enhancements during search with scene context and objects in real world scenes. Here, observers covertly searched for a target in images of real scenes while we used EEG to measure the steady state visual evoked response to objects flickering at different frequencies. The target appeared in its typical contextual location or out of context while we controlled for low-level properties of the image, including target saliency against the background and retinal eccentricity. A pattern classifier using EEG activity at the relevant modulated frequencies showed that target detection accuracy increased when the target was in a contextually appropriate location. A control condition, in which observers searched the same images for a different target orthogonal to the contextual manipulation, resulted in no effects of scene context on classifier performance, confirming that image properties cannot explain the contextual modulations of neural activity. Pattern classifier decisions for individual images were also related to the aggregated observer behavioral decisions for individual images. Together, these findings demonstrate that target-related neural responses are modulated by scene context during visual search with real world scenes and can be related to behavioral search decisions. |
Michael Christopher Melnychuk; Ian H. Robertson; Emanuele R. G. Plini; Paul M. Dockree In: Brain Sciences, vol. 11, pp. 1324, 2021. @article{Melnychuk2021, Yogic and meditative traditions have long held that the fluctuations of the breath and the mind are intimately related. While respiratory modulation of cortical activity and attentional switching are established, the extent to which electrophysiological markers of attention exhibit synchronization with respiration is unknown. To this end, we examined (1) frontal midline theta-beta ratio (TBR), an indicator of attentional control state known to correlate with mind wandering episodes and functional connectivity of the executive control network; (2) pupil diameter (PD), a known proxy measure of locus coeruleus (LC) noradrenergic activity; and (3) respiration for evidence of phase synchronization and information transfer (multivariate Granger causality) during quiet restful breathing. Our results indicate that both TBR and PD are simultaneously synchronized with the breath, suggesting an underlying oscillation of an attentionally relevant electrophysiological index that is phase-locked to the respiratory cycle which could have the potential to bias the attentional system into switching states. We highlight the LC's pivotal role as a coupling mechanism between respiration and TBR, and elaborate on its dual functions as both a chemosensitive respiratory nucleus and a pacemaker of the attentional system. We further suggest that an appreciation of the dynamics of this weakly coupled oscillatory system could help deepen our understanding of the traditional claim of a relationship between breathing and attention. |
Yali Pan; Steven Frisson; Ole Jensen Neural evidence for lexical parafoveal processing Journal Article In: Nature Communications, vol. 12, pp. 5234, 2021. @article{Pan2021a, In spite of the reduced visual acuity, parafoveal information plays an important role in natural reading. However, competing models on reading disagree on whether words are previewed parafoveally at the lexical level. We find neural evidence for lexical parafoveal processing by combining a rapid invisible frequency tagging (RIFT) approach with magnetoencephalography (MEG) and eye-tracking. In a silent reading task, target words are tagged (flickered) subliminally at 60 Hz. The tagging responses measured when fixating on the pre-target word reflect parafoveal processing of the target word. We observe stronger tagging responses during pre-target fixations when followed by low compared with high lexical frequency targets. Moreover, this lexical parafoveal processing is associated with individual reading speed. Our findings suggest that reading unfolds in the fovea and parafovea simultaneously to support fluent reading. |
Hame Park; Christoph Kayser The neurophysiological basis of the trial-wise and cumulative ventriloquism aftereffects Journal Article In: Journal of Neuroscience, vol. 41, no. 5, pp. 1068–1079, 2021. @article{Park2021d, Our senses often receive conflicting multisensory information, which our brain reconciles by adaptive recalibration. A classic example is the ventriloquism aftereffect, which emerges following both cumulative (long-term) and trial-wise exposure to spatially discrepant multisensory stimuli. Despite the importance of such adaptive mechanisms for interacting with environments that change over multiple timescales, it remains debated whether the ventriloquism aftereffects observed following trial-wise and cumulative exposure arise from the same neurophysiological substrate. We address this question by probing electroencephalography recordings from healthy humans (both sexes) for processes predictive of the aftereffect biases following the exposure to spatially offset audiovisual stimuli. Our results support the hypothesis that discrepant multisensory evidence shapes aftereffects on distinct timescales via common neurophysiological processes reflecting sensory inference and memory in parietal-occipital regions, while the cumulative exposure to consistent discrepancies additionally recruits prefrontal processes. During the subsequent unisensory trial, both trial-wise and cumulative exposure bias the encoding of the acoustic information, but do so distinctly. Our results posit a central role of parietal regions in shaping multisensory spatial recalibration, suggest that frontal regions consolidate the behavioral bias for persistent multisensory discrepancies, but also show that the trial-wise and cumulative exposure bias sound position encoding via distinct neurophysiological processes. |
Gaëlle Nicolas; Eric Castet; Adrien Rabier; Emmanuelle Kristensen; Michel Dojat; Anne Guérin-Dugué Neural correlates of intra-saccadic motion perception Journal Article In: Journal of Vision, vol. 21, no. 11, pp. 1–24, 2021. @article{Nicolas2021, Retinal motion of the visual scene is not consciously perceived during ocular saccades in normal everyday conditions. It has been suggested that extra-retinal signals actively suppress intra-saccadic motion perception to preserve stable perception of the visual world. However, using stimuli optimized to preferentially activate the M-pathway, Castet and Masson (2000) demonstrated that motion can be perceived during a saccade. Based on this psychophysical paradigm, we used electroencephalography and eye-tracking recordings to investigate the neural correlates related to the conscious perception of intra-saccadic motion. We demonstrated the effective involvement during saccades of the cortical areas V1-V2 and MT-V5, which convey motion information along the M-pathway. We also showed that individual motion perception was related to retinal temporal frequency. |
J. A. Nij Bijvank; E. M. M. Strijbis; I. M. Nauta; S. D. Kulik; L. J. Balk; C. J. Stam; A. Hillebrand; J. J. G. Geurts; B. M. J. Uitdehaag; L. J. Rijn; A. Petzold; M. M. Schoonheim Impaired saccadic eye movements in multiple sclerosis are related to altered functional connectivity of the oculomotor brain network Journal Article In: NeuroImage: Clinical, vol. 32, pp. 102848, 2021. @article{NijBijvank2021, Background: Impaired eye movements in multiple sclerosis (MS) are common and could represent a non-invasive and accurate measure of (dys)functioning of interconnected areas within the complex brain network. The aim of this study was to test whether altered saccadic eye movements are related to changes in functional connectivity (FC) in patients with MS. Methods: Cross-sectional eye movement (pro-saccades and anti-saccades) and magnetoencephalography (MEG) data from the Amsterdam MS cohort were included from 176 MS patients and 33 healthy controls. FC was calculated between all regions of the Brainnetome atlas in six conventional frequency bands. Cognitive function and disability were evaluated by previously validated measures. The relationships between saccadic parameters and both FC and clinical scores in MS patients were analysed using multivariate linear regression models. Results: In MS, pro- and anti-saccades were abnormal compared to healthy controls. A relationship of saccadic eye movements was found with FC of the oculomotor network, which was stronger for regional than global FC. In general, abnormal eye movements were related to higher delta and theta FC but lower beta FC. Strongest associations were found for pro-saccadic latency and FC of the precuneus (beta band β = -0.23 |
Hamideh Norouzi; Niloofar Tavakoli; Mohammad Reza Daliri In: International Journal of Psychophysiology, vol. 166, pp. 61–70, 2021. @article{Norouzi2021, Working memory (WM) can be considered a limited-capacity system capable of saving information temporarily for processing. The aim of the present study was to establish whether eccentricity representation in WM could be decoded from electroencephalography (EEG) alpha-band oscillations in the parietal cortex during the delay period of a memory-guided saccade (MGS) task. In this regard, we recorded EEG and eye-tracking signals of 17 healthy volunteers in a variant version of the MGS task. We designed this modified version of the MGS task for the first time to investigate the effect of locating stimuli at two different positions, a near (6°) and a far (12°) eccentricity, on saccade error as a behavioral parameter. Another goal of the study was to discern whether varying the stimulus loci can alter behavioral and electroencephalographic data during this variant of the MGS task. Our findings demonstrate that saccade error for the near-fixation condition is significantly smaller than for the far-from-fixation condition. We observed an increase in alpha power in the parietal lobe in near versus far conditions. In addition, the results indicate that the increase in alpha (8–12 Hz) power from fixation to memory was negatively correlated with saccade error. The novel approach of using simultaneous EEG/eye-tracking recording in the modified MGS task provided both behavioral and electroencephalographic analyses of oscillatory activity during this new version of the task. |
John Orczyk; Charles E. Schroeder; Ilana Y. Abeles; Manuel Gomez-Ramirez; Pamela D. Butler; Yoshinao Kajikawa Comparison of scalp ERP to faces in macaques and humans Journal Article In: Frontiers in Systems Neuroscience, vol. 15, pp. 667611, 2021. @article{Orczyk2021, Face recognition is an essential activity of social living, common to many primate species. Underlying processes in the brain have been investigated using various techniques and compared between species. Functional imaging studies have shown face-selective cortical regions and their degree of correspondence across species. However, the temporal dynamics of face processing, particularly processing speed, are likely different between them. Across sensory modalities, activation of primary sensory cortices in macaque monkeys occurs at about 3/5 the latency of corresponding activation in humans, though this human–simian difference may diminish or disappear in higher cortical regions. We recorded scalp event-related potentials (ERPs) to presentation of faces in macaques and estimated the peak latency of ERP components. Comparisons of latencies between macaques (112 ms) and humans (192 ms) suggested that the 3:5 ratio could be preserved in higher cognitive regions of face processing between those species. |
Anastasia O. Ovchinnikova; Anatoly N. Vasilyev; Ivan P. Zubarev; Bogdan L. Kozyrskiy; Sergei L. Shishkin MEG-based detection of voluntary eye fixations used to control a computer Journal Article In: Frontiers in Neuroscience, vol. 15, pp. 619591, 2021. @article{Ovchinnikova2021, Gaze-based input is an efficient way of hand-free human-computer interaction. However, it suffers from the inability of gaze-based interfaces to discriminate voluntary and spontaneous gaze behaviors, which are overtly similar. Here, we demonstrate that voluntary eye fixations can be discriminated from spontaneous ones using short segments of magnetoencephalography (MEG) data measured immediately after the fixation onset. Recently proposed convolutional neural networks (CNNs), linear finite impulse response filters CNN (LF-CNN) and vector autoregressive CNN (VAR-CNN), were applied for binary classification of the MEG signals related to spontaneous and voluntary eye fixations collected in healthy participants (n = 25) who performed a game-like task by fixating on targets voluntarily for 500 ms or longer. Voluntary fixations were identified as those followed by a fixation in a special confirmatory area. Spontaneous vs. voluntary fixation-related single-trial 700 ms MEG segments were non-randomly classified in the majority of participants, with the group average cross-validated ROC AUC of 0.66 ± 0.07 for LF-CNN and 0.67 ± 0.07 for VAR-CNN (M ± SD). When the time interval, from which the MEG data were taken, was extended beyond the onset of the visual feedback, the group average classification performance increased up to 0.91. Analysis of spatial patterns contributing to classification did not reveal signs of significant eye movement impact on the classification results. We conclude that the classification of MEG signals has a certain potential to support gaze-based interfaces by avoiding false responses to spontaneous eye fixations on a single-trial basis. 
Current results for intention detection prior to gaze-based interface's feedback, however, are not sufficient for online single-trial eye fixation classification using MEG data alone, and further work is needed to find out if it could be used in practical applications. |
Fosca Al Roumi; Sébastien Marti; Liping Wang; Marie Amalric; Stanislas Dehaene Mental compression of spatial sequences in human working memory using numerical and geometrical primitives Journal Article In: Neuron, vol. 109, no. 16, pp. 2627–2639, 2021. @article{AlRoumi2021, How does the human brain store sequences of spatial locations? We propose that each sequence is internally compressed using an abstract, language-like code that captures its numerical and geometrical regularities. We exposed participants to spatial sequences of fixed length but variable regularity while their brain activity was recorded using magneto-encephalography. Using multivariate decoders, each successive location could be decoded from brain signals, and upcoming locations were anticipated prior to their actual onset. Crucially, sequences with lower complexity, defined as the minimal description length provided by the formal language, led to lower error rates and to increased anticipations. Furthermore, neural codes specific to the numerical and geometrical primitives of the postulated language could be detected, both in isolation and within the sequences. These results suggest that the human brain detects sequence regularities at multiple nested levels and uses them to compress long sequences in working memory. |
Thomas Andrillon; Angus Burns; Teigane Mackay; Jennifer Windt; Naotsugu Tsuchiya Predicting lapses of attention with sleep-like slow waves Journal Article In: Nature Communications, vol. 12, pp. 3657, 2021. @article{Andrillon2021, Attentional lapses occur commonly and are associated with mind wandering, where focus is turned to thoughts unrelated to ongoing tasks and environmental demands, or mind blanking, where the stream of consciousness itself comes to a halt. To understand the neural mechanisms underlying attentional lapses, we studied the behaviour, subjective experience and neural activity of healthy participants performing a task. Random interruptions prompted participants to indicate their mental states as task-focused, mind-wandering or mind-blanking. Using high-density electroencephalography, we report here that spatially and temporally localized slow waves, a pattern of neural activity characteristic of the transition toward sleep, accompany behavioural markers of lapses and preceded reports of mind wandering and mind blanking. The location of slow waves could distinguish between sluggish and impulsive behaviours, and between mind wandering and mind blanking. Our results suggest attentional lapses share a common physiological origin: the emergence of local sleep-like activity within the awake brain. |
M. Antúnez; S. Mancini; J. A. Hernández-Cabrera; L. J. Hoversten; H. A. Barber; M. Carreiras Cross-linguistic semantic preview benefit in Basque-Spanish bilingual readers: Evidence from fixation-related potentials Journal Article In: Brain and Language, vol. 214, pp. 104905, 2021. @article{Antunez2021, During reading, we can process and integrate information from words allocated in the parafoveal region. However, whether we extract and process the meaning of parafoveal words is still under debate. Here, we obtained Fixation-Related Potentials in a Basque-Spanish bilingual sample during a Spanish reading task. By using the boundary paradigm, we presented different parafoveal previews that could be either Basque non-cognate translations or unrelated Basque words. We prove for the first time cross-linguistic semantic preview benefit effects in alphabetic languages, providing novel evidence of modulations in the N400 component. Our findings suggest that the meaning of parafoveal words is processed and integrated during reading and that such meaning is activated and shared across languages in bilingual readers. |
Damiano Azzalini; Anne Buot; Stefano Palminteri; Catherine Tallon-Baudry Responses to heartbeats in ventromedial prefrontal cortex contribute to subjective preference-based decisions Journal Article In: Journal of Neuroscience, vol. 41, no. 23, pp. 5102–5114, 2021. @article{Azzalini2021, Forrest Gump or The Matrix? Preference-based decisions are subjective and entail self-reflection. However, these self-related features are unaccounted for by known neural mechanisms of valuation and choice. Self-related processes have been linked to a basic interoceptive biological mechanism, the neural monitoring of heartbeats, in particular in ventromedial prefrontal cortex (vmPFC), a region also involved in value encoding. We thus hypothesized a functional coupling between the neural monitoring of heartbeats and the precision of value encoding in vmPFC. Human participants of both sexes were presented with pairs of movie titles. They indicated either which movie they preferred or performed a control objective visual discrimination that did not require self-reflection. Using magnetoencephalography, we measured heartbeat-evoked responses (HERs) before option presentation and confirmed that HERs in vmPFC were larger when preparing for the subjective, self-related task. We retrieved the expected cortical value network during choice with time-resolved statistical modeling. Crucially, we show that larger HERs before option presentation are followed by stronger value encoding during choice in vmPFC. This effect is independent of overall vmPFC baseline activity. The neural interaction between HERs and value encoding predicted preference-based choice consistency over time, accounting for both interindividual differences and trial-to-trial fluctuations within individuals. Neither cardiac activity nor arousal fluctuations could account for any of the effects. HERs did not interact with the encoding of perceptual evidence in the discrimination task. 
Our results show that the self-reflection underlying preference-based decisions involves HERs, and that HER integration to subjective value encoding in vmPFC contributes to preference stability. |
Shlomit Beker; John J. Foxe; Sophie Molholm Oscillatory entrainment mechanisms and anticipatory predictive processes in children with autism spectrum disorder Journal Article In: Journal of Neurophysiology, vol. 126, no. 5, pp. 1783–1798, 2021. @article{Beker2021, Anticipating near-future events is fundamental to adaptive behavior, whereby neural processing of predictable stimuli is significantly facilitated relative to nonpredictable events. Neural oscillations appear to be a key anticipatory mechanism by which processing of upcoming stimuli is modified, and they often entrain to rhythmic environmental sequences. Clinical and anecdotal observations have led to the hypothesis that people with autism spectrum disorder (ASD) may have deficits in generating predictions, and as such, a candidate neural mechanism may be failure to adequately entrain neural activity to repetitive environmental patterns, to facilitate temporal predictions. We tested this hypothesis by interrogating temporal predictions and rhythmic entrainment using behavioral and electrophysiological approaches. We recorded high-density electroencephalography in children with ASD and typically developing (TD) age- and IQ-matched controls, while they reacted to an auditory target as quickly as possible. This auditory event was either preceded by predictive rhythmic visual cues or was not preceded by any cue. Both ASD and control groups presented comparable behavioral facilitation in response to the Cue versus No-Cue condition, challenging the hypothesis that children with ASD have deficits in generating temporal predictions. Analyses of the electrophysiological data, in contrast, revealed significantly reduced neural entrainment to the visual cues and altered anticipatory processes in the ASD group. This was the case despite intact stimulus-evoked visual responses. 
These results support intact behavioral temporal prediction in response to a cue in ASD, in the face of altered neural entrainment and anticipatory processes. |
Chama Belkhiria; Vsevolod Peysakhovich EOG metrics for cognitive workload detection Journal Article In: Procedia Computer Science, vol. 192, pp. 1875–1884, 2021. @article{Belkhiria2021, Increasing workload is a central notion in human factors research that can degrade performance and lead to accidents. Thus, it is crucial to understand the impact of different internal operator factors, including eye movements, memory, and audio-visual integration. Here, we explored the relationship between cognitive workload (low vs. high) and eye movements (saccades, fixations, and smooth pursuit). Task difficulty was manipulated via auditory noise, arithmetical counting, and working memory load. We estimated cognitive workload using EOG and EEG-based mental state monitoring. One novelty consists in recording the EOG around the ears (alternative EOG) as well as around the eyes (conventional EOG). The number of blinks and saccade amplitude increased with task difficulty (p ≤ 0.05). We found significant correlations between EOG and EEG (theta/alpha ratio) and between the conventional and alternative EOG signals. The increase in cognitive load may disturb the coding and maintenance of related visual information. Alternative EOG metrics could be a valuable tool for detecting workload. |
Anne Buot; Damiano Azzalini; Maximilien Chaumon; Catherine Tallon-Baudry Does stroke volume influence heartbeat evoked responses? Journal Article In: Biological Psychology, vol. 165, pp. 108165, 2021. @article{Buot2021, We know surprisingly little on how heartbeat-evoked responses (HERs) vary with cardiac parameters. Here, we measured both stroke volume, or volume of blood ejected at each heartbeat, with impedance cardiography, and HER amplitude with magneto-encephalography, in 21 male and female participants at rest with eyes open. We observed that HER co-fluctuates with stroke volume on a beat-to-beat basis, but only when no correction for cardiac artifact was performed. This highlights the importance of an ICA correction tailored to the cardiac artifact. We also observed that easy-to-measure cardiac parameters (interbeat intervals, ECG amplitude) are sensitive to stroke volume fluctuations and can be used as proxies when stroke volume measurements are not available. Finally, interindividual differences in stroke volume were reflected in MEG data, but whether this effect is locked to heartbeats is unclear. Altogether, our results question assumptions on the link between stroke volume and HERs. |
Christoforos Christoforou; Argyro Fella; Paavo H. T. Leppänen; George K. Georgiou; Timothy C. Papadopoulos Fixation-related potentials in naming speed: A combined EEG and eye-tracking study on children with dyslexia Journal Article In: Clinical Neurophysiology, vol. 132, no. 11, pp. 2798–2807, 2021. @article{Christoforou2021, Objective: We combined electroencephalography (EEG) and eye-tracking recordings to examine the underlying factors elicited during the serial Rapid-Automatized Naming (RAN) task that may differentiate between children with dyslexia (DYS) and chronological age controls (CAC). Methods: Thirty children with DYS and 30 CAC (Mage = 9.79 years; age range 7.6 through 12.1 years) performed a set of serial RAN tasks. We extracted fixation-related potentials (FRPs) under phonologically similar (rime-confound) or visually similar (resembling lowercase letters) and dissimilar (non-confounding and discrete uppercase letters, respectively) control tasks. Results: Results revealed significant differences in FRP amplitudes between DYS and CAC groups under the phonologically similar and phonologically non-confounding conditions. No differences were observed in the case of the visual conditions. Moreover, regression analysis showed that the average amplitude of the extracted components significantly predicted RAN performance. Conclusion: FRPs capture neural components during the serial RAN task informative of differences between DYS and CAC and establish a relationship between neurocognitive processes during serial RAN and dyslexia. Significance: We suggest our approach as a methodological model for the concurrent analysis of neurophysiological and eye-gaze data to decipher the role of RAN in reading. |
Edan Daniel; Ilan Dinstein Individual magnitudes of neural variability quenching are associated with motion perception abilities Journal Article In: Journal of Neurophysiology, vol. 125, no. 4, pp. 1111–1120, 2021. @article{Daniel2021, Remarkable trial-by-trial variability is apparent in cortical responses to repeating stimulus presentations. This neural variability across trials is relatively high before stimulus presentation and then reduced (i.e., quenched) ~0.2 s after stimulus presentation. Individual subjects exhibit different magnitudes of variability quenching, and previous work from our lab has revealed that individuals with larger variability quenching exhibit lower (i.e., better) perceptual thresholds in a contrast discrimination task. Here, we examined whether similar findings were also apparent in a motion detection task, which is processed by distinct neural populations in the visual system. We recorded EEG data from 35 adult subjects as they detected the direction of coherent motion in random dot kinematograms. The results demonstrated that individual magnitudes of variability quenching were significantly correlated with coherent motion thresholds, particularly when presenting stimuli with low dot densities, where coherent motion was more difficult to detect. These findings provide consistent support for the hypothesis that larger magnitudes of neural variability quenching are associated with better perceptual abilities in multiple visual domain tasks. NEW & NOTEWORTHY The current study demonstrates that better visual perception abilities in a motion discrimination task are associated with larger quenching of neural variability. In line with previous studies and signal detection theory principles, these findings support the hypothesis that cortical sensory neurons increase reproducibility to enhance detection and discrimination of sensory stimuli. |
Jonathan Daume; Peng Wang; Alexander Maye; Dan Zhang; Andreas K. Engel Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations Journal Article In: NeuroImage, vol. 224, pp. 117376, 2021. @article{Daume2021, The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context. |
Saeideh Davoudi; Mohsen Parto Dezfouli; Robert T. Knight; Mohammad Reza Daliri; Elizabeth L. Johnson Prefrontal lesions disrupt posterior alpha–gamma coordination of visual working memory representations Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 9, pp. 1798–1810, 2021. @article{Davoudi2021, How does the human brain prioritize different visual representations in working memory (WM)? Here, we define the oscillatory mechanisms supporting selection of “where” and “when” features from visual WM storage and investigate the role of pFC in feature selection. Fourteen individuals with lateral pFC damage and 20 healthy controls performed a visuospatial WM task while EEG was recorded. On each trial, two shapes were presented sequentially in a top/bottom spatial orientation. A retro-cue presented mid-delay prompted which of the two shapes had been in either the top/bottom spatial position or first/second temporal position. We found that cross-frequency coupling between parieto-occipital alpha (α; 8–12 Hz) oscillations and topographically distributed gamma (γ; 30–50 Hz) activity tracked selection of the distinct cued feature in controls. This signature of feature selection was disrupted in patients with pFC lesions, despite intact α–γ coupling independent of feature selection. These findings reveal a pFC-dependent parieto-occipital α–γ mechanism for the rapid selection of visual WM representations. |
Jan Willem De Gee; Camile M. C. Correa; Matthew Weaver; Tobias H. Donner; Simon Van Gaal Pupil dilation and the slow wave ERP reflect surprise about choice outcome resulting from intrinsic variability in decision confidence Journal Article In: Cerebral Cortex, vol. 31, no. 7, pp. 3565–3578, 2021. @article{DeGee2021, Central to human and animal cognition is the ability to learn from feedback in order to optimize future rewards. Such a learning signal might be encoded and broadcast by the brain's arousal systems, including the noradrenergic locus coeruleus. Pupil responses and the positive slow wave component of event-related potentials reflect rapid changes in the arousal level of the brain. Here, we ask whether and how these variables may reflect surprise: the mismatch between one's expectation about being correct and the outcome of a decision, when expectations fluctuate due to internal factors (e.g., engagement). We show that during an elementary decision task in the face of uncertainty both physiological markers of phasic arousal reflect surprise. We further show that pupil responses and the slow wave event-related potential are unrelated to each other and that prediction error computations depend on feedback awareness. These results further advance our understanding of the role of central arousal systems in decision-making under uncertainty. |
Megan T. Debettencourt; Stephanie D. Williams; Edward K. Vogel; Edward Awh Sustained attention and spatial attention distinctly influence long-term memory encoding Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 10, pp. 2132–2148, 2021. @article{Debettencourt2021, Our attention is critically important for what we remember. Prior measures of the relationship between attention and memory, however, have largely treated “attention” as a monolith. Here, across three experiments, we provide evidence for two dissociable aspects of attention that influence encoding into long-term memory. Using spatial cues together with a sensitive continuous report procedure, we find that long-term memory response error is affected by both trial-by-trial fluctuations of sustained attention and prioritization via covert spatial attention. Furthermore, using multivariate analyses of EEG, we track both sustained attention and spatial attention before stimulus onset. Intriguingly, even during moments of low sustained attention, there is no decline in the representation of the spatially attended location, showing that these two aspects of attention have robust but independent effects on long-term memory encoding. Finally, sustained and spatial attention predicted distinct variance in long-term memory performance across individuals. That is, the relationship between attention and long-term memory suggests a composite model, wherein distinct attentional subcomponents influence encoding into long-term memory. These results point toward a taxonomy of the distinct attentional processes that constrain our memories. |
Federica Degno; Otto Loberg; Simon P. Liversedge Co-registration of eye movements and fixation-related potentials in natural reading: Practical issues of experimental design and data analysis Journal Article In: Collabra: Psychology, vol. 7, no. 1, pp. 1–28, 2021. @article{Degno2021, A growing number of studies are using co-registration of eye movement (EM) and fixation-related potential (FRP) measures to investigate reading. However, the number of co-registration experiments remains small when compared to the number of studies in the literature conducted with EMs and event-related potentials (ERPs) alone. One reason for this is the complexity of the experimental design and data analyses. The present paper is designed to support researchers who might have expertise in conducting reading experiments with EM or ERP techniques and are wishing to take their first steps towards co-registration research. The objective of this paper is threefold. First, to provide an overview of the issues that such researchers would face. Second, to provide a critical overview of the methodological approaches available to date to deal with these issues. Third, to offer an example pipeline and a full set of scripts for data preprocessing that may be adopted and adapted for one's own needs. The data preprocessing steps are based on EM data parsing via Data Viewer (SR Research), and the provided scripts are written in Matlab and R. Ultimately, with this paper we hope to encourage other researchers to run co-registration experiments to study reading and human cognition more generally. |
Gisella K. Diaz; Edward K. Vogel; Edward Awh Perceptual grouping reveals distinct roles for sustained slow wave activity and alpha oscillations in working memory Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1354–1364, 2021. @article{Diaz2021, Multiple neural signals have been found to track the number of items stored in working memory (WM). These signals include oscillatory activity in the alpha band and slow-wave components in human EEG, both of which vary with storage loads and predict individual differences in WM capacity. However, recent evidence suggests that these two signals play distinct roles in spatial attention and item-based storage in WM. Here, we examine the hypothesis that sustained negative voltage deflections over parieto-occipital electrodes reflect the number of individuated items in WM, whereas oscillatory activity in the alpha frequency band (8–12 Hz) within the same electrodes tracks the attended positions in the visual display. We measured EEG activity while participants stored the orientation of visual elements that were either grouped by collinearity or not. This grouping manipulation altered the number of individuated items perceived while holding constant the number of locations occupied by visual stimuli. The negative slow wave tracked the number of items stored and was reduced in amplitude in the grouped condition. By contrast, oscillatory activity in the alpha frequency band tracked the number of positions occupied by the memoranda and was unaffected by perceptual grouping. Perceptual grouping, then, reduced the number of individuated representations stored in WM as reflected by the negative slow wave, whereas the location of each element was actively maintained as indicated by alpha power. These findings contribute to the emerging idea that distinct classes of EEG signals work in concert to successfully maintain online representations in WM. |
Marcos Domic-Siede; Martín Irani; Joaquín Valdés; Marcela Perrone-Bertolotti; Tomás Ossandón In: NeuroImage, vol. 226, pp. 117557, 2021. @article{DomicSiede2021, Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task, that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FMθ) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-Cingulate Cortex (MCC) and the left Anterior Cingulate Cortex (ACC) were correlated with an increase in the time needed for elaborating plans. On the other hand, left Frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans. |
Linda Drijvers; Ole Jensen; Eelke Spaak Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information Journal Article In: Human Brain Mapping, vol. 42, no. 4, pp. 1138–1152, 2021. @article{Drijvers2021, During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context. |
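The intermodulation logic described in this abstract can be sketched with a short simulation (the signal parameters below are illustrative choices, not the authors' MEG pipeline): linearly summed tagged signals carry power only at the tagging frequencies, whereas a nonlinear (here, multiplicative) interaction adds a peak at the difference frequency, f_visual − f_auditory = 7 Hz.

```python
import numpy as np

fs = 1000.0                    # sampling rate in Hz (simulation choice)
t = np.arange(0, 10, 1 / fs)   # 10 s of simulated data
f_aud, f_vis = 61.0, 68.0      # tagging frequencies from the study

aud = np.sin(2 * np.pi * f_aud * t)
vis = np.sin(2 * np.pi * f_vis * t)

# Linear mixing alone carries power only at 61 and 68 Hz; a multiplicative
# (nonlinear) interaction term adds sum and difference frequencies.
mixed = aud + vis + 0.5 * aud * vis

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def power_at(f):
    """Spectral magnitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

peak_im = power_at(f_vis - f_aud)   # intermodulation peak at 7 Hz
```

With the multiplicative term removed, the 7 Hz peak vanishes, which is the sense in which power at the intermodulation frequency indexes nonlinear integration.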
Stefan Dürschmid; Andre Maric; Marcel S. Kehl; Robert T. Knight; Hermann Hinrichs; Hans-Jochen Heinze Fronto-temporal regulation of subjective value to suppress impulsivity in intertemporal choices Journal Article In: Journal of Neuroscience, vol. 41, pp. 1727–1737, 2021. @article{Duerschmid2021, Impulsive decisions arise from preferring smaller but sooner rewards compared to larger but later rewards. How neural activity and attention to choice alternatives contribute to reward decisions during temporal discounting is not clear. Here we probed (i) attention to and (ii) neural representation of delay and reward information in humans (both sexes) engaged in choices. We studied behavioral and frequency specific dynamics supporting impulsive decisions on a fine-grained temporal scale using eye tracking and magnetoencephalographic (MEG) recordings. In one condition, participants decided for themselves; in a second, prosocial condition, they pretended to decide for their best friend, which required perspective taking. Hence, conditions varied in the value of choosing for oneself versus pretending to choose for another person. Stronger impulsivity was reliably found across three independent groups for prosocial decisions. Eye tracking revealed a systematic shift of attention from the delay to the reward information, and differences in eye tracking between conditions predicted differences in discounting. High frequency activity (HFA: 175-250 Hz) distributed over right fronto-temporal sensors correlated with delay and reward information in consecutive temporal intervals for high value decisions for oneself but not the friend. Collectively the results imply that the HFA recorded over fronto-temporal MEG sensors plays a critical role in choice option integration. |
Amie J. Durston; Roxane J. Itier The early processing of fearful and happy facial expressions is independent of task demands – Support from mass univariate analyses Journal Article In: Brain Research, vol. 1765, pp. 147505, 2021. @article{Durston2021, Most ERP studies on facial expressions of emotion have yielded inconsistent results regarding the time course of emotion effects and their possible modulation by task demands. Most studies have used classical statistical methods with a high likelihood of type I and type II errors, which can be limited with Mass Univariate statistics. FMUT and LIMO are currently the only two available toolboxes for Mass Univariate analysis of ERP data and use different fundamental statistics. Yet, no direct comparison of their output has been performed on the same dataset. Given the current push to transition to robust statistics to increase results replicability, here we compared the output of these toolboxes on data previously analyzed using classic approaches (Itier & Neath-Tavares, 2017). The early (0–352 ms) processing of fearful, happy, and neutral faces was investigated under three tasks in a within-subject design that also controlled gaze fixation location. Both toolboxes revealed main effects of emotion and task but neither yielded an interaction between the two, confirming the early processing of fear and happy expressions is largely independent of task demands. Both toolboxes found virtually no difference between neutral and happy expressions, while fearful (compared to neutral and happy) expressions modulated the N170 and EPN but elicited maximum effects after the N170 peak, around 190 ms. Similarities and differences in the spatial and temporal extent of these effects are discussed in comparison to the published classical analysis and the rest of the ERP literature. |
Tobias Feldmann-Wüstefeld Neural measures of working memory in a bilateral change detection task Journal Article In: Psychophysiology, vol. 58, no. 1, pp. e13683, 2021. @article{FeldmannWuestefeld2021, The change detection task is a widely used paradigm to examine visual working memory processes. Participants memorize a set of items and then try to detect changes in the set after a retention period. The negative slow wave (NSW) and contralateral delay activity (CDA) are event-related potentials in the EEG signal that are commonly used in change detection tasks to track working memory load, as both increase with the number of items maintained in working memory (set size). While the CDA was argued to more purely reflect the memory-specific neural activity than the NSW, it also requires a lateralized design and attention shifts prior to memoranda onset, imposing more restrictions on the task than the NSW. The present study proposes a novel change detection task in which both CDA and NSW can be measured at the same time. Memory items were presented bilaterally, but their distribution in the left and right hemifield varied, inducing a target imbalance or “net load.” NSW increased with set size, whereas CDA increased with net load. In addition, a multivariate linear classifier was able to decode the set size and net load from the EEG signal. CDA, NSW, and decoding accuracy predicted an individual's working memory capacity. In line with the notion of a bilateral advantage in working memory, accuracy and CDA data suggest that participants tended to encode items in a relatively balanced way. In sum, this novel change detection task offers a basis to make use of converging neural measures of working memory in a comprehensive paradigm. |
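As a rough sketch of how the two ERP measures in this abstract are computed (simulated amplitudes, not the paper's data): the CDA is a contralateral-minus-ipsilateral difference wave over posterior electrodes, so it is sensitive to the hemifield imbalance ("net load"), whereas the NSW is the overall bilateral negativity that tracks total set size.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_times = 100, 300

# Hypothetical delay-period amplitudes (in microvolts) at posterior
# electrodes contralateral vs. ipsilateral to the fuller hemifield.
contra = -2.0 + rng.normal(0.0, 1.0, (n_trials, n_times))
ipsi = -1.2 + rng.normal(0.0, 1.0, (n_trials, n_times))

# CDA: trial-averaged contralateral-minus-ipsilateral difference wave;
# activity common to both hemispheres cancels, leaving the lateralized
# (net-load) component.
cda = (contra - ipsi).mean(axis=0)

# NSW: overall bilateral negativity, sensitive to total load.
nsw = ((contra + ipsi) / 2).mean(axis=0)

cda_amp = cda.mean()
nsw_amp = nsw.mean()
```

The subtraction is what imposes the lateralized-design restriction the abstract mentions: without a known "more loaded" hemifield per trial, contra and ipsi cannot be assigned.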
Tobias Feldmann-Wüstefeld; Marina Weinberger; Edward Awh Spatially guided distractor suppression during visual search Journal Article In: Journal of Neuroscience, vol. 41, no. 14, pp. 3180–3191, 2021. @article{FeldmannWuestefeld2021a, Past work has demonstrated that active suppression of salient distractors is a critical part of visual selection. Evidence for goal-driven suppression includes below-baseline visual encoding at the position of salient distractors (Gaspelin and Luck, 2018) and neural signals such as the distractor positivity (Pd) that track how many distractors are presented in a given hemifield (Feldmann-Wüstefeld and Vogel, 2019). One basic question regarding distractor suppression is whether it is inherently spatial or nonspatial in character. Indeed, past work has shown that distractors evoke both spatial (Theeuwes, 1992) and nonspatial forms of interference (Folk and Remington, 1998), motivating a direct examination of whether space is integral to goal-driven distractor suppression. Here, we use behavioral and EEG data from adult humans (male and female) to provide clear evidence for a spatial gradient of suppression surrounding salient singleton distractors. Replicating past work, both reaction time and neural indices of target selection improved monotonically as the distance between target and distractor increased. Importantly, these target selection effects were paralleled by a monotonic decline in the amplitude of the Pd, an electrophysiological index of distractor suppression. Moreover, multivariate analyses revealed spatially selective activity in the alpha band that tracked the position of the target and, critically, revealed suppressed activity at spatial channels centered on distractor positions. Thus, goal-driven selection of relevant over irrelevant information benefits from a spatial gradient of suppression surrounding salient distractors. |
Joshua J. Foster; William Thyer; Janna W. Wennberg; Edward Awh Covert attention increases the gain of stimulus-evoked population codes Journal Article In: Journal of Neuroscience, vol. 41, no. 8, pp. 1802–1815, 2021. @article{Foster2021, Covert spatial attention has a variety of effects on the responses of individual neurons. However, relatively little is known about the net effect of these changes on sensory population codes, even though perception ultimately depends on population activity. Here, we measured the EEG in human observers (male and female), and isolated stimulus-evoked activity that was phase-locked to the onset of attended and ignored visual stimuli. Using an encoding model, we reconstructed spatially selective population tuning functions from the pattern of stimulus-evoked activity across the scalp. Our EEG-based approach allowed us to measure very early visually evoked responses occurring ~100 ms after stimulus onset. In Experiment 1, we found that covert attention increased the amplitude of spatially tuned population responses at this early stage of sensory processing. In Experiment 2, we parametrically varied stimulus contrast to test how this effect scaled with stimulus contrast. We found that the effect of attention on the amplitude of spatially tuned responses increased with stimulus contrast, and was well described by an increase in response gain (i.e., a multiplicative scaling of the population response). Together, our results show that attention increases the gain of spatial population codes during the first wave of visual processing. |
Wendel M. Friedl; Andreas Keil Aversive conditioning of spatial position sharpens neural population-level tuning in visual cortex and selectively alters alpha-band activity Journal Article In: Journal of Neuroscience, vol. 41, no. 26, pp. 5723–5733, 2021. @article{Friedl2021, Processing capabilities for many low-level visual features are experientially malleable, aiding sighted organisms in adapting to dynamic environments. Explicit instructions to attend a specific visual field location influence retinotopic visuocortical activity, amplifying responses to stimuli appearing at cued spatial positions. It remains undetermined both how such prioritization affects surrounding nonprioritized locations, and if a given retinotopic spatial position can attain enhanced cortical representation through experience rather than instruction. The current report examined visuocortical response changes as human observers (N = 51, 19 male) learned, through differential classical conditioning, to associate specific screen locations with aversive outcomes. Using dense-array EEG and pupillometry, we tested the preregistered hypotheses of either sharpening or generalization around an aversively associated location following a single conditioning session. Competing hypotheses tested whether mean response changes would take the form of a Gaussian (generalization) or difference-of-Gaussian (sharpening) distribution over spatial positions, peaking at the viewing location paired with a noxious noise. Occipital 15 Hz steady-state visual evoked potential responses were selectively heightened when viewing aversively paired locations and displayed a nonlinear, difference-of-Gaussian profile across neighboring locations, consistent with suppressive surround modulation of nonprioritized positions. 
Measures of alpha-band (8-12 Hz) activity were differentially altered in anterior versus posterior locations, while pupil diameter exhibited selectively heightened responses to noise-paired locations but did not evince differences across the nonpaired locations. These results indicate that visuocortical spatial representations are sharpened in response to location-specific aversive conditioning, while top-down influences indexed by alpha-power reduction exhibit posterior generalization and anterior sharpening. |
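The competing response profiles tested in the Friedl and Keil study — a Gaussian (generalization) versus a difference-of-Gaussians (sharpening) across spatial positions — can be written down directly. The parameter values below are arbitrary, chosen only to show the characteristic peak-plus-suppressive-surround shape; they are not the fitted values from the paper.

```python
import numpy as np

# Probe positions in degrees from the aversively paired (CS+) location
positions = np.linspace(-90, 90, 9)

def gaussian(x, a, sigma):
    """Generalization hypothesis: monotonic falloff around the CS+ location."""
    return a * np.exp(-x**2 / (2 * sigma**2))

def dog(x, a_exc, s_exc, a_inh, s_inh):
    """Sharpening hypothesis: narrow excitation minus broader inhibition."""
    return gaussian(x, a_exc, s_exc) - gaussian(x, a_inh, s_inh)

tuning = dog(positions, a_exc=1.0, s_exc=20.0, a_inh=0.4, s_inh=60.0)
# Hallmarks of the DoG profile: a peak at the paired location and
# below-baseline (negative) values at neighboring positions.
```

Fitting both functional forms to the observed ssVEP amplitudes and comparing fit quality is what adjudicates between generalization and sharpening.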
R. Frömer; H. Lin; C. K. Dean Wolf; M. Inzlicht; A. Shenhav Expectations of reward and efficacy guide cognitive control allocation Journal Article In: Nature Communications, vol. 12, pp. 1030, 2021. @article{Froemer2021, The amount of mental effort we invest in a task is influenced by the reward we can expect if we perform that task well. However, some of the rewards that have the greatest potential for driving these efforts are partly determined by factors beyond one's control. In such cases, effort has more limited efficacy for obtaining rewards. According to the Expected Value of Control theory, people integrate information about the expected reward and efficacy of task performance to determine the expected value of control, and then adjust their control allocation (i.e., mental effort) accordingly. Here we test this theory's key behavioral and neural predictions. We show that participants invest more cognitive control when this control is more rewarding and more efficacious, and that these incentive components separately modulate EEG signatures of incentive evaluation and proactive control allocation. Our findings support the prediction that people combine expectations of reward and efficacy to determine how much effort to invest. |
Jordan Garrett; Tom Bullock; Barry Giesbrecht Tracking the contents of spatial working memory during an acute bout of aerobic exercise Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1271–1286, 2021. @article{Garrett2021, Recent studies have reported enhanced visual responses during acute bouts of physical exercise, suggesting that sensory systems may become more sensitive during active exploration of the environment. This raises the possibility that exercise may also modulate brain activity associated with other cognitive functions, like visual working memory, that rely on patterns of activity that persist beyond the initial sensory evoked response. Here, we investigated whether the neural coding of an object location held in memory is modulated by an acute bout of aerobic exercise. Participants performed a spatial change detection task while seated on a stationary bike at rest and during low-intensity cycling (∼50 watts/50 RPM). Brain activity was measured with EEG. An inverted encoding modeling technique was employed to estimate location-selective channel response functions from topographical patterns of alpha-band (8–12 Hz) activity. There was strong evidence of robust spatially selective responses during stimulus presentation and retention periods both at rest and during exercise. During retention, the spatial selectivity of these responses decreased in the exercise condition relative to rest. A temporal generalization analysis indicated that models trained on one time period could be used to reconstruct the remembered locations at other time periods; however, generalization was degraded during exercise. Together, these results demonstrate that it is possible to reconstruct the contents of working memory at rest and during exercise, but that exercise can result in degraded responses, which contrasts with the enhancements observed in early sensory processing. |
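The inverted encoding model mentioned above follows a standard two-stage linear scheme: estimate a channels-to-electrodes weight matrix on training data, then invert it to reconstruct channel responses from new data. The sketch below runs on simulated data with a hypothetical tuning-channel basis and random mixing weights, not the study's EEG or its exact basis functions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_locations, n_electrodes, n_trials = 8, 20, 400
centers = np.arange(n_locations) * (360.0 / n_locations)

def channel_responses(angle):
    """Hypothetical basis: half-wave-rectified raised cosine per location."""
    d = np.deg2rad((angle - centers + 180.0) % 360.0 - 180.0)
    return np.maximum(np.cos(d / 2), 0.0) ** 7

angles = rng.choice(centers, n_trials)
C = np.array([channel_responses(a) for a in angles])   # trials x channels

# Simulated electrode data: channel responses mixed into electrodes + noise
W_true = rng.normal(size=(n_locations, n_electrodes))
B = C @ W_true + 0.1 * rng.normal(size=(n_trials, n_electrodes))

# Stage 1 (training): estimate the weight matrix W from B = C @ W
W_hat = np.linalg.lstsq(C, B, rcond=None)[0]

# Stage 2 (test): invert to reconstruct channel responses for a new trial
b_test = channel_responses(centers[3]) @ W_true
c_hat = np.linalg.lstsq(W_hat.T, b_test, rcond=None)[0]
decoded = centers[np.argmax(c_hat)]
```

In the study itself the electrode patterns came from alpha-band power topographies; here random mixing weights stand in for those topographies, and the spatial selectivity of `c_hat` is the quantity that decreased during exercise.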
Nicole Hakim; Edward Awh; Edward K. Vogel; Monica D. Rosenberg Inter-electrode correlations measured with EEG predict individual differences in cognitive ability Journal Article In: Current Biology, vol. 31, no. 22, pp. 4998–5008, 2021. @article{Hakim2021, Human brains share a broadly similar functional organization with consequential individual variation. This duality in brain function has primarily been observed when using techniques that consider the spatial organization of the brain, such as MRI. Here, we ask whether these common and unique signals of cognition are also present in temporally sensitive but spatially insensitive neural signals. To address this question, we compiled electroencephalogram (EEG) data from individuals of both sexes while they performed multiple working memory tasks at two different data-collection sites (n = 171 and 165). Results revealed that trial-averaged EEG activity exhibited inter-electrode correlations that were stable within individuals and unique across individuals. Furthermore, models based on these inter-electrode correlations generalized across datasets to predict participants' working memory capacity and general fluid intelligence. Thus, inter-electrode correlation patterns measured with EEG provide a signature of working memory and fluid intelligence in humans and a new framework for characterizing individual differences in cognitive abilities. |
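The feature construction behind this approach — inter-electrode correlations of trial-averaged EEG — is compact. The sketch below uses random data standing in for one participant's trial-averaged activity; electrode count and time points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_times = 30, 500

# Random stand-in for one participant's trial-averaged EEG
# (electrodes x time); cumsum gives it slow, ERP-like drifts.
erp = rng.normal(size=(n_electrodes, n_times)).cumsum(axis=1)

# Correlate every electrode's time course with every other electrode's
corr = np.corrcoef(erp)

# The unique pairs (upper triangle) form this participant's feature
# vector, which a cross-validated model could regress against working
# memory capacity or fluid intelligence scores.
iu = np.triu_indices(n_electrodes, k=1)
features = corr[iu]
```

Note the contrast with MRI-based connectomes: the matrix here indexes similarity of electrode time courses, a temporally sensitive but spatially coarse signal.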
Nicole Hakim; Tobias Feldmann-Wüstefeld; Edward Awh; Edward K. Vogel Controlling the flow of distracting information in working memory Journal Article In: Cerebral Cortex, vol. 31, no. 7, pp. 3323–3337, 2021. @article{Hakim2021a, Visual working memory (WM) must maintain relevant information, despite the constant influx of both relevant and irrelevant information. Attentional control mechanisms help determine which of this new information gets access to our capacity-limited WM system. Previous work has treated attentional control as a monolithic process - either distractors capture attention or they are suppressed. Here, we provide evidence that attentional capture may instead be broken down into at least two distinct subcomponent processes: (1) Spatial capture, which refers to when spatial attention shifts towards the location of irrelevant stimuli and (2) item-based capture, which refers to when item-based WM representations of irrelevant stimuli are formed. To dissociate these two subcomponent processes of attentional capture, we utilized a series of electroencephalography components that track WM maintenance (contralateral delay activity), suppression (distractor positivity), item individuation (N2pc), and spatial attention (lateralized alpha power). We show that new, relevant information (i.e., a task-relevant distractor) triggers both spatial and item-based capture. Irrelevant distractors, however, only trigger spatial capture from which ongoing WM representations can recover more easily. This fractionation of attentional capture into distinct subcomponent processes provides a refined framework for understanding how distracting stimuli affect attention and WM. |
Xin He; Weilin Liu; Nan Qin; Lili Lyu; Xue Dong; Min Bao Performance-dependent reward hurts performance: The non-monotonic attentional load modulation on task-irrelevant distractor processing Journal Article In: Psychophysiology, vol. 58, no. 12, pp. e13920, 2021. @article{He2021b, Selective attention is essential when we face sensory inputs with distractions. In the past decades, Lavie's load theory of selective attention delineates a complete picture of distractor suppression under different attentional control load. The present study was originally designed to explore how reward modulates the load effect of attentional selection. Unexpectedly, it revealed new findings under extended attentional load that was not involved in previous work. Participants were asked to complete a rewarded attentive visual tracking task while presented with irrelevant auditory oddball stimuli, with their behavioral performance, event-related potentials and pupillary responses recorded. We found that although the behavioral performance and pupil sizes varied unidirectionally with the attentional load, the processing of distractors as reflected by the mismatch negativity (MMN) increased first and then decreased. In contrast to the prediction of Lavie's theory that attentional control fails to effectively suppress distractor processing under high attentional control load, our finding suggests that extremely high attentional control load may instead require suppression of distractor processing at a stage as early as possible. Besides, P3a, a positive-polarity response sometimes following the MMN, was not affected by the attentional load, but both N1 (a negative-polarity component peaking ~100 ms from sound onset) and P3a were weakened at higher reward, indicating that reward leads to attenuated early processing of distractor and thus suppresses the attentional orienting towards distractors. 
These findings altogether complement Lavie's load theory of selective attention, presenting a more complex picture of how attentional load and reward affect selective attention. |
Peter J. Hills; Martin R. Vasilev; Panarai Ford; Lucy Snell; Emma Whitworth; Tessa Parsons; Rebecca Morisson; Abigail Silveira; Bernhard Angele Sensory gating is related to positive and disorganised schizotypy in contrast to smooth pursuit eye movements and latent inhibition Journal Article In: Neuropsychologia, vol. 161, pp. 107989, 2021. @article{Hills2021a, Since the characteristics and symptoms of both schizophrenia and schizotypy are manifested heterogeneously, it is possible that different endophenotypes and neurophysiological measures (sensory gating and smooth pursuit eye movement errors) represent different clusters of symptoms. Participants (N = 205) underwent a standard conditioned-pairing paradigm to establish their sensory gating ratio, a smooth-pursuit eye-movement task, a latent inhibition task, and completed the Schizotypal Personality Questionnaire. A Multidimensional Scaling analysis revealed that sensory gating was related to positive and disorganised dimensions of schizotypy. Latent inhibition and prepulse inhibition were not related to any dimension of schizotypy. Smooth pursuit eye movement error was unrelated to sensory gating and latent inhibition, but was related to negative dimensions of schizotypy. Our findings suggest that the symptom clusters associated with two main endophenotypes are largely independent. To fully understand symptomology and outcomes of schizotypal traits, the different subtypes of schizotypy (and potentially, schizophrenia) ought to be considered separately rather than together. |
Christoph Huber-Huber; Julia Steininger; Markus Grüner; Ulrich Ansorge Psychophysical dual-task setups do not measure pre-saccadic attention but saccade-related strengthening of sensory representations Journal Article In: Psychophysiology, vol. 58, no. 5, pp. e13787, 2021. @article{HuberHuber2021a, Visual attention and saccadic eye movements are linked in a tight, yet flexible fashion. In humans, this link is typically studied with dual-task setups. Participants are instructed to execute a saccade to some target location, while a discrimination target is flashed on a screen before the saccade can be made. Participants are also instructed to report a specific feature of this discrimination target at the trial end. Discrimination performance is usually better if the discrimination target occurred at the same location as the saccade target compared to when it occurred at a different location, which is explained by the mandatory shift of attention to the saccade target location before saccade onset. This pre-saccadic shift of attention presumably enhances the perception of the discrimination target if it occurred at the same, but not if it occurred at a different location. It is, however, known that a dual-task setup can alter the primary process under investigation. Here, we directly compared pre-saccadic attention in single-task versus dual-task setups using concurrent electroencephalography (EEG) and eye-tracking. Our results corroborate the idea of a pre-saccadic shift of attention. They, however, question that this shift leads to the same-position discrimination advantage. The relation of saccade and discrimination target position affected the EEG signal only after saccade onset. Our results, thus, favor an alternative explanation based on the role of saccades for the consolidation of sensory and short-term memory. We conclude that studies with dual-task setups arrived at a valid conclusion despite not measuring exactly what they intended to measure. |
Mohsen Parto Dezfouli; Saeideh Davoudi; Robert T. Knight; Mohammad Reza Daliri; Elizabeth L. Johnson Prefrontal lesions disrupt oscillatory signatures of spatiotemporal integration in working memory Journal Article In: Cortex, vol. 138, pp. 113–126, 2021. @article{PartoDezfouli2021, How does the human brain integrate spatial and temporal information into unified mnemonic representations? Building on classic theories of feature binding, we first define the oscillatory signatures of integrating ‘where' and ‘when' information in working memory (WM) and then investigate the role of prefrontal cortex (PFC) in spatiotemporal integration. Fourteen individuals with lateral PFC damage and 20 healthy controls completed a visuospatial WM task while electroencephalography (EEG) was recorded. On each trial, two shapes were presented sequentially in a top/bottom spatial orientation. We defined EEG signatures of spatiotemporal integration by comparing the maintenance of two possible where-when configurations: the first shape presented on top and the reverse. Frontal delta-theta (δθ; 2–7 Hz) activity, frontal-posterior δθ functional connectivity, lateral posterior event-related potentials, and mesial posterior alpha phase-to-gamma amplitude coupling dissociated the two configurations in controls. WM performance and frontal and mesial posterior signatures of spatiotemporal integration were diminished in PFC lesion patients, whereas lateral posterior signatures were intact. These findings reveal both PFC-dependent and independent substrates of spatiotemporal integration and link optimal performance to PFC. |
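Alpha phase-to-gamma amplitude coupling, one of the signatures reported above, is often quantified with a mean-vector-length modulation index (after Canolty and colleagues). Below is a minimal, numpy-only sketch on simulated coupled data; the FFT-based `hilbert` helper is a stand-in for `scipy.signal.hilbert`, and all frequencies and coupling strengths are illustrative.

```python
import numpy as np

fs = 500.0
t = np.arange(0, 20, 1 / fs)

alpha = np.sin(2 * np.pi * 10 * t)     # 10 Hz phase-giving signal
gamma_amp = 1 + 0.8 * alpha            # gamma envelope tied to alpha phase
composite = alpha + gamma_amp * np.sin(2 * np.pi * 40 * t)

def hilbert(x):
    """Analytic signal via FFT (numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

phase = np.angle(hilbert(alpha))            # instantaneous alpha phase
amp = np.abs(hilbert(composite - alpha))    # gamma envelope

# Mean vector length: large when the envelope is systematically
# bigger at some phases than at others.
mi = np.abs(np.mean(amp * np.exp(1j * phase)))

# Control: an unmodulated gamma signal yields a near-zero index
amp_ctrl = np.abs(hilbert(np.sin(2 * np.pi * 40 * t)))
mi_ctrl = np.abs(np.mean(amp_ctrl * np.exp(1j * phase)))
```

In practice the phase and amplitude signals come from band-pass-filtered EEG rather than pure sinusoids, and the index is typically compared against a surrogate distribution built by shifting phase relative to amplitude.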
Jairo Perez-Osorio; Abdulaziz Abubshait; Agnieszka Wykowska In: Journal of Cognitive Neuroscience, vol. 34, no. 1, pp. 108–126, 2021. @article{PerezOsorio2021, Understanding others' nonverbal behavior is essential for social interaction, as it allows, among others, to infer mental states. While gaze communication, a well-established nonverbal social behavior, has shown its importance in inferring others' mental states, not much is known about the effects of irrelevant gaze signals on cognitive conflict markers during collaborative settings. Here, participants completed a categorization task where they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot “moved” the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head-cues would induce more errors (Study 1), would be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitude in electrophysiological markers of cognitive conflict (Study 3). Results of the three studies show more oculomotor interference as measured in error rates (Study 1), larger curvatures eye-tracking trajectories (Study 2), and higher amplitudes of the N2 event-related potential (ERP) of the EEG signals as well as higher Event-Related Spectral Perturbation (ERSP) amplitudes (Study 3) for incongruent trials compared to congruent trials. Our findings reveal that behavioral, ocular and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks. |
Thomas Pfeffer; Adrian Ponce-Alvarez; Konstantinos Tsetsos; Thomas Meindertsma; Christoffer Julius Gahnström; Ruud Lucas Brink; Guido Nolte; Andreas Karl Engel; Gustavo Deco; Tobias Hinrich Donner Circuit mechanisms for the chemical modulation of cortex-wide network interactions and behavioral variability Journal Article In: Science Advances, vol. 7, no. 29, pp. eabf5620, 2021. @article{Pfeffer2021, Influential theories postulate distinct roles of catecholamines and acetylcholine in cognition and behavior. However, previous physiological work reported similar effects of these neuromodulators on the response properties (specifically, the gain) of individual cortical neurons. Here, we show a double dissociation between the effects of catecholamines and acetylcholine at the level of large-scale interactions between cortical areas in humans. A pharmacological boost of catecholamine levels increased cortex-wide interactions during a visual task, but not rest. An acetylcholine boost decreased interactions during rest, but not task. Cortical circuit modeling explained this dissociation by differential changes in two circuit properties: The local excitation-inhibition balance (more strongly increased by catecholamines) and intracortical transmission (more strongly reduced by acetylcholine). The inferred catecholaminergic mechanism also predicted noisier decision-making, which we confirmed for both perceptual and value-based choice behavior. Our work highlights specific circuit mechanisms for shaping cortical network interactions and behavioral variability by key neuromodulatory systems. |
Ella Podvalny; Leana E. King; Biyu J. He Spectral signature and behavioral consequence of spontaneous shifts of pupil-linked arousal in human Journal Article In: eLife, vol. 10, pp. e68265, 2021. @article{Podvalny2021, Arousal levels perpetually rise and fall spontaneously. How markers of arousal—pupil size and frequency content of brain activity—relate to each other and influence behavior in humans is poorly understood. We simultaneously monitored magnetoencephalography and pupil in healthy volunteers at rest and during a visual perceptual decision-making task. Spontaneously varying pupil size correlates with power of brain activity in most frequency bands across large-scale resting-state cortical networks. Pupil size recorded at prestimulus baseline correlates with subsequent shifts in detection bias (c) and sensitivity (d'). When dissociated from pupil-linked state, prestimulus spectral power of resting state networks still predicts perceptual behavior. Fast spontaneous pupil constriction and dilation correlate with large-scale brain activity as well but not perceptual behavior. Our results illuminate the relation between central and peripheral arousal markers and their respective roles in human perceptual decision-making. |
Hamed Rahimi-Nasrabadi; Jianzhong Jin; Reece Mazade; Carmen Pons; Sohrab Najafian; Jose-Manuel Alonso Image luminance changes contrast sensitivity in visual cortex Journal Article In: Cell Reports, vol. 34, no. 5, pp. 1–21, 2021. @article{RahimiNasrabadi2021, Accurate measures of contrast sensitivity are important for evaluating visual disease progression and for navigation safety. Previous measures suggested that cortical contrast sensitivity was constant across widely different luminance ranges experienced indoors and outdoors. Against this notion, here, we show that luminance range changes contrast sensitivity in both cat and human cortex, and the changes are different for dark and light stimuli. As luminance range increases, contrast sensitivity increases more within cortical pathways signaling lights than those signaling darks. Conversely, when the luminance range is constant, light-dark differences in contrast sensitivity remain relatively constant even if background luminance changes. We show that a Naka-Rushton function modified to include luminance range and light-dark polarity accurately replicates both the statistics of light-dark features in natural scenes and the cortical responses to multiple combinations of contrast and luminance. We conclude that differences in light-dark contrast increase with luminance range and are largest in bright environments. |
Isabelle A. Rosenthal; Shridhar R. Singh; Katherine L. Hermann; Dimitrios Pantazis; Bevil R. Conway Color space geometry uncovered with magnetoencephalography Journal Article In: Current Biology, vol. 31, no. 3, pp. 515–526, 2021. @article{Rosenthal2021, The geometry that describes the relationship among colors, and the neural mechanisms that support color vision, are unsettled. Here, we use multivariate analyses of measurements of brain activity obtained with magnetoencephalography to reverse-engineer a geometry of the neural representation of color space. The analyses depend upon determining similarity relationships among the spatial patterns of neural responses to different colors and assessing how these relationships change in time. We evaluate the approach by relating the results to universal patterns in color naming. Two prominent patterns of color naming could be accounted for by the decoding results: the greater precision in naming warm colors compared to cool colors evident by an interaction of hue and lightness, and the preeminence among colors of reddish hues. Additional experiments showed that classifiers trained on responses to color words could decode color from data obtained using colored stimuli, but only at relatively long delays after stimulus onset. These results provide evidence that perceptual representations can give rise to semantic representations, but not the reverse. Taken together, the results uncover a dynamic geometry that provides neural correlates for color appearance and generates new hypotheses about the structure of color space. |
Giulia C. Salgari; Geoffrey F. Potts; Joseph Schmidt; Chi C. Chan; Christopher C. Spencer; Jeffrey S. Bedwell Event-related potentials to rare visual targets and negative symptom severity in a transdiagnostic psychiatric sample Journal Article In: Clinical Neurophysiology, vol. 132, no. 7, pp. 1526–1536, 2021. @article{Salgari2021, Objectives: Negative psychiatric symptoms are often resistant to treatments, regardless of the disorder in which they appear. One model for a cause of negative symptoms is impairment in higher-order cognition. The current study examined how particular bottom-up and top-down mechanisms of selective attention relate to severity of negative symptoms across a transdiagnostic psychiatric sample. Methods: The sample consisted of 130 participants: 25 schizophrenia-spectrum disorders, 26 bipolar disorders, 18 unipolar depression, and 61 nonpsychiatric controls. The relationships between attentional event-related potentials following rare visual targets (i.e., N1, N2b, P2a, and P3b) and severity of the negative symptom domains of anhedonia, avolition, and blunted affect were evaluated using frequentist and Bayesian analyses. Results: P3b and N2b mean amplitudes were inversely related to the Positive and Negative Syndrome Scale-Negative Symptom Factor severity score across the entire sample. Subsequent regression analyses showed a significant negative transdiagnostic relationship between P3b amplitude and blunted affect severity. Conclusions: Results indicate that negative symptoms, and particularly blunted affect, may have a stronger association with deficits in top-down mechanisms of selective attention. Significance: This suggests that people with greater severity of blunted affect, independent of diagnosis, do not allocate sufficient cognitive resources when engaging in activities requiring selective attention. |
Sebastian Schindler; Clara Tirloni; Maximilian Bruchmann; Thomas Straube Face and emotional expression processing under continuous perceptual load tasks: An ERP study Journal Article In: Biological Psychology, vol. 161, pp. 108056, 2021. @article{Schindler2021, High perceptual load is thought to impair even the early stages of visual processing of task-irrelevant visual stimuli. However, recent studies showed no effects of perceptual load on early ERPs in response to task-irrelevant emotional faces. In this preregistered EEG study (N = 40), we investigated the effects of continuous perceptual load on ERPs to fearful and neutral task-irrelevant faces and their phase-scrambled versions. Perceptual load did not modulate face or emotion effects for the P1 or N170. In contrast, larger face-scramble and fearful-neutral differentiation were found during low as compared to high load for the Early Posterior Negativity (EPN). Further, face-independent P1, but face-dependent N170 emotional modulations were observed. Taken together, our findings show that P1 and N170 face and emotional modulations are highly resistant to load manipulations, indicating a high degree of automaticity during this processing stage, whereas the EPN might represent a bottleneck in visual information processing. |
Constanze Schmitt; Jakob C. B. Schwenk; Adrian Schütz; Jan Churan; André Kaminiarz; Frank Bremmer Preattentive processing of visually guided self-motion in humans and monkeys Journal Article In: Progress in Neurobiology, vol. 205, pp. 102117, 2021. @article{Schmitt2021, The visually-based control of self-motion is a challenging task, requiring – if needed – immediate adjustments to keep on track. Accordingly, it would appear advantageous if the processing of self-motion direction (heading) was predictive, thereby accelerating the encoding of unexpected changes, and unimpaired by attentional load. We tested this hypothesis by recording EEG in humans and macaque monkeys with similar experimental protocols. Subjects viewed a random dot pattern simulating self-motion across a ground plane in an oddball EEG paradigm. Standard and deviant trials differed only in their simulated heading direction (forward-left vs. forward-right). Event-related potentials (ERPs) were compared in order to test for the occurrence of a visual mismatch negativity (vMMN), a component that reflects preattentive and likely also predictive processing of sensory stimuli. Analysis of the ERPs revealed signatures of a prediction mismatch for deviant stimuli in both humans and monkeys. In humans, an MMN was observed starting 110 ms after self-motion onset. In monkeys, peak response amplitudes following deviant stimuli were enhanced compared to the standard already 100 ms after self-motion onset. We consider our results strong evidence for a preattentive processing of visual self-motion information in humans and monkeys, allowing for ultrafast adjustments of their heading direction. |
Jack W. Silcox; Brennan R. Payne The costs (and benefits) of effortful listening on context processing: A simultaneous electrophysiology, pupillometry, and behavioral study Journal Article In: Cortex, vol. 142, pp. 296–316, 2021. @article{Silcox2021, There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words (“The prisoners were planning their escape/party”) or were low-constraint sentences with unexpected sentence-final words (“All day she thought about the party”). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. 
Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory. |
Rodolfo Solís-Vivanco; Ole Jensen; Mathilde Bonnefond New insights on the ventral attention network: Active suppression and involuntary recruitment during a bimodal task Journal Article In: Human Brain Mapping, vol. 42, no. 6, pp. 1699–1713, 2021. @article{SolisVivanco2021, Detection of unexpected, yet relevant events is essential in daily life. fMRI studies have revealed the involvement of the ventral attention network (VAN), including the temporo-parietal junction (TPJ), in such process. In this MEG study with 34 participants (17 women), we used a bimodal (visual/auditory) attention task to determine the neuronal dynamics associated with suppression of the activity of the VAN during top-down attention and its recruitment when information from the unattended sensory modality is involuntarily integrated. We observed an anticipatory power increase of alpha/beta oscillations (12–20 Hz, previously associated with functional inhibition) in the VAN following a cue indicating the modality to attend. Stronger VAN power increases were associated with better task performance, suggesting that the VAN suppression prevents shifting attention to distractors. Moreover, the TPJ was synchronized with the frontal eye field in that frequency band, indicating that the dorsal attention network (DAN) might participate in such suppression. Furthermore, we found a 12–20 Hz power decrease and enhanced synchronization, in both the VAN and DAN, when information between sensory modalities was congruent, suggesting an involvement of these networks when attention is involuntarily enhanced due to multisensory integration. Our results show that effective multimodal attentional allocation includes the modulation of the VAN and DAN through upper-alpha/beta oscillations. Altogether these results indicate that the suppressing role of alpha/beta oscillations might operate beyond sensory regions. |
Jemaine E. Stacey; Mark Crook-Rumsey; Alexander Sumich; Christina J. Howard; Trevor Crawford; Kinneret Livne; Sabrina Lenzoni; Stephen Badham Age differences in resting state EEG and their relation to eye movements and cognitive performance Journal Article In: Neuropsychologia, vol. 157, pp. 107887, 2021. @article{Stacey2021, Prior research has focused on EEG differences across age or EEG differences across cognitive tasks/eye tracking. There are few studies linking age differences in EEG to age differences in behavioural performance which is necessary to establish how neuroactivity corresponds to successful and impaired ageing. Eighty-six healthy participants completed a battery of cognitive tests and eye-tracking measures. Resting state EEG (n = 75, 31 young, 44 older adults) was measured for delta, theta, alpha and beta power as well as for alpha peak frequency. Age deficits in cognition were aligned with the literature, showing working memory and inhibitory deficits along with an older adult advantage in vocabulary. Older adults showed poorer eye movement accuracy and response times, but we did not replicate literature showing a greater age deficit for antisaccades than for prosaccades. We replicated EEG literature showing lower alpha peak frequency in older adults but not literature showing lower alpha power. Older adults also showed higher beta power and less parietal alpha power asymmetry than young adults. Interaction effects showed that better prosaccade performance was related to lower beta power in young adults but not in older adults. Performance at the trail making test part B (measuring task switching and inhibition) was improved for older adults with higher resting state delta power but did not depend on delta power for young adults. It is argued that individuals with higher slow-wave resting EEG may be more resilient to age deficits in tasks that utilise cross-cortical processing. |
Benjamin J. Stauch; Alina Peter; Heike Schuler; Pascal Fries Stimulus-specific plasticity in human visual gamma-band activity and functional connectivity Journal Article In: eLife, vol. 10, pp. e68240, 2021. @article{Stauch2021, Under natural conditions, the visual system often sees a given input repeatedly. This provides an opportunity to optimize processing of the repeated stimuli. Stimulus repetition has been shown to strongly modulate neuronal-gamma band synchronization, yet crucial questions remained open. Here we used magnetoencephalography in 30 human subjects and find that gamma decreases across ≈10 repetitions and then increases across further repetitions, revealing plastic changes of the activated neuronal circuits. Crucially, increases induced by one stimulus did not affect responses to other stimuli, demonstrating stimulus specificity. Changes partially persisted when the inducing stimulus was repeated after 25 minutes of intervening stimuli. They were strongest in early visual cortex and increased interareal feedforward influences. Our results suggest that early visual cortex gamma synchronization enables adaptive neuronal processing of recurring stimuli. These and previously reported changes might be due to an interaction of oscillatory dynamics with established synaptic plasticity mechanisms. |
David W. Sutterer; Andrew J. Coia; Vincent Sun; Steven K. Shevell; Edward Awh Decoding chromaticity and luminance from patterns of EEG activity Journal Article In: Psychophysiology, vol. 58, no. 4, pp. e13779, 2021. @article{Sutterer2021, A long-standing question in the field of vision research is whether scalp-recorded EEG activity contains sufficient information to identify stimulus chromaticity. Recent multivariate work suggests that it is possible to decode which chromaticity an observer is viewing from the multielectrode pattern of EEG activity. There is debate, however, about whether the claimed effects of stimulus chromaticity on visual evoked potentials (VEPs) are instead caused by unequal stimulus luminances, which are achromatic differences. Here, we tested whether stimulus chromaticity could be decoded when potential confounds with luminance were minimized by (1) equating chromatic stimuli in luminance using heterochromatic flicker photometry for each observer and (2) independently varying the chromaticity and luminance of target stimuli, enabling us to test whether the pattern for a given chromaticity generalized across wide variations in luminance. We also tested whether luminance variations can be decoded from the topography of voltage across the scalp. In Experiment 1, we presented two chromaticities (appearing red and green) at three luminance levels during separate trials. In Experiment 2, we presented four chromaticities (appearing red, orange, yellow, and green) at two luminance levels. Using a pattern classifier and the multielectrode pattern of EEG activity, we were able to accurately decode the chromaticity and luminance level of each stimulus. Furthermore, we were able to decode stimulus chromaticity when we trained the classifier on chromaticities presented at one luminance level and tested at a different luminance level. 
Thus, EEG topography contains robust information regarding stimulus chromaticity, despite large variations in stimulus luminance. |
David W. Sutterer; Sean M. Polyn; Geoffrey F. Woodman α-Band activity tracks a two-dimensional spotlight of attention during spatial working memory maintenance Journal Article In: Journal of Neurophysiology, vol. 125, no. 3, pp. 957–971, 2021. @article{Sutterer2021a, Covert spatial attention is thought to facilitate the maintenance of locations in working memory, and EEG α-band activity (8–12 Hz) is proposed to track the focus of covert attention. Recent work has shown that multivariate patterns of α-band activity track the polar angle of remembered locations relative to fixation. However, a defining feature of covert spatial attention is that it facilitates processing in a specific region of the visual field, and prior work has not determined whether patterns of α-band activity track the two-dimensional (2-D) coordinates of remembered stimuli within a visual hemifield or are instead maximally sensitive to the polar angle of remembered locations around fixation. Here, we used a lateralized spatial estimation task, in which observers remembered the location of one or two target dots presented to one side of fixation, to test this question. By applying a linear discriminant classifier to the topography of α-band activity, we found that we were able to decode the location of remembered stimuli. Critically, model comparison revealed that the pattern of classifier choices observed across remembered positions was best explained by a model assuming that α-band activity tracks the 2-D coordinates of remembered locations rather than a model assuming that α-band activity tracks the polar angle of remembered locations relative to fixation. These results support the hypothesis that this α-band activity is involved in the spotlight of attention, and arises from mid- to lower-level visual areas involved in maintaining spatial locations in working memory. 
NEW & NOTEWORTHY A substantial body of work has shown that patterns of EEG α-band activity track the angular coordinates of attended and remembered stimuli around fixation, but whether these patterns track the two-dimensional coordinates of stimuli presented within a visual hemifield remains an open question. Here, we demonstrate that α-band activity tracks the two-dimensional coordinates of remembered stimuli within a hemifield, showing that α-band activity reflects a spotlight of attention focused on locations maintained in working memory. |
Yu Takagi; Laurence Tudor Hunt; Mark W. Woolrich; Timothy E. J. Behrens; Miriam C. Klein-Flügge Adapting non-invasive human recordings along multiple task-axes shows unfolding of spontaneous and over-trained choice Journal Article In: eLife, vol. 10, pp. 1–27, 2021. @article{Takagi2021, Choices rely on a transformation of sensory inputs into motor responses. Using invasive single neuron recordings, the evolution of a choice process has been tracked by projecting population neural responses into state spaces. Here, we develop an approach that allows us to recover similar trajectories on a millisecond timescale in non-invasive human recordings. We selectively suppress activity related to three task-axes, relevant and irrelevant sensory inputs and response direction, in magnetoencephalography data acquired during context-dependent choices. Recordings from premotor cortex show a progression from processing sensory input to processing the response. In contrast to previous macaque recordings, information related to choice-irrelevant features is represented more weakly than choice-relevant sensory information. To test whether this mechanistic difference between species is caused by extensive over-training common in non-human primate studies, we trained humans on >20,000 trials of the task. Choice-irrelevant features were still weaker than relevant features in premotor cortex after over-training. |
Travis N. Talcott; Nicholas Gaspelin Eye movements are not mandatorily preceded by the N2pc component Journal Article In: Psychophysiology, vol. 58, no. 6, pp. e13821, 2021. @article{Talcott2021, Researchers typically distinguish between two mechanisms of attentional selection in vision: overt and covert attention. A commonplace assumption is that overt eye movements are automatically preceded by shifts of covert attention during visual search. Although the N2pc component is a putative index of covert attentional orienting, little is currently known about its relationship with overt eye movements. This is because most previous studies of the N2pc component prohibit overt eye movements. The current study assessed this relationship by concurrently measuring covert attention (via the N2pc) and overt eye movements (via eye tracking). Participants searched displays for a lateralized target stimulus and were allowed to generate overt eye movements during the search. We then assessed whether overt eye movements were preceded by the N2pc component. The results indicated that saccades were preceded by an N2pc component, but only when participants were required to carefully inspect the target stimulus before initiating the eye movement. When participants were allowed to make naturalistic eye movements in service of visual search, there was no evidence of an N2pc component before eye movements. These findings suggest that the N2pc component does not always precede overt eye movements during visual search. Implications for understanding the relationship between covert and overt attention are discussed. |
2020 |
Christian Pfeiffer; Nora Hollenstein; Ce Zhang; Nicolas Langer Neural dynamics of sentiment processing during naturalistic sentence reading Journal Article In: NeuroImage, vol. 218, pp. 116934, 2020. @article{Pfeiffer2020, When we read, our eyes move through the text in a series of fixations and high-velocity saccades to extract visual information. This process allows the brain to obtain meaning, e.g., about sentiment, or the emotional valence, expressed in the written text. How exactly the brain extracts the sentiment of single words during naturalistic reading is largely unknown. This is due to the challenges of naturalistic imaging, which has previously led researchers to employ highly controlled, timed word-by-word presentations of custom reading materials that lack ecological validity. Here, we aimed to assess the electrical neural correlates of word sentiment processing during naturalistic reading of English sentences. We used a publicly available dataset of simultaneous electroencephalography (EEG), eye-tracking recordings, and word-level semantic annotations from 7129 words in 400 sentences (Zurich Cognitive Language Processing Corpus; Hollenstein et al., 2018). We computed fixation-related potentials (FRPs), which are evoked electrical responses time-locked to the onset of fixations. A general linear mixed model analysis of FRPs cleaned from visual- and motor-evoked activity showed a topographical difference between the positive and negative sentiment condition in the 224–304 ms interval after fixation onset in left-central and right-posterior electrode clusters. An additional analysis that included word-, phrase-, and sentence-level sentiment predictors showed the same FRP differences for the word-level sentiment, but no additional FRP differences for phrase- and sentence-level sentiment. 
Furthermore, decoding analysis that classified word sentiment (positive or negative) from sentiment-matched 40-trial average FRPs showed a 0.60 average accuracy (95% confidence interval: [0.58, 0.61]). Control analyses ruled out that these results were based on differences in eye movements or linguistic features other than word sentiment. Our results extend previous research by showing that the emotional valence of lexico-semantic stimuli evoke a fast electrical neural response upon word fixation during naturalistic reading. These results provide an important step to identify the neural processes of lexico-semantic processing in ecologically valid conditions and can serve to improve computer algorithms for natural language processing. |
Reuben Rideaux; Elizabeth Michael; Andrew E. Welchman Adaptation to binocular anticorrelation results in increased neural excitability Journal Article In: Journal of Cognitive Neuroscience, vol. 32, no. 1, pp. 100–110, 2020. @article{Rideaux2020, Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Many neurons appear tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause, i.e., establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, many binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioural evidence supporting the existence of these neurons (Cumming & Parker, 1997; Janssen, Vogels, Liu, & Orban, 2003; Katyal, Vergeer, He, He, & Engel, 2018; Kingdom, Jennings, & Georgeson, 2018; Tsao, Conway, & Livingstone, 2003), their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers' steady-state visually evoked potentials (SSVEPs) in response to change in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger SSVEPs, while adaptation to correlated RDS has no effect. These results are consistent with recent theoretical work suggesting 'what not' neurons play a suppressive role in supporting stereopsis (Goncalves & Welchman, 2017); that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression resulting in increased neural excitability. |
Andre Roelke; Christian Vorstius; Ralph Radach; Markus J. Hofmann Fixation-related NIRS indexes retinotopic occipital processing of parafoveal preview during natural reading Journal Article In: NeuroImage, vol. 215, pp. 116823, 2020. @article{Roelke2020, While word frequency and predictability effects have been examined extensively, any evidence on interactive effects as well as parafoveal influences during whole sentence reading remains inconsistent and elusive. Novel neuroimaging methods utilize eye movement data to account for the hemodynamic responses of very short events such as fixations during natural reading. In this study, we used the rapid sampling frequency of near-infrared spectroscopy (NIRS) to investigate neural responses in the occipital and orbitofrontal cortex to word frequency and predictability. We observed increased activation in the right ventral occipital cortex when the fixated word N was of low frequency, which we attribute to an enhanced cost during saccade planning. Importantly, unpredictable (in contrast to predictable) low frequency words increased the activity in the left dorsal occipital cortex at the fixation of the preceding word N-1, presumably due to an upcoming breach of top-down modulated expectation. Opposite to studies that utilized a serial presentation of words (e.g. Hofmann et al., 2014), we did not find such an interaction in the orbitofrontal cortex, implying that top-down timing of cognitive subprocesses is not required during natural reading. We discuss the implications of an interactive parafoveal-on-foveal effect for current models of eye movements. |
Steven W. Savage; Douglas D. Potter; Benjamin W. Tatler The effects of cognitive distraction on behavioural, oculomotor and electrophysiological metrics during a driving hazard perception task Journal Article In: Accident Analysis and Prevention, vol. 138, pp. 1–11, 2020. @article{Savage2020, Previous research has demonstrated that the distraction caused by holding a mobile telephone conversation is not limited to the period of the actual conversation (Haigney, 1995; Redelmeier & Tibshirani, 1997; Savage et al., 2013). In a prior study we identified potential eye movement and EEG markers of cognitive distraction during driving hazard perception. However, the extent to which these markers are affected by the demands of the hazard perception task is unclear. Therefore, in the current study we assessed the effects of secondary cognitive task demand on eye movement and EEG metrics separately for periods prior to, during and after the hazard was visible. We found that when no hazard was present (prior and post hazard windows), distraction resulted in changes to various elements of saccadic eye movements. However, when the target was present, distraction did not affect eye movements. We have previously found evidence that distraction resulted in an overall decrease in theta band output at occipital sites of the brain. This was interpreted as evidence that distraction results in a reduction in visual processing. The current study confirmed this by examining the effects of distraction on the lambda response component of subjects' eye-fixation-related potentials (EFRPs). Furthermore, we demonstrated that although detections of hazards were not affected by distraction, both eye movement and EEG metrics prior to the onset of the hazard were sensitive to changes in cognitive workload. This suggests that changes to specific aspects of the saccadic eye movement system could act as unobtrusive markers of distraction even prior to a breakdown in driving performance. |
Christoph Schneider; Michael Pereira; Luca Tonin; José del R. Millán Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task Journal Article In: Brain Topography, vol. 33, no. 1, pp. 48–59, 2020. @article{Schneider2020, Visual attention can be spatially oriented, even in the absence of saccadic eye-movements, to facilitate the processing of incoming visual information. One behavioral proxy for this so-called covert visuospatial attention (CVSA) is the validity effect (VE): the reduction in reaction time (RT) to visual stimuli at attended locations and the increase in RT to stimuli at unattended locations. At the electrophysiological level, one correlate of CVSA is the lateralization in the occipital α-band oscillations, resulting from α-power increases ipsilateral and decreases contralateral to the attended hemifield. While this α-band lateralization has been considerably studied using electroencephalography (EEG) or magnetoencephalography (MEG), little is known about whether it can be trained to improve CVSA behaviorally. In this cross-over sham-controlled study we used continuous real-time feedback of the occipital α-lateralization to modulate behavioral and electrophysiological markers of covert attention. Fourteen subjects performed a cued CVSA task, involving fast responses to covertly attended stimuli. During real-time feedback runs, trials extended in time if subjects reached states of high α-lateralization. Crucially, the ongoing α-lateralization was fed back to the subject by changing the color of the attended stimulus. We hypothesized that this ability to self-monitor lapses in CVSA and thus being able to refocus attention accordingly would lead to improved CVSA performance during subsequent testing. We probed the effect of the intervention by evaluating the pre-post changes in the VE and the α-lateralization. 
Behaviorally, results showed a significant interaction between feedback (experimental–sham) and time (pre-post) for the validity effect, with an increase in performance only for the experimental condition. We did not find corresponding pre-post changes in the α-lateralization. Our findings suggest that EEG-based real-time feedback is a promising tool to enhance the level of covert visuospatial attention, especially with respect to behavioral changes. This opens up the exploration of applications of the proposed training method for the cognitive rehabilitation of attentional disorders. |
Eelke Spaak; Floris P. de Lange Hippocampal and prefrontal theta-band mechanisms underpin implicit spatial context learning Journal Article In: Journal of Neuroscience, vol. 40, no. 1, pp. 191–202, 2020. @article{Spaak2020, Humans can rapidly and seemingly implicitly learn to predict typical locations of relevant items when those items are encountered in familiar spatial contexts. Two important questions remain, however, concerning this type of learning: (1) which neural structures and mechanisms are involved in acquiring and exploiting such contextual knowledge? (2) Is this type of learning truly implicit and unconscious? We now answer both these questions after closely examining behavior and recording neural activity using MEG while observers (male and female) were acquiring and exploiting statistical regularities. Computational modeling of behavioral data suggested that, after repeated exposures to a spatial context, participants' behavior was marked by an abrupt switch to an exploitation strategy of the learnt regularities. MEG recordings showed that hippocampus and prefrontal cortex (PFC) were involved in the task and furthermore revealed a striking dissociation: only the initial learning phase was associated with hippocampal theta band activity, while the subsequent exploitation phase showed a shift in theta band activity to the PFC. Intriguingly, the behavioral benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the expectations acquired. Together, these findings demonstrate that (1a) hippocampus and PFC play complementary roles in the implicit, unconscious learning and exploitation of spatial statistical regularities; (1b) these mechanisms are implemented in the theta frequency band; and (2) contextual knowledge can indeed be acquired unconsciously, and awareness of such knowledge can even interfere with the exploitation thereof. |
Davide Tabarelli; Christian Keitel; Joachim Gross; Daniel Baldauf Spatial attention enhances cortical tracking of quasi-rhythmic visual stimuli Journal Article In: NeuroImage, vol. 208, pp. 116444, 2020. @article{Tabarelli2020, Successfully interpreting and navigating our natural visual environment requires us to track its dynamics constantly. Additionally, we focus our attention on behaviorally relevant stimuli to enhance their neural processing. Little is known, however, about how sustained attention affects the ongoing tracking of stimuli with rich natural temporal dynamics. Here, we used MRI-informed source reconstructions of magnetoencephalography (MEG) data to map to what extent various cortical areas track concurrent continuous quasi-rhythmic visual stimulation. Further, we tested how top-down visuo-spatial attention influences this tracking process. Our bilaterally presented quasi-rhythmic stimuli covered a dynamic range of 4–20 Hz, subdivided into three distinct bands. As an experimental control, we also included strictly rhythmic stimulation (10 vs 12 Hz). Using a spectral measure of brain-stimulus coupling, we were able to track the neural processing of left vs. right stimuli independently, even while fluctuating within the same frequency range. The fidelity of neural tracking depended on the stimulation frequencies, decreasing for higher frequency bands. Both attended and non-attended stimuli were tracked beyond early visual cortices, in ventral and dorsal streams depending on the stimulus frequency. In general, tracking improved with the deployment of visuo-spatial attention to the stimulus location. Our results provide new insights into how human visual cortices process concurrent dynamic stimuli and provide a potential mechanism – namely increasing the temporal precision of tracking – for boosting the neural representation of attended input. |
L. Tankelevitch; E. Spaak; M. F. S. Rushworth; M. G. Stokes In: Journal of Neuroscience, vol. 40, no. 26, pp. 5033–5050, 2020. @article{Tankelevitch2020, Studies of selective attention typically consider the role of task goals or physical salience, but recent work has shown that attention can also be captured by previously reward-associated stimuli, even when these are no longer relevant (i.e., value-driven attentional capture; VDAC). We used magnetoencephalography (MEG) to investigate how previously reward-associated stimuli are processed, the time-course of reward history effects, and how this relates to the behavioural effects of VDAC. Male and female human participants first completed a reward learning task to establish stimulus-reward associations. Next, we measured attentional capture in a separate task by presenting these stimuli in the absence of reward contingency, and probing their effects on the processing of separate target stimuli presented at different time lags. Using time-resolved multivariate pattern analysis, we found that learned value modulated the spatial selection of previously rewarded stimuli in occipital, inferior temporal, and parietal cortex from ~260 ms after stimulus onset. This value modulation was related to the strength of participants' behavioural VDAC effect and persisted into subsequent target processing. Furthermore, we found a spatially invariant value signal from ~340 ms. Importantly, learned value did not influence the neural discriminability of the previously rewarded stimuli in visual cortical areas. Our results suggest that VDAC is underpinned by learned value signals which modulate spatial selection throughout posterior visual and parietal cortex. We further suggest that VDAC can occur in the absence of changes in early visual cortical processing. Significance statement: Attention is our ability to focus on relevant information at the expense of irrelevant information. 
It can be affected by previously learned but currently irrelevant stimulus-reward associations, a phenomenon termed “value-driven attentional capture” (VDAC). The neural mechanisms underlying VDAC remain unclear. It has been speculated that reward learning induces visual cortical plasticity which modulates early visual processing to capture attention. Although we find that learned value modulates spatial attention in sensory brain areas, an effect which correlates with VDAC, we find no relevant signatures of visual cortical plasticity. |
Leyla Isik; Anna Mynick; Dimitrios Pantazis; Nancy Kanwisher The speed of human social interaction perception Journal Article In: NeuroImage, vol. 215, pp. 116844, 2020. @article{Isik2020, The ability to perceive others' social interactions, here defined as the directed contingent actions between two or more people, is a fundamental part of human experience that develops early in infancy and is shared with other primates. However, the neural computations underlying this ability remain largely unknown. Is social interaction recognition a rapid feedforward process or a slower post-perceptual inference? Here we used magnetoencephalography (MEG) decoding to address this question. Subjects in the MEG viewed snapshots of visually matched real-world scenes containing a pair of people who were either engaged in a social interaction or acting independently. The presence versus absence of a social interaction could be read out from subjects' MEG data spontaneously, even while subjects performed an orthogonal task. This readout generalized across different people and scenes, revealing abstract representations of social interactions in the human brain. These representations, however, did not come online until quite late, at 300 ms after image onset, well after feedforward visual processes. In a second experiment, we found that social interaction readout still occurred at this same late latency even when subjects performed an explicit task detecting social interactions. We further showed that MEG responses distinguished between different types of social interactions (mutual gaze vs joint attention) even later, around 500 ms after image onset. Taken together, these results suggest that the human brain spontaneously extracts information about others' social interactions, but does so slowly, likely relying on iterative top-down computations. |
Stephanie J. Kayser; Christoph Kayser Shared physiological correlates of multisensory and expectation-based facilitation Journal Article In: eNeuro, vol. 7, no. 2, pp. 1–13, 2020. @article{Kayser2020, Perceptual performance in a visual task can be enhanced by simultaneous multisensory information, but can also be enhanced by a symbolic or amodal cue inducing a specific expectation. That similar benefits can arise from multisensory information and within-modality expectation raises the question of whether the underlying neurophysiological processes are the same or distinct. We investigated this by comparing the influence of the following three types of auxiliary probabilistic cues on visual motion discrimination in humans: (1) acoustic motion, (2) a premotion visual symbolic cue, and (3) a postmotion symbolic cue. Using multivariate analysis of the EEG data, we show that both the multisensory and preceding visual symbolic cue enhance the encoding of visual motion direction as reflected by cerebral activity arising from occipital regions ~200–400 ms post-stimulus onset. This suggests a common or overlapping physiological correlate of cross-modal and intramodal auxiliary information, pointing to a neural mechanism susceptible to both multisensory and more abstract probabilistic cues. We also asked how prestimulus activity shapes the cue–stimulus combination and found a differential influence on the cross-modal and intramodal combination: while alpha power modulated the relative weight of visual motion and the acoustic cue, it did not modulate the behavioral influence of a visual symbolic cue, pointing to differences in how prestimulus activity shapes the combination of multisensory and abstract cues with task-relevant information. |
Christophe C. Le Dantec; Aaron R. Seitz Dissociating electrophysiological correlates of contextual and perceptual learning in a visual search task Journal Article In: Journal of Vision, vol. 20, no. 6, pp. 1–15, 2020. @article{LeDantec2020, Perceptual learning and contextual learning are two types of implicit visual learning that can co-occur in the same tasks. For example, to find an animal in the woods, you need to know where to look in the environment (contextual learning) and you must be able to discriminate its features (perceptual learning). However, contextual and perceptual learning are typically studied using distinct experimental paradigms, and little is known regarding their comparative neural mechanisms. In this study, we investigated contextual and perceptual learning in 12 healthy adult humans as they performed the same visual search task, and we examined psychophysical and electrophysiological (event-related potentials) measures of learning. Participants were trained to look for a visual stimulus, a small line with a specific orientation, presented among distractors. We found better performance for the trained target orientation as compared to an untrained control orientation, reflecting specificity of perceptual learning for the orientation of trained elements. This orientation specificity effect was associated with changes in the C1 component. We also found better performance for repeated spatial configurations as compared to novel ones, reflecting contextual learning. This context-specific effect was associated with the N2pc component. Taken together, these results suggest that contextual and perceptual learning are distinct visual learning phenomena that have different behavioral and electrophysiological characteristics. |
Alfred Lim; Steve M. J. Janssen; Jason Satel Exploring the temporal dynamics of inhibition of return using steady-state visual evoked potentials Journal Article In: Cognitive, Affective and Behavioral Neuroscience, pp. 1349–1364, 2020. @article{Lim2020, Inhibition of return is characterized by delayed responses to previously attended locations when the interval between stimuli is long enough. The present study employed steady-state visual evoked potentials (SSVEPs) as a measure of attentional modulation to explore the nature and time course of input- and output-based inhibitory cueing mechanisms that each slow response times at previously stimulated locations under different experimental conditions. The neural effects of behavioral inhibition were examined by comparing post-cue SSVEPs between cued and uncued locations measured across two tasks that differed only in the response modality (saccadic or manual response to targets). Grand averages of SSVEP amplitudes for each condition showed a reduction in amplitude at cued locations in the window of 100-500 ms post-cue, revealing an early, short-term decrease in the responses of neurons that can be attributed to sensory adaptation, regardless of response modality. Because primary visual cortex has been found to be one of the major sources of SSVEP signals, the results suggest that the SSVEP modulations observed were caused by input-based inhibition that occurred in V1, or visual areas earlier than V1, as a consequence of reduced visual input activity at previously cued locations. No SSVEP modulations were observed in either response condition late in the cue-target interval, suggesting that neither late input- nor output-based IOR modulates SSVEPs. These findings provide further electrophysiological support for the theory of multiple mechanisms contributing to behavioral cueing effects. |
Jakub Limanowski; Vladimir Litvak; Karl Friston Cortical beta oscillations reflect the contextual gating of visual action feedback Journal Article In: NeuroImage, vol. 222, pp. 117267, 2020. @article{Limanowski2020, In sensorimotor integration, the brain needs to decide how its predictions should accommodate novel evidence by ‘gating' sensory data depending on the current context. Here, we examined the oscillatory correlates of this process by recording magnetoencephalography (MEG) data during a new task requiring action under intersensory conflict. We used virtual reality to decouple visual (virtual) and proprioceptive (real) hand postures during a task in which the phase of grasping movements tracked a target (in either modality). Thus, we rendered visual information either task-relevant or a (to-be-ignored) distractor. Under visuo-proprioceptive incongruence, occipital beta power decreased (relative to congruence) when vision was task-relevant but increased when it had to be ignored. Dynamic causal modeling (DCM) revealed that this interaction was best explained by diametrical, task-dependent changes in visual gain. These results suggest a crucial role for beta oscillations in the contextual gating (i.e., gain or precision control) of visual vs proprioceptive action feedback, depending on current behavioral demands. |
Kevin P. Madore; Anna M. Khazenzon; Cameron W. Backes; Jiefeng Jiang; Melina R. Uncapher; Anthony M. Norcia; Anthony D. Wagner Memory failure predicted by attention lapsing and media multitasking Journal Article In: Nature, vol. 587, no. 7832, pp. 87–91, 2020. @article{Madore2020, With the explosion of digital media and technologies, scholars, educators and the public have become increasingly vocal about the role that an ‘attention economy' has in our lives. The rise of the current digital culture coincides with longstanding scientific questions about why humans sometimes remember and sometimes forget, and why some individuals remember better than others. Here we examine whether spontaneous attention lapses—in the moment, across individuals and as a function of everyday media multitasking—negatively correlate with remembering. Electroencephalography and pupillometry measures of attention were recorded as eighty young adults (mean age, 21.7 years) performed a goal-directed episodic encoding and retrieval task. Trait-level sustained attention was further quantified using task-based and questionnaire measures. Using trial-to-trial retrieval data, we show that tonic lapses in attention in the moment before remembering, assayed by posterior alpha power and pupil diameter, were correlated with reductions in neural signals of goal coding and memory, along with behavioural forgetting. Independent measures of trait-level attention lapsing mediated the relationship between neural assays of lapsing and memory performance, and between media multitasking and memory. Attention lapses partially account for why we remember or forget in the moment, and why some individuals remember better than others. Heavier media multitasking is associated with a propensity to have attention lapses and forget. |
Alie G. Male; Robert P. O'Shea; Erich Schröger; Dagmar Müller; Urte Roeber; Andreas Widmann In: Psychophysiology, vol. 57, no. 6, pp. e13576, 2020. @article{Male2020, Research shows that the visual system monitors the environment for changes. For example, a left-tilted bar, a deviant, that appears after several presentations of a right-tilted bar, standards, elicits a classic visual mismatch negativity (vMMN): greater negativity for deviants than standards in event-related potentials (ERPs) between 100 and 300 ms after onset of the deviant. The classic vMMN is contributed to by adaptation; it can be distinguished from the genuine vMMN that, through use of control conditions, compares standards and deviants that are equally adapted and physically identical. To determine whether the vMMN follows similar principles to the auditory mismatch negativity (MMN), in two experiments we searched for a genuine vMMN from simple, physiologically plausible stimuli that change in fundamental dimensions: orientation, contrast, phase, and spatial frequency. We carefully controlled for attention and eye movements. We found no evidence for the genuine vMMN, despite adequate statistical power. We conclude that either the genuine vMMN is a rather unstable phenomenon that depends on still-to-be-identified experimental parameters, or it is confined to visual stimuli for which monitoring across time is more natural than monitoring over space, such as for high-level features. We also observed an early deviant-related positivity that we propose might reflect earlier predictive processing. |
Radha Nila Meghanathan; Cees van Leeuwen; Marcello Giannini; Andrey R. Nikolaev Neural correlates of task-related refixation behavior Journal Article In: Vision Research, vol. 175, pp. 90–101, 2020. @article{Meghanathan2020, Eye movement research has shown that attention shifts from the currently fixated location to the next before a saccade is executed. We investigated whether the cost of the attention shift depends on higher-order processing at the time of fixation, in particular on visual working memory load differences between fixations and refixations on task-relevant items. The attention shift is reflected in EEG activity in the saccade-related potential (SRP). In a free viewing task involving visual search and memorization of multiple targets amongst distractors, we compared the SRP in first fixations versus refixations on targets and distractors. The task-relevance of targets implies that more information will be loaded in memory (e.g. both identity and location) than for distractors (e.g. location only). First fixations will involve greater memory load than refixations, since first fixations involve loading of new items, while refixations involve rehearsal of previously visited items. The SRP in the interval preceding the saccade away from a target or distractor revealed that saccade preparation is affected by task-relevance and refixation behavior. For task-relevant items only, we found longer fixation duration and higher SRP amplitudes for first fixations than for refixations over the occipital region and the opposite effect over the frontal region. Our findings provide the first neurophysiological evidence that working memory loading of task-relevant information at fixation affects saccade planning. |
Nick B. Pandža; Ian Phillips; Valerie P. Karuzis; Polly O'Rourke; Stefanie E. Kuchinsky Neurostimulation and pupillometry: New directions for learning and research in applied linguistics Journal Article In: Annual Review of Applied Linguistics, vol. 40, pp. 56–77, 2020. @article{Pandza2020, This paper begins by discussing new trends in the use of neurostimulation techniques in cognitive science and learning research, as well as the nascent research on their application in second language learning. To illustrate this, an experiment designed to investigate the impact of transcutaneous vagus nerve stimulation (tVNS), which is delivered via earbuds, on how learners process and learn Mandarin tones is reported. Pupillometry, which is an index of cognitive effort, is explained and illustrated as one way to assess the impact of tVNS. Participants in the study were native English speakers, naïve to tone languages, pseudorandomly assigned to active or control conditions, while balancing for nonlinguistic pitch ability and musical experience. Their performance after tVNS was assessed using a range of more traditional language outcome measures, including accuracy and reaction times from lexical recognition and recall tasks and was triangulated with pupillometry during word-learning to help understand the mechanism through which tVNS operates. Findings are discussed in light of the literatures on lexical tone learning, cognitive effort, and neurostimulation, including specific benefits for learners of tone languages. Recommendations are made for future work on the increasingly popular area of neurostimulation for the field of applied linguistics in the 40th anniversary issue of ARAL. |
Ian G. M. Cameron; Andreea Cretu; Femke Struik; Ivan Toni The effects of a TMS double lesion to a cortical network Journal Article In: eNeuro, vol. 7, no. 1, pp. 1–22, 2020. @article{Cameron2020, Transcranial magnetic stimulation (TMS) is often used to understand the function of individual brain regions, but this ignores the fact that TMS may affect network-level rather than nodal-level processes. We examine the effects from a “double lesion” to two frontoparietal network nodes compared to the effects from single lesions to either node. We hypothesize that the absence of additive effects indicates that a single lesion is consequential to a network-level process. Twenty-three humans performed pro- (look towards) and anti- (look away) saccades after receiving continuous theta-burst stimulation (cTBS) to right frontal eye fields (FEF), dorsolateral prefrontal cortex (DLPFC) or somatosensory cortex (S1) (the control region). On a subset of trials, a TMS pulse was applied to right posterior parietal cortex (PPC). FEF, DLPFC and PPC are important frontoparietal network nodes for controlling anti-saccades. Bayesian T-tests were used to test hypotheses for additive double lesion effects on saccade behaviors (cTBS plus TMS pulse) against the null hypothesis that double lesion effects are not different than single lesion effects. We observed strong evidence (BF10 = 325.22) that DLPFC cTBS plus PPC TMS lesion enhanced impairments in ipsilateral anti-saccade amplitudes over DLPFC cTBS alone, but not over the effect of the PPC pulse alone (BF10 = 0.75). Therefore, effects were not additive, and no other evidence for additive effects was found (BF10 < 3). This suggests that saccade-control computations are distributed across this network, with some degree of compensation by PPC for the DLPFC lesion. |
Ian M. Erkelens; William R. Bobier; Alicia C. Macmillan; Nicole L. Maione; Claudia Martin Calderon; Heidi Patterson; Benjamin Thompson A differential role for the posterior cerebellum in the adaptive control of convergence eye movements Journal Article In: Brain Stimulation, vol. 13, no. 1, pp. 215–228, 2020. @article{Erkelens2020a, Introduction: The vergence oculomotor system possesses two robust adaptive mechanisms: a fast “dynamic” and a slow “tonic” system that are both vital for single, clear and comfortable binocular vision. The neural substrates underlying these vergence adaptive mechanisms in humans are unclear. Methods: We investigated the role of the posterior cerebellum in convergence adaptation using inhibitory continuous theta-burst repetitive transcranial magnetic stimulation (cTBS) within a double-blind, sham controlled design while eye movements were recorded at 250 Hz via infrared oculography. Results: In a preliminary experiment we validated our stimulation protocols by reproducing results from previous work on saccadic adaptation during the classic double-step adaptive shortening paradigm. Following this, across a series of three separate experiments we observed a clear dissociation in the effect of cTBS on convergence adaptation. Dynamic adaptation was substantially reduced while tonic adaptation was unaffected. Baseline dynamic fusional vergence responses were also unaffected by stimulation. Conclusions: These results indicate a differential role for the posterior cerebellum in the adaptive control of convergence eye movements and provide initial evidence that repetitive transcranial magnetic stimulation is a viable tool to investigate the neurophysiology of vergence control. The results are discussed in the context of the current models of implicit motor adaptation of vergence and their application to clinical populations and technology design in virtual and augmented head mounted display architectures. 
Significance statement: The cerebellum plays a critical role in the adaptive control of motor systems. Vergence eye movements shift our gaze in depth allowing us to see in 3D and exhibit two distinct adaptive mechanisms that are engaged under a range of conditions including reading, wearing head-mounted displays and using a new spectacle prescription. It is unclear what role the cerebellum plays in these adaptive mechanisms. To answer this, we temporarily disrupted the function of the posterior cerebellum using non-invasive brain stimulation and report impairment of only one adaptive mechanism, providing evidence for neural compartmentalization. The results have implications for vergence control models and applications to comfort and experience studies in head-mounted displays and the rehabilitation of clinical populations exhibiting vergence dysfunctions. |
Antonio Fernández; Marisa Carrasco Extinguishing exogenous attention via transcranial magnetic stimulation Journal Article In: Current Biology, vol. 30, no. 20, pp. 4078–4084, 2020. @article{Fernandez2020, Orienting covert exogenous (involuntary) attention to a target location improves performance in many visual tasks [1, 2]. It is unknown whether early visual cortical areas are necessary for this improvement. To establish a causal link between these areas and attentional modulations, we used transcranial magnetic stimulation (TMS) to briefly alter cortical excitability and determine whether early visual areas mediate the effect of exogenous attention on performance. Observers performed an orientation discrimination task. After a peripheral valid, neutral, or invalid cue, two cortically magnified gratings were presented, one in the stimulated region and the other in the symmetric region in the opposite hemifield. Observers received two successive TMS pulses around their occipital pole while the stimuli were presented. Shortly after, a response cue indicated the grating whose orientation observers had to discriminate. The response cue either matched—target stimulated—or did not match—distractor stimulated—the stimulated side. Grating contrast was varied to measure contrast response functions (CRF) for all combinations of attention and TMS conditions. When the distractor was stimulated, exogenous attention yielded response gain—performance benefits in the valid-cue condition and costs in the invalid-cue condition compared with the neutral condition at the high contrast levels. Crucially, when the target was stimulated, this response gain was eliminated. Therefore, TMS extinguished the effect of exogenous attention. These results establish a causal link between early visual areas and the modulatory effect of exogenous attention on performance. |
Eric B. Knudsen; Joni D. Wallis Closed-loop theta stimulation in the orbitofrontal cortex prevents reward-based learning Journal Article In: Neuron, vol. 106, no. 3, pp. 537–547.e4, 2020. @article{Knudsen2020, Although neuronal oscillations correlate with many high-level cognitive processes, their causal contribution is less clear. Using a novel closed-loop microstimulation protocol, Knudsen and Wallis demonstrate the necessity of theta oscillations in the orbitofrontal cortex for reward-based learning. |
Pierre Pouget; Stephen Frey; Harry Ahnine; David Attali; Julien Claron; Charlotte Constans; Jean-Francois Aubry; Fabrice Arcizet In: Frontiers in Physiology, vol. 11, pp. 1042, 2020. @article{Pouget2020, Since the late 2010s, Transcranial Ultrasound Stimulation (TUS) has been used experimentally to carry out safe, non-invasive stimulation of the brain with better spatial resolution than Transcranial Magnetic Stimulation (TMS). This innovative stimulation method has emerged as a novel and valuable device for studying brain function in humans and animals. In particular, single pulses of TUS directed to oculomotor regions have been shown to modulate visuomotor behavior of non-human primates during 100 ms ultrasound pulses. In the present study, a sustained effect was induced by applying 20-s trains of neuronavigated repetitive Transcranial Ultrasound Stimulation (rTUS) to oculomotor regions of the frontal cortex in three non-human primates performing an antisaccade task. With the help of MRI imaging and a frame-less stereotactic neuronavigation system (SNS), we were able to demonstrate that neuronavigated TUS (outside of the MRI scanner) is an efficient tool to carry out neuromodulation procedures in non-human primates. We found that, following neuronavigated rTUS, saccades were significantly modified, resulting in shorter latencies compared to no-rTUS trials. This behavioral modulation was maintained for up to 20 min. Oculomotor behavior returned to baseline after 18–31 min and could not be significantly distinguished from the no-rTUS condition. This study is the first to show that neuronavigated rTUS can have a persistent effect on monkey behavior with a quantified return-time to baseline. The specificity of the effects could not be explained by auditory confounds. |
David Zeugin; Michael P. Notter; Jean François Knebel; Silvio Ionta Temporo-parietal contribution to the mental representations of self/other face Journal Article In: Brain and Cognition, vol. 143, pp. 1–6, 2020. @article{Zeugin2020, Face recognition requires comparing the current visual input with stored mental representations of faces. Based on its role in visual recognition of faces and mental representation of the body, we hypothesized that the right temporo-parietal junction (rTPJ) could also be implicated in processing mental representations of faces. To test this hypothesis, we asked 30 neurotypical participants to perform mental rotation (laterality judgment of rotated pictures) of self- and other-face images, before and after the inhibition of rTPJ through repetitive transcranial magnetic stimulation. After inhibition of rTPJ, the mental rotation of the self-face was slower than that of the other-face. In the control condition, the mental rotation of self/other faces was not significantly different. This supports the idea that the role of rTPJ extends to the mental representation of faces, specifically for the self. Since the experimental task did not require participants to explicitly recognize identity, we propose that unconscious identity attribution also affects the mental representation of faces. The present study offers insights into the involvement of rTPJ in the mental representation of faces and proposes that the neural substrate dedicated to the mental representation of faces goes beyond the traditional visual and memory areas. |
Florent Meyniel Brain dynamics for confidence-weighted learning Journal Article In: PLoS Computational Biology, vol. 16, no. 6, pp. e1007935, 2020. @article{Meyniel2020, Learning in a changing, uncertain environment is a difficult problem. A popular solution is to predict future observations and then use surprising outcomes to update those predictions. However, humans also have a sense of confidence that characterizes the precision of their predictions. Bayesian models use a confidence-weighting principle to regulate learning: For a given surprise, the update is smaller when the confidence about the prediction was higher. Prior behavioral evidence indicates that human learning adheres to this confidence-weighting principle. Here, we explored the human brain dynamics subtending the confidence-weighting of learning using magnetoencephalography (MEG). During our volatile probability learning task, subjects' confidence reports conformed with Bayesian inference. MEG revealed several stimulus-evoked brain responses whose amplitude reflected surprise, and some of them were further shaped by confidence: Surprise amplified the stimulus-evoked response whereas confidence dampened it. Confidence about predictions also modulated several aspects of the brain state: Pupil-linked arousal and beta-range (15–30 Hz) oscillations. The brain state in turn modulated specific stimulus-evoked surprise responses following the confidence-weighting principle. Our results thus indicate that there exist, in the human brain, signals reflecting surprise that are dampened by confidence in a way that is appropriate for learning according to Bayesian inference. They also suggest a mechanism for confidence-weighted learning: Confidence about predictions would modulate intrinsic properties of the brain state to amplify or dampen surprise responses evoked by discrepant observations. |
Jonathan Mirault; Jeremy Yeaton; Fanny Broqua; Stéphane Dufau; Phillip J. Holcomb; Jonathan Grainger Parafoveal-on-foveal repetition effects in sentence reading: A co-registered eye-tracking and electroencephalogram study Journal Article In: Psychophysiology, vol. 57, no. 8, pp. e13553, 2020. @article{Mirault2020, When reading, can the next word in the sentence (word n + 1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that this should be the case. Here we focus on perhaps the simplest and the strongest Parafoveal-on-Foveal (PoF) manipulation: word n + 1 is either the same as word n or a different word. Participants read sentences for comprehension, and when their eyes left word n, the repeated or unrelated word at position n + 1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded the electroencephalogram and eye movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most important is that we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Repetition of the target word n at position n + 1 caused a widely distributed reduced negativity in the FRPs. Given the timing of this effect, we argue that it is driven by orthographic processing of word n + 1 while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel. |
Kieran S. Mohr; Niamh Carr; Rachel Georgel; Simon P. Kelly Modulation of the earliest component of the human VEP by spatial attention: An investigation of task demands Journal Article In: Cerebral Cortex Communications, pp. 1–22, 2020. @article{Mohr2020, Spatial attention modulations of initial afferent activity in area V1, indexed by the first component “C1” of the human visual evoked potential, are rarely found. It has thus been suggested that early modulation is induced only by special task conditions, but what these conditions are remains unknown. Recent failed replications—findings of no C1 modulation using a certain task that had previously produced robust modulations—present a strong basis for examining this question. We ran 3 experiments, the first to more exactly replicate the stimulus and behavioral conditions of the original task, and the second and third to manipulate 2 key factors that differed in the failed replication studies: the provision of informative performance feedback, and the degree to which the probed stimulus features matched those facilitating target perception. Although there was an overall significant C1 modulation of 11%, individually, only experiments 1 and 2 showed reliable effects, underlining that the modulations do occur but not consistently. Better feedback induced greater P1, but not C1, modulations. Target-probe feature matching had an inconsistent influence on modulation patterns, with behavioral performance differences and signal-overlap analyses suggesting interference from extrastriate modulations as a potential cause. |
Anna M. Monk; Gareth R. Barnes; Eleanor A. Maguire The effect of object type on building scene imagery — An MEG study Journal Article In: Frontiers in Human Neuroscience, vol. 14, pp. 592175, 2020. @article{Monk2020, Previous studies have reported that some objects evoke a sense of local three-dimensional space (space-defining; SD), while others do not (space-ambiguous; SA), despite being imagined or viewed in isolation devoid of a background context. Moreover, people show a strong preference for SD objects when given a choice of objects with which to mentally construct scene imagery. When deconstructing scenes, people retain significantly more SD objects than SA objects. It, therefore, seems that SD objects might enjoy a privileged role in scene construction. In the current study, we leveraged the high temporal resolution of magnetoencephalography (MEG) to compare the neural responses to SD and SA objects while they were being used to build imagined scene representations, as this has not been examined before using neuroimaging. On each trial, participants gradually built a scene image from three successive auditorily-presented object descriptions and an imagined 3D space. We then examined the neural dynamics associated with the points during scene construction when either SD or SA objects were being imagined. We found that SD objects elicited theta changes relative to SA objects in two brain regions, the right ventromedial prefrontal cortex (vmPFC) and the right superior temporal gyrus (STG). Furthermore, using dynamic causal modeling, we observed that the vmPFC drove STG activity. These findings may indicate that SD objects serve to activate schematic and conceptual knowledge in vmPFC and STG upon which scene representations are then built. |