SR Research


EyeLink EEG / fNIRS / TMS Publications

All EyeLink EEG, fNIRS, and TMS research publications with concurrent eye tracking, through 2020 (plus early 2021), are listed below by year. You can search the publications using keywords such as P300, gamma band, NIRS, etc. You can also search for individual author names. If we missed any EyeLink EEG, fNIRS, or TMS article, please email us!

All EyeLink EEG, fNIRS, and TMS publications are also available for download / import into reference management software as a single BibTeX (.bib) file.


499 entries (page 1 of 5)

2021

Mats W J van Es; Tom R Marshall; Eelke Spaak; Ole Jensen; Jan-Mathijs Schoffelen

Phasic modulation of visual representations during sustained attention Journal Article

European Journal of Neuroscience, pp. 1–18, 2021.

@article{Es2021,
title = {Phasic modulation of visual representations during sustained attention},
author = {Mats W J van Es and Tom R Marshall and Eelke Spaak and Ole Jensen and Jan-Mathijs Schoffelen},
doi = {10.1111/ejn.15084},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Neuroscience},
pages = {1--18},
abstract = {Sustained attention has long been thought to benefit perception in a continuous fashion, but recent evidence suggests that it affects perception in a discrete, rhythmic way. Periodic fluctuations in behavioral performance over time, and modulations of behavioral performance by the phase of spontaneous oscillatory brain activity point to an attentional sampling rate in the theta or alpha frequency range. We investigated whether such discrete sampling by attention is reflected in periodic fluctuations in the decodability of visual stimulus orientation from magnetoencephalographic (MEG) brain signals. In this exploratory study, human subjects attended one of two grating stimuli while MEG was being recorded. We assessed the strength of the visual representation of the attended stimulus using a support vector machine (SVM) to decode the orientation of the grating (clockwise vs. counterclockwise) from the MEG signal. We tested whether decoder performance depended on the theta/alpha phase of local brain activity. While the phase of ongoing activity in visual cortex did not modulate decoding performance, theta/alpha phase of activity in the FEF and parietal cortex, contralateral to the attended stimulus did modulate decoding performance. These findings suggest that phasic modulations of visual stimulus representations in the brain are caused by frequency-specific top-down activity in the fronto-parietal attention network.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
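The decoding analysis described in this abstract (a linear SVM classifying grating orientation from MEG signals) can be illustrated with scikit-learn. This sketch runs on synthetic trial data; the shapes, class shift, and cross-validation setup are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for trial-wise MEG features: (n_trials, n_sensors).
rng = np.random.default_rng(0)
n_trials, n_sensors = 200, 50
labels = rng.integers(0, 2, n_trials)   # 0 = clockwise, 1 = counterclockwise
X = rng.standard_normal((n_trials, n_sensors))
X[labels == 1] += 0.5                   # inject a separable signal for class 1

# Cross-validated decoding accuracy with a linear SVM.
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=5)
print(scores.mean())
```

Cross-validated accuracy reliably above chance (0.5) indicates a recoverable stimulus representation in the features.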


Jonathan Daume; Peng Wang; Alexander Maye; Dan Zhang; Andreas K Engel

Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations Journal Article

NeuroImage, 224, pp. 1–17, 2021.

@article{Daume2021,
title = {Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations},
author = {Jonathan Daume and Peng Wang and Alexander Maye and Dan Zhang and Andreas K Engel},
doi = {10.1016/j.neuroimage.2020.117376},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {224},
pages = {1--17},
publisher = {Elsevier Inc.},
abstract = {The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
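Inter-trial phase consistency (ITPC), the central measure in this study, is the length of the mean resultant vector of per-trial phase angles at a given frequency and time point. A minimal NumPy sketch of the standard definition (illustrative; not the authors' code):

```python
import numpy as np

def itpc(phases, axis=0):
    """Inter-trial phase consistency from per-trial phase angles (radians).

    ITPC = |(1/N) * sum_n exp(i * phi_n)|: 1.0 means perfectly aligned
    phases across trials; values near 0 mean no phase alignment.
    """
    return np.abs(np.mean(np.exp(1j * np.asarray(phases)), axis=axis))
```

Identical phases across trials give 1.0; phases spread evenly around the circle give values near 0.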


Marcos Domic-Siede; Martín Irani; Joaquín Valdés; Marcela Perrone-Bertolotti; Tomás Ossandón

Theta activity from frontopolar cortex, mid-cingulate cortex and anterior cingulate cortex shows different roles in cognitive planning performance Journal Article

NeuroImage, 226, pp. 1–19, 2021.

@article{DomicSiede2021,
title = {Theta activity from frontopolar cortex, mid-cingulate cortex and anterior cingulate cortex shows different roles in cognitive planning performance},
author = {Marcos Domic-Siede and Martín Irani and Joaquín Valdés and Marcela Perrone-Bertolotti and Tomás Ossandón},
doi = {10.1016/j.neuroimage.2020.117557},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {226},
pages = {1--19},
publisher = {Elsevier Inc.},
abstract = {Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task, that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FM$\theta$) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-Cingulate Cortex (MCC) and the left Anterior Cingulate Cortex (ACC) were correlated with an increase in the time needed for elaborating plans. On the other hand, left Frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


2020

Rick A Adams; Daniel Bush; Fanfan Zheng; Sofie S Meyer; Raphael Kaplan; Stelios Orfanos; Tiago Reis Marques; Oliver D Howes; Neil Burgess

Impaired theta phase coupling underlies frontotemporal dysconnectivity in schizophrenia Journal Article

Brain, 143 (3), pp. 1261–1277, 2020.

@article{Adams2020a,
title = {Impaired theta phase coupling underlies frontotemporal dysconnectivity in schizophrenia},
author = {Rick A Adams and Daniel Bush and Fanfan Zheng and Sofie S Meyer and Raphael Kaplan and Stelios Orfanos and Tiago Reis Marques and Oliver D Howes and Neil Burgess},
doi = {10.1093/brain/awaa035},
year = {2020},
date = {2020-01-01},
journal = {Brain},
volume = {143},
number = {3},
pages = {1261--1277},
abstract = {Frontotemporal dysconnectivity is a key pathology in schizophrenia. The specific nature of this dysconnectivity is unknown, but animal models imply dysfunctional theta phase coupling between hippocampus and medial prefrontal cortex (mPFC). We tested this hypothesis by examining neural dynamics in 18 participants with a schizophrenia diagnosis, both medicated and unmedicated; and 26 age, sex and IQ matched control subjects. All participants completed two tasks known to elicit hippocampal-prefrontal theta coupling: a spatial memory task (during magnetoencephalography) and a memory integration task. In addition, an overlapping group of 33 schizophrenia and 29 control subjects underwent PET to measure the availability of GABAARs expressing the α5 subunit (concentrated on hippocampal somatostatin interneurons). We demonstrate-in the spatial memory task, during memory recall-that theta power increases in left medial temporal lobe (mTL) are impaired in schizophrenia, as is theta phase coupling between mPFC and mTL. Importantly, the latter cannot be explained by theta power changes, head movement, antipsychotics, cannabis use, or IQ, and is not found in other frequency bands. Moreover, mPFC-mTL theta coupling correlated strongly with performance in controls, but not in subjects with schizophrenia, who were mildly impaired at the spatial memory task and no better than chance on the memory integration task. Finally, mTL regions showing reduced phase coupling in schizophrenia magnetoencephalography participants overlapped substantially with areas of diminished α5-GABAAR availability in the wider schizophrenia PET sample. These results indicate that mPFC-mTL dysconnectivity in schizophrenia is due to a loss of theta phase coupling, and imply α5-GABAARs (and the cells that express them) have a role in this process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Carmel R Auerbach-Asch; Oded Bein; Leon Y Deouell

Face selective neural activity: Comparisons between fixed and free viewing Journal Article

Brain Topography, 33 (3), pp. 336–354, 2020.

@article{AuerbachAsch2020,
title = {Face selective neural activity: Comparisons between fixed and free viewing},
author = {Carmel R Auerbach-Asch and Oded Bein and Leon Y Deouell},
doi = {10.1007/s10548-020-00764-7},
year = {2020},
date = {2020-01-01},
journal = {Brain Topography},
volume = {33},
number = {3},
pages = {336--354},
publisher = {Springer US},
abstract = {Event Related Potentials (ERPs) are widely used to study category-selective EEG responses to visual stimuli, such as the face-selective N170 component. Typically, this is done by flashing stimuli at the point of static gaze fixation. While allowing for good experimental control, these paradigms ignore the dynamic role of eye-movements in natural vision. Fixation-related potentials (FRPs), obtained using simultaneous EEG and eye-tracking, overcome this limitation. Various studies have used FRPs to study processes such as lexical processing, target detection and attention allocation. The goal of this study was to carefully compare face-sensitive activity time-locked to an abrupt stimulus onset at fixation, with that time-locked to a self-generated fixation on a stimulus. Twelve participants participated in three experimental conditions: Free-viewing (FRPs), Cued-viewing (FRPs) and Control (ERPs). We used a multiple regression approach to disentangle overlapping activity components. Our results show that the N170 face-effect is evident for the first fixation on a stimulus, whether it follows a self-generated saccade or stimulus appearance at fixation point. The N170 face-effect has similar topography across viewing conditions, but there were major differences within each stimulus category. We ascribe these differences to an overlap of the fixation-related lambda response and the N170. We tested the plausibility of this account using dipole simulations. Finally, the N170 exhibits category-specific adaptation in free viewing. This study establishes the comparability of the free-viewing N170 face-effect with the classic event-related effect, while highlighting the importance of accounting for eye-movement related effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yasaman Bagherzadeh; Daniel Baldauf; Dimitrios Pantazis; Robert Desimone

Alpha synchrony and the neurofeedback control of spatial attention Journal Article

Neuron, 105, pp. 1–11, 2020.

@article{Bagherzadeh2020,
title = {Alpha synchrony and the neurofeedback control of spatial attention},
author = {Yasaman Bagherzadeh and Daniel Baldauf and Dimitrios Pantazis and Robert Desimone},
doi = {10.1016/j.neuron.2019.11.001},
year = {2020},
date = {2020-01-01},
journal = {Neuron},
volume = {105},
pages = {1--11},
publisher = {Elsevier Inc.},
abstract = {Decreases in alpha synchronization are correlated with enhanced attention, whereas alpha increases are correlated with inattention. However, correlation is not causality, and synchronization may be a byproduct of attention rather than a cause. To test for a causal role of alpha synchrony in attention, we used MEG neurofeedback to train subjects to manipulate the ratio of alpha power over the left versus right parietal cortex. We found that a comparable alpha asymmetry developed over the visual cortex. The alpha training led to corresponding asymmetrical changes in visually evoked responses to probes presented in the two hemifields during training. Thus, reduced alpha was associated with enhanced sensory processing. Testing after training showed a persistent bias in attention in the expected directions. The results support the proposal that alpha synchrony plays a causal role in modulating attention and visual processing, and alpha training could be used for testing hypotheses about synchrony.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sonya Bells; Silvia L Isabella; Donald C Brien; Brian C Coe; Douglas P Munoz; Donald J Mabbott; Douglas O Cheyne

Mapping neural dynamics underlying saccade preparation and execution and their relation to reaction time and direction errors Journal Article

Human Brain Mapping, 41 (7), pp. 1934–1949, 2020.

@article{Bells2020,
title = {Mapping neural dynamics underlying saccade preparation and execution and their relation to reaction time and direction errors},
author = {Sonya Bells and Silvia L Isabella and Donald C Brien and Brian C Coe and Douglas P Munoz and Donald J Mabbott and Douglas O Cheyne},
doi = {10.1002/hbm.24922},
year = {2020},
date = {2020-01-01},
journal = {Human Brain Mapping},
volume = {41},
number = {7},
pages = {1934--1949},
abstract = {Our ability to control and inhibit automatic behaviors is crucial for negotiating complex environments, all of which require rapid communication between sensory, motor, and cognitive networks. Here, we measured neuromagnetic brain activity to investigate the neural timing of cortical areas needed for inhibitory control, while 14 healthy young adults performed an interleaved prosaccade (look at a peripheral visual stimulus) and antisaccade (look away from stimulus) task. Analysis of how neural activity relates to saccade reaction time (SRT) and occurrence of direction errors (look at stimulus on antisaccade trials) provides insight into inhibitory control. Neuromagnetic source activity was used to extract stimulus-aligned and saccade-aligned activity to examine temporal differences between prosaccade and antisaccade trials in brain regions associated with saccade control. For stimulus-aligned antisaccade trials, a longer SRT was associated with delayed onset of neural activity within the ipsilateral parietal eye field (PEF) and bilateral frontal eye field (FEF). Saccade-aligned activity demonstrated peak activation 10ms before saccade-onset within the contralateral PEF for prosaccade trials and within the bilateral FEF for antisaccade trials. In addition, failure to inhibit prosaccades on anti-saccade trials was associated with increased activity prior to saccade onset within the FEF contralateral to the peripheral stimulus. This work on dynamic activity adds to our knowledge that direction errors were due, at least in part, to a failure to inhibit automatic prosaccades. These findings provide novel evidence in humans regarding the temporal dynamics within oculomotor areas needed for saccade programming and the role frontal brain regions have on top-down inhibitory control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Nicholas S Bland; Jason B Mattingley; Martin V Sale

Gamma coherence mediates interhemispheric integration during multiple object tracking Journal Article

Journal of Neurophysiology, 123 (5), pp. 1630–1644, 2020.

@article{Bland2020,
title = {Gamma coherence mediates interhemispheric integration during multiple object tracking},
author = {Nicholas S Bland and Jason B Mattingley and Martin V Sale},
doi = {10.1152/jn.00755.2019},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neurophysiology},
volume = {123},
number = {5},
pages = {1630--1644},
abstract = {Our ability to track the paths of multiple visual objects moving between the hemifields requires effective integration of information between the two cerebral hemispheres. Coherent neural oscillations in the gamma band (35-70 Hz) are hypothesized to drive this information transfer. Here we manipulated the need for interhemispheric integration using a novel multiple object tracking (MOT) task in which stimuli either moved between the two visual hemifields, requiring interhemispheric integration, or moved within separate visual hemifields. We used electroencephalography (EEG) to measure interhemispheric coherence during the task. Human observers (21 women; 20 men) were poorer at tracking objects between versus within hemifields, reflecting a cost of interhemispheric integration. Critically, gamma coherence was greater in trials requiring interhemispheric integration, particularly between sensors over parieto-occipital areas. In approximately half of the participants, the observed cost of integration was associated with a failure of the cerebral hemispheres to become coherent in the gamma band. Moreover, individual differences in this integration cost correlated with endogenous gamma coherence at these same sensors, although with generally opposing relationships for the real and imaginary part of coherence. The real part (capturing synchronization with a near-zero phase lag) benefited between-hemifield tracking; imaginary coherence was detrimental. Finally, instantaneous phase coherence over the tracking period uniquely predicted between-hemifield tracking performance, suggesting that effective integration benefits from sustained interhemispheric synchronization. Our results show that gamma coherence mediates interhemispheric integration during MOT and add to a growing body of work demonstrating that coherence drives communication across cortically distributed neural networks. 
NEW & NOTEWORTHY Using a multiple object tracking paradigm, we were able to manipulate the need for interhemispheric integration on a per-trial basis, while also having an objective measure of integration efficacy (i.e., tracking performance). We show that tracking performance reflects a cost of integration, which correlates with individual differences in interhemispheric EEG coherence. Gamma coherence appears to uniquely benefit between-hemifield tracking, predicting performance both across participants and across trials.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
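The real/imaginary coherence distinction drawn in this abstract follows the standard definition of complex coherency: the cross-spectrum normalized by the two auto-spectra. A small NumPy sketch operating on single-frequency Fourier coefficients across trials (illustrative only; not the authors' pipeline):

```python
import numpy as np

def coherency(x, y):
    """Complex coherency between two channels' Fourier coefficients
    (one complex value per trial at a fixed frequency)."""
    x, y = np.asarray(x), np.asarray(y)
    sxy = np.mean(x * np.conj(y))      # cross-spectrum
    sxx = np.mean(np.abs(x) ** 2)      # auto-spectrum, channel x
    syy = np.mean(np.abs(y) ** 2)      # auto-spectrum, channel y
    return sxy / np.sqrt(sxx * syy)

# The real part captures near-zero-lag synchronization; the imaginary
# part is blind to instantaneous (zero-lag) coupling such as volume conduction.
```

Identical signals yield coherency 1 + 0j; a constant 90° phase lag pushes the coupling entirely into the imaginary part.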


Louisa Bogaerts; Craig G Richter; Ayelet N Landau; Ram Frost

Beta-band activity is a signature of statistical learning Journal Article

Journal of Neuroscience, 40 (39), pp. 7523–7530, 2020.


@article{Bogaerts2020,
title = {Beta-band activity is a signature of statistical learning},
author = {Louisa Bogaerts and Craig G Richter and Ayelet N Landau and Ram Frost},
doi = {10.1523/JNEUROSCI.0771-20.2020},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
volume = {40},
number = {39},
pages = {7523--7530},
abstract = {Through statistical learning (SL), cognitive systems may discover the underlying regularities in the environment. Testing human adults (n = 35, 21 females), we document, in the context of a classical visual SL task, divergent rhythmic EEG activity in the interstimulus delay periods within patterns versus between patterns (i.e., pattern transitions). Our findings reveal increased oscillatory activity in the beta band (∼20 Hz) at triplet transitions that indexes learning: It emerges with increased pattern repetitions and, importantly, it is highly correlated with behavioral learning outcomes. These findings hold the promise of converging on an online measure of learning regularities and provide important theoretical insights regarding the mechanisms of SL and prediction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mathieu Bourguignon; Martijn Baart; Efthymia C Kapnoula; Nicola Molinaro

Lip-reading enables the brain to synthesize auditory features of unknown silent speech Journal Article

Journal of Neuroscience, 40 (5), pp. 1053–1065, 2020.


@article{Bourguignon2020,
title = {Lip-reading enables the brain to synthesize auditory features of unknown silent speech},
author = {Mathieu Bourguignon and Martijn Baart and Efthymia C Kapnoula and Nicola Molinaro},
doi = {10.1523/JNEUROSCI.1101-19.2019},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
volume = {40},
number = {5},
pages = {1053--1065},
abstract = {Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ~70 ms in the left hemisphere, compared with ~20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1–8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Méadhbh B Brosnan; Kristina Sabaroedin; Tim Silk; Sila Genc; Daniel P Newman; Gerard M Loughnane; Alex Fornito; Redmond G O'Connell; Mark A Bellgrove

Evidence accumulation during perceptual decisions in humans varies as a function of dorsal frontoparietal organization Journal Article

Nature Human Behaviour, 4 (8), pp. 844–855, 2020.


@article{Brosnan2020,
title = {Evidence accumulation during perceptual decisions in humans varies as a function of dorsal frontoparietal organization},
author = {Méadhbh B Brosnan and Kristina Sabaroedin and Tim Silk and Sila Genc and Daniel P Newman and Gerard M Loughnane and Alex Fornito and Redmond G O'Connell and Mark A Bellgrove},
doi = {10.1038/s41562-020-0863-4},
year = {2020},
date = {2020-01-01},
journal = {Nature Human Behaviour},
volume = {4},
number = {8},
pages = {844--855},
publisher = {Springer US},
abstract = {Animal neurophysiological studies have identified neural signals within dorsal frontoparietal areas that trace a perceptual decision by accumulating sensory evidence over time and trigger action upon reaching a threshold. Although analogous accumulation-to-bound signals are identifiable on extracranial human electroencephalography, their cortical origins remain unknown. Here neural metrics of human evidence accumulation, predictive of the speed of perceptual reports, were isolated using electroencephalography and related to dorsal frontoparietal network (dFPN) connectivity using diffusion and resting-state functional magnetic resonance imaging. The build-up rate of evidence accumulation mediated the relationship between the white matter macrostructure of dFPN pathways and the efficiency of perceptual reports. This association between steeper build-up rates of evidence accumulation and the dFPN was recapitulated in the resting-state networks. Stronger connectivity between dFPN regions is thus associated with faster evidence accumulation and speeded perceptual decisions. Our findings identify an integrated network for perceptual decisions that may be targeted for neurorehabilitation in cognitive disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maximilian Bruchmann; Sebastian Schindler; Thomas Straube

The spatial frequency spectrum of fearful faces modulates early and mid-latency ERPs but not the N170 Journal Article

Psychophysiology, 57 , pp. 1–13, 2020.


@article{Bruchmann2020,
title = {The spatial frequency spectrum of fearful faces modulates early and mid-latency ERPs but not the N170},
author = {Maximilian Bruchmann and Sebastian Schindler and Thomas Straube},
doi = {10.1111/psyp.13597},
year = {2020},
date = {2020-01-01},
journal = {Psychophysiology},
volume = {57},
pages = {1--13},
abstract = {Prioritized processing of fearful compared to neutral faces is reflected in behavioral advantages such as lower detection thresholds, but also in enhanced early and late event-related potentials (ERPs). Behavioral advantages have recently been associated with the spatial frequency spectrum of fearful faces, better fitting the human contrast sensitivity function than the spectrum of neutral faces. However, it is unclear whether and to which extent early and late ERP differences are due to low-level spatial frequency spectrum information or high-level representations of the facial expression. In this pre-registered EEG study (N = 38), the effects of fearful-specific spatial frequencies on event-related ERPs were investigated by presenting faces with fearful and neutral expressions whose spatial frequency spectra were manipulated so as to contain either the average power spectra of neutral, fearful, or both expressions combined. We found an enlarged N170 to fearful versus neutral faces, not interacting with spatial frequency. Interactions of emotional expression and spatial frequencies were observed for the P1 and Early Posterior Negativity (EPN). For both components, larger emotion differences were observed when the spectrum contained neutral as opposed to fearful frequencies. Importantly, for the EPN, fearful and neutral expressions did not differ anymore when inserting fearful frequencies into neutral expressions, whereas typical emotion differences were found when faces contained average or neutral frequencies. Our findings show that N170 emotional modulations are unaffected by expression-specific spatial frequencies. However, expression-specific spatial frequencies alter early and mid-latency ERPs. Most notably, the EPN to neutral expressions is boosted by adding fearful spectra—but not vice versa.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Antimo Buonocore; Olaf Dimigen; David Melcher

Post-saccadic face processing is modulated by pre-saccadic preview: Evidence from fixation-related potentials Journal Article

Journal of Neuroscience, 40 (11), pp. 2305–2313, 2020.


@article{Buonocore2020,
title = {Post-saccadic face processing is modulated by pre-saccadic preview: Evidence from fixation-related potentials},
author = {Antimo Buonocore and Olaf Dimigen and David Melcher},
doi = {10.1523/JNEUROSCI.0861-19.2020},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
volume = {40},
number = {11},
pages = {2305--2313},
abstract = {Humans actively sample their environment with saccadic eye movements to bring relevant information into high-acuity foveal vision. Despite being lower in resolution, peripheral information is also available before each saccade. How the pre-saccadic extrafoveal preview of a visual object influences its post-saccadic processing is still an unanswered question. The current study investigated this question by simultaneously recording behavior and fixation-related brain potentials while human subjects made saccades to face stimuli. We manipulated the relationship between pre-saccadic "previews" and post-saccadic images to explicitly isolate the influences of the former. Subjects performed a gender discrimination task on a newly foveated face under three preview conditions: scrambled face, incongruent face (different identity from the foveated face), and congruent face (same identity). As expected, reaction times were faster after a congruent-face preview compared with a scrambled-face preview. Importantly, intact face previews (either incongruent or congruent) resulted in a massive reduction of post-saccadic neural responses. Specifically, we analyzed the classic face-selective N170 component at occipitotemporal electroencephalogram electrodes, which was still present in our experiments with active looking. However, the post-saccadic N170 was strongly attenuated following intact-face previews compared with the scrambled condition. This large and long-lasting decrease in evoked activity is consistent with a trans-saccadic mechanism of prediction that influences category-specific neural processing at the start of a new fixation. These findings constrain theories of visual stability and show that the extrafoveal preview methodology can be a useful tool to investigate its underlying mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Simon Majed Ceh; Sonja Annerer-Walcher; Christof Körner; Christian Rominger; Silvia Erika Kober; Andreas Fink; Mathias Benedek

Neurophysiological indicators of internal attention: An electroencephalography–eye-tracking coregistration study Journal Article

Brain and Behavior, 10 (10), pp. 1–14, 2020.


@article{Ceh2020,
title = {Neurophysiological indicators of internal attention: An electroencephalography–eye-tracking coregistration study},
author = {Simon Majed Ceh and Sonja Annerer-Walcher and Christof Körner and Christian Rominger and Silvia Erika Kober and Andreas Fink and Mathias Benedek},
doi = {10.1002/brb3.1790},
year = {2020},
date = {2020-01-01},
journal = {Brain and Behavior},
volume = {10},
number = {10},
pages = {1--14},
abstract = {Introduction: Many goal-directed and spontaneous everyday activities (e.g., planning, mind wandering) rely on an internal focus of attention. Internally directed cognition (IDC) was shown to differ from externally directed cognition in a range of neurophysiological indicators such as electroencephalogram (EEG) alpha activity and eye behavior. Methods: In this EEG–eye-tracking coregistration study, we investigated effects of attention direction on EEG alpha activity and various relevant eye parameters. We used an established paradigm to manipulate internal attention demands in the visual domain within tasks by means of conditional stimulus masking. Results: Consistent with previous research, IDC involved relatively higher EEG alpha activity (lower alpha desynchronization) at posterior cortical sites. Moreover, IDC was characterized by greater pupil diameter (PD), fewer microsaccades, fixations, and saccades. These findings show that internal versus external cognition is associated with robust differences in several indicators at the neural and perceptual level. In a second line of analysis, we explored the intrinsic temporal covariation between EEG alpha activity and eye parameters during rest. This analysis revealed a positive correlation of EEG alpha power with PD especially in bilateral parieto-occipital regions. Conclusion: Together, these findings suggest that EEG alpha activity and PD represent time-sensitive indicators of internal attention demands, which may be involved in a neurophysiological gating mechanism serving to shield internal cognition from irrelevant sensory information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Peter De Lissa; Roberto Caldara; Victoria Nicholls; Sebastien Miellet

In pursuit of visual attention: SSVEP frequency-tagging moving targets Journal Article

PLoS ONE, 15 (8), pp. 1–15, 2020.


@article{DeLissa2020,
title = {In pursuit of visual attention: SSVEP frequency-tagging moving targets},
author = {Peter {De Lissa} and Roberto Caldara and Victoria Nicholls and Sebastien Miellet},
doi = {10.1371/journal.pone.0236967},
year = {2020},
date = {2020-01-01},
journal = {PLoS ONE},
volume = {15},
number = {8},
pages = {1--15},
abstract = {Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the coregistration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a Steady State Visually Evoked Potentials (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target at the 30 Hz frequency band were significantly, but transiently, lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are only briefly reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as the perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may discretely reduce visual processing of objects even when they are directly being tracked with the eyes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Andrea Desantis; Adrien Chan-Hon-Tong; Thérèse Collins; Hinze Hogendoorn; Patrick Cavanagh

Decoding the temporal dynamics of covert spatial attention using multivariate EEG analysis: Contributions of raw amplitude and alpha power Journal Article

Frontiers in Human Neuroscience, 14 , pp. 1–14, 2020.


@article{Desantis2020,
title = {Decoding the temporal dynamics of covert spatial attention using multivariate EEG analysis: Contributions of raw amplitude and alpha power},
author = {Andrea Desantis and Adrien Chan-Hon-Tong and Thér{è}se Collins and Hinze Hogendoorn and Patrick Cavanagh},
doi = {10.3389/fnhum.2020.570419},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {14},
pages = {1--14},
abstract = {Attention can be oriented in space covertly without the need of eye movements. We used multivariate pattern classification analyses (MVPA) to investigate whether the time course of the deployment of covert spatial attention leading up to the observer's perceptual decision can be decoded from both EEG alpha power and raw activity traces. Decoding attention from these signals can help determine whether raw EEG signals and alpha power reflect the same or distinct features of attentional selection. Using a classical cueing task, we showed that the orientation of covert spatial attention can be decoded by both signals. However, raw activity and alpha power may reflect different features of spatial attention, with alpha power more associated with the orientation of covert attention in space and raw activity with the influence of attention on perceptual processes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Elisa C Dias; Abraham C Van Voorhis; Filipe Braga; Julianne Todd; Javier Lopez-Calderon; Antigona Martinez; Daniel C Javitt

Impaired fixation-related theta modulation predicts reduced visual span and guided search deficits in schizophrenia Journal Article

Cerebral Cortex, 30 (5), pp. 2823–2833, 2020.


@article{Dias2020,
title = {Impaired fixation-related theta modulation predicts reduced visual span and guided search deficits in schizophrenia},
author = {Elisa C Dias and Abraham C {Van Voorhis} and Filipe Braga and Julianne Todd and Javier Lopez-Calderon and Antigona Martinez and Daniel C Javitt},
doi = {10.1093/cercor/bhz277},
year = {2020},
date = {2020-01-01},
journal = {Cerebral Cortex},
volume = {30},
number = {5},
pages = {2823--2833},
abstract = {During normal visual behavior, individuals scan the environment through a series of saccades and fixations. At each fixation, the phase of ongoing rhythmic neural oscillations is reset, thereby increasing efficiency of subsequent visual processing. This phase-reset is reflected in the generation of a fixation-related potential (FRP). Here, we evaluate the integrity of theta phase-reset/FRP generation during a Guided Visual Search task in schizophrenia. Subjects performed serial and parallel versions of the task. An initial study (15 healthy controls (HC)/15 schizophrenia patients (SCZ)) investigated behavioral performance parametrically across stimulus features and set-sizes. A subsequent study (25-HC/25-SCZ) evaluated integrity of search-related FRP generation relative to search performance and evaluated visual span size as an index of parafoveal processing. Search times were significantly increased for patients versus controls across all conditions. Furthermore, significant deficits were observed for fixation-related theta phase-reset across conditions, which fully predicted reduced visual span and impaired search performance and correlated with impaired visual components of neurocognitive processing. By contrast, overall search strategy was similar between groups. Deficits in theta phase-reset mechanisms are increasingly documented across sensory modalities in schizophrenia. Here, we demonstrate that deficits in fixation-related theta phase-reset during naturalistic visual processing underlie impaired efficiency of early visual function in schizophrenia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Nadine Dijkstra; Luca Ambrogioni; Diego Vidaurre; Marcel van Gerven

Neural dynamics of perceptual inference and its reversal during imagery Journal Article

eLife, 9, pp. 1–19, 2020.

@article{Dijkstra2020,
title = {Neural dynamics of perceptual inference and its reversal during imagery},
author = {Nadine Dijkstra and Luca Ambrogioni and Diego Vidaurre and Marcel van Gerven},
doi = {10.7554/eLife.53588},
year = {2020},
date = {2020-01-01},
journal = {eLife},
volume = {9},
pages = {1--19},
abstract = {After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Troy Dildine; Elizabeth Necka; Lauren Yvette Atlas

Confidence in subjective pain is predicted by reaction time during decision making Journal Article

Scientific Reports, 10, pp. 1–14, 2020.

@article{Dildine2020,
title = {Confidence in subjective pain is predicted by reaction time during decision making},
author = {Troy Dildine and Elizabeth Necka and Lauren Yvette Atlas},
doi = {10.31234/osf.io/7cnha},
year = {2020},
date = {2020-01-01},
journal = {Scientific Reports},
volume = {10},
pages = {1--14},
publisher = {Nature Publishing Group UK},
abstract = {Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals' association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Linda Drijvers; Ole Jensen; Eelke Spaak

Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information Journal Article

Human Brain Mapping, pp. 1–15, 2020.

@article{Drijvers2020,
title = {Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information},
author = {Linda Drijvers and Ole Jensen and Eelke Spaak},
doi = {10.1002/hbm.25282},
year = {2020},
date = {2020-01-01},
journal = {Human Brain Mapping},
pages = {1--15},
abstract = {During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (fvisual − fauditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Stefan Dürschmid; Andre Maric; Marcel S Kehl; Robert T Knight; Hermann Hinrichs; Hans-Jochen Heinze

Fronto-temporal regulation of subjective value to suppress impulsivity in intertemporal choices Journal Article

Journal of Neuroscience, 2020.

@article{Duerschmid2020,
title = {Fronto-temporal regulation of subjective value to suppress impulsivity in intertemporal choices},
author = {Stefan Dürschmid and Andre Maric and Marcel S Kehl and Robert T Knight and Hermann Hinrichs and Hans-Jochen Heinze},
doi = {10.1523/jneurosci.1196-20.2020},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
abstract = {Impulsive decisions arise from preferring smaller but sooner rewards compared to larger but later rewards. How neural activity and attention to choice alternatives contribute to reward decisions during temporal discounting is not clear. Here we probed (i) attention to and (ii) neural representation of delay and reward information in humans (both sexes) engaged in choices. We studied behavioral and frequency-specific dynamics supporting impulsive decisions on a fine-grained temporal scale using eye tracking and magnetoencephalographic (MEG) recordings. In one condition participants had to decide for themselves but pretended to decide for their best friend in a second prosocial condition, which required perspective taking. Hence, conditions varied in the value of choosing for oneself versus pretending to choose for another person. Stronger impulsivity was reliably found across three independent groups for prosocial decisions. Eye tracking revealed a systematic shift of attention from the delay to the reward information, and differences in eye tracking between conditions predicted differences in discounting. High frequency activity (HFA: 175-250 Hz) distributed over right fronto-temporal sensors correlated with delay and reward information in consecutive temporal intervals for high value decisions for oneself but not the friend. Collectively the results imply that the HFA recorded over fronto-temporal MEG sensors plays a critical role in choice option integration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ciara Egan; Filipe Cristino; Joshua S Payne; Guillaume Thierry; Manon W Jones

How alliteration enhances conceptual–attentional interactions in reading Journal Article

Cortex, 124 , pp. 111–118, 2020.

@article{Egan2020,
title = {How alliteration enhances conceptual–attentional interactions in reading},
author = {Ciara Egan and Filipe Cristino and Joshua S Payne and Guillaume Thierry and Manon W Jones},
doi = {10.1016/j.cortex.2019.11.005},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {124},
pages = {111--118},
publisher = {Elsevier Ltd},
abstract = {In linguistics, the relationship between phonological word form and meaning is mostly considered arbitrary. Why, then, do literary authors traditionally craft sound relationships between words? We set out to characterise how dynamic interactions between word form and meaning may account for this literary practice. Here, we show that alliteration influences both meaning integration and attentional engagement during reading. We presented participants with adjective-noun phrases, having manipulated semantic relatedness (congruent, incongruent) and form repetition (alliterating, non-alliterating) orthogonally, as in “dazzling-diamond”; “sparkling-diamond”; “dangerous-diamond”; and “creepy-diamond”. Using simultaneous recording of event-related brain potentials and pupil dilation (PD), we establish that, whilst semantic incongruency increased N400 amplitude as expected, it reduced PD, an index of attentional engagement. Second, alliteration affected semantic evaluation of word pairs, since it reduced N400 amplitude even in the case of unrelated items (e.g., “dangerous-diamond”). Third, alliteration specifically boosted attentional engagement for related words (e.g., “dazzling-diamond”), as shown by a sustained negative correlation between N400 amplitudes and PD change after the window of lexical integration. Thus, alliteration strategically arouses attention during reading and when comprehension is challenged, phonological information helps readers link concepts beyond the level of literal semantics. Overall, our findings provide a tentative mechanism for the empowering effect of sound repetition in literary constructs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tobias Feldmann-Wüstefeld

Neural measures of working memory in a bilateral change detection task Journal Article

Psychophysiology, 58, pp. 1–22, 2020.

@article{FeldmannWuestefeld2020,
title = {Neural measures of working memory in a bilateral change detection task},
author = {Tobias Feldmann-Wüstefeld},
doi = {10.1111/psyp.13683},
year = {2020},
date = {2020-01-01},
journal = {Psychophysiology},
volume = {58},
pages = {1--22},
abstract = {The change detection task is a widely used paradigm to examine visual working memory processes. Participants memorize a set of items and then try to detect changes in the set after a retention period. The negative slow wave (NSW) and contralateral delay activity (CDA) are event-related potentials in the EEG signal that are commonly used in change detection tasks to track working memory load, as both increase with the number of items maintained in working memory (set size). While the CDA was argued to more purely reflect the memory-specific neural activity than the NSW, it also requires a lateralized design and attention shifts prior to memoranda onset, imposing more restrictions on the task than the NSW. The present study proposes a novel change detection task in which both CDA and NSW can be measured at the same time. Memory items were presented bilaterally, but their distribution in the left and right hemifield varied, inducing a target imbalance or “net load.” NSW increased with set size, whereas CDA increased with net load. In addition, a multivariate linear classifier was able to decode the set size and net load from the EEG signal. CDA, NSW, and decoding accuracy predicted an individual's working memory capacity. In line with the notion of a bilateral advantage in working memory, accuracy and CDA data suggest that participants tended to encode items relatively balanced. In sum, this novel change detection task offers a basis to make use of converging neural measures of working memory in a comprehensive paradigm.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Thomas Geyer; Franziska Günther; Hermann J Müller; Jim Kacian; Heinrich René Liesefeld; Stella Pierides

Reading English-language haiku: An eye-movement study of the 'cut effect' Journal Article

Journal of Eye Movement Research, 13 (2), pp. 1–29, 2020.

@article{Geyer2020,
title = {Reading English-language haiku: An eye-movement study of the 'cut effect'},
author = {Thomas Geyer and Franziska Günther and Hermann J Müller and Jim Kacian and Heinrich René Liesefeld and Stella Pierides},
doi = {10.16910/jemr.13.2.2},
year = {2020},
date = {2020-01-01},
journal = {Journal of Eye Movement Research},
volume = {13},
number = {2},
pages = {1--29},
abstract = {The current study, set within the larger enterprise of Neuro-Cognitive Poetics, was designed to examine how readers deal with the 'cut', a more or less sharp semantic-conceptual break, in normative, three-line English-language haiku poems (ELH). Readers were presented with three-line haiku that consisted of two (seemingly) disparate parts, a (two-line) 'phrase' image and a one-line 'fragment' image, in order to determine how they process the conceptual gap between these images when constructing the poem's meaning, as reflected in their patterns of reading eye movements. In addition to replicating the basic 'cut effect', i.e., the extended fixation dwell time on the fragment line relative to the other lines, the present study examined (a) how this effect is influenced by whether the cut is purely implicit or explicitly marked by punctuation, and (b) whether the effect pattern could be delineated against a control condition of 'uncut', one-image haiku. For 'cut' vs. 'uncut' haiku, the results revealed the distribution of fixations across the poems to be modulated by the position of the cut (after line 1 vs. after line 2), the presence vs. absence of a cut marker, and the semantic-conceptual distance between the two images (context-action vs. juxtaposition haiku). These formal-structural and conceptual-semantic properties were associated with systematic changes in how individual poem lines were scanned at first reading and then (selectively) re-sampled in second- and third-pass reading to construct and check global meaning. No such effects were found for one-image (control) haiku. We attribute this pattern to the operation of different meaning resolution processes during the comprehension of two-image haiku, which are invoked by both form- and meaning-related features of the poems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Artyom Zinchenko; Markus Conci; Thomas Töllner; Hermann J Müller; Thomas Geyer

Automatic guidance (and misguidance) of visuospatial attention by acquired scene memory: Evidence from an N1pc polarity reversal Journal Article

Psychological Science, 31 (12), pp. 1–13, 2020.

@article{Zinchenko2020a,
title = {Automatic guidance (and misguidance) of visuospatial attention by acquired scene memory: Evidence from an N1pc polarity reversal},
author = {Artyom Zinchenko and Markus Conci and Thomas Töllner and Hermann J Müller and Thomas Geyer},
doi = {10.1177/0956797620954815},
year = {2020},
date = {2020-01-01},
journal = {Psychological Science},
volume = {31},
number = {12},
pages = {1--13},
abstract = {Visual search is facilitated when the target is repeatedly encountered at a fixed position within an invariant (vs. randomly variable) distractor layout—that is, when the layout is learned and guides attention to the target, a phenomenon known as contextual cuing. Subsequently changing the target location within a learned layout abolishes contextual cuing, which is difficult to relearn. Here, we used lateralized event-related electroencephalogram (EEG) potentials to explore memory-based attentional guidance (N = 16). The results revealed reliable contextual cuing during initial learning and an associated EEG-amplitude increase for repeated layouts in attention-related components, starting with an early posterior negativity (N1pc, 80–180 ms). When the target was relocated to the opposite hemifield following learning, contextual cuing was effectively abolished, and the N1pc was reversed in polarity (indicative of persistent misguidance of attention to the original target location). Thus, once learned, repeated layouts trigger attentional-priority signals from memory that proactively interfere with contextual relearning after target relocation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jing Zhu; Zihan Wang; Tao Gong; Shuai Zeng; Xiaowei Li; Bin Hu; Jianxiu Li; Shuting Sun; Lan Zhang

An improved classification model for depression detection using EEG and eye tracking data Journal Article

IEEE Transactions on Nanobioscience, 19 (3), pp. 527–537, 2020.

@article{Zhu2020a,
title = {An improved classification model for depression detection using EEG and eye tracking data},
author = {Jing Zhu and Zihan Wang and Tao Gong and Shuai Zeng and Xiaowei Li and Bin Hu and Jianxiu Li and Shuting Sun and Lan Zhang},
doi = {10.1109/TNB.2020.2990690},
year = {2020},
date = {2020-01-01},
journal = {IEEE Transactions on Nanobioscience},
volume = {19},
number = {3},
pages = {527--537},
abstract = {At present, depression has become a main health burden in the world. However, there are many problems with the diagnosis of depression, such as low patient cooperation, subjective bias and low accuracy. Therefore, a reliable and objective evaluation method is needed to achieve effective depression detection. Electroencephalogram (EEG) and eye movements (EMs) data have been widely used for depression detection due to their advantages of easy recording and non-invasion. This research proposes a content based ensemble method (CBEM) to promote the depression detection accuracy; both static and dynamic CBEM were discussed. In the proposed model, the EEG or EMs dataset was divided into subsets by the context of the experiments, and then a majority vote strategy was used to determine the subjects' label. The method was validated on two datasets, which included free viewing eye tracking and resting-state EEG, with 36 and 34 subjects, respectively. For these two datasets, CBEM achieves accuracies of 82.5% and 92.65% respectively. The results show that CBEM outperforms traditional classification methods. Our findings provide an effective solution for promoting the accuracy of depression identification, which in the future could be used for the auxiliary diagnosis of depression.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

L Tankelevitch; E Spaak; M F S Rushworth; M G Stokes

Previously reward-associated stimuli capture spatial attention in the absence of changes in the corresponding sensory representations as measured with MEG Journal Article

Journal of Neuroscience, 40 (26), pp. 5033–5050, 2020.

@article{Tankelevitch2020,
title = {Previously reward-associated stimuli capture spatial attention in the absence of changes in the corresponding sensory representations as measured with MEG},
author = {L Tankelevitch and E Spaak and M F S Rushworth and M G Stokes},
doi = {10.1101/622589},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
volume = {40},
number = {26},
pages = {5033--5050},
abstract = {Studies of selective attention typically consider the role of task goals or physical salience, but recent work has shown that attention can also be captured by previously reward-associated stimuli, even when these are no longer relevant (i.e., value-driven attentional capture; VDAC). We used magnetoencephalography (MEG) to investigate how previously reward-associated stimuli are processed, the time-course of reward history effects, and how this relates to the behavioural effects of VDAC. Male and female human participants first completed a reward learning task to establish stimulus-reward associations. Next, we measured attentional capture in a separate task by presenting these stimuli in the absence of reward contingency, and probing their effects on the processing of separate target stimuli presented at different time lags. Using time-resolved multivariate pattern analysis, we found that learned value modulated the spatial selection of previously rewarded stimuli in occipital, inferior temporal, and parietal cortex from ~260ms after stimulus onset. This value modulation was related to the strength of participants' behavioural VDAC effect and persisted into subsequent target processing. Furthermore, we found a spatially invariant value signal from ~340ms. Importantly, learned value did not influence the neural discriminability of the previously rewarded stimuli in visual cortical areas. Our results suggest that VDAC is underpinned by learned value signals which modulate spatial selection throughout posterior visual and parietal cortex. We further suggest that VDAC can occur in the absence of changes in early visual cortical processing. Significance statement Attention is our ability to focus on relevant information at the expense of irrelevant information. It can be affected by previously learned but currently irrelevant stimulus-reward associations, a phenomenon termed “value-driven attentional capture” (VDAC). 
The neural mechanisms underlying VDAC remain unclear. It has been speculated that reward learning induces visual cortical plasticity which modulates early visual processing to capture attention. Although we find that learned value modulates spatial attention in sensory brain areas, an effect which correlates with VDAC, we find no relevant signatures of visual cortical plasticity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Bin Zhao; Jinfeng Huang; Gaoyan Zhang; Jianwu Dang; Minbo Chen; Yingjian Fu; Longbiao Wang

Brain network reconstruction of speech production based on electro-encephalography and eye movement Journal Article

Acoustical Science and Technology, 41 (1), pp. 349–350, 2020.

@article{Zhao2020,
title = {Brain network reconstruction of speech production based on electro-encephalography and eye movement},
author = {Bin Zhao and Jinfeng Huang and Gaoyan Zhang and Jianwu Dang and Minbo Chen and Yingjian Fu and Longbiao Wang},
doi = {10.1250/ast.41.349},
year = {2020},
date = {2020-01-01},
journal = {Acoustical Science and Technology},
volume = {41},
number = {1},
pages = {349--350},
abstract = {To fully understand the brain mechanism associated with speech functions, it is necessary to unfold the spatiotemporal brain dynamics during the whole speech processing range [1]. However, previous functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies focused on cerebral activation patterns and their regional functions, while lacking information of the time courses [2]. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) with high temporal resolution are inferior in source localization, and are also easily buried in electromagnetic artifacts from muscular actions in articulation, thus interfering with the analysis. In this study, we introduced a novel multimodal data acquisition system to collect EEG, eye movement, and speech in an oral reading task. The behavior data (eye movement and speech) were used for segmenting cognitive stages. EEG data went through independent component analyses (ICA), component clustering, and time-varying (adaptive) multi-variate autoregressive modeling [3] for estimating the spatiotemporal causal interactions among brain regions in each cognitive and speech process. Statistical analyses and literature review were followed to interpret the brain dynamic results for better understanding the speech functions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hong Zeng; Junjie Shen; Wenming Zheng; Aiguo Song; Jia Liu

Toward measuring target perception: First-order and second-order deep network pipeline for classification of fixation-related potentials Journal Article

Journal of Healthcare Engineering, pp. 1–15, 2020.

@article{Zeng2020,
title = {Toward measuring target perception: First-order and second-order deep network pipeline for classification of fixation-related potentials},
author = {Hong Zeng and Junjie Shen and Wenming Zheng and Aiguo Song and Jia Liu},
doi = {10.1155/2020/8829451},
year = {2020},
date = {2020-01-01},
journal = {Journal of Healthcare Engineering},
pages = {1--15},
abstract = {The top-down determined visual object perception refers to the ability of a person to identify a prespecified visual target. This paper studies the technical foundation for measuring the target-perceptual ability in a guided visual search task, using the EEG-based brain imaging technique. Specifically, it focuses on the feature representation learning problem for single-trial classification of fixation-related potentials (FRPs). The existing methods either capture only first-order statistics while ignoring second-order statistics in data, or directly extract second-order statistics with covariance matrices estimated with raw FRPs that suffer from low signal-to-noise ratio. In this paper, we propose a new representation learning pipeline involving a low-level convolution subnetwork followed by a high-level Riemannian manifold subnetwork, with a novel midlevel pooling layer bridging them. In this way, the discriminative power of the first-order features can be increased by the convolution subnetwork, while the second-order information in the convolutional features could further be deeply learned with the subsequent Riemannian subnetwork. In particular, the temporal ordering of FRPs is well preserved for the components in our pipeline, which is considered to be a valuable source of discriminant information. The experimental results show that the proposed approach leads to improved classification performance and robustness to lack of data over the state-of-the-art ones, thus making it appealing for practical applications in measuring the target-perceptual ability of cognitively impaired patients with the FRP technique.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lisa Wirz; Lars Schwabe

Prioritized attentional processing: Acute stress, memory and stimulus emotionality facilitate attentional disengagement Journal Article

Neuropsychologia, 138 , pp. 1–13, 2020.

@article{Wirz2020,
title = {Prioritized attentional processing: Acute stress, memory and stimulus emotionality facilitate attentional disengagement},
author = {Lisa Wirz and Lars Schwabe},
doi = {10.1016/j.neuropsychologia.2020.107334},
year = {2020},
date = {2020-01-01},
journal = {Neuropsychologia},
volume = {138},
pages = {1--13},
publisher = {Elsevier Ltd},
abstract = {Rapid attentional orienting toward relevant stimuli and efficient disengagement from irrelevant stimuli are critical for survival. Here, we examined the roles of memory processes, emotional arousal and acute stress in attentional disengagement. To this end, 64 healthy participants encoded negative and neutral facial expressions and, after being exposed to a stress or control manipulation, performed an attention task in which they had to disengage from these previously encoded as well as novel face stimuli. During the attention task, electroencephalography (EEG) and pupillometry data were recorded. Our results showed overall faster reaction times after acute stress and when participants had to disengage from emotionally negative or old facial expressions. Further, pupil dilations were larger in response to neutral faces. During disengagement, our EEG data revealed a reduced N2pc amplitude when participants disengaged from neutral compared to negative facial expressions when these were not presented before, as well as earlier onset latencies for the N400f (for disengagement from negative and old faces), the N2pc, and the LPP (for disengagement from negative faces). In addition, early visual processing of negative faces, as reflected in the P1 amplitude, was enhanced specifically in stressed participants. Our findings indicate that attentional disengagement is improved for negative and familiar stimuli and that stress facilitates not only attentional disengagement but also emotional processing in general. Together, these processes may represent important mechanisms enabling efficient performance and rapid threat detection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Elliott G Wimmer; Yunzhe Liu; Neža Vehar; Timothy E J Behrens; Raymond J Dolan

Episodic memory retrieval success is associated with rapid replay of episode content Journal Article

Nature Neuroscience, 23 (8), pp. 1025–1033, 2020.

@article{Wimmer2020,
title = {Episodic memory retrieval success is associated with rapid replay of episode content},
author = {Elliott G Wimmer and Yunzhe Liu and Ne{ž}a Vehar and Timothy E J Behrens and Raymond J Dolan},
doi = {10.1038/s41593-020-0649-z},
year = {2020},
date = {2020-01-01},
journal = {Nature Neuroscience},
volume = {23},
number = {8},
pages = {1025--1033},
publisher = {Springer US},
abstract = {Retrieval of everyday experiences is fundamental for informing our future decisions. The fine-grained neurophysiological mechanisms that support such memory retrieval are largely unknown. We studied participants who first experienced, without repetition, unique multicomponent 40–80-s episodes. One day later, they engaged in cued retrieval of these episodes while undergoing magnetoencephalography. By decoding individual episode elements, we found that trial-by-trial successful retrieval was supported by the sequential replay of episode elements, with a temporal compression factor of >60. The direction of replay supporting retrieval, either backward or forward, depended on whether the task goal was to retrieve elements of an episode that followed or preceded, respectively, a retrieval cue. This sequential replay was weaker in very-high-performing participants, in whom instead we found evidence for simultaneous clustered reactivation. Our results demonstrate that memory-mediated decisions are supported by a rapid replay mechanism that can flexibly shift in direction in response to task goals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tommy J Wilson; John J Foxe

Cross-frequency coupling of alpha oscillatory power to the entrainment rhythm of a spatially attended input stream Journal Article

Cognitive Neuroscience, 11 (1-2), pp. 71–91, 2020.

@article{Wilson2020,
title = {Cross-frequency coupling of alpha oscillatory power to the entrainment rhythm of a spatially attended input stream},
author = {Tommy J Wilson and John J Foxe},
doi = {10.1080/17588928.2019.1627303},
year = {2020},
date = {2020-01-01},
journal = {Cognitive Neuroscience},
volume = {11},
number = {1-2},
pages = {71--91},
publisher = {Routledge},
abstract = {Neural entrainment and alpha oscillatory power (8–14 Hz) are mechanisms of selective attention. The extent to which these two mechanisms interact, especially in the context of visuospatial attention, is unclear. Here, we show that spatial attention to a delta-frequency, rhythmic visual stimulus in one hemifield results in phase-amplitude coupling between the delta-phase of an entrained frontal source and alpha power generated by ipsilateral visuocortical regions. The driving of ipsilateral alpha power by frontal delta also correlates with task performance. Our analyses suggest that neural entrainment may serve a previously underappreciated role in coordinating macroscale brain networks and that inhibition of processing by alpha power can be coupled to an attended temporal structure. Finally, we note that the observed coupling bolsters one dominant hypothesis of modern cognitive neuroscience, that macroscale brain networks and distributed neural computation are coordinated by oscillatory synchrony and cross-frequency interactions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Niklas Wilming; Peter R Murphy; Florent Meyniel; Tobias H Donner

Large-scale dynamics of perceptual decision information across human cortex Journal Article

Nature Communications, 11 , pp. 1–14, 2020.

@article{Wilming2020,
title = {Large-scale dynamics of perceptual decision information across human cortex},
author = {Niklas Wilming and Peter R Murphy and Florent Meyniel and Tobias H Donner},
doi = {10.1038/s41467-020-18826-6},
year = {2020},
date = {2020-01-01},
journal = {Nature Communications},
volume = {11},
pages = {1--14},
publisher = {Springer US},
abstract = {Perceptual decisions entail the accumulation of sensory evidence for a particular choice towards an action plan. An influential framework holds that sensory cortical areas encode the instantaneous sensory evidence and downstream, action-related regions accumulate this evidence. The large-scale distribution of this computation across the cerebral cortex has remained largely elusive. Here, we develop a regionally-specific magnetoencephalography decoding approach to exhaustively map the dynamics of stimulus- and choice-specific signals across the human cortical surface during a visual decision. Comparison with the evidence accumulation dynamics inferred from behavior disentangles stimulus-dependent and endogenous components of choice-predictive activity across the visual cortical hierarchy. We find such an endogenous component in early visual cortex (including V1), which is expressed in a low (<20 Hz) frequency band and tracks, with delay, the build-up of choice-predictive activity in (pre-) motor regions. Our results are consistent with choice- and frequency-specific cortical feedback signaling during decision formation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yongchun Wang; Meilin Di; Jingjing Zhao; Saisai Hu; Zhao Yao; Yonghui Wang

Attentional modulation of unconscious inhibitory visuomotor processes: An EEG study Journal Article

Psychophysiology, 57 , pp. 1–12, 2020.

@article{Wang2020k,
title = {Attentional modulation of unconscious inhibitory visuomotor processes: An EEG study},
author = {Yongchun Wang and Meilin Di and Jingjing Zhao and Saisai Hu and Zhao Yao and Yonghui Wang},
doi = {10.1111/psyp.13561},
year = {2020},
date = {2020-01-01},
journal = {Psychophysiology},
volume = {57},
pages = {1--12},
abstract = {The present study examined the role of attention in unconscious inhibitory visuomotor processes in three experiments that employed a mixed paradigm including a spatial cueing task and masked prime task. Spatial attention to the prime was manipulated. Specifically, the valid-cue condition (in which the prime obtained more attentional resources) and invalid-cue condition (in which the prime obtained fewer attentional resources) were included. The behavioral results showed that the negative compatibility effect (a behavioral indicator of inhibitory visuomotor processing) in the valid-cue condition was larger than that in the invalid-cue condition. Most importantly, lateralized readiness potential results indicated that the prime-related activation was stronger in the valid-cue condition than in the invalid-cue condition and that the followed inhibition in the compatible trials was also stronger in the valid-cue condition than in the invalid-cue condition. In line with the proposed attentional modulation model, unconscious visuomotor inhibitory processing is modulated by attentional resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Quan Wan; Ying Cai; Jason Samaha; Bradley R Postle

Tracking stimulus representation across a 2-back visual working memory task Journal Article

Royal Society Open Science, 7 , pp. 1–18, 2020.

@article{Wan2020,
title = {Tracking stimulus representation across a 2-back visual working memory task},
author = {Quan Wan and Ying Cai and Jason Samaha and Bradley R Postle},
doi = {10.1098/rsos.190228},
year = {2020},
date = {2020-01-01},
journal = {Royal Society Open Science},
volume = {7},
pages = {1--18},
abstract = {How does the neural representation of visual working memory content vary with behavioural priority? To address this, we recorded electroencephalography (EEG) while subjects performed a continuous-performance 2-back working memory task with oriented-grating stimuli. We tracked the transition of the neural representation of an item (n) from its initial encoding, to the status of 'unprioritized memory item' (UMI), and back to 'prioritized memory item', with multivariate inverted encoding modelling. Results showed that the representational format was remapped from its initially encoded format into a distinctive 'opposite' representational format when it became a UMI and then mapped back into its initial format when subsequently prioritized in anticipation of its comparison with item n + 2. Thus, contrary to the default assumption that the activity representing an item in working memory might simply get weaker when it is deprioritized, it may be that a process of priority-based remapping helps to protect remembered information when it is not in the focus of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Maximilian F A Hauser; Stefanie Heba; Tobias Schmidt-Wilcke; Martin Tegenthoff; Denise Manahan-Vaughan

Cerebellar-hippocampal processing in passive perception of visuospatial change: An ego- and allocentric axis? Journal Article

Human Brain Mapping, 41 (5), pp. 1153–1166, 2020.

@article{Hauser2020,
title = {Cerebellar-hippocampal processing in passive perception of visuospatial change: An ego- and allocentric axis?},
author = {Maximilian F A Hauser and Stefanie Heba and Tobias Schmidt-Wilcke and Martin Tegenthoff and Denise Manahan-Vaughan},
doi = {10.1002/hbm.24865},
year = {2020},
date = {2020-01-01},
journal = {Human Brain Mapping},
volume = {41},
number = {5},
pages = {1153--1166},
abstract = {In addition to its role in visuospatial navigation and the generation of spatial representations, in recent years, the hippocampus has been proposed to support perceptual processes. This is especially the case where high-resolution details, in the form of fine-grained relationships between features such as angles between components of a visual scene, are involved. An unresolved question is how, in the visual domain, perspective-changes are differentiated from allocentric changes to these perceived feature relationships, both of which may be argued to involve the hippocampus. We conducted functional magnetic resonance imaging of the brain response (corroborated through separate event-related potential source-localization) in a passive visuospatial oddball-paradigm to examine to what extent the hippocampus and other brain regions process changes in perspective, or configuration of abstract, three-dimensional structures. We observed activation of the left superior parietal cortex during perspective shifts, and right anterior hippocampus in configuration-changes. Strikingly, we also found the cerebellum to differentiate between the two, in a way that appeared tightly coupled to hippocampal processing. These results point toward a relationship between the cerebellum and the hippocampus that occurs during perception of changes in visuospatial information that has previously only been reported with regard to visuospatial navigation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Simone G Heideman; Andrew J Quinn; Mark W Woolrich; Freek van Ede; Anna C Nobre

Dissecting beta-state changes during timed movement preparation in Parkinson's disease Journal Article

Progress in Neurobiology, 184 , pp. 1–11, 2020.

@article{Heideman2020,
title = {Dissecting beta-state changes during timed movement preparation in Parkinson's disease},
author = {Simone G Heideman and Andrew J Quinn and Mark W Woolrich and Freek van Ede and Anna C Nobre},
doi = {10.1016/j.pneurobio.2019.101731},
year = {2020},
date = {2020-01-01},
journal = {Progress in Neurobiology},
volume = {184},
pages = {1--11},
publisher = {Elsevier Ltd},
abstract = {An emerging perspective describes beta-band (15−28 Hz) activity as consisting of short-lived high-amplitude events that only appear sustained in conventional measures of trial-average power. This has important implications for characterising abnormalities observed in beta-band activity in disorders like Parkinson's disease. Measuring parameters associated with beta-event dynamics may yield more sensitive measures, provide more selective diagnostic neural markers, and provide greater mechanistic insight into the breakdown of brain dynamics in this disease. Here, we used magnetoencephalography in eighteen Parkinson's disease participants off dopaminergic medication and eighteen healthy control participants to investigate beta-event dynamics during timed movement preparation. We used the Hidden Markov Model to classify event dynamics in a data-driven manner and derived three parameters of beta events: (1) beta-state amplitude, (2) beta-state lifetime, and (3) beta-state interval time. Of these, changes in beta-state interval time explained the overall decreases in beta power during timed movement preparation and uniquely captured the impairment in such preparation in patients with Parkinson's disease. Thus, the increased granularity of the Hidden Markov Model analysis (compared with conventional analysis of power) provides increased sensitivity and suggests a possible reason for impairments of timed movement preparation in Parkinson's disease.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

James E Hoffman; Minwoo Kim; Matt Taylor; Kelsey Holiday

Emotional capture during emotion-induced blindness is not automatic Journal Article

Cortex, 122 , pp. 140–158, 2020.

@article{Hoffman2020,
title = {Emotional capture during emotion-induced blindness is not automatic},
author = {James E Hoffman and Minwoo Kim and Matt Taylor and Kelsey Holiday},
doi = {10.1016/j.cortex.2019.03.013},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {140--158},
publisher = {Elsevier Ltd},
abstract = {The present research used behavioral and event-related brain potentials (ERP) measures to determine whether emotional capture is automatic in the emotion-induced blindness (EIB) paradigm. The first experiment varied the priority of performing two concurrent tasks: identifying a negative or neutral picture appearing in a rapid serial visual presentation (RSVP) stream of pictures and multiple object tracking (MOT). Results showed that increased attention to the MOT task resulted in decreased accuracy for identifying both negative and neutral target pictures accompanied by decreases in the amplitude of the P3b component. In contrast, the early posterior negativity (EPN) component elicited by negative pictures was unaffected by variations in attention. Similarly, there was a decrement in MOT performance for dual-task versus single task conditions but no effect of picture type (negative vs neutral) on MOT accuracy, which isn't consistent with automatic emotional capture of attention. However, the MOT task might simply be insensitive to brief interruptions of attention. The second experiment used a more sensitive reaction time (RT) measure to examine this possibility. Results showed that RT to discriminate a gap appearing in a tracked object was delayed by the simultaneous appearance of to-be-ignored distractor pictures even though MOT performance was once again unaffected by the distractor. Importantly, the RT delay was the same for both negative and neutral distractors suggesting that capture was driven by physical salience rather than emotional salience of the distractors. Despite this lack of emotional capture, the EPN component, which is thought to reflect emotional capture, was still present. We suggest that the EPN doesn't reflect capture but rather downstream effects of attention, including object recognition. These results show that capture by emotional pictures in EIB can be suppressed when attention is engaged in another difficult task. The results have important implications for understanding capture effects in EIB.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Leyla Isik; Anna Mynick; Dimitrios Pantazis; Nancy Kanwisher

The speed of human social interaction perception Journal Article

NeuroImage, 215 , pp. 1–10, 2020.

@article{Isik2020,
title = {The speed of human social interaction perception},
author = {Leyla Isik and Anna Mynick and Dimitrios Pantazis and Nancy Kanwisher},
doi = {10.1016/j.neuroimage.2020.116844},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {215},
pages = {1--10},
publisher = {The Authors},
abstract = {The ability to perceive others' social interactions, here defined as the directed contingent actions between two or more people, is a fundamental part of human experience that develops early in infancy and is shared with other primates. However, the neural computations underlying this ability remain largely unknown. Is social interaction recognition a rapid feedforward process or a slower post-perceptual inference? Here we used magnetoencephalography (MEG) decoding to address this question. Subjects in the MEG viewed snapshots of visually matched real-world scenes containing a pair of people who were either engaged in a social interaction or acting independently. The presence versus absence of a social interaction could be read out from subjects' MEG data spontaneously, even while subjects performed an orthogonal task. This readout generalized across different people and scenes, revealing abstract representations of social interactions in the human brain. These representations, however, did not come online until quite late, at 300 ms after image onset, well after feedforward visual processes. In a second experiment, we found that social interaction readout still occurred at this same late latency even when subjects performed an explicit task detecting social interactions. We further showed that MEG responses distinguished between different types of social interactions (mutual gaze vs joint attention) even later, around 500 ms after image onset. Taken together, these results suggest that the human brain spontaneously extracts information about others' social interactions, but does so slowly, likely relying on iterative top-down computations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Stephanie J Kayser; Christoph Kayser

Shared physiological correlates of multisensory and expectation-based facilitation Journal Article

eNeuro, 7 (2), pp. 1–13, 2020.

@article{Kayser2020,
title = {Shared physiological correlates of multisensory and expectation-based facilitation},
author = {Stephanie J Kayser and Christoph Kayser},
doi = {10.1523/ENEURO.0435-19.2019},
year = {2020},
date = {2020-01-01},
journal = {eNeuro},
volume = {7},
number = {2},
pages = {1--13},
abstract = {Perceptual performance in a visual task can be enhanced by simultaneous multisensory information, but can also be enhanced by a symbolic or amodal cue inducing a specific expectation. That similar benefits can arise from multisensory information and within-modality expectation raises the question of whether the underlying neurophysiological processes are the same or distinct. We investigated this by comparing the influence of the following three types of auxiliary probabilistic cues on visual motion discrimination in humans: (1) acoustic motion, (2) a premotion visual symbolic cue, and (3) a postmotion symbolic cue. Using multivariate analysis of the EEG data, we show that both the multisensory and preceding visual symbolic cue enhance the encoding of visual motion direction as reflected by cerebral activity arising from occipital regions, 200–400 ms post-stimulus onset. This suggests a common or overlapping physiological correlate of cross-modal and intramodal auxiliary information, pointing to a neural mechanism susceptive to both multisensory and more abstract probabilistic cues. We also asked how prestimulus activity shapes the cue–stimulus combination and found a differential influence on the cross-modal and intramodal combination: while alpha power modulated the relative weight of visual motion and the acoustic cue, it did not modulate the behavioral influence of a visual symbolic cue, pointing to differences in how prestimulus activity shapes the combination of multisensory and abstract cues with task-relevant information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christophe C Le Dantec; Aaron R Seitz

Dissociating electrophysiological correlates of contextual and perceptual learning in a visual search task Journal Article

Journal of Vision, 20 (6), pp. 1–15, 2020.

@article{LeDantec2020,
title = {Dissociating electrophysiological correlates of contextual and perceptual learning in a visual search task},
author = {Christophe C {Le Dantec} and Aaron R Seitz},
doi = {10.1167/JOV.20.6.7},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {6},
pages = {1--15},
abstract = {Perceptual learning and contextual learning are two types of implicit visual learning that can co-occur in the same tasks. For example, to find an animal in the woods, you need to know where to look in the environment (contextual learning) and you must be able to discriminate its features (perceptual learning). However, contextual and perceptual learning are typically studied using distinct experimental paradigms, and little is known regarding their comparative neural mechanisms. In this study, we investigated contextual and perceptual learning in 12 healthy adult humans as they performed the same visual search task, and we examined psychophysical and electrophysiological (event-related potentials) measures of learning. Participants were trained to look for a visual stimulus, a small line with a specific orientation, presented among distractors. We found better performance for the trained target orientation as compared to an untrained control orientation, reflecting specificity of perceptual learning for the orientation of trained elements. This orientation specificity effect was associated with changes in the C1 component. We also found better performance for repeated spatial configurations as compared to novel ones, reflecting contextual learning. This context-specific effect was associated with the N2pc component. Taken together, these results suggest that contextual and perceptual learning are distinct visual learning phenomena that have different behavioral and electrophysiological characteristics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alfred Lim; Steve M J Janssen; Jason Satel

Exploring the temporal dynamics of inhibition of return using steady-state visual evoked potentials Journal Article

Cognitive, Affective and Behavioral Neuroscience, pp. 1349–1364, 2020.

@article{Lim2020,
title = {Exploring the temporal dynamics of inhibition of return using steady-state visual evoked potentials},
author = {Alfred Lim and Steve M J Janssen and Jason Satel},
doi = {10.3758/s13415-020-00846-w},
year = {2020},
date = {2020-01-01},
journal = {Cognitive, Affective and Behavioral Neuroscience},
pages = {1349--1364},
publisher = {Springer},
abstract = {Inhibition of return is characterized by delayed responses to previously attended locations when the interval between stimuli is long enough. The present study employed steady-state visual evoked potentials (SSVEPs) as a measure of attentional modulation to explore the nature and time course of input- and output-based inhibitory cueing mechanisms that each slow response times at previously stimulated locations under different experimental conditions. The neural effects of behavioral inhibition were examined by comparing post-cue SSVEPs between cued and uncued locations measured across two tasks that differed only in the response modality (saccadic or manual response to targets). Grand averages of SSVEP amplitudes for each condition showed a reduction in amplitude at cued locations in the window of 100-500 ms post-cue, revealing an early, short-term decrease in the responses of neurons that can be attributed to sensory adaptation, regardless of response modality. Because primary visual cortex has been found to be one of the major sources of SSVEP signals, the results suggest that the SSVEP modulations observed were caused by input-based inhibition that occurred in V1, or visual areas earlier than V1, as a consequence of reduced visual input activity at previously cued locations. No SSVEP modulations were observed in either response condition late in the cue-target interval, suggesting that neither late input- nor output-based IOR modulates SSVEPs. These findings provide further electrophysiological support for the theory of multiple mechanisms contributing to behavioral cueing effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jakub Limanowski; Vladimir Litvak; Karl Friston

Cortical beta oscillations reflect the contextual gating of visual action feedback Journal Article

NeuroImage, 222 , pp. 1–11, 2020.

@article{Limanowski2020b,
title = {Cortical beta oscillations reflect the contextual gating of visual action feedback},
author = {Jakub Limanowski and Vladimir Litvak and Karl Friston},
doi = {10.1016/j.neuroimage.2020.117267},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {222},
pages = {1--11},
publisher = {Elsevier Inc.},
abstract = {In sensorimotor integration, the brain needs to decide how its predictions should accommodate novel evidence by ‘gating' sensory data depending on the current context. Here, we examined the oscillatory correlates of this process by recording magnetoencephalography (MEG) data during a new task requiring action under intersensory conflict. We used virtual reality to decouple visual (virtual) and proprioceptive (real) hand postures during a task in which the phase of grasping movements tracked a target (in either modality). Thus, we rendered visual information either task-relevant or a (to-be-ignored) distractor. Under visuo-proprioceptive incongruence, occipital beta power decreased (relative to congruence) when vision was task-relevant but increased when it had to be ignored. Dynamic causal modeling (DCM) revealed that this interaction was best explained by diametrical, task-dependent changes in visual gain. These results suggest a crucial role for beta oscillations in the contextual gating (i.e., gain or precision control) of visual vs proprioceptive action feedback, depending on current behavioral demands.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sara LoTemplio; Jack Silcox; Kara D Federmeier; Brennan R Payne

Inter- and intra-individual coupling between pupillary, electrophysiological, and behavioral responses in a visual oddball task Journal Article

Psychophysiology, pp. 1–14, 2020.

@article{LoTemplio2020,
title = {Inter- and intra-individual coupling between pupillary, electrophysiological, and behavioral responses in a visual oddball task},
author = {Sara LoTemplio and Jack Silcox and Kara D Federmeier and Brennan R Payne},
doi = {10.1111/psyp.13758},
year = {2020},
date = {2020-01-01},
journal = {Psychophysiology},
pages = {1--14},
abstract = {Although the P3b component of the event-related brain potential is one of the most widely studied components, its underlying generators are not currently well understood. Recent theories have suggested that the P3b is triggered by phasic activation of the locus-coeruleus norepinephrine (LC-NE) system, an important control center implicated in facilitating optimal task-relevant behavior. Previous research has reported strong correlations between pupil dilation and LC activity, suggesting that pupil diameter is a useful indicator for ongoing LC-NE activity. Given the strong relationship between LC activity and pupil dilation, if the P3b is driven by phasic LC activity, there should be a robust trial-to-trial relationship with the phasic pupillary dilation response (PDR). However, previous work examining relationships between concurrently recorded pupillary and P3b responses has not supported this. One possibility is that the relationship between the measures might be carried primarily by either inter-individual (i.e., between-participant) or intra-individual (i.e., within-participant) contributions to coupling, and prior work has not systematically delineated these relationships. Doing so in the current study, we do not find evidence for either inter-individual or intra-individual relationships between the PDR and P3b responses. However, baseline pupil dilation did predict the P3b. Interestingly, both the PDR and P3b independently predicted inter-individual and intra-individual variability in decision response time. Implications for the LC-P3b hypothesis are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Kevin P Madore; Anna M Khazenzon; Cameron W Backes; Jiefeng Jiang; Melina R Uncapher; Anthony M Norcia; Anthony D Wagner

Memory failure predicted by attention lapsing and media multitasking Journal Article

Nature, 587 (7832), pp. 87–91, 2020.

@article{Madore2020,
title = {Memory failure predicted by attention lapsing and media multitasking},
author = {Kevin P Madore and Anna M Khazenzon and Cameron W Backes and Jiefeng Jiang and Melina R Uncapher and Anthony M Norcia and Anthony D Wagner},
doi = {10.1038/s41586-020-2870-z},
year = {2020},
date = {2020-01-01},
journal = {Nature},
volume = {587},
number = {7832},
pages = {87--91},
publisher = {Springer US},
abstract = {With the explosion of digital media and technologies, scholars, educators and the public have become increasingly vocal about the role that an 'attention economy' has in our lives1. The rise of the current digital culture coincides with longstanding scientific questions about why humans sometimes remember and sometimes forget, and why some individuals remember better than others2–6. Here we examine whether spontaneous attention lapses—in the moment7–12, across individuals13–15 and as a function of everyday media multitasking16–19—negatively correlate with remembering. Electroencephalography and pupillometry measures of attention20,21 were recorded as eighty young adults (mean age, 21.7 years) performed a goal-directed episodic encoding and retrieval task22. Trait-level sustained attention was further quantified using task-based23 and questionnaire measures24,25. Using trial-to-trial retrieval data, we show that tonic lapses in attention in the moment before remembering, assayed by posterior alpha power and pupil diameter, were correlated with reductions in neural signals of goal coding and memory, along with behavioural forgetting. Independent measures of trait-level attention lapsing mediated the relationship between neural assays of lapsing and memory performance, and between media multitasking and memory. Attention lapses partially account for why we remember or forget in the moment, and why some individuals remember better than others. Heavier media multitasking is associated with a propensity to have attention lapses and forget.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alie G Male; Robert P O'Shea; Erich Schröger; Dagmar Müller; Urte Roeber; Andreas Widmann

The quest for the genuine visual mismatch negativity (vMMN): Event-related potential indications of deviance detection for low-level visual features Journal Article

Psychophysiology, 57 , pp. 1–27, 2020.

@article{Male2020,
title = {The quest for the genuine visual mismatch negativity (vMMN): Event-related potential indications of deviance detection for low-level visual features},
author = {Alie G Male and Robert P O'Shea and Erich Schröger and Dagmar Müller and Urte Roeber and Andreas Widmann},
doi = {10.1111/psyp.13576},
year = {2020},
date = {2020-01-01},
journal = {Psychophysiology},
volume = {57},
pages = {1--27},
abstract = {Research shows that the visual system monitors the environment for changes. For example, a left-tilted bar, a deviant, that appears after several presentations of a right-tilted bar, standards, elicits a classic visual mismatch negativity (vMMN): greater negativity for deviants than standards in event-related potentials (ERPs) between 100 and 300 ms after onset of the deviant. The classic vMMN is contributed to by adaptation; it can be distinguished from the genuine vMMN that, through use of control conditions, compares standards and deviants that are equally adapted and physically identical. To determine whether the vMMN follows similar principles to the auditory mismatch negativity (MMN), in two experiments we searched for a genuine vMMN from simple, physiologically plausible stimuli that change in fundamental dimensions: orientation, contrast, phase, and spatial frequency. We carefully controlled for attention and eye movements. We found no evidence for the genuine vMMN, despite adequate statistical power. We conclude that either the genuine vMMN is a rather unstable phenomenon that depends on still-to-be-identified experimental parameters, or it is confined to visual stimuli for which monitoring across time is more natural than monitoring over space, such as for high-level features. We also observed an early deviant-related positivity that we propose might reflect earlier predictive processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sarah D McCrackin; Roxane J Itier

Feeling through another's eyes: Perceived gaze direction impacts ERP and behavioural measures of positive and negative affective empathy Journal Article

NeuroImage, 226 , pp. 2–20, 2020.

@article{McCrackin2020,
title = {Feeling through another's eyes: Perceived gaze direction impacts ERP and behavioural measures of positive and negative affective empathy},
author = {Sarah D McCrackin and Roxane J Itier},
doi = {10.1016/j.neuroimage.2020.117605},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {226},
pages = {2--20},
publisher = {Elsevier Inc.},
abstract = {Looking at the eyes informs us about the thoughts and emotions of those around us, and impacts our own emotional state. However, it is unknown how perceiving direct and averted gaze impacts our ability to share the gazer's positive and negative emotions, abilities referred to as positive and negative affective empathy. We presented 44 participants with contextual sentences describing positive, negative and neutral events happening to other people (e.g. “Her newborn was saved/killed/fed yesterday afternoon.”). These were designed to elicit positive, negative, or little to no empathy, and were followed by direct or averted gaze images of the individuals described. Participants rated their affective empathy for the individual and their own emotional valence on each trial. Event-related potentials time-locked to face-onset and associated with empathy and emotional processing were recorded to investigate whether they were modulated by gaze direction. Relative to averted gaze, direct gaze was associated with increased positive valence in the positive and neutral conditions and with increased positive empathy ratings. A similar pattern was found at the neural level, using robust mass-univariate statistics. The N100, thought to reflect an automatic activation of emotion areas, was modulated by gaze in the affective empathy conditions, with opposite effect directions in positive and negative conditions. The P200, an ERP component sensitive to positive stimuli, was modulated by gaze direction only in the positive empathy condition. Positive and negative trials were processed similarly at the early N200 processing stage, but later diverged, with only negative trials modulating the EPN, P300 and LPP components. These results suggest that positive and negative affective empathy are associated with distinct time-courses, and that perceived gaze direction uniquely modulates positive empathy, highlighting the importance of studying empathy with face stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Radha Nila Meghanathan; Cees van Leeuwen; Marcello Giannini; Andrey R Nikolaev

Neural correlates of task-related refixation behavior Journal Article

Vision Research, 175 , pp. 90–101, 2020.

@article{Meghanathan2020,
title = {Neural correlates of task-related refixation behavior},
author = {Radha Nila Meghanathan and Cees van Leeuwen and Marcello Giannini and Andrey R Nikolaev},
doi = {10.1016/j.visres.2020.07.001},
year = {2020},
date = {2020-01-01},
journal = {Vision Research},
volume = {175},
pages = {90--101},
publisher = {Elsevier},
abstract = {Eye movement research has shown that attention shifts from the currently fixated location to the next before a saccade is executed. We investigated whether the cost of the attention shift depends on higher-order processing at the time of fixation, in particular on visual working memory load differences between fixations and refixations on task-relevant items. The attention shift is reflected in EEG activity in the saccade-related potential (SRP). In a free viewing task involving visual search and memorization of multiple targets amongst distractors, we compared the SRP in first fixations versus refixations on targets and distractors. The task-relevance of targets implies that more information will be loaded in memory (e.g. both identity and location) than for distractors (e.g. location only). First fixations will involve greater memory load than refixations, since first fixations involve loading of new items, while refixations involve rehearsal of previously visited items. The SRP in the interval preceding the saccade away from a target or distractor revealed that saccade preparation is affected by task-relevance and refixation behavior. For task-relevant items only, we found longer fixation duration and higher SRP amplitudes for first fixations than for refixations over the occipital region and the opposite effect over the frontal region. Our findings provide first neurophysiological evidence that working memory loading of task-relevant information at fixation affects saccade planning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Florent Meyniel

Brain dynamics for confidence-weighted learning Journal Article

PLoS Computational Biology, 16 (6), pp. 1–27, 2020.

@article{Meyniel2020,
title = {Brain dynamics for confidence-weighted learning},
author = {Florent Meyniel},
doi = {10.1371/journal.pcbi.1007935},
year = {2020},
date = {2020-01-01},
journal = {PLoS Computational Biology},
volume = {16},
number = {6},
pages = {1--27},
abstract = {Learning in a changing, uncertain environment is a difficult problem. A popular solution is to predict future observations and then use surprising outcomes to update those predictions. However, humans also have a sense of confidence that characterizes the precision of their predictions. Bayesian models use a confidence-weighting principle to regulate learning: For a given surprise, the update is smaller when the confidence about the prediction was higher. Prior behavioral evidence indicates that human learning adheres to this confidence-weighting principle. Here, we explored the human brain dynamics subtending the confidence-weighting of learning using magnetoencephalography (MEG). During our volatile probability learning task, subjects' confidence reports conformed with Bayesian inference. MEG revealed several stimulus-evoked brain responses whose amplitude reflected surprise, and some of them were further shaped by confidence: Surprise amplified the stimulus-evoked response whereas confidence dampened it. Confidence about predictions also modulated several aspects of the brain state: Pupil-linked arousal and beta-range (15-30 Hz) oscillations. The brain state in turn modulated specific stimulus-evoked surprise responses following the confidence-weighting principle. Our results thus indicate that there exist, in the human brain, signals reflecting surprise that are dampened by confidence in a way that is appropriate for learning according to Bayesian inference. They also suggest a mechanism for confidence-weighted learning: Confidence about predictions would modulate intrinsic properties of the brain state to amplify or dampen surprise responses evoked by discrepant observations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jonathan Mirault; Jeremy Yeaton; Fanny Broqua; Stéphane Dufau; Phillip J Holcomb; Jonathan Grainger

Parafoveal-on-foveal repetition effects in sentence reading: A co-registered eye-tracking and electroencephalogram study Journal Article

Psychophysiology, 57 , pp. 1–18, 2020.

@article{Mirault2020,
title = {Parafoveal-on-foveal repetition effects in sentence reading: A co-registered eye-tracking and electroencephalogram study},
author = {Jonathan Mirault and Jeremy Yeaton and Fanny Broqua and Stéphane Dufau and Phillip J Holcomb and Jonathan Grainger},
doi = {10.1111/psyp.13553},
year = {2020},
date = {2020-01-01},
journal = {Psychophysiology},
volume = {57},
pages = {1--18},
abstract = {When reading, can the next word in the sentence (word n + 1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that this should be the case. Here we focus on perhaps the simplest and the strongest Parafoveal-on-Foveal (PoF) manipulation: word n + 1 is either the same as word n or a different word. Participants read sentences for comprehension and when their eyes left word n, the repeated or unrelated word at position n + 1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded electroencephalogram and eye-movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most important is that we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Repetition of the target word n at position n + 1 caused a widely distributed reduced negativity in the FRPs. Given the timing of this effect, we argue that it is driven by orthographic processing of word n + 1, while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Kieran S Mohr; Niamh Carr; Rachel Georgel; Simon P Kelly

Modulation of the earliest component of the human VEP by spatial attention: An investigation of task demands Journal Article

Cerebral Cortex Communications, pp. 1–22, 2020.

@article{Mohr2020,
title = {Modulation of the earliest component of the human VEP by spatial attention: An investigation of task demands},
author = {Kieran S Mohr and Niamh Carr and Rachel Georgel and Simon P Kelly},
doi = {10.1093/texcom/tgaa045},
year = {2020},
date = {2020-01-01},
journal = {Cerebral Cortex Communications},
pages = {1--22},
abstract = {Spatial attention modulations of initial afferent activity in area V1, indexed by the first component “C1” of the human visual evoked potential, are rarely found. It has thus been suggested that early modulation is induced only by special task conditions, but what these conditions are remains unknown. Recent failed replications—findings of no C1 modulation using a certain task that had previously produced robust modulations—present a strong basis for examining this question. We ran 3 experiments, the first to more exactly replicate the stimulus and behavioral conditions of the original task, and the second and third to manipulate 2 key factors that differed in the failed replication studies: the provision of informative performance feedback, and the degree to which the probed stimulus features matched those facilitating target perception. Although there was an overall significant C1 modulation of 11%, individually, only experiments 1 and 2 showed reliable effects, underlining that the modulations do occur but not consistently. Better feedback induced greater P1, but not C1, modulations. Target-probe feature matching had an inconsistent influence on modulation patterns, with behavioral performance differences and signal-overlap analyses suggesting interference from extrastriate modulations as a potential cause.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anna M Monk; Gareth R Barnes; Eleanor A Maguire

The effect of object type on building scene imagery — An MEG study Journal Article

Frontiers in Human Neuroscience, 14 , pp. 1–9, 2020.

@article{Monk2020,
title = {The effect of object type on building scene imagery — An MEG study},
author = {Anna M Monk and Gareth R Barnes and Eleanor A Maguire},
doi = {10.3389/fnhum.2020.592175},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {14},
pages = {1--9},
abstract = {Previous studies have reported that some objects evoke a sense of local three-dimensional space (space-defining; SD), while others do not (space-ambiguous; SA), despite being imagined or viewed in isolation devoid of a background context. Moreover, people show a strong preference for SD objects when given a choice of objects with which to mentally construct scene imagery. When deconstructing scenes, people retain significantly more SD objects than SA objects. It, therefore, seems that SD objects might enjoy a privileged role in scene construction. In the current study, we leveraged the high temporal resolution of magnetoencephalography (MEG) to compare the neural responses to SD and SA objects while they were being used to build imagined scene representations, as this has not been examined before using neuroimaging. On each trial, participants gradually built a scene image from three successive auditorily-presented object descriptions and an imagined 3D space. We then examined the neural dynamics associated with the points during scene construction when either SD or SA objects were being imagined. We found that SD objects elicited theta changes relative to SA objects in two brain regions, the right ventromedial prefrontal cortex (vmPFC) and the right superior temporal gyrus (STG). Furthermore, using dynamic causal modeling, we observed that the vmPFC drove STG activity. These findings may indicate that SD objects serve to activate schematic and conceptual knowledge in vmPFC and STG upon which scene representations are then built.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anna M Monk; Marshall A Dalton; Gareth R Barnes; Eleanor A Maguire

The role of hippocampal–ventromedial prefrontal cortex neural dynamics in building mental representations Journal Article

Journal of Cognitive Neuroscience, 33 (1), pp. 89–103, 2020.

@article{Monk2020a,
title = {The role of hippocampal–ventromedial prefrontal cortex neural dynamics in building mental representations},
author = {Anna M Monk and Marshall A Dalton and Gareth R Barnes and Eleanor A Maguire},
doi = {10.1162/jocn_a_01634},
year = {2020},
date = {2020-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {33},
number = {1},
pages = {89--103},
abstract = {The hippocampus and ventromedial prefrontal cortex (vmPFC) play key roles in numerous cognitive domains including mind-wandering, episodic memory, and imagining the future. Perspectives differ on precisely how they support these diverse functions, but there is general agreement that it involves constructing representations composed of numerous elements. Visual scenes have been deployed extensively in cognitive neuroscience because they are paradigmatic multielement stimuli. However, it remains unclear whether scenes, rather than other types of multifeature stimuli, preferentially engage hippocampus and vmPFC. Here, we leveraged the high temporal resolution of magnetoencephalography to test participants as they gradually built scene imagery from three successive auditorily presented object descriptions and an imagined 3-D space. This was contrasted with constructing mental images of nonscene arrays that were composed of three objects and an imagined 2-D space. The scene and array stimuli were, therefore, highly matched, and this paradigm permitted a closer examination of step-by-step mental construction than has been undertaken previously. We observed modulation of theta power in our two regions of interest, anterior hippocampus during the initial stage and vmPFC during the first two stages, of scene relative to array construction. Moreover, the scene-specific anterior hippocampal activity during the first construction stage was driven by the vmPFC, with mutual entrainment between the two brain regions thereafter. These findings suggest that hippocampal and vmPFC neural activity is especially tuned to scene representations during the earliest stage of their formation, with implications for theories of how these brain areas enable cognitive functions such as episodic memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christina Mühlberger; Johannes Klackl; Sandra Sittenthaler; Eva Jonas

The approach-motivational nature of reactance-Evidence from asymmetrical frontal cortical activation Journal Article

Motivation Science, 6 (3), pp. 203–220, 2020.

@article{Muehlberger2020,
title = {The approach-motivational nature of reactance-Evidence from asymmetrical frontal cortical activation},
author = {Christina Mühlberger and Johannes Klackl and Sandra Sittenthaler and Eva Jonas},
doi = {10.1037/mot0000152},
year = {2020},
date = {2020-01-01},
journal = {Motivation Science},
volume = {6},
number = {3},
pages = {203--220},
abstract = {Research has demonstrated that freedom restrictions evoke psychological reactance: a strong motivation to take action to regain the threatened freedom. We hypothesized that the underlying motivational state of reactance is approach-related. We used either a behavioral measure (line bisection task) or electroencephalography to assess relative left frontal brain activation, an indicator of approach motivation. We found increased approach motivation following imagined (Experiment 1), remembered (Experiment 2), and induced (Experiment 3) freedom threats. The results additionally revealed that only a self-experienced freedom threat and not a vicarious freedom threat resulted in approach motivation. Overall, the findings suggest that reactance is approach-motivational.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Taihei Ninomiya; Atsushi Noritake; Kenta Kobayashi; Masaki Isoda

A causal role for frontal cortico-cortical coordination in social action monitoring Journal Article

Nature Communications, 11, pp. 1–15, 2020.

@article{Ninomiya2020,
title = {A causal role for frontal cortico-cortical coordination in social action monitoring},
author = {Taihei Ninomiya and Atsushi Noritake and Kenta Kobayashi and Masaki Isoda},
doi = {10.1038/s41467-020-19026-y},
year = {2020},
date = {2020-01-01},
journal = {Nature Communications},
volume = {11},
pages = {1--15},
publisher = {Springer US},
abstract = {Decision-making via monitoring others' actions is a cornerstone of interpersonal exchanges. Although the ventral premotor cortex (PMv) and the medial prefrontal cortex (MPFC) are cortical nodes in social brain networks, the two areas are rarely concurrently active in neuroimaging, inviting the hypothesis that they are functionally independent. Here we show in macaques that the ability of the MPFC to monitor others' actions depends on input from the PMv. We found that delta-band coherence between the two areas emerged during action execution and action observation. Information flow especially in the delta band increased from the PMv to the MPFC as the biological nature of observed actions increased. Furthermore, selective blockade of the PMv-to-MPFC pathway using a double viral vector infection technique impaired the processing of observed, but not executed, actions. These findings demonstrate that coordinated activity in the PMv-to-MPFC pathway has a causal role in social action monitoring.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

José P Ossandón; Peter König; Tobias Heed

No evidence for a role of spatially modulated α-band activity in tactile remapping and short-latency, overt orienting behavior Journal Article

Journal of Neuroscience, 40 (47), pp. 9088–9102, 2020.

@article{Ossandon2020,
title = {No evidence for a role of spatially modulated α-band activity in tactile remapping and short-latency, overt orienting behavior},
author = {José P Ossandón and Peter König and Tobias Heed},
doi = {10.1523/JNEUROSCI.0581-19.2020},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
volume = {40},
number = {47},
pages = {9088--9102},
abstract = {Oscillatory α-band activity is commonly associated with spatial attention and multisensory prioritization. It has also been suggested to reflect the automatic transformation of tactile stimuli from a skin-based, somatotopic reference frame into an external one. Previous research has not convincingly separated these two possible roles of α-band activity. Previous experimental paradigms have used artificially long delays between tactile stimuli and behavioral responses to aid relating oscillatory activity to these different events. However, this strategy potentially blurs the temporal relationship of α-band activity relative to behavioral indicators of tactile-spatial transformations. Here, we assessed α-band modulation with massive univariate deconvolution, an analysis approach that disentangles brain signals overlapping in time and space. Thirty-one male and female human participants performed a delay-free, visual search task in which saccade behavior was unrestricted. A tactile cue to uncrossed or crossed hands was either informative or uninformative about visual target location. α-Band suppression following tactile stimulation was lateralized relative to the stimulated hand over central-parietal electrodes but relative to its external location over parieto-occipital electrodes. α-Band suppression reflected external touch location only after informative cues, suggesting that posterior α-band lateralization does not index automatic tactile transformation. Moreover, α-band suppression occurred at the time of, or after, the production of the saccades guided by tactile stimulation. These findings challenge the idea that α-band activity is directly involved in tactile-spatial transformation and suggest instead that it reflects delayed, supramodal processes related to attentional reorienting.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nick B Pandža; Ian Phillips; Valerie P Karuzis; Polly O'Rourke; Stefanie E Kuchinsky

Neurostimulation and pupillometry: New directions for learning and research in applied linguistics Journal Article

Annual Review of Applied Linguistics, 40, pp. 56–77, 2020.

@article{Pandza2020,
title = {Neurostimulation and pupillometry: New directions for learning and research in applied linguistics},
author = {Nick B Pand{ž}a and Ian Phillips and Valerie P Karuzis and Polly O'Rourke and Stefanie E Kuchinsky},
doi = {10.1017/S0267190520000069},
year = {2020},
date = {2020-01-01},
journal = {Annual Review of Applied Linguistics},
volume = {40},
pages = {56--77},
abstract = {This paper begins by discussing new trends in the use of neurostimulation techniques in cognitive science and learning research, as well as the nascent research on their application in second language learning. To illustrate this, an experiment designed to investigate the impact of transcutaneous vagus nerve stimulation (tVNS), which is delivered via earbuds, on how learners process and learn Mandarin tones is reported. Pupillometry, which is an index of cognitive effort, is explained and illustrated as one way to assess the impact of tVNS. Participants in the study were native English speakers, naïve to tone languages, pseudorandomly assigned to active or control conditions, while balancing for nonlinguistic pitch ability and musical experience. Their performance after tVNS was assessed using a range of more traditional language outcome measures, including accuracy and reaction times from lexical recognition and recall tasks and was triangulated with pupillometry during word-learning to help understand the mechanism through which tVNS operates. Findings are discussed in light of the literatures on lexical tone learning, cognitive effort, and neurostimulation, including specific benefits for learners of tone languages. Recommendations are made for future work on the increasingly popular area of neurostimulation for the field of applied linguistics in the 40th anniversary issue of ARAL.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hame Park; Christoph Kayser

The neurophysiological basis of the trial-wise and cumulative ventriloquism aftereffects Journal Article

Journal of Neuroscience, 2020.

@article{Park2020,
title = {The neurophysiological basis of the trial-wise and cumulative ventriloquism aftereffects},
author = {Hame Park and Christoph Kayser},
doi = {10.1523/jneurosci.2091-20.2020},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
abstract = {Our senses often receive conflicting multisensory information, which our brain reconciles by adaptive recalibration. A classic example is the ventriloquism aftereffect, which emerges following both cumulative (long-term) and trial-wise exposure to spatially discrepant multisensory stimuli. Despite the importance of such adaptive mechanisms for interacting with environments that change over multiple time scales, it remains debated whether the ventriloquism aftereffects observed following trial-wise and cumulative exposure arise from the same neurophysiological substrate. We address this question by probing electroencephalography recordings from healthy humans (both sexes) for processes predictive of the aftereffect biases following the exposure to spatially offset audio-visual stimuli. Our results support the hypothesis that discrepant multisensory evidence shapes aftereffects on distinct time scales via common neurophysiological processes reflecting sensory inference and memory in parietal-occipital regions, while the cumulative exposure to consistent discrepancies additionally recruits prefrontal processes. During the subsequent unisensory trial, both trial-wise and cumulative exposure bias the encoding of the acoustic information, but do so distinctly. Our results posit a central role of parietal regions in shaping multisensory spatial recalibration, suggest that frontal regions consolidate the behavioral bias for persistent multisensory discrepancies, but also show that the trial-wise and cumulative exposure bias sound position encoding via distinct neurophysiological processes. SIGNIFICANCE STATEMENT: Our brain easily reconciles conflicting multisensory information, such as seeing an actress on screen while hearing her voice over headphones. These adaptive mechanisms exert a persistent influence on the perception of subsequent unisensory stimuli, known as the ventriloquism aftereffect. While this aftereffect emerges following trial-wise or cumulative exposure to multisensory discrepancies, it remained unclear whether both arise from a common neural substrate. We here rephrase this hypothesis using human electroencephalography recordings. Our data suggest that parietal regions involved in multisensory and spatial memory mediate the aftereffect following both trial-wise and cumulative adaptation, but also show that additional and distinct processes are involved in consolidating and implementing the aftereffect following prolonged exposure.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christian Pfeiffer; Nora Hollenstein; Ce Zhang; Nicolas Langer

Neural dynamics of sentiment processing during naturalistic sentence reading Journal Article

NeuroImage, 218, pp. 1–15, 2020.

@article{Pfeiffer2020,
title = {Neural dynamics of sentiment processing during naturalistic sentence reading},
author = {Christian Pfeiffer and Nora Hollenstein and Ce Zhang and Nicolas Langer},
doi = {10.1016/j.neuroimage.2020.116934},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {218},
pages = {1--15},
publisher = {The Authors},
abstract = {When we read, our eyes move through the text in a series of fixations and high-velocity saccades to extract visual information. This process allows the brain to obtain meaning, e.g., about sentiment, or the emotional valence, expressed in the written text. How exactly the brain extracts the sentiment of single words during naturalistic reading is largely unknown. This is due to the challenges of naturalistic imaging, which has previously led researchers to employ highly controlled, timed word-by-word presentations of custom reading materials that lack ecological validity. Here, we aimed to assess the electrical neural correlates of word sentiment processing during naturalistic reading of English sentences. We used a publicly available dataset of simultaneous electroencephalography (EEG), eye-tracking recordings, and word-level semantic annotations from 7129 words in 400 sentences (Zurich Cognitive Language Processing Corpus; Hollenstein et al., 2018). We computed fixation-related potentials (FRPs), which are evoked electrical responses time-locked to the onset of fixations. A general linear mixed model analysis of FRPs cleaned from visual- and motor-evoked activity showed a topographical difference between the positive and negative sentiment condition in the 224–304 ms interval after fixation onset in left-central and right-posterior electrode clusters. An additional analysis that included word-, phrase-, and sentence-level sentiment predictors showed the same FRP differences for the word-level sentiment, but no additional FRP differences for phrase- and sentence-level sentiment. Furthermore, decoding analysis that classified word sentiment (positive or negative) from sentiment-matched 40-trial average FRPs showed a 0.60 average accuracy (95% confidence interval: [0.58, 0.61]). Control analyses ruled out that these results were based on differences in eye movements or linguistic features other than word sentiment. Our results extend previous research by showing that the emotional valence of lexico-semantic stimuli evokes a fast electrical neural response upon word fixation during naturalistic reading. These results provide an important step to identify the neural processes of lexico-semantic processing in ecologically valid conditions and can serve to improve computer algorithms for natural language processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Reuben Rideaux; Elizabeth Michael; Andrew E Welchman

Adaptation to binocular anticorrelation results in increased neural excitability Journal Article

Journal of Cognitive Neuroscience, 32 (1), pp. 100–110, 2020.

@article{Rideaux2020,
title = {Adaptation to binocular anticorrelation results in increased neural excitability},
author = {Reuben Rideaux and Elizabeth Michael and Andrew E Welchman},
doi = {10.1101/549949},
year = {2020},
date = {2020-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {32},
number = {1},
pages = {100--110},
abstract = {Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Many neurons appear tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause, i.e., establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, many binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioural evidence supporting the existence of these neurons (Cumming & Parker, 1997; Janssen, Vogels, Liu, & Orban, 2003; Katyal, Vergeer, He, He, & Engel, 2018; Kingdom, Jennings, & Georgeson, 2018; Tsao, Conway, & Livingstone, 2003), their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers' steady-state visually evoked potentials (SSVEPs) in response to change in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger SSVEPs, while adaptation to correlated RDS has no effect. These results are consistent with recent theoretical work suggesting ‘what not’ neurons play a suppressive role in supporting stereopsis (Goncalves & Welchman, 2017); that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression resulting in increased neural excitability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Andre Roelke; Christian Vorstius; Ralph Radach; Markus J Hofmann

Fixation-related NIRS indexes retinotopic occipital processing of parafoveal preview during natural reading Journal Article

NeuroImage, 215, pp. 1–11, 2020.


@article{Roelke2020,
title = {Fixation-related NIRS indexes retinotopic occipital processing of parafoveal preview during natural reading},
author = {Andre Roelke and Christian Vorstius and Ralph Radach and Markus J Hofmann},
doi = {10.1016/j.neuroimage.2020.116823},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {215},
pages = {1--11},
publisher = {Elsevier Ltd},
abstract = {While word frequency and predictability effects have been examined extensively, any evidence on interactive effects as well as parafoveal influences during whole sentence reading remains inconsistent and elusive. Novel neuroimaging methods utilize eye movement data to account for the hemodynamic responses of very short events such as fixations during natural reading. In this study, we used the rapid sampling frequency of near-infrared spectroscopy (NIRS) to investigate neural responses in the occipital and orbitofrontal cortex to word frequency and predictability. We observed increased activation in the right ventral occipital cortex when the fixated word N was of low frequency, which we attribute to an enhanced cost during saccade planning. Importantly, unpredictable (in contrast to predictable) low frequency words increased the activity in the left dorsal occipital cortex at the fixation of the preceding word N-1, presumably due to an upcoming breach of top-down modulated expectation. Opposite to studies that utilized a serial presentation of words (e.g. Hofmann et al., 2014), we did not find such an interaction in the orbitofrontal cortex, implying that top-down timing of cognitive subprocesses is not required during natural reading. We discuss the implications of an interactive parafoveal-on-foveal effect for current models of eye movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Isabelle A Rosenthal; Shridhar R Singh; Katherine L Hermann; Dimitrios Pantazis; Bevil R Conway

Color space geometry uncovered with magnetoencephalography Journal Article

Current Biology, 31, pp. 1–18, 2020.


@article{Rosenthal2020,
title = {Color space geometry uncovered with magnetoencephalography},
author = {Isabelle A Rosenthal and Shridhar R Singh and Katherine L Hermann and Dimitrios Pantazis and Bevil R Conway},
doi = {10.1016/j.cub.2020.10.062},
year = {2020},
date = {2020-01-01},
journal = {Current Biology},
volume = {31},
pages = {1--18},
publisher = {Elsevier Ltd.},
abstract = {The geometry that describes the relationship among colors, and the neural mechanisms that support color vision, are unsettled. Here, we use multivariate analyses of measurements of brain activity obtained with magnetoencephalography to reverse-engineer a geometry of the neural representation of color space. The analyses depend upon determining similarity relationships among the spatial patterns of neural responses to different colors and assessing how these relationships change in time. We evaluate the approach by relating the results to universal patterns in color naming. Two prominent patterns of color naming could be accounted for by the decoding results: the greater precision in naming warm colors compared to cool colors evident by an interaction of hue and lightness, and the preeminence among colors of reddish hues. Additional experiments showed that classifiers trained on responses to color words could decode color from data obtained using colored stimuli, but only at relatively long delays after stimulus onset. These results provide evidence that perceptual representations can give rise to semantic representations, but not the reverse. Taken together, the results uncover a dynamic geometry that provides neural correlates for color appearance and generates new hypotheses about the structure of color space.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Steven W Savage; Douglas D Potter; Benjamin W Tatler

The effects of cognitive distraction on behavioural, oculomotor and electrophysiological metrics during a driving hazard perception task Journal Article

Accident Analysis and Prevention, 138, pp. 1–11, 2020.


@article{Savage2020,
title = {The effects of cognitive distraction on behavioural, oculomotor and electrophysiological metrics during a driving hazard perception task},
author = {Steven W Savage and Douglas D Potter and Benjamin W Tatler},
doi = {10.1016/j.aap.2020.105469},
year = {2020},
date = {2020-01-01},
journal = {Accident Analysis and Prevention},
volume = {138},
pages = {1--11},
publisher = {Elsevier},
abstract = {Previous research has demonstrated that the distraction caused by holding a mobile telephone conversation is not limited to the period of the actual conversation (Haigney, 1995; Redelmeier & Tibshirani, 1997; Savage et al., 2013). In a prior study we identified potential eye movement and EEG markers of cognitive distraction during driving hazard perception. However, the extent to which these markers are affected by the demands of the hazard perception task is unclear. Therefore, in the current study we assessed the effects of secondary cognitive task demand on eye movement and EEG metrics separately for periods prior to, during and after the hazard was visible. We found that when no hazard was present (prior and post hazard windows), distraction resulted in changes to various elements of saccadic eye movements. However, when the target was present, distraction did not affect eye movements. We have previously found evidence that distraction resulted in an overall decrease in theta band output at occipital sites of the brain. This was interpreted as evidence that distraction results in a reduction in visual processing. The current study confirmed this by examining the effects of distraction on the lambda response component of subjects' eye fixation related potentials (EFRPs). Furthermore, we demonstrated that although detections of hazards were not affected by distraction, both eye movement and EEG metrics prior to the onset of the hazard were sensitive to changes in cognitive workload. This suggests that changes to specific aspects of the saccadic eye movement system could act as unobtrusive markers of distraction even prior to a breakdown in driving performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Christoph Schneider; Michael Pereira; Luca Tonin; José R del Millán

Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task Journal Article

Brain Topography, 33 (1), pp. 48–59, 2020.


@article{Schneider2020,
title = {Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task},
author = {Christoph Schneider and Michael Pereira and Luca Tonin and José R del Millán},
doi = {10.1007/s10548-019-00725-9},
year = {2020},
date = {2020-01-01},
journal = {Brain Topography},
volume = {33},
number = {1},
pages = {48--59},
publisher = {Springer US},
abstract = {Visual attention can be spatially oriented, even in the absence of saccadic eye-movements, to facilitate the processing of incoming visual information. One behavioral proxy for this so-called covert visuospatial attention (CVSA) is the validity effect (VE): the reduction in reaction time (RT) to visual stimuli at attended locations and the increase in RT to stimuli at unattended locations. At the electrophysiological level, one correlate of CVSA is the lateralization in the occipital $\alpha$-band oscillations, resulting from $\alpha$-power increases ipsilateral and decreases contralateral to the attended hemifield. While this $\alpha$-band lateralization has been considerably studied using electroencephalography (EEG) or magnetoencephalography (MEG), little is known about whether it can be trained to improve CVSA behaviorally. In this cross-over sham-controlled study we used continuous real-time feedback of the occipital $\alpha$-lateralization to modulate behavioral and electrophysiological markers of covert attention. Fourteen subjects performed a cued CVSA task, involving fast responses to covertly attended stimuli. During real-time feedback runs, trials extended in time if subjects reached states of high $\alpha$-lateralization. Crucially, the ongoing $\alpha$-lateralization was fed back to the subject by changing the color of the attended stimulus. We hypothesized that this ability to self-monitor lapses in CVSA and thus being able to refocus attention accordingly would lead to improved CVSA performance during subsequent testing. We probed the effect of the intervention by evaluating the pre-post changes in the VE and the $\alpha$-lateralization. Behaviorally, results showed a significant interaction between feedback (experimental–sham) and time (pre-post) for the validity effect, with an increase in performance only for the experimental condition. We did not find corresponding pre-post changes in the $\alpha$-lateralization.
Our findings suggest that EEG-based real-time feedback is a promising tool to enhance the level of covert visuospatial attention, especially with respect to behavioral changes. This opens up the exploration of applications of the proposed training method for the cognitive rehabilitation of attentional disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Eelke Spaak; Floris P de Lange

Hippocampal and prefrontal theta-band mechanisms underpin implicit spatial context learning Journal Article

Journal of Neuroscience, 40 (1), pp. 191–202, 2020.


@article{Spaak2020,
title = {Hippocampal and prefrontal theta-band mechanisms underpin implicit spatial context learning},
author = {Eelke Spaak and Floris P de Lange},
doi = {10.1523/JNEUROSCI.1660-19.2019},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
volume = {40},
number = {1},
pages = {191--202},
abstract = {Humans can rapidly and seemingly implicitly learn to predict typical locations of relevant items when those items are encountered in familiar spatial contexts. Two important questions remain, however, concerning this type of learning: (1) which neural structures and mechanisms are involved in acquiring and exploiting such contextual knowledge? (2) Is this type of learning truly implicit and unconscious? We now answer both these questions after closely examining behavior and recording neural activity using MEG while observers (male and female) were acquiring and exploiting statistical regularities. Computational modeling of behavioral data suggested that, after repeated exposures to a spatial context, participants' behavior was marked by an abrupt switch to an exploitation strategy of the learnt regularities. MEG recordings showed that hippocampus and prefrontal cortex (PFC) were involved in the task and furthermore revealed a striking dissociation: only the initial learning phase was associated with hippocampal theta band activity, while the subsequent exploitation phase showed a shift in theta band activity to the PFC. Intriguingly, the behavioral benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the expectations acquired. Together, these findings demonstrate that (1a) hippocampus and PFC play complementary roles in the implicit, unconscious learning and exploitation of spatial statistical regularities; (1b) these mechanisms are implemented in the theta frequency band; and (2) contextual knowledge can indeed be acquired unconsciously, and awareness of such knowledge can even interfere with the exploitation thereof.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Davide Tabarelli; Christian Keitel; Joachim Gross; Daniel Baldauf

Spatial attention enhances cortical tracking of quasi-rhythmic visual stimuli Journal Article

NeuroImage, 208, pp. 1–18, 2020.


@article{Tabarelli2020,
title = {Spatial attention enhances cortical tracking of quasi-rhythmic visual stimuli},
author = {Davide Tabarelli and Christian Keitel and Joachim Gross and Daniel Baldauf},
doi = {10.1016/j.neuroimage.2019.116444},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {208},
pages = {1--18},
publisher = {Elsevier Ltd},
abstract = {Successfully interpreting and navigating our natural visual environment requires us to track its dynamics constantly. Additionally, we focus our attention on behaviorally relevant stimuli to enhance their neural processing. Little is known, however, about how sustained attention affects the ongoing tracking of stimuli with rich natural temporal dynamics. Here, we used MRI-informed source reconstructions of magnetoencephalography (MEG) data to map to what extent various cortical areas track concurrent continuous quasi-rhythmic visual stimulation. Further, we tested how top-down visuo-spatial attention influences this tracking process. Our bilaterally presented quasi-rhythmic stimuli covered a dynamic range of 4–20 Hz, subdivided into three distinct bands. As an experimental control, we also included strictly rhythmic stimulation (10 vs 12 Hz). Using a spectral measure of brain-stimulus coupling, we were able to track the neural processing of left vs. right stimuli independently, even while fluctuating within the same frequency range. The fidelity of neural tracking depended on the stimulation frequencies, decreasing for higher frequency bands. Both attended and non-attended stimuli were tracked beyond early visual cortices, in ventral and dorsal streams depending on the stimulus frequency. In general, tracking improved with the deployment of visuo-spatial attention to the stimulus location. Our results provide new insights into how human visual cortices process concurrent dynamic stimuli and provide a potential mechanism – namely increasing the temporal precision of tracking – for boosting the neural representation of attended input.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


2019

Maria C Romero; Marco Davare; Marcelo Armendariz; Peter Janssen

Neural effects of transcranial magnetic stimulation at the single-cell level Journal Article

Nature Communications, 10, pp. 2642, 2019.


@article{Romero2019,
title = {Neural effects of transcranial magnetic stimulation at the single-cell level},
author = {Maria C Romero and Marco Davare and Marcelo Armendariz and Peter Janssen},
doi = {10.1038/s41467-019-10638-7},
year = {2019},
date = {2019-12-01},
journal = {Nature Communications},
volume = {10},
pages = {2642},
publisher = {Nature Publishing Group},
abstract = {Transcranial magnetic stimulation (TMS) can non-invasively modulate neural activity in humans. Despite three decades of research, the spatial extent of the cortical area activated by TMS is still controversial. Moreover, how TMS interacts with task-related activity during motor behavior is unknown. Here, we applied single-pulse TMS over macaque parietal cortex while recording single-unit activity at various distances from the center of stimulation during grasping. The spatial extent of TMS-induced activation is remarkably restricted, affecting the spiking activity of single neurons in an area of cortex measuring less than 2 mm in diameter. In task-related neurons, TMS evokes a transient excitation followed by reduced activity, paralleled by a significantly longer grasping time. Furthermore, TMS-induced activity and task-related activity do not summate in single neurons. These results furnish crucial experimental evidence for the neural effects of TMS at the single-cell level and uncover the neural underpinnings of behavioral effects of TMS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ying Joey Zhou; Alexis Pérez-Bellido; Saskia Haegens; Floris P de Lange

Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.


@article{Zhou2019c,
title = {Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study},
author = {Ying Joey Zhou and Alexis Pérez-Bellido and Saskia Haegens and Floris P de Lange},
doi = {10.1162/jocn_a_01511},
year = {2019},
date = {2019-12-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Perceptual expectations can change how a visual stimulus is perceived. Recent studies have shown mixed results in terms of whether expectations modulate sensory representations. Here, we used a statistical learning paradigm to study the temporal characteristics of perceptual expectations. We presented participants with pairs of object images organized in a predictive manner and then recorded their brain activity with magnetoencephalography while they viewed expected and unexpected image pairs on the subsequent day. We observed stronger alpha-band (7–14 Hz) activity in response to unexpected compared with expected object images. Specifically, the alpha-band modulation occurred as early as the onset of the stimuli and was most pronounced in left occipito-temporal cortex. Given that the differential response to expected versus unexpected stimuli occurred in sensory regions early in time, our results suggest that expectations modulate perceptual decision-making by changing the sensory response elicited by the stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Isabel M Vanegas; Annabelle Blangero; James E Galvin; Alessandro Di Rocco; Angelo Quartarone; Felice M Ghilardi; Simon P Kelly

Altered dynamics of visual contextual interactions in Parkinson's disease Journal Article

npj Parkinson's Disease, 5 (13), pp. 1–8, 2019.

@article{Vanegas2019,
title = {Altered dynamics of visual contextual interactions in Parkinson's disease},
author = {Isabel M Vanegas and Annabelle Blangero and James E Galvin and Alessandro {Di Rocco} and Angelo Quartarone and Felice M Ghilardi and Simon P Kelly},
doi = {10.1038/s41531-019-0085-5},
year = {2019},
date = {2019-12-01},
journal = {npj Parkinson's Disease},
volume = {5},
number = {13},
pages = {1--8},
publisher = {Nature Publishing Group},
abstract = {Over the last decades, psychophysical and electrophysiological studies in patients and animal models of Parkinson's disease (PD) have consistently revealed a number of visual abnormalities. In particular, specific alterations of contrast sensitivity curves, electroretinogram (ERG), and visual-evoked potentials (VEP) have been attributed to dopaminergic retinal depletion. However, fundamental mechanisms of cortical visual processing, such as normalization or “gain control” computations, have not yet been examined in PD patients. Here, we measured electrophysiological indices of gain control in both space (surround suppression) and time (sensory adaptation) in PD patients based on steady-state VEP (ssVEP). Compared with controls, patients exhibited a significantly higher initial ssVEP amplitude that quickly decayed over time, and greater relative suppression of ssVEP amplitude as a function of surrounding stimulus contrast. Meanwhile, EEG frequency spectra were broadly elevated in patients relative to controls. Thus, contrary to what might be expected given the reduced contrast sensitivity often reported in PD, visual neural responses are not weaker; rather, they are initially larger but undergo an exaggerated degree of spatial and temporal gain control and are embedded within a greater background noise level. These differences may reflect cortical mechanisms that compensate for dysfunctional center-surround interactions at the retinal level.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Praghajieeth Raajhen Santhana Gopalan; Otto Loberg; Jarmo A Hämäläinen; Paavo H T Leppänen

Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test Journal Article

Scientific Reports, 9 , pp. 2940, 2019.

@article{Gopalan2019,
title = {Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test},
author = {Praghajieeth Raajhen Santhana Gopalan and Otto Loberg and Jarmo A Hämäläinen and Paavo H T Leppänen},
doi = {10.1038/s41598-018-36947-3},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {2940},
publisher = {Nature Publishing Group},
abstract = {Attention-related processes include three functional sub-components: alerting, orienting, and inhibition. We investigated these components using EEG-based, brain event-related potentials and their neuronal source activations during the Attention Network Test in typically developing school-aged children. Participants were asked to detect the swimming direction of the centre fish in a group of five fish. The target stimulus was either preceded by a cue (centre, double, or spatial) or no cue. An EEG using 128 electrodes was recorded for 83 children aged 12–13 years. RTs showed significant effects across all three sub-components of attention. Alerting and orienting (responses to double vs non-cued target stimulus and spatially vs centre-cued target stimulus, respectively) resulted in larger N1 amplitude, whereas inhibition (responses to incongruent vs congruent target stimulus) resulted in larger P3 amplitude. Neuronal source activation for the alerting effect was localized in the right anterior temporal and bilateral occipital lobes, for the orienting effect bilaterally in the occipital lobe, and for the inhibition effect in the medial prefrontal cortex and left anterior temporal lobe. Neuronal sources of ERPs revealed that sub-processes related to the attention network are different in children as compared to earlier adult fMRI studies, which was not evident from scalp ERPs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Moreno I Coco; Antje Nuthmann; Olaf Dimigen

Fixation-related brain potentials during semantic integration of object–scene information Journal Article

Journal of Cognitive Neuroscience, 32 (4), pp. 571–589, 2019.

@article{Coco2019,
title = {Fixation-related brain potentials during semantic integration of object–scene information},
author = {Moreno I Coco and Antje Nuthmann and Olaf Dimigen},
doi = {10.1162/jocn_a_01504},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
volume = {32},
number = {4},
pages = {571--589},
publisher = {MIT Press - Journals},
abstract = {In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mariya E Manahova; Eelke Spaak; Floris P de Lange

Familiarity increases processing speed in the visual system Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.

@article{Manahova2019,
title = {Familiarity increases processing speed in the visual system},
author = {Mariya E Manahova and Eelke Spaak and Floris P de Lange},
doi = {10.1162/jocn_a_01507},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Familiarity with a stimulus leads to an attenuated neural response to the stimulus. Alongside this attenuation, recent studies have also observed a truncation of stimulus-evoked activity for familiar visual input. One proposed function of this truncation is to rapidly put neurons in a state of readiness to respond to new input. Here, we examined this hypothesis by presenting human participants with target stimuli that were embedded in rapid streams of familiar or novel distractor stimuli at different speeds of presentation, while recording brain activity using magnetoencephalography and measuring behavioral performance. We investigated the temporal and spatial dynamics of signal truncation and whether this phenomenon bears relationship to participants' ability to categorize target items within a visual stream. Behaviorally, target categorization performance was markedly better when the target was embedded within familiar distractors, and this benefit became more pronounced with increasing speed of presentation. Familiar distractors showed a truncation of neural activity in the visual system. This truncation was strongest for the fastest presentation speeds and peaked in progressively more anterior cortical regions as presentation speeds became slower. Moreover, the neural response evoked by the target was stronger when this target was preceded by familiar distractors. Taken together, these findings demonstrate that item familiarity results in a truncated neural response, is associated with stronger processing of relevant target information, and leads to superior perceptual performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christoph Huber-Huber; Antimo Buonocore; Olaf Dimigen; Clayton Hickey; David Melcher

The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing Journal Article

NeuroImage, 200 , pp. 344–362, 2019.

@article{HuberHuber2019,
title = {The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing},
author = {Christoph Huber-Huber and Antimo Buonocore and Olaf Dimigen and Clayton Hickey and David Melcher},
doi = {10.1016/j.neuroimage.2019.06.059},
year = {2019},
date = {2019-10-01},
journal = {NeuroImage},
volume = {200},
pages = {344--362},
publisher = {Academic Press Inc.},
abstract = {The world appears stable despite saccadic eye-movements. One possible explanation for this phenomenon is that the visual system predicts upcoming input across saccadic eye-movements based on peripheral preview of the saccadic target. We tested this idea using concurrent electroencephalography (EEG) and eye-tracking. Participants made cued saccades to peripheral upright or inverted face stimuli that changed orientation (invalid preview) or maintained orientation (valid preview) while the saccade was completed. Experiment 1 demonstrated better discrimination performance and a reduced fixation-locked N170 component (fN170) with valid than with invalid preview, demonstrating integration of pre- and post-saccadic information. Moreover, the early fixation-related potentials (FRP) showed a preview face inversion effect suggesting that some pre-saccadic input was represented in the brain until around 170 ms post fixation-onset. Experiment 2 replicated Experiment 1 and manipulated the proportion of valid and invalid trials to test whether the preview effect reflects context-based prediction across trials. A whole-scalp Bayes factor analysis showed that this manipulation did not alter the fN170 preview effect but did influence the face inversion effect before the saccade. The pre-saccadic inversion effect declined earlier in the mostly invalid block than in the mostly valid block, which is consistent with the notion of pre-saccadic expectations. In addition, in both studies, we found strong evidence for an interaction between the pre-saccadic preview stimulus and the post-saccadic target as early as 50 ms (Experiment 2) or 90 ms (Experiment 1) into the new fixation. These findings suggest that visual stability may involve three temporal stages: prediction about the saccadic target, integration of pre-saccadic and post-saccadic information at around 50-90 ms post fixation onset, and post-saccadic facilitation of rapid categorization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Florian Sandhaeger; Constantin von Nicolai; Earl K Miller; Markus Siegel

Monkey EEG links neuronal color and motion information across species and scales Journal Article

eLife, 8 , pp. 1–21, 2019.

@article{Sandhaeger2019,
title = {Monkey EEG links neuronal color and motion information across species and scales},
author = {Florian Sandhaeger and Constantin von Nicolai and Earl K Miller and Markus Siegel},
doi = {10.7554/eLife.45645},
year = {2019},
date = {2019-07-01},
journal = {eLife},
volume = {8},
pages = {1--21},
publisher = {eLife Sciences Publications, Ltd},
abstract = {It remains challenging to relate EEG and MEG to underlying circuit processes and comparable experiments on both spatial scales are rare. To close this gap between invasive and non-invasive electrophysiology we developed and recorded human-comparable EEG in macaque monkeys during visual stimulation with colored dynamic random dot patterns. Furthermore, we performed simultaneous microelectrode recordings from 6 areas of macaque cortex and human MEG. Motion direction and color information were accessible in all signals. Tuning of the non-invasive signals was similar to V4 and IT, but not to dorsal and frontal areas. Thus, MEG and EEG were dominated by early visual and ventral stream sources. Source level analysis revealed corresponding information and latency gradients across cortex. We show how information-based methods and monkey EEG can identify analogous properties of visual processing in signals spanning spatial scales from single units to MEG – a valuable framework for relating human and animal studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sebastian Michelmann; Bernhard P Staresina; Howard Bowman; Simon Hanslmayr

Speed of time-compressed forward replay flexibly changes in human episodic memory Journal Article

Nature Human Behaviour, 3 (2), pp. 143–154, 2019.

@article{Michelmann2019,
title = {Speed of time-compressed forward replay flexibly changes in human episodic memory},
author = {Sebastian Michelmann and Bernhard P Staresina and Howard Bowman and Simon Hanslmayr},
doi = {10.1038/s41562-018-0491-4},
year = {2019},
date = {2019-01-01},
journal = {Nature Human Behaviour},
volume = {3},
number = {2},
pages = {143--154},
publisher = {Springer US},
abstract = {Remembering information from continuous past episodes is a complex task [1]. On the one hand, we must be able to recall events in a highly accurate way, often including exact timings. On the other hand, we can ignore irrelevant details and skip to events of interest. Here, we track continuous episodes consisting of different subevents as they are recalled from memory. In behavioural and magnetoencephalography data, we show that memory replay is temporally compressed and proceeds in a forward direction. Neural replay is characterized by the reinstatement of temporal patterns from encoding [2,3]. These fragments of activity reappear on a compressed timescale. Herein, the replay of subevents takes longer than the transition from one subevent to another. This identifies episodic memory replay as a dynamic process in which participants replay fragments of fine-grained temporal patterns and are able to skip flexibly across subevents.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jana Annina Müller; Dorothea Wendt; Birger Kollmeier; Stefan Debener; Thomas Brand

Effect of speech rate on neural tracking of speech Journal Article

Frontiers in Psychology, 10 , pp. 1–15, 2019.

@article{Mueller2019b,
title = {Effect of speech rate on neural tracking of speech},
author = {Jana Annina Müller and Dorothea Wendt and Birger Kollmeier and Stefan Debener and Thomas Brand},
doi = {10.3389/fpsyg.2019.00449},
year = {2019},
date = {2019-01-01},
journal = {Frontiers in Psychology},
volume = {10},
pages = {1--15},
abstract = {Speech comprehension requires effort in demanding listening situations. Selective attention may be required for focusing on a specific talker in a multi-talker environment, may enhance effort by requiring additional cognitive resources, and is known to enhance the neural representation of the attended talker in the listener's neural response. The aim of the study was to investigate the relation of listening effort, as quantified by subjective effort ratings and pupil dilation, and neural speech tracking during sentence recognition. Task demands were varied using sentences with varying levels of linguistic complexity and using two different speech rates in a picture-matching paradigm with 20 normal-hearing listeners. The participants' task was to match the acoustically presented sentence with a picture presented before the acoustic stimulus. Afterwards they rated their perceived effort on a categorical effort scale. During each trial, pupil dilation (as an indicator of listening effort) and electroencephalogram (as an indicator of neural speech tracking) were recorded. Neither measure was significantly affected by linguistic complexity. However, speech rate showed a strong influence on subjectively rated effort, pupil dilation, and neural tracking. The neural tracking analysis revealed a shorter latency for faster sentences, which may reflect a neural adaptation to the rate of the input. No relation was found between neural tracking and listening effort, even though both measures were clearly influenced by speech rate. This is probably due to factors that influence both measures differently. Consequently, the amount of listening effort is not clearly represented in the neural tracking.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Judith Nicolas; Aline Bompas; Romain Bouet; Olivier Sillan; Eric Koun; Christian Urquizar; Aurélie Bidet-Caulet; Denis Pélisson

Saccadic adaptation boosts ongoing gamma activity in a subsequent visuoattentional task Journal Article

Cerebral Cortex, 29 (9), pp. 3606–3617, 2019.

@article{Nicolas2019a,
title = {Saccadic adaptation boosts ongoing gamma activity in a subsequent visuoattentional task},
author = {Judith Nicolas and Aline Bompas and Romain Bouet and Olivier Sillan and Eric Koun and Christian Urquizar and Aurélie Bidet-Caulet and Denis Pélisson},
doi = {10.1093/cercor/bhy241},
year = {2019},
date = {2019-01-01},
journal = {Cerebral Cortex},
volume = {29},
number = {9},
pages = {3606--3617},
abstract = {Attention and saccadic adaptation (SA) are critical components of visual perception, the former enhancing sensory processing of selected objects, the latter maintaining the eye movements accuracy toward them. Recent studies propelled the hypothesis of a tight functional coupling between these mechanisms, possibly due to shared neural substrates. Here, we used magnetoencephalography to investigate for the first time the neurophysiological bases of this coupling and of SA per se. We compared visual discrimination performance of 12 healthy subjects before and after SA. Eye movements and magnetic signals were recorded continuously. Analyses focused on gamma band activity (GBA) during the pretarget period of the discrimination and the saccadic tasks. We found that GBA increases after SA. This increase was found in the right hemisphere for both postadaptation saccadic and discrimination tasks. For the latter, GBA also increased in the left hemisphere. We conclude that oculomotor plasticity involves GBA modulation within an extended neural network which persists after SA, suggesting a possible role of gamma oscillations in the coupling between SA and attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Elena V Orekhova; Tatiana A Stroganova; Justin F Schneiderman; Sebastian Lundström; Bushra Riaz; Darko Sarovic; Olga V Sysoeva; Georg Brant; Christopher Gillberg; Nouchine Hadjikhani

Neural gain control measured through cortical gamma oscillations is associated with sensory sensitivity Journal Article

Human Brain Mapping, 40 (5), pp. 1583–1593, 2019.

@article{Orekhova2019,
title = {Neural gain control measured through cortical gamma oscillations is associated with sensory sensitivity},
author = {Elena V Orekhova and Tatiana A Stroganova and Justin F Schneiderman and Sebastian Lundström and Bushra Riaz and Darko Sarovic and Olga V Sysoeva and Georg Brant and Christopher Gillberg and Nouchine Hadjikhani},
doi = {10.1002/hbm.24469},
year = {2019},
date = {2019-01-01},
journal = {Human Brain Mapping},
volume = {40},
number = {5},
pages = {1583--1593},
abstract = {Gamma oscillations facilitate information processing by shaping the excitatory input/output of neuronal populations. Recent studies in humans and nonhuman primates have shown that strong excitatory drive to the visual cortex leads to suppression of induced gamma oscillations, which may reflect inhibitory-based gain control of network excitation. The efficiency of the gain control measured through gamma oscillations may in turn affect sensory sensitivity in everyday life. To test this prediction, we assessed the link between self-reported sensitivity and changes in magneto-encephalographic gamma oscillations as a function of motion velocity of high-contrast visual gratings. The induced gamma oscillations increased in frequency and decreased in power with increasing stimulation intensity. As expected, weaker suppression of the gamma response correlated with sensory hypersensitivity. Robustness of this result was confirmed by its replication in the two samples: neurotypical subjects and people with autism, who had generally elevated sensory sensitivity. We conclude that intensity-related suppression of gamma response is a promising biomarker of homeostatic control of the excitation–inhibition balance in the visual cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Aisling E O'Sullivan; Chantelle Y Lim; Edmund C Lalor

Look at me when I'm talking to you: Selective attention at a multisensory cocktail party can be decoded using stimulus reconstruction and alpha power modulations Journal Article

European Journal of Neuroscience, 50 (8), pp. 3282–3295, 2019.

@article{OSullivan2019,
title = {Look at me when I'm talking to you: Selective attention at a multisensory cocktail party can be decoded using stimulus reconstruction and alpha power modulations},
author = {Aisling E O'Sullivan and Chantelle Y Lim and Edmund C Lalor},
doi = {10.1111/ejn.14425},
year = {2019},
date = {2019-01-01},
journal = {European Journal of Neuroscience},
volume = {50},
number = {8},
pages = {3282--3295},
abstract = {Recent work using electroencephalography has applied stimulus reconstruction techniques to identify the attended speaker in a cocktail party environment. The success of these approaches has been primarily based on the ability to detect cortical tracking of the acoustic envelope at the scalp level. However, most studies have ignored the effects of visual input, which is almost always present in naturalistic scenarios. In this study, we investigated the effects of visual input on envelope-based cocktail party decoding in two multisensory cocktail party situations: (a) Congruent AV—facing the attended speaker while ignoring another speaker represented by the audio-only stream and (b) Incongruent AV (eavesdropping)—attending the audio-only speaker while looking at the unattended speaker. We trained and tested decoders for each condition separately and found that we can successfully decode attention to congruent audiovisual speech and can also decode attention when listeners were eavesdropping, i.e., looking at the face of the unattended talker. In addition to this, we found alpha power to be a reliable measure of attention to the visual speech. Using parieto-occipital alpha power, we found that we can distinguish whether subjects are attending or ignoring the speaker's face. Considering the practical applications of these methods, we demonstrate that with only six near-ear electrodes we can successfully determine the attended speech. This work extends the current framework for decoding attention to speech to more naturalistic scenarios, and in doing so provides additional neural measures which may be incorporated to improve decoding accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Davide Paoletti; Christoph Braun; Elisabeth Julie Vargo; Wieske van Zoest

Spontaneous pre-stimulus oscillatory activity shapes the way we look: A concurrent imaging and eye-movement study Journal Article

European Journal of Neuroscience, 49 , pp. 137–149, 2019.

@article{Paoletti2019,
title = {Spontaneous pre-stimulus oscillatory activity shapes the way we look: A concurrent imaging and eye-movement study},
author = {Davide Paoletti and Christoph Braun and Elisabeth Julie Vargo and Wieske van Zoest},
doi = {10.1111/ejn.14285},
year = {2019},
date = {2019-01-01},
journal = {European Journal of Neuroscience},
volume = {49},
pages = {137--149},
abstract = {Previous behavioural studies have accrued evidence that response time plays a critical role in determining whether selection is influenced by stimulus saliency or target template. In the present work, we investigated to what extent the variations in timing and consequent oculomotor controls are influenced by spontaneous variations in pre-stimulus alpha oscillations. We recorded simultaneously brain activity using magnetoencephalography (MEG) and eye movements while participants performed a visual search task. Our results show that slower saccadic reaction times were predicted by an overall stronger alpha power in the 500 ms time window preceding the stimulus onset, while weaker alpha power was a signature of faster responses. When looking separately at performance for fast and slow responses, we found evidence for two specific sources of alpha activity predicting correct versus incorrect responses. When saccades were quickly elicited, errors were predicted by stronger alpha activity in posterior areas, comprising the angular gyrus in the temporal-parietal junction (TPJ) and possibly the lateral intraparietal area (LIP). Instead, when participants were slower in responding, an increase of alpha power in frontal eye fields (FEF), supplementary eye fields (SEF) and dorsolateral pre-frontal cortex (DLPFC) predicted erroneous saccades. In other words, oculomotor accuracy in fast responses was predicted by alpha power differences in more posterior areas, while the accuracy in slow responses was predicted by alpha power differences in frontal areas, in line with the idea that these areas may be differentially related to stimulus-driven and goal-driven control of selection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Karisa B Parkington; Roxane J Itier

From eye to face: The impact of face outline, feature number, and feature saliency on the early neural response to faces Journal Article

Brain Research, 1722 , pp. 1–14, 2019.

@article{Parkington2019,
title = {From eye to face: The impact of face outline, feature number, and feature saliency on the early neural response to faces},
author = {Karisa B Parkington and Roxane J Itier},
doi = {10.1016/j.brainres.2019.146343},
year = {2019},
date = {2019-01-01},
journal = {Brain Research},
volume = {1722},
pages = {1--14},
abstract = {The LIFTED model of early face perception postulates that the face-sensitive N170 event-related potential may reflect underlying neural inhibition mechanisms which serve to regulate holistic and featural processing. It remains unclear, however, what specific factors impact these neural inhibition processes. Here, N170 peak responses were recorded whilst adults maintained fixation on a single eye using a gaze-contingent paradigm, and the presence/absence of a face outline, as well as the number and type of parafoveal features within the outline, were manipulated. N170 amplitudes and latencies were reduced when a single eye was fixated within a face outline compared to fixation on the same eye in isolation, demonstrating that the simple presence of a face outline is sufficient to elicit a shift towards a more face-like neural response. A monotonic decrease in the N170 amplitude and latency was observed with increasing numbers of parafoveal features, and the type of feature(s) present in parafovea further modulated this early face response. These results support the idea of neural inhibition exerted by parafoveal features onto the foveated feature as a function of the number, and possibly the nature, of parafoveal features. Specifically, the results suggest the use of a feature saliency framework (eyes > mouth > nose) at the neural level, such that the parafoveal eye may play a role in down-regulating the response to the other eye (in fovea) more so than the nose or the mouth. These results confirm the importance of parafoveal features and the face outline in the neural inhibition mechanism, and provide further support for a feature saliency mechanism guiding early face perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Thomas Parr; Berk M Mirza; Hayriye Cagnan; Karl J Friston

Dynamic causal modelling of active vision Journal Article

Journal of Neuroscience, 39 (32), pp. 6265–6275, 2019.

@article{Parr2019,
title = {Dynamic causal modelling of active vision},
author = {Thomas Parr and Berk M Mirza and Hayriye Cagnan and Karl J Friston},
doi = {10.1523/JNEUROSCI.2459-18.2019},
year = {2019},
date = {2019-01-01},
journal = {Journal of Neuroscience},
volume = {39},
number = {32},
pages = {6265--6275},
abstract = {In this paper, we draw from recent theoretical work on active perception, which suggests that the brain makes use of an internal (i.e., generative) model to make inferences about the causes of sensations. This view treats visual sensations as consequent on action (i.e., saccades) and implies that visual percepts must be actively constructed via a sequence of eye movements. Oculomotor control calls on a distributed set of brain sources that includes the dorsal and ventral frontoparietal (attention) networks. We argue that connections from the frontal eye fields to ventral parietal sources represent the mapping from “where” (fixation location) to information derived from “what” representations in the ventral visual stream. During scene construction, this mapping must be learned, putatively through changes in the effective connectivity of these synapses. Here, we test the hypothesis that the coupling between the dorsal frontal cortex and the right temporoparietal cortex is modulated during saccadic interrogation of a simple visual scene. Using dynamic causal modeling for magnetoencephalography with (male and female) human participants, we assess the evidence for changes in effective connectivity by comparing models that allow for this modulation with models that do not. We find strong evidence for modulation of connections between the two attention networks; namely, a disinhibition of the ventral network by its dorsal counterpart.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nathan M Petro; Nina N Thigpen; Steven Garcia; Maeve R Boylan; Andreas Keil

Pre-target alpha power predicts the speed of cued target discrimination Journal Article

NeuroImage, 189 , pp. 878–885, 2019.

@article{Petro2019,
title = {Pre-target alpha power predicts the speed of cued target discrimination},
author = {Nathan M Petro and Nina N Thigpen and Steven Garcia and Maeve R Boylan and Andreas Keil},
doi = {10.1016/j.neuroimage.2019.01.066},
year = {2019},
date = {2019-01-01},
journal = {NeuroImage},
volume = {189},
pages = {878--885},
abstract = {The human visual system selects information from dense and complex streams of spatiotemporal input. This selection process is aided by prior knowledge of the features, location, and temporal proximity of an upcoming stimulus. In the laboratory, this knowledge is often conveyed by cues, preceding a task-relevant target stimulus. Response speed in cued selection tasks varies within and across participants and is often thought to index efficient selection of a cued feature, location, or moment in time. The present study used a reverse correlation approach to identify neural predictors of efficient target discrimination: Participants identified the orientation of a sinusoidal grating, which was presented in one hemifield following the presentation of bilateral visual cues that carried temporal but not spatial information about the target. Across different analytic approaches, faster target responses were predicted by larger alpha power preceding the target. These results suggest that heightened pre-target alpha power during a cue period may index a state that is beneficial for subsequent target processing. Our findings are broadly consistent with models that emphasize capacity sharing across time, as well as models that link alpha oscillations to temporal predictions regarding upcoming events.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ella Podvalny; Matthew W Flounders; Leana E King; Tom Holroyd; Biyu J He

A dual role of prestimulus spontaneous neural activity in visual object recognition Journal Article

Nature Communications, 10 , pp. 3910, 2019.

@article{Podvalny2019,
title = {A dual role of prestimulus spontaneous neural activity in visual object recognition},
author = {Ella Podvalny and Matthew W Flounders and Leana E King and Tom Holroyd and Biyu J He},
doi = {10.1038/s41467-019-11877-4},
year = {2019},
date = {2019-01-01},
journal = {Nature Communications},
volume = {10},
pages = {3910},
publisher = {Springer US},
abstract = {Vision relies on both specific knowledge of visual attributes, such as object categories, and general brain states, such as those reflecting arousal. We hypothesized that these phenomena independently influence recognition of forthcoming stimuli through distinct processes reflected in spontaneous neural activity. Here, we recorded magnetoencephalographic (MEG) activity in participants (N = 24) who viewed images of objects presented at recognition threshold. Using multivariate analysis applied to sensor-level activity patterns recorded before stimulus presentation, we identified two neural processes influencing subsequent subjective recognition: a general process, which disregards stimulus category and correlates with pupil size, and a specific process, which facilitates category-specific recognition. The two processes are doubly-dissociable: the general process correlates with changes in criterion but not in sensitivity, whereas the specific process correlates with changes in sensitivity but not in criterion. Our findings reveal distinct mechanisms of how spontaneous neural activity influences perception and provide a framework to integrate previous findings.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ulrich Pomper; Thomas Ditye; Ulrich Ansorge

Contralateral delay activity during temporal order memory Journal Article

Neuropsychologia, 129 , pp. 104–116, 2019.

@article{Pomper2019,
title = {Contralateral delay activity during temporal order memory},
author = {Ulrich Pomper and Thomas Ditye and Ulrich Ansorge},
doi = {10.1016/j.neuropsychologia.2019.03.012},
year = {2019},
date = {2019-01-01},
journal = {Neuropsychologia},
volume = {129},
pages = {104--116},
abstract = {In everyday life, we constantly need to remember the temporal sequence of visual events over short periods of time, for example, when making sense of others' actions or watching a movie. While there is increasing knowledge available on neural mechanisms underlying visual working memory (VWM) regarding the identity and spatial location of objects, less is known about how the brain encodes and retains information on temporal sequences. Here, we investigate whether the contralateral-delay activity (CDA), a well-studied electroencephalographic (EEG) component associated with VWM of object identity, also reflects the encoding and retention of temporal order. In two independent experiments, we presented participants with a sequence of four or six images, followed by a 1 s retention period. Participants judged temporal order by indicating whether a subsequently presented probe image was originally displayed during the first or the second half of the sequence. As a main novel result, we report the emergence of a contralateral negativity already following the presentation of the first item of the sequence, which increases over the course of a trial with every presented item, up to a limit of four items. We further observed no differences in the CDA during the temporal-order task compared to one obtained during a task concerning the spatial location of the presented items. Since the characteristics of the CDA appear to be highly similar between different encoded feature dimensions and increases as additional items are being encoded, we suggest this component might be a general characteristic of various types of VWM.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tzvetan Popov; Bart Gips; Sabine Kastner; Ole Jensen

Spatial specificity of alpha oscillations in the human visual system Journal Article

Human Brain Mapping, 40 (15), pp. 4432–4440, 2019.

@article{Popov2019,
title = {Spatial specificity of alpha oscillations in the human visual system},
author = {Tzvetan Popov and Bart Gips and Sabine Kastner and Ole Jensen},
doi = {10.1002/hbm.24712},
year = {2019},
date = {2019-01-01},
journal = {Human Brain Mapping},
volume = {40},
number = {15},
pages = {4432--4440},
abstract = {Alpha oscillations are strongly modulated by spatial attention. To what extent the generators of cortical alpha oscillations are spatially distributed and have selectivity that can be related to retinotopic organization is a matter of continuous scientific debate. In the present report, neuromagnetic activity was quantified by means of spatial location tuning functions from 30 participants engaged in a visuospatial attention task. A cue presented briefly in one of 16 locations directing covert spatial attention resulted in a robust modulation of posterior alpha oscillations. The distribution of the alpha sources approximated the retinotopic organization of the human visual system known from hemodynamic studies. Better performance in terms of target identification was associated with a more spatially constrained alpha modulation. The present findings demonstrate that the generators of posterior alpha oscillations are retinotopically organized when modulated by spatial attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/hbm.24712

Silvan C Quax; Nadine Dijkstra; Mariel J van Staveren; Sander E Bosch; Marcel A J van Gerven

Eye movements explain decodability during perception and cued attention in MEG Journal Article

NeuroImage, 195 , pp. 444–453, 2019.

Abstract | Links | BibTeX

@article{Quax2019,
title = {Eye movements explain decodability during perception and cued attention in MEG},
author = {Silvan C Quax and Nadine Dijkstra and Mariel J van Staveren and Sander E Bosch and Marcel A J van Gerven},
doi = {10.1016/j.neuroimage.2019.03.069},
year = {2019},
date = {2019-01-01},
journal = {NeuroImage},
volume = {195},
pages = {444--453},
abstract = {Eye movements are an integral part of human perception, but can induce artifacts in many magneto-encephalography (MEG) and electroencephalography (EEG) studies. For this reason, investigators try to minimize eye movements and remove these artifacts from their data using different techniques. When these artifacts are not purely random, but consistent regarding certain stimuli or conditions, the possibility arises that eye movements are actually inducing effects in the MEG signal. It remains unclear how much of an influence eye movements can have on observed effects in MEG, since most MEG studies lack a control analysis to verify whether an effect found in the MEG signal is induced by eye movements. Here, we find that we can decode stimulus location from eye movements in two different stages of a working memory match-to-sample task that encompass different areas of research typically done with MEG. This means that the observed MEG effect might be (partly) due to eye movements instead of any true neural correlate. We suggest how to check for eye movement effects in the data and make suggestions on how to minimize eye movement artifacts from occurring in the first place.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Romain Quentin; Jean Rémi King; Etienne Sallard; Nathan Fishman; Ryan Thompson; Ethan R Buch; Leonardo G Cohen

Differential brain mechanisms of selection and maintenance of information during working memory Journal Article

Journal of Neuroscience, 39 (19), pp. 3728–3740, 2019.

Abstract | Links | BibTeX

@article{Quentin2019,
title = {Differential brain mechanisms of selection and maintenance of information during working memory},
author = {Romain Quentin and Jean Rémi King and Etienne Sallard and Nathan Fishman and Ryan Thompson and Ethan R Buch and Leonardo G Cohen},
doi = {10.1523/JNEUROSCI.2764-18.2019},
year = {2019},
date = {2019-01-01},
journal = {Journal of Neuroscience},
volume = {39},
number = {19},
pages = {3728--3740},
abstract = {Working memory is our ability to select and temporarily hold information as needed for complex cognitive operations. The temporal dynamics of sustained and transient neural activity supporting the selection and holding of memory content is not known. To address this problem, we recorded magnetoencephalography in healthy participants performing a retro-cue working memory task in which the selection rule and the memory content varied independently. Multivariate decoding and source analyses showed that selecting the memory content relies on prefrontal and parieto-occipital persistent oscillatory neural activity. By contrast, the memory content was reactivated in a distributed occipitotemporal posterior network, preceding the working memory decision and in a different format than during the visual encoding. These results identify a neural signature of content selection and characterize differentiated spatiotemporal constraints for subprocesses of working memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Amirsaman Sajad; David C Godlove; Jeffrey D Schall

Cortical microcircuitry of performance monitoring Journal Article

Nature Neuroscience, 22 , pp. 265–274, 2019.

Abstract | Links | BibTeX

@article{Sajad2019,
title = {Cortical microcircuitry of performance monitoring},
author = {Amirsaman Sajad and David C Godlove and Jeffrey D Schall},
doi = {10.1038/s41593-018-0309-8},
year = {2019},
date = {2019-01-01},
journal = {Nature Neuroscience},
volume = {22},
pages = {265--274},
publisher = {Springer US},
abstract = {The medial frontal cortex enables performance monitoring, indexed by the error-related negativity (ERN) and manifested by performance adaptations. We recorded electroencephalogram over and neural spiking across all layers of the supplementary eye field, an agranular cortical area, in monkeys performing a saccade-countermanding (stop signal) task. Neurons signaling error production, feedback predicting reward gain or loss, and delivery of fluid reward had different spike widths and were concentrated differently across layers. Neurons signaling error or loss of reward were more common in layers 2 and 3 (L2/3), whereas neurons signaling gain of reward were more common in layers 5 and 6 (L5/6). Variation of error- and reinforcement-related spike rates in L2/3 but not L5/6 predicted response time adaptation. Variation in error-related spike rate in L2/3 but not L5/6 predicted ERN magnitude. These findings reveal novel features of cortical microcircuitry supporting performance monitoring and confirm one cortical source of the ERN.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sebastian Schindler; Maximilian Bruchmann; Florian Bublatzky; Thomas Straube

Modulation of face- and emotion-selective ERPs by the three most common types of face image manipulations Journal Article

Social Cognitive and Affective Neuroscience, 14 (5), pp. 493–503, 2019.

Abstract | Links | BibTeX

@article{Schindler2019,
title = {Modulation of face- and emotion-selective ERPs by the three most common types of face image manipulations},
author = {Sebastian Schindler and Maximilian Bruchmann and Florian Bublatzky and Thomas Straube},
doi = {10.1093/scan/nsz027},
year = {2019},
date = {2019-01-01},
journal = {Social Cognitive and Affective Neuroscience},
volume = {14},
number = {5},
pages = {493--503},
abstract = {In neuroscientific studies, the naturalness of face presentation differs; a third of published studies makes use of close-up full coloured faces, a third uses close-up grey-scaled faces and another third employs cutout grey-scaled faces. Whether and how these methodological choices affect emotion-sensitive components of the event-related brain potentials (ERPs) is yet unclear. Therefore, this pre-registered study examined ERP modulations to close-up full-coloured and grey-scaled faces as well as cutout fearful and neutral facial expressions, while attention was directed to no-face oddballs. Results revealed no interaction of face naturalness and emotion for any ERP component, but showed, however, large main effects for both factors. Specifically, fearful faces and decreasing face naturalness elicited substantially enlarged N170 and early posterior negativity amplitudes and lower face naturalness also resulted in a larger P1. This pattern reversed for the LPP, showing linear increases in LPP amplitudes with increasing naturalness. We observed no interaction of emotion with face naturalness, which suggests that face naturalness and emotion are decoded in parallel at these early stages. Researchers interested in strong modulations of early components should make use of cutout grey-scaled faces, while those interested in a pronounced late positivity should use close-up coloured faces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Shirin Vafaei Shooshtari; Jamal Esmaily Sadrabadi; Zahra Azizi; Reza Ebrahimpour

Confidence representation of perceptual decision by EEG and eye data in a random dot motion task Journal Article

Neuroscience, 406 , pp. 510–527, 2019.

Abstract | Links | BibTeX

@article{Shooshtari2019,
title = {Confidence representation of perceptual decision by EEG and eye data in a random dot motion task},
author = {Shirin Vafaei Shooshtari and Jamal Esmaily Sadrabadi and Zahra Azizi and Reza Ebrahimpour},
doi = {10.1016/j.neuroscience.2019.03.031},
year = {2019},
date = {2019-01-01},
journal = {Neuroscience},
volume = {406},
pages = {510--527},
publisher = {IBRO},
abstract = {The confidence of a decision could be considered as the internal estimate of decision accuracy. This variable has been studied extensively by different types of recording data such as behavioral, electroencephalography (EEG), eye and electrophysiology data. Although the value of the reported confidence is considered as one of the most important parameters in decision making, the confidence reporting phase might be considered as a restrictive element in investigating the decision process. Thus, decision confidence should be extracted by means of other provided types of information. Here, we proposed eight confidence-related properties in EEG and eye data which are significantly descriptive of the defined confidence levels in a random dot motion (RDM) task. As a matter of fact, our proposed EEG and eye data properties are capable of recognizing more than nine distinct levels of confidence. Among our proposed features, the latency of the pupil maximum diameter through the stimulus presentation was established to be the most associated one to the confidence levels. Through the time-dependent analysis of these features, we recognized the time interval of 500–600 ms after the stimulus onset as an important time in correlating features to the confidence levels.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lisa Stacchi; Meike Ramon; Junpeng Lao; Roberto Caldara

Neural representations of faces are tuned to eye movements Journal Article

Journal of Neuroscience, 39 (21), pp. 4113–4123, 2019.

Abstract | Links | BibTeX

@article{Stacchi2019,
title = {Neural representations of faces are tuned to eye movements},
author = {Lisa Stacchi and Meike Ramon and Junpeng Lao and Roberto Caldara},
doi = {10.1523/JNEUROSCI.2968-18.2019},
year = {2019},
date = {2019-01-01},
journal = {Journal of Neuroscience},
volume = {39},
number = {21},
pages = {4113--4123},
abstract = {Eye movements provide a functional signature of how human vision is achieved. Many recent studies have consistently reported robust idiosyncratic visual sampling strategies during face recognition. Whether these interindividual differences are mirrored by idiosyncratic neural responses remains unknown. To this aim, we first tracked eye movements of male and female observers during face recognition. Additionally, for every observer we obtained an objective index of neural face discrimination through EEG that was recorded while they fixated different facial information. We found that foveation of facial features fixated longer during face recognition elicited stronger neural face discrimination responses across all observers. This relationship occurred independently of interindividual differences in preferential facial information sampling (e.g., eye vs mouth lookers), and started as early as the first fixation. Our data show that eye movements play a functional role during face processing by providing the neural system with the information that is diagnostic to a specific observer. The effective processing of identity involves idiosyncratic, rather than universal face representations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

David W Sutterer; Joshua J Foster; Kirsten C S Adam; Edward K Vogel; Edward Awh

Item-specific delay activity demonstrates concurrent storage of multiple active neural representations in working memory Journal Article

PLoS Biology, 17 (4), pp. e3000239, 2019.

Abstract | Links | BibTeX

@article{Sutterer2019,
title = {Item-specific delay activity demonstrates concurrent storage of multiple active neural representations in working memory},
author = {David W Sutterer and Joshua J Foster and Kirsten C S Adam and Edward K Vogel and Edward Awh},
doi = {10.1371/journal.pbio.3000239},
year = {2019},
date = {2019-01-01},
journal = {PLoS Biology},
volume = {17},
number = {4},
pages = {e3000239},
abstract = {Persistent neural activity that encodes online mental representations plays a central role in working memory (WM). However, there has been debate regarding the number of items that can be concurrently represented in this active neural state, which is often called the “focus of attention.” Some models propose a strict single-item limit, such that just 1 item can be neurally active at once while other items are relegated to an activity-silent state. Although past studies have decoded multiple items stored in WM, these studies cannot rule out a switching account in which only a single item is actively represented at a time. Here, we directly tested whether multiple representations can be held concurrently in an active state. We tracked spatial representations in WM using alpha-band (8–12 Hz) activity, which encodes spatial positions held in WM. Human observers remembered 1 or 2 positions over a short delay while we recorded electroencephalography (EEG) data. Using a spatial encoding model, we reconstructed active stimulus-specific representations (channel-tuning functions [CTFs]) from the scalp distribution of alpha-band power. Consistent with past work, we found that the selectivity of spatial CTFs was lower when 2 items were stored than when 1 item was stored. Critically, data-driven simulations revealed that the selectivity of spatial representations in the two-item condition could not be explained by models that propose that only a single item can exist in an active state at once. Thus, our findings demonstrate that multiple items can be concurrently represented in an active neural state.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

David W Sutterer; Joshua J Foster; John T Serences; Edward K Vogel; Edward Awh

Alpha-band oscillations track the retrieval of precise spatial representations from long-term memory Journal Article

Journal of Neurophysiology, 122 (2), pp. 539–551, 2019.

Abstract | Links | BibTeX

@article{Sutterer2019a,
title = {Alpha-band oscillations track the retrieval of precise spatial representations from long-term memory},
author = {David W Sutterer and Joshua J Foster and John T Serences and Edward K Vogel and Edward Awh},
doi = {10.1152/jn.00268.2019},
year = {2019},
date = {2019-01-01},
journal = {Journal of Neurophysiology},
volume = {122},
number = {2},
pages = {539--551},
abstract = {A hallmark of episodic memory is the phenomenon of mentally reexperiencing the details of past events, and a well-established concept is that the neuronal activity that mediates encoding is reinstated at retrieval. Evidence for reinstatement has come from multiple modalities, including functional magnetic resonance imaging and electroencephalography (EEG). These EEG studies have shed light on the time course of reinstatement but have been limited to distinguishing between a few categories. The goal of this work was to use recently developed experimental and technical approaches, namely continuous report tasks and inverted encoding models, to determine which frequencies of oscillatory brain activity support the retrieval of precise spatial memories. In experiment 1, we establish that an inverted encoding model applied to multivariate alpha topography tracks the retrieval of precise spatial memories. In experiment 2, we demonstrate that the frequencies and patterns of multivariate activity at study are similar to the frequencies and patterns observed during retrieval. These findings highlight the broad potential for using encoding models to characterize long-term memory retrieval. NEW & NOTEWORTHY Previous EEG work has shown that category-level information observed during encoding is recapitulated during memory retrieval, but studies with this time-resolved method have not demonstrated the reinstatement of feature-specific patterns of neural activity during retrieval. Here we show that EEG alpha-band activity tracks the retrieval of spatial representations from long-term memory. Moreover, we find considerable overlap between the frequencies and patterns of activity that track spatial memories during initial study and at retrieval.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yuta Suzuki; Tetsuto Minami; Shigeki Nakauchi

Pupil constriction in the glare illusion modulates the steady-state visual evoked potentials Journal Article

Neuroscience, 416 , pp. 221–228, 2019.

Abstract | Links | BibTeX

@article{Suzuki2019,
title = {Pupil constriction in the glare illusion modulates the steady-state visual evoked potentials},
author = {Yuta Suzuki and Tetsuto Minami and Shigeki Nakauchi},
doi = {10.1016/j.neuroscience.2019.08.003},
year = {2019},
date = {2019-01-01},
journal = {Neuroscience},
volume = {416},
pages = {221--228},
publisher = {The Author(s)},
abstract = {The glare illusion enhances the perceived brightness of a central white area surrounded by a luminance gradient, without any actual change in light intensity. In this study, we measured the varied brightness and neurophysiological responses of electroencephalography (EEG) and pupil size with the several luminance contrast patterns of the glare illusion to address the question of whether the illusory brightness changes to the glare illusion process in the early visual cortex. We hypothesized that if the illusory brightness enhancement was created in the early stages of visual processing, the neural response would be similar to how it processes an actual change in light intensity. To test this, we observed the sustained visual cortical response of steady-state visual evoked potentials (SSVEPs), while participants watched flickering dots displayed in the central white area of both the varied luminance contrast of glare illusion and a control stimulus (no glare condition). We found the SSVEP amplitude was lower in the glare illusion than in the control condition, especially under high luminance contrast conditions. Furthermore, we found the probable mechanisms of the inhibited SSVEP amplitude to the high luminance contrast of glare illusion based on the greater pupil constriction, thereby decreasing the amount of light entering the pupil. Thus, the brightness enhancement in the glare illusion is already represented at the primary stage of visual processing linked to the larger pupil constriction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Rasa Gulbinaite; Diane H M Roozendaal; Rufin VanRullen

Attention differentially modulates the amplitude of resonance frequencies in the visual cortex Journal Article

NeuroImage, 203 , pp. 1–17, 2019.

Abstract | Links | BibTeX

@article{Gulbinaite2019,
title = {Attention differentially modulates the amplitude of resonance frequencies in the visual cortex},
author = {Rasa Gulbinaite and Diane H M Roozendaal and Rufin VanRullen},
doi = {10.1016/j.neuroimage.2019.116146},
year = {2019},
date = {2019-01-01},
journal = {NeuroImage},
volume = {203},
pages = {1--17},
abstract = {Rhythmic visual stimuli (flicker) elicit rhythmic brain responses at the frequency of the stimulus, and attention generally enhances these oscillatory brain responses (steady state visual evoked potentials, SSVEPs). Although SSVEP responses have been tested for flicker frequencies up to 100 Hz [Herrmann, 2001], effects of attention on SSVEP amplitude have only been reported for lower frequencies (up to ~30 Hz), with no systematic comparison across a wide, finely sampled frequency range. Does attention modulate SSVEP amplitude at higher flicker frequencies (gamma band, 30–80 Hz), and is attentional modulation constant across frequencies? By isolating SSVEP responses from the broadband EEG signal using a multivariate spatiotemporal source separation method, we demonstrate that flicker in the alpha and gamma bands elicit strongest and maximally phase stable brain responses (resonance), on which the effect of attention is opposite: positive for gamma and negative for alpha. Finding subject-specific gamma resonance frequency and a positive attentional modulation of gamma-band SSVEPs points to the untapped potential of flicker as a non-invasive tool for studying the causal effects of interactions between visual gamma-band rhythmic stimuli and endogenous gamma oscillations on perception and attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
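Each entry on this page is a flat BibTeX record like the one above, so the downloadable .bib file can be processed programmatically. A minimal sketch in Python (standard library only; `parse_bibtex_entry` is a hypothetical helper, not a full BibTeX parser — it assumes brace-free field values, one field per line):

```python
import re

def parse_bibtex_entry(entry: str) -> dict:
    """Parse one flat BibTeX entry ('field = {value},' lines) into a dict.
    Illustrative only: does not handle nested braces or quoted values."""
    # Collect all simple field = {value} pairs.
    fields = dict(re.findall(r'(\w+)\s*=\s*\{([^{}]*)\}', entry))
    # Pull the entry type and citation key from the opening line.
    m = re.match(r'@(\w+)\{([^,]+),', entry.strip())
    if m:
        fields['ENTRYTYPE'], fields['ID'] = m.group(1), m.group(2)
    return fields

entry = """@article{Gulbinaite2019,
title = {Attention differentially modulates the amplitude of resonance frequencies in the visual cortex},
author = {Rasa Gulbinaite and Diane H M Roozendaal and Rufin VanRullen},
doi = {10.1016/j.neuroimage.2019.116146},
year = {2019},
journal = {NeuroImage},
volume = {203},
pages = {1--17},
}"""

fields = parse_bibtex_entry(entry)
print(fields['ID'], fields['journal'], fields['year'])
# → Gulbinaite2019 NeuroImage 2019
```

For real workloads a dedicated parser (for example the `bibtexparser` package) is a better choice, since abstracts in these entries routinely contain characters that break naive regexes.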


Nicole Hakim; Kirsten C S Adam; Eren Gunseli; Edward Awh; Edward K Vogel

Dissecting the neural focus of attention reveals distinct processes for spatial attention and object-based storage in visual working memory Journal Article

Psychological Science, 30 (4), pp. 526–540, 2019.

@article{Hakim2019,
title = {Dissecting the neural focus of attention reveals distinct processes for spatial attention and object-based storage in visual working memory},
author = {Nicole Hakim and Kirsten C S Adam and Eren Gunseli and Edward Awh and Edward K Vogel},
doi = {10.1177/0956797619830384},
year = {2019},
date = {2019-01-01},
journal = {Psychological Science},
volume = {30},
number = {4},
pages = {526--540},
abstract = {Complex cognition relies on both on-line representations in working memory (WM), said to reside in the focus of attention, and passive off-line representations of related information. Here, we dissected the focus of attention by showing that distinct neural signals index the on-line storage of objects and sustained spatial attention. We recorded electroencephalogram (EEG) activity during two tasks that employed identical stimulus displays but varied the relative demands for object storage and spatial attention. We found distinct delay-period signatures for an attention task (which required only spatial attention) and a WM task (which invoked both spatial attention and object storage). Although both tasks required active maintenance of spatial information, only the WM task elicited robust contralateral delay activity that was sensitive to mnemonic load. Thus, we argue that the focus of attention is maintained via a collaboration between distinct processes for covert spatial orienting and object-based storage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Nicole Hakim; Tobias Feldmann-Wüstefeld; Edward Awh; Edward K Vogel

Perturbing neural representations of working memory with task-irrelevant interruption Journal Article

Journal of Cognitive Neuroscience, 32 (3), pp. 558–569, 2019.

@article{Hakim2019a,
title = {Perturbing neural representations of working memory with task-irrelevant interruption},
author = {Nicole Hakim and Tobias Feldmann-Wüstefeld and Edward Awh and Edward K Vogel},
doi = {10.1101/716613},
year = {2019},
date = {2019-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {32},
number = {3},
pages = {558--569},
abstract = {Working memory maintains information so that it can be used in complex cognitive tasks. A key challenge for this system is to maintain relevant information in the face of task-irrelevant perturbations. Across two experiments, we investigated the impact of task-irrelevant interruptions on neural representations of working memory. We recorded EEG activity in humans while they performed a working memory task. On a subset of trials, we interrupted participants with salient but task-irrelevant objects. To track the impact of these task-irrelevant interruptions on neural representations of working memory, we measured two well-characterized, temporally sensitive EEG markers that reflect active, prioritized working memory representations: the contralateral delay activity and lateralized alpha power (8–12 Hz). After interruption, we found that contralateral delay activity amplitude momentarily sustained but was gone by the end of the trial. Lateralized alpha power was immediately influenced by the interrupters but recovered by the end of the trial. This suggests that dissociable neural processes contribute to the maintenance of working memory information and that brief irrelevant onsets disrupt two distinct online aspects of working memory. In addition, we found that task expectancy modulated the timing and magnitude of how these two neural signals responded to task-irrelevant interruptions, suggesting that the brain's response to task-irrelevant interruption is shaped by task context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Qiming Han; Huan Luo

Visual crowding involves delayed frontoparietal response and enhanced top-down modulation Journal Article

European Journal of Neuroscience, 50 (6), pp. 2931–2941, 2019.

@article{Han2019a,
title = {Visual crowding involves delayed frontoparietal response and enhanced top-down modulation},
author = {Qiming Han and Huan Luo},
doi = {10.1111/ejn.14401},
year = {2019},
date = {2019-01-01},
journal = {European Journal of Neuroscience},
volume = {50},
number = {6},
pages = {2931--2941},
abstract = {Crowding, the disrupted recognition of a peripheral target in the presence of nearby flankers, sets a fundamental limit on peripheral vision perception. Debates persist on whether the limit occurs at early visual cortices or is induced by top-down modulation, leaving the neural mechanism for visual crowding largely unclear. To resolve the debate, it is crucial to extract the neural signals elicited by the target from that by the target-flanker clutter, with high temporal resolution. To achieve this purpose, here we employed a temporal response function (TRF) approach to dissociate target-specific response from the overall electroencephalograph (EEG) recordings when the target was presented with (crowded) or without flankers (uncrowded) while subjects were performing a discrimination task on the peripherally presented target. Our results demonstrated two components in the target-specific contrast-tracking TRF response—an early component (100–170 ms) in occipital channels and a late component (210–450 ms) in frontoparietal channels. The late frontoparietal component, which was delayed in time under the crowded condition, was correlated with target discrimination performance, suggesting its involvement in visual crowding. Granger causality analysis further revealed stronger top-down modulation on the target stimulus under the crowded condition. Taken together, our findings support that crowding is associated with a top-down process which modulates the low-level sensory processing and delays the behavioral-relevant response in the high-level region.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Linda Henriksson; Marieke Mur; Nikolaus Kriegeskorte

Rapid invariant encoding of scene layout in human OPA Journal Article

Neuron, 103, pp. 161–171, 2019.

@article{Henriksson2019,
title = {Rapid invariant encoding of scene layout in human OPA},
author = {Linda Henriksson and Marieke Mur and Nikolaus Kriegeskorte},
doi = {10.1016/j.neuron.2019.04.014},
year = {2019},
date = {2019-01-01},
journal = {Neuron},
volume = {103},
pages = {161--171},
publisher = {Elsevier Inc.},
abstract = {Successful visual navigation requires a sense of the geometry of the local environment. How do our brains extract this information from retinal images? Here we visually presented scenes with all possible combinations of five scene-bounding elements (left, right, and back walls; ceiling; floor) to human subjects during functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). The fMRI response patterns in the scene-responsive occipital place area (OPA) reflected scene layout with invariance to changes in surface texture. This result contrasted sharply with the primary visual cortex (V1), which reflected low-level image features of the stimuli, and the parahippocampal place area (PPA), which showed better texture than layout decoding. MEG indicated that the texture-invariant scene layout representation is computed from visual input within ∼100 ms, suggesting a rapid computational mechanism. Taken together, these results suggest that the cortical representation underlying our instant sense of the environmental geometry is located in the OPA.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}



EyeLink® eye trackers are intended for research purposes only and should not be used in the treatment or diagnosis of any medical condition.


Copyright © 2020 SR Research Ltd. All Rights Reserved. EyeLink is a registered trademark of SR Research Ltd.