EyeLink EEG / fNIRS / TMS Publications
All EyeLink EEG, fNIRS, and TMS research publications (with concurrent eye tracking) up until 2019 (plus one early 2020 article) are listed below by year. You can search the publications using keywords such as P300, Gamma band, NIRS, etc. You can also search for individual author names. If we have missed any EyeLink EEG, fNIRS, or TMS article, please email us!
All EyeLink EEG, fNIRS, and TMS publications are also available for download / import into reference management software as a single BibTeX (.bib) file.
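If you prefer to filter the downloaded .bib file offline rather than searching on this page, a few lines of Python suffice. This is a minimal sketch, not part of the site: the helper names are ours, the filename in the usage comment is a placeholder, and the splitting is a naive record split rather than a full BibTeX parser (it assumes each record begins with `@<type>{`).

```python
import re

def split_entries(bib_text):
    """Naively split a .bib file into one string per @article{...} record.

    Not a full BibTeX parser: it simply cuts the text before each
    '@<type>{' marker, which is enough for keyword filtering.
    """
    chunks = re.split(r"(?=@\w+\{)", bib_text)
    return [c.strip() for c in chunks if c.strip().startswith("@")]

def search(entries, keyword):
    """Case-insensitive substring search over whole entries,
    so it matches title, abstract, author, and keyword fields alike."""
    kw = keyword.lower()
    return [e for e in entries if kw in e.lower()]

# Usage (the filename is a placeholder for the downloaded file):
#   with open("eyelink-publications.bib", encoding="utf-8") as f:
#       entries = split_entries(f.read())
#   p300_hits = search(entries, "P300")
```

A real reference manager (or the `bibtexparser` package) handles nested braces and field parsing properly; this sketch only supports the keyword-search workflow described above.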
2020
James E Hoffman; Minwoo Kim; Matt Taylor; Kelsey Holiday Emotional capture during emotion-induced blindness is not automatic Journal Article Cortex, 122 , pp. 140–158, 2020. @article{Hoffman2020, title = {Emotional capture during emotion-induced blindness is not automatic}, author = {James E Hoffman and Minwoo Kim and Matt Taylor and Kelsey Holiday}, doi = {10.1016/j.cortex.2019.03.013}, year = {2020}, date = {2020-01-01}, journal = {Cortex}, volume = {122}, pages = {140--158}, publisher = {Elsevier Ltd}, abstract = {The present research used behavioral and event-related brain potentials (ERP) measures to determine whether emotional capture is automatic in the emotion-induced blindness (EIB) paradigm. The first experiment varied the priority of performing two concurrent tasks: identifying a negative or neutral picture appearing in a rapid serial visual presentation (RSVP) stream of pictures and multiple object tracking (MOT). Results showed that increased attention to the MOT task resulted in decreased accuracy for identifying both negative and neutral target pictures accompanied by decreases in the amplitude of the P3b component. In contrast, the early posterior negativity (EPN) component elicited by negative pictures was unaffected by variations in attention. Similarly, there was a decrement in MOT performance for dual-task versus single task conditions but no effect of picture type (negative vs neutral) on MOT accuracy which isn't consistent with automatic emotional capture of attention. However, the MOT task might simply be insensitive to brief interruptions of attention. The second experiment used a more sensitive reaction time (RT) measure to examine this possibility. Results showed that RT to discriminate a gap appearing in a tracked object was delayed by the simultaneous appearance of to-be-ignored distractor pictures even though MOT performance was once again unaffected by the distractor. 
Importantly, the RT delay was the same for both negative and neutral distractors suggesting that capture was driven by physical salience rather than emotional salience of the distractors. Despite this lack of emotional capture, the EPN component, which is thought to reflect emotional capture, was still present. We suggest that the EPN doesn't reflect capture but rather downstream effects of attention, including object recognition. These results show that capture by emotional pictures in EIB can be suppressed when attention is engaged in another difficult task. The results have important implications for understanding capture effects in EIB.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
2019
Praghajieeth Raajhen Santhana Gopalan; Otto Loberg; Jarmo Arvid Hämäläinen; Paavo H T Leppänen Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test Journal Article Scientific Reports, 9 , pp. 1–13, 2019. @article{Gopalan2019, title = {Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test}, author = {Praghajieeth Raajhen Santhana Gopalan and Otto Loberg and Jarmo Arvid Hämäläinen and Paavo H T Leppänen}, doi = {10.1038/s41598-018-36947-3}, year = {2019}, date = {2019-12-01}, journal = {Scientific Reports}, volume = {9}, pages = {1--13}, publisher = {Nature Publishing Group}, abstract = {Attention-related processes include three functional sub-components: alerting, orienting, and inhibition. We investigated these components using EEG-based, brain event-related potentials and their neuronal source activations during the Attention Network Test in typically developing school-aged children. Participants were asked to detect the swimming direction of the centre fish in a group of five fish. The target stimulus was either preceded by a cue (centre, double, or spatial) or no cue. An EEG using 128 electrodes was recorded for 83 children aged 12–13 years. RTs showed significant effects across all three sub-components of attention. Alerting and orienting (responses to double vs non-cued target stimulus and spatially vs centre-cued target stimulus, respectively) resulted in larger N1 amplitude, whereas inhibition (responses to incongruent vs congruent target stimulus) resulted in larger P3 amplitude. Neuronal source activation for the alerting effect was localized in the right anterior temporal and bilateral occipital lobes, for the orienting effect bilaterally in the occipital lobe, and for the inhibition effect in the medial prefrontal cortex and left anterior temporal lobe.
Neuronal sources of ERPs revealed that sub-processes related to the attention network are different in children as compared to earlier adult fMRI studies, which was not evident from scalp ERPs.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Isabel M Vanegas; Annabelle Blangero; James E Galvin; Alessandro Di Rocco; Angelo Quartarone; Felice M Ghilardi; Simon P Kelly Altered dynamics of visual contextual interactions in Parkinson's disease Journal Article npj Parkinson's Disease, 5 , pp. 1–8, 2019. @article{Vanegas2019, title = {Altered dynamics of visual contextual interactions in Parkinson's disease}, author = {Isabel M Vanegas and Annabelle Blangero and James E Galvin and Alessandro {Di Rocco} and Angelo Quartarone and Felice M Ghilardi and Simon P Kelly}, doi = {10.1038/s41531-019-0085-5}, year = {2019}, date = {2019-12-01}, journal = {npj Parkinson's Disease}, volume = {5}, pages = {1--8}, publisher = {Nature Publishing Group}, abstract = {Over the last decades, psychophysical and electrophysiological studies in patients and animal models of Parkinson's disease (PD), have consistently revealed a number of visual abnormalities. In particular, specific alterations of contrast sensitivity curves, electroretinogram (ERG), and visual-evoked potentials (VEP), have been attributed to dopaminergic retinal depletion. However, fundamental mechanisms of cortical visual processing, such as normalization or “gain control” computations, have not yet been examined in PD patients. Here, we measured electrophysiological indices of gain control in both space (surround suppression) and time (sensory adaptation) in PD patients based on steady-state VEP (ssVEP). Compared with controls, patients exhibited a significantly higher initial ssVEP amplitude that quickly decayed over time, and greater relative suppression of ssVEP amplitude as a function of surrounding stimulus contrast. Meanwhile, EEG frequency spectra were broadly elevated in patients relative to controls. 
Thus, contrary to what might be expected given the reduced contrast sensitivity often reported in PD, visual neural responses are not weaker; rather, they are initially larger but undergo an exaggerated degree of spatial and temporal gain control and are embedded within a greater background noise level. These differences may reflect cortical mechanisms that compensate for dysfunctional center-surround interactions at the retinal level.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Maria C Romero; Marco Davare; Marcelo Armendariz; Peter Janssen Neural effects of transcranial magnetic stimulation at the single-cell level Journal Article Nature Communications, 10 (1), pp. 1–11, 2019. @article{Romero2019, title = {Neural effects of transcranial magnetic stimulation at the single-cell level}, author = {Maria C Romero and Marco Davare and Marcelo Armendariz and Peter Janssen}, doi = {10.1038/s41467-019-10638-7}, year = {2019}, date = {2019-12-01}, journal = {Nature Communications}, volume = {10}, number = {1}, pages = {1--11}, publisher = {Nature Publishing Group}, abstract = {Transcranial magnetic stimulation (TMS) can non-invasively modulate neural activity in humans. Despite three decades of research, the spatial extent of the cortical area activated by TMS is still controversial. Moreover, how TMS interacts with task-related activity during motor behavior is unknown. Here, we applied single-pulse TMS over macaque parietal cortex while recording single-unit activity at various distances from the center of stimulation during grasping. The spatial extent of TMS-induced activation is remarkably restricted, affecting the spiking activity of single neurons in an area of cortex measuring less than 2 mm in diameter. In task-related neurons, TMS evokes a transient excitation followed by reduced activity, paralleled by a significantly longer grasping time. Furthermore, TMS-induced activity and task-related activity do not summate in single neurons. These results furnish crucial experimental evidence for the neural effects of TMS at the single-cell level and uncover the neural underpinnings of behavioral effects of TMS.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Moreno I Coco; Antje Nuthmann; Olaf Dimigen Fixation-related brain potentials during semantic integration of object–scene information Journal Article Journal of Cognitive Neuroscience, pp. 1–19, 2019. @article{Coco2019, title = {Fixation-related brain potentials during semantic integration of object–scene information}, author = {Moreno I Coco and Antje Nuthmann and Olaf Dimigen}, doi = {10.1162/jocn_a_01504}, year = {2019}, date = {2019-11-01}, journal = {Journal of Cognitive Neuroscience}, pages = {1--19}, publisher = {MIT Press - Journals}, abstract = {In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision.
Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Christoph Huber-Huber; Antimo Buonocore; Olaf Dimigen; Clayton Hickey; David Melcher The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing Journal Article NeuroImage, 200 , pp. 344–362, 2019. @article{Huber-Huber2019, title = {The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing}, author = {Christoph Huber-Huber and Antimo Buonocore and Olaf Dimigen and Clayton Hickey and David Melcher}, doi = {10.1016/j.neuroimage.2019.06.059}, year = {2019}, date = {2019-10-01}, journal = {NeuroImage}, volume = {200}, pages = {344--362}, publisher = {Academic Press Inc.}, abstract = {The world appears stable despite saccadic eye-movements. One possible explanation for this phenomenon is that the visual system predicts upcoming input across saccadic eye-movements based on peripheral preview of the saccadic target. We tested this idea using concurrent electroencephalography (EEG) and eye-tracking. Participants made cued saccades to peripheral upright or inverted face stimuli that changed orientation (invalid preview) or maintained orientation (valid preview) while the saccade was completed. Experiment 1 demonstrated better discrimination performance and a reduced fixation-locked N170 component (fN170) with valid than with invalid preview, demonstrating integration of pre- and post-saccadic information. Moreover, the early fixation-related potentials (FRP) showed a preview face inversion effect suggesting that some pre-saccadic input was represented in the brain until around 170 ms post fixation-onset. Experiment 2 replicated Experiment 1 and manipulated the proportion of valid and invalid trials to test whether the preview effect reflects context-based prediction across trials. A whole-scalp Bayes factor analysis showed that this manipulation did not alter the fN170 preview effect but did influence the face inversion effect before the saccade.
The pre-saccadic inversion effect declined earlier in the mostly invalid block than in the mostly valid block, which is consistent with the notion of pre-saccadic expectations. In addition, in both studies, we found strong evidence for an interaction between the pre-saccadic preview stimulus and the post-saccadic target as early as 50 ms (Experiment 2) or 90 ms (Experiment 1) into the new fixation. These findings suggest that visual stability may involve three temporal stages: prediction about the saccadic target, integration of pre-saccadic and post-saccadic information at around 50-90 ms post fixation onset, and post-saccadic facilitation of rapid categorization.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Florian Sandhaeger; Constantin von Nicolai; Earl K Miller; Markus Siegel Monkey EEG links neuronal color and motion information across species and scales Journal Article eLife, 8 , pp. 1–21, 2019. @article{Sandhaeger2019, title = {Monkey EEG links neuronal color and motion information across species and scales}, author = {Florian Sandhaeger and Constantin von Nicolai and Earl K Miller and Markus Siegel}, doi = {10.7554/elife.45645}, year = {2019}, date = {2019-07-01}, journal = {eLife}, volume = {8}, pages = {1--21}, publisher = {eLife Sciences Publications, Ltd}, abstract = {It remains challenging to relate EEG and MEG to underlying circuit processes and comparable experiments on both spatial scales are rare. To close this gap between invasive and non-invasive electrophysiology we developed and recorded human-comparable EEG in macaque monkeys during visual stimulation with colored dynamic random dot patterns. Furthermore, we performed simultaneous microelectrode recordings from 6 areas of macaque cortex and human MEG. Motion direction and color information were accessible in all signals. Tuning of the non-invasive signals was similar to V4 and IT, but not to dorsal and frontal areas. Thus, MEG and EEG were dominated by early visual and ventral stream sources. Source level analysis revealed corresponding information and latency gradients across cortex. We show how information-based methods and monkey EEG can identify analogous properties of visual processing in signals spanning spatial scales from single units to MEG – a valuable framework for relating human and animal studies.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Amirsaman Sajad; David C Godlove; Jeffrey D Schall Cortical microcircuitry of performance monitoring Journal Article Nature Neuroscience, 22 , pp. 265–274, 2019. @article{Sajad2019, title = {Cortical microcircuitry of performance monitoring}, author = {Amirsaman Sajad and David C Godlove and Jeffrey D Schall}, doi = {10.1038/s41593-018-0309-8}, year = {2019}, date = {2019-01-01}, journal = {Nature Neuroscience}, volume = {22}, pages = {265--274}, publisher = {Springer US}, abstract = {The medial frontal cortex enables performance monitoring, indexed by the error-related negativity (ERN) and manifested by performance adaptations. We recorded electroencephalogram over and neural spiking across all layers of the supplementary eye field, an agranular cortical area, in monkeys performing a saccade-countermanding (stop signal) task. Neurons signaling error production, feedback predicting reward gain or loss, and delivery of fluid reward had different spike widths and were concentrated differently across layers. Neurons signaling error or loss of reward were more common in layers 2 and 3 (L2/3), whereas neurons signaling gain of reward were more common in layers 5 and 6 (L5/6). Variation of error– and reinforcement-related spike rates in L2/3 but not L5/6 predicted response time adaptation. Variation in error-related spike rate in L2/3 but not L5/6 predicted ERN magnitude. These findings reveal novel features of cortical microcircuitry supporting performance monitoring and confirm one cortical source of the ERN.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Sebastian Schindler; Maximilian Bruchmann; Florian Bublatzky; Thomas Straube Modulation of face- and emotion-selective ERPs by the three most common types of face image manipulations Journal Article Social Cognitive and Affective Neuroscience, 14 (5), pp. 493–503, 2019. @article{Schindler2019, title = {Modulation of face- and emotion-selective ERPs by the three most common types of face image manipulations}, author = {Sebastian Schindler and Maximilian Bruchmann and Florian Bublatzky and Thomas Straube}, doi = {10.1093/scan/nsz027}, year = {2019}, date = {2019-01-01}, journal = {Social Cognitive and Affective Neuroscience}, volume = {14}, number = {5}, pages = {493--503}, abstract = {In neuroscientific studies, the naturalness of face presentation differs; a third of published studies makes use of close-up full coloured faces, a third uses close-up grey-scaled faces and another third employs cutout grey-scaled faces. Whether and how these methodological choices affect emotion-sensitive components of the event-related brain potentials (ERPs) is yet unclear. Therefore, this pre-registered study examined ERP modulations to close-up full-coloured and grey-scaled faces as well as cutout fearful and neutral facial expressions, while attention was directed to no-face oddballs. Results revealed no interaction of face naturalness and emotion for any ERP component, but showed, however, large main effects for both factors. Specifically, fearful faces and decreasing face naturalness elicited substantially enlarged N170 and early posterior negativity amplitudes and lower face naturalness also resulted in a larger P1. This pattern reversed for the LPP, showing linear increases in LPP amplitudes with increasing naturalness. We observed no interaction of emotion with face naturalness, which suggests that face naturalness and emotion are decoded in parallel at these early stages.
Researchers interested in strong modulations of early components should make use of cutout grey-scaled faces, while those interested in a pronounced late positivity should use close-up coloured faces.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Shirin Vafaei Shooshtari; Jamal Esmaily Sadrabadi; Zahra Azizi; Reza Ebrahimpour Confidence representation of perceptual decision by EEG and eye data in a random dot motion task Journal Article Neuroscience, 406 , pp. 510–527, 2019. @article{Shooshtari2019, title = {Confidence representation of perceptual decision by EEG and eye data in a random dot motion task}, author = {Shirin Vafaei Shooshtari and Jamal Esmaily Sadrabadi and Zahra Azizi and Reza Ebrahimpour}, doi = {10.1016/j.neuroscience.2019.03.031}, year = {2019}, date = {2019-01-01}, journal = {Neuroscience}, volume = {406}, pages = {510--527}, publisher = {IBRO}, abstract = {The Confidence of a decision could be considered as the internal estimate of decision accuracy. This variable has been studied extensively by different types of recording data such as behavioral, electroencephalography (EEG), eye and electrophysiology data. Although the value of the reported confidence is considered as one of the most important parameters in decision making, the confidence reporting phase might be considered as a restrictive element in investigating the decision process. Thus, decision confidence should be extracted by means of other provided types of information. Here, we proposed eight confidence related properties in EEG and eye data which are significantly descriptive of the defined confidence levels in a random dot motion (RDM) task. As a matter of fact, our proposed EEG and eye data properties are capable of recognizing more than nine distinct levels of confidence. Among our proposed features, the latency of the pupil maximum diameter through the stimulus presentation was established to be the most associated one to the confidence levels. 
Through the time-dependent analysis of these features, we recognized the time interval of 500–600 ms after the stimulus onset as an important time in correlating features to the confidence levels.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Lisa Stacchi; Meike Ramon; Junpeng Lao; Roberto Caldara Neural representations of faces are tuned to eye movements Journal Article The Journal of Neuroscience, 39 (21), pp. 4113–4123, 2019. @article{Stacchi2019, title = {Neural representations of faces are tuned to eye movements}, author = {Lisa Stacchi and Meike Ramon and Junpeng Lao and Roberto Caldara}, doi = {10.1523/JNEUROSCI.2968-18.2019}, year = {2019}, date = {2019-01-01}, journal = {The Journal of Neuroscience}, volume = {39}, number = {21}, pages = {4113--4123}, abstract = {Eye movements provide a functional signature of how human vision is achieved. Many recent studies have consistently reported robust idiosyncratic visual sampling strategies during face recognition. Whether these interindividual differences are mirrored by idiosyncratic neural responses remains unknown. To this aim, we first tracked eye movements of male and female observers during face recognition. Additionally, for every observer we obtained an objective index of neural face discrimination through EEG that was recorded while they fixated different facial information. We found that foveation of facial features fixated longer during face recognition elicited stronger neural face discrimination responses across all observers. This relationship occurred independently of interindividual differences in preferential facial information sampling (e.g., eye vs mouth lookers), and started as early as the first fixation. Our data show that eye movements play a functional role during face processing by providing the neural system with the information that is diagnostic to a specific observer. The effective processing of identity involves idiosyncratic, rather than universal face representations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
David W Sutterer; Joshua J Foster; Kirsten C S Adam; Edward K Vogel; Edward Awh Item-specific delay activity demonstrates concurrent storage of multiple active neural representations in working memory Journal Article PLoS Biology, 17 (4), pp. 1–25, 2019. @article{Sutterer2019, title = {Item-specific delay activity demonstrates concurrent storage of multiple active neural representations in working memory}, author = {David W Sutterer and Joshua J Foster and Kirsten C S Adam and Edward K Vogel and Edward Awh}, doi = {10.1371/journal.pbio.3000239}, year = {2019}, date = {2019-01-01}, journal = {PLoS Biology}, volume = {17}, number = {4}, pages = {1--25}, abstract = {Persistent neural activity that encodes online mental representations plays a central role in working memory (WM). However, there has been debate regarding the number of items that can be concurrently represented in this active neural state, which is often called the “focus of attention.” Some models propose a strict single-item limit, such that just 1 item can be neurally active at once while other items are relegated to an activity-silent state. Although past studies have decoded multiple items stored in WM, these studies cannot rule out a switching account in which only a single item is actively represented at a time. Here, we directly tested whether multiple representations can be held concurrently in an active state. We tracked spatial representations in WM using alpha-band (8–12 Hz) activity, which encodes spatial positions held in WM. Human observers remembered 1 or 2 positions over a short delay while we recorded electroencephalography (EEG) data. Using a spatial encoding model, we reconstructed active stimulus-specific representations (channel-tuning functions [CTFs]) from the scalp distribution of alpha-band power. Consistent with past work, we found that the selectivity of spatial CTFs was lower when 2 items were stored than when 1 item was stored. 
Critically, data-driven simulations revealed that the selectivity of spatial representations in the two-item condition could not be explained by models that propose that only a single item can exist in an active state at once. Thus, our findings demonstrate that multiple items can be concurrently represented in an active neural state.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
David W Sutterer; Joshua J Foster; John T Serences; Edward K Vogel; Edward Awh Alpha-band oscillations track the retrieval of precise spatial representations from long-term memory Journal Article Journal of Neurophysiology, 122 (2), pp. 539–551, 2019. @article{Sutterer2019a, title = {Alpha-band oscillations track the retrieval of precise spatial representations from long-term memory}, author = {David W Sutterer and Joshua J Foster and John T Serences and Edward K Vogel and Edward Awh}, doi = {10.1152/jn.00268.2019}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neurophysiology}, volume = {122}, number = {2}, pages = {539--551}, abstract = {A hallmark of episodic memory is the phenomenon of mentally reexperiencing the details of past events, and a well-established concept is that the neuronal activity that mediates encoding is reinstated at retrieval. Evidence for reinstatement has come from multiple modalities, including functional magnetic resonance imaging and electroencephalography (EEG). These EEG studies have shed light on the time course of reinstatement but have been limited to distinguishing between a few categories. The goal of this work was to use recently developed experimental and technical approaches, namely continuous report tasks and inverted encoding models, to determine which frequencies of oscillatory brain activity support the retrieval of precise spatial memories. In experiment 1, we establish that an inverted encoding model applied to multivariate alpha topography tracks the retrieval of precise spatial memories. In experiment 2, we demonstrate that the frequencies and patterns of multivariate activity at study are similar to the frequencies and patterns observed during retrieval. These findings highlight the broad potential for using encoding models to characterize long-term memory retrieval. 
NEW & NOTEWORTHY Previous EEG work has shown that category-level information observed during encoding is recapitulated during memory retrieval, but studies with this time-resolved method have not demonstrated the reinstatement of feature-specific patterns of neural activity during retrieval. Here we show that EEG alpha-band activity tracks the retrieval of spatial representations from long-term memory. Moreover, we find considerable overlap between the frequencies and patterns of activity that track spatial memories during initial study and at retrieval.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Yuta Suzuki; Tetsuto Minami; Shigeki Nakauchi Pupil constriction in the glare illusion modulates the steady-state visual evoked potentials Journal Article Neuroscience, 416 , pp. 221–228, 2019. @article{Suzuki2019, title = {Pupil constriction in the glare illusion modulates the steady-state visual evoked potentials}, author = {Yuta Suzuki and Tetsuto Minami and Shigeki Nakauchi}, doi = {10.1016/j.neuroscience.2019.08.003}, year = {2019}, date = {2019-01-01}, journal = {Neuroscience}, volume = {416}, pages = {221--228}, publisher = {The Author(s)}, abstract = {The glare illusion enhances the perceived brightness of a central white area surrounded by a luminance gradient, without any actual change in light intensity. In this study, we measured the varied brightness and neurophysiological responses of electroencephalography (EEG) and pupil size with the several luminance contrast patterns of the glare illusion to address the question of whether the illusory brightness changes to the glare illusion process in the early visual cortex. We hypothesized that if the illusory brightness enhancement was created in the early stages of visual processing, the neural response would be similar to how it processes an actual change in light intensity. To test this, we observed the sustained visual cortical response of steady-state visual evoked potentials (SSVEPs), while participants watched flickering dots displayed in the central white area of both the varied luminance contrast of glare illusion and a control stimulus (no glare condition). We found the SSVEP amplitude was lower in the glare illusion than in the control condition, especially under high luminance contrast conditions. Furthermore, we found the probable mechanisms of the inhibited SSVEP amplitude to the high luminance contrast of glare illusion based on the greater pupil constriction, thereby decreasing the amount of light entering the pupil. 
Thus, the brightness enhancement in the glare illusion is already represented at the primary stage of visual processing linked to the larger pupil constriction.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Pei Yi Tsai; Hsiao-Ching She; Sheng Chang Chen; Li Yu Huang; Wen Chi Chou; Jeng Ren Duann; Tzyy-Ping Jung Eye fixation-related fronto-parietal neural network correlates of memory retrieval Journal Article International Journal of Psychophysiology, 138 , pp. 57–70, 2019. @article{Tsai2019, title = {Eye fixation-related fronto-parietal neural network correlates of memory retrieval}, author = {Pei Yi Tsai and Hsiao-Ching She and Sheng Chang Chen and Li Yu Huang and Wen Chi Chou and Jeng Ren Duann and Tzyy-Ping Jung}, doi = {10.1016/j.ijpsycho.2019.02.008}, year = {2019}, date = {2019-01-01}, journal = {International Journal of Psychophysiology}, volume = {138}, pages = {57--70}, publisher = {Elsevier}, abstract = {Eye movements are considered to be informative with regard to the underlying cognitive processes of human beings. Previous studies have reported that eye movements are associated with which scientific concepts are retrieved correctly. Moreover, other studies have also suggested that eye movements involve the cooperative activity of the human brain's fronto-parietal circuits. Less research has been conducted to investigate whether fronto-parietal EEG oscillations are associated with the retrieval processing of scientific concepts. Our findings in this study demonstrated that the fronto-parietal network is indeed crucial for successful memory retrieval. In short, significantly lower theta augmentation in the frontal midline and lower alpha suppression in the right parietal region were observed at the 5th eye fixation for physics concepts that were correctly retrieved than for those that were incorrectly retrieved. Moreover, the visual cortex in the occipital lobe exhibits a significantly greater theta augmentation followed by an alpha suppression following each eye fixation, while a right fronto-parietal asymmetry was also found for the successful retrieval of presentations of physics concepts. 
In particular, the study results showed that eye fixation-related frontal midline theta power and right parietal alpha power at the 5th eye fixation have the greatest predictive power regarding the correctness of the retrieval of physics concepts.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Raphael Vallat; David Meunier; Alain Nicolas; Perrine Ruby Hard to wake up? The cerebral correlates of sleep inertia assessed using combined behavioral, EEG and fMRI measures Journal Article NeuroImage, 184 , pp. 266–278, 2019. @article{Vallat2019, title = {Hard to wake up? The cerebral correlates of sleep inertia assessed using combined behavioral, EEG and fMRI measures}, author = {Raphael Vallat and David Meunier and Alain Nicolas and Perrine Ruby}, doi = {10.1016/j.neuroimage.2018.09.033}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {184}, pages = {266--278}, publisher = {Elsevier Ltd}, abstract = {The first minutes following awakening from sleep are typically marked by reduced vigilance, increased sleepiness and impaired performance, a state referred to as sleep inertia. Although the behavioral aspects of sleep inertia are well documented, its cerebral correlates remain poorly understood. The present study aimed at filling this gap by measuring in 34 participants the changes in behavioral performance (descending subtraction task, DST), EEG spectral power, and resting-state fMRI functional connectivity across three time points: before an early-afternoon 45-min nap, 5 min after awakening from the nap and 25 min after awakening. Our results showed impaired performance at the DST at awakening and an intrusion of sleep-specific features (spectral power and functional connectivity) into wakefulness brain activity, the intensity of which was dependent on the prior sleep duration and depth for the functional connectivity (14 participants awakened from N2 sleep, 20 from N3 sleep). Awakening in N3 (deep) sleep induced the most robust changes and was characterized by a global loss of brain functional segregation between task-positive (dorsal attention, salience, sensorimotor) and task-negative (default mode) networks. 
Significant correlations were observed notably between the EEG delta power and the functional connectivity between the default and dorsal attention networks, as well as between the percentage of mistake at the DST and the default network functional connectivity. These results highlight (1) significant correlations between EEG and fMRI functional connectivity measures, (2) significant correlations between the behavioral aspect of sleep inertia and measures of the cerebral functioning at awakening (both EEG and fMRI), and (3) the important difference in the cerebral underpinnings of sleep inertia at awakening from N2 and N3 sleep.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Joram van Driel; Eduard Ort; Johannes J Fahrenfort; Christian N L Olivers Beta and theta oscillations differentially support free versus forced control over multiple-target search Journal Article The Journal of Neuroscience, 39 (9), pp. 1733–1743, 2019. @article{Driel2019, title = {Beta and theta oscillations differentially support free versus forced control over multiple-target search}, author = {Joram van Driel and Eduard Ort and Johannes J Fahrenfort and Christian N L Olivers}, doi = {10.1523/JNEUROSCI.2547-18.2018}, year = {2019}, date = {2019-01-01}, journal = {The Journal of Neuroscience}, volume = {39}, number = {9}, pages = {1733--1743}, abstract = {Many important situations require human observers to simultaneously search for more than one object. Despite a long history of research into visual search, the behavioral and neural mechanisms associated with multiple-target search are poorly understood. Here we test the novel theory that the efficiency of looking for multiple targets critically depends on the mode of cognitive control the environment affords to the observer. We used an innovative combination of electroencephalogram (EEG) and eye tracking while participants searched for two targets, within two different contexts: either both targets were present in the search display and observers were free to prioritize either one of them, thus enabling proactive control over selection; or only one of the two targets would be present in each search display, which requires reactive control to reconfigure selection when the wrong target has been prioritized. During proactive control, both univariate and multivariate signals of beta-band (15–35 Hz) power suppression before display onset predicted switches between target selections. This signal originated over midfrontal and sensorimotor regions and has previously been associated with endogenous state changes. 
In contrast, imposed target selections requiring reactive control elicited prefrontal power enhancements in the delta/theta band (2–8 Hz), but only after display onset. This signal predicted individual differences in associated oculomotor switch costs, reflecting reactive reconfiguration of target selection. The results provide compelling evidence that multiple target representations are differentially prioritized during visual search, and for the first time reveal distinct neural mechanisms underlying proactive and reactive control over multiple-target search.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Marine Vernet; Chloé Stengel; Romain Quentin; Julià L Amengual; Antoni Valero-Cabré Entrainment of local synchrony reveals a causal role for high-beta right frontal oscillations in human visual consciousness Journal Article Scientific Reports, 9 , pp. 1–15, 2019. @article{Vernet2019, title = {Entrainment of local synchrony reveals a causal role for high-beta right frontal oscillations in human visual consciousness}, author = {Marine Vernet and Chloé Stengel and Romain Quentin and Juli{à} L Amengual and Antoni Valero-Cabré}, doi = {10.1038/s41598-019-49673-1}, year = {2019}, date = {2019-01-01}, journal = {Scientific Reports}, volume = {9}, pages = {1--15}, abstract = {Prior evidence supports a critical role of oscillatory activity in visual cognition, but are cerebral oscillations simply correlated or causally linked to our ability to consciously acknowledge the presence of a target in our visual field? Here, EEG signals were recorded on humans performing a visual detection task, while they received brief patterns of rhythmic or random transcranial magnetic stimulation (TMS) delivered to the right Frontal Eye Field (FEF) prior to the onset of a lateralized target. TMS entrained oscillations, i.e., increased high-beta power and phase alignment (the latter to a higher extent for rhythmic high-beta patterns than random patterns) while also boosting visual detection sensitivity. Considering post-hoc only those participants in which rhythmic stimulation enhanced visual detection, the magnitude of high-beta entrainment correlated with left visual performance increases. Our study provides evidence in favor of a causal link between high-beta oscillatory activity in the Frontal Eye Field and visual detection. 
Furthermore, it supports future applications of brain stimulation to manipulate local synchrony and improve or restore impaired visual behaviors.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Prior evidence supports a critical role of oscillatory activity in visual cognition, but are cerebral oscillations simply correlated or causally linked to our ability to consciously acknowledge the presence of a target in our visual field? Here, EEG signals were recorded on humans performing a visual detection task, while they received brief patterns of rhythmic or random transcranial magnetic stimulation (TMS) delivered to the right Frontal Eye Field (FEF) prior to the onset of a lateralized target. TMS entrained oscillations, i.e., increased high-beta power and phase alignment (the latter to a higher extent for rhythmic high-beta patterns than random patterns) while also boosting visual detection sensitivity. Considering post-hoc only those participants in which rhythmic stimulation enhanced visual detection, the magnitude of high-beta entrainment correlated with left visual performance increases. Our study provides evidence in favor of a causal link between high-beta oscillatory activity in the Frontal Eye Field and visual detection. Furthermore, it supports future applications of brain stimulation to manipulate local synchrony and improve or restore impaired visual behaviors. |
Leonhard Waschke; Sarah Tune; Jonas Obleser Local cortical desynchronization and pupil-linked arousal differentially shape brain states for optimal sensory performance Journal Article eLife, 8 , pp. 1–27, 2019. @article{Waschke2019, title = {Local cortical desynchronization and pupil-linked arousal differentially shape brain states for optimal sensory performance}, author = {Leonhard Waschke and Sarah Tune and Jonas Obleser}, doi = {10.7554/eLife.51501}, year = {2019}, date = {2019-01-01}, journal = {eLife}, volume = {8}, pages = {1--27}, abstract = {Instantaneous brain states have consequences for our sensation, perception, and behaviour. Fluctuations in arousal and neural desynchronization likely pose perceptually relevant states. However, their relationship and their relative impact on perception are unclear. We here show that, at the single-trial level in humans, local desynchronization in sensory cortex (expressed as time-series entropy) versus pupil-linked arousal differentially impact perceptual processing. While we recorded electroencephalography (EEG) and pupillometry data, stimuli of a demanding auditory discrimination task were presented into states of high or low desynchronization of auditory cortex via a real-time closed-loop setup. Desynchronization and arousal distinctly influenced stimulus-evoked activity and shaped behaviour displaying an inverted u-shaped relationship: States of intermediate desynchronization elicited minimal response bias and fastest responses, while states of intermediate arousal gave rise to highest response sensitivity. Our results speak to a model in which independent states of local desynchronization and global arousal jointly optimise sensory processing and performance.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Instantaneous brain states have consequences for our sensation, perception, and behaviour. Fluctuations in arousal and neural desynchronization likely pose perceptually relevant states. 
However, their relationship and their relative impact on perception are unclear. We here show that, at the single-trial level in humans, local desynchronization in sensory cortex (expressed as time-series entropy) versus pupil-linked arousal differentially impact perceptual processing. While we recorded electroencephalography (EEG) and pupillometry data, stimuli of a demanding auditory discrimination task were presented into states of high or low desynchronization of auditory cortex via a real-time closed-loop setup. Desynchronization and arousal distinctly influenced stimulus-evoked activity and shaped behaviour displaying an inverted u-shaped relationship: States of intermediate desynchronization elicited minimal response bias and fastest responses, while states of intermediate arousal gave rise to highest response sensitivity. Our results speak to a model in which independent states of local desynchronization and global arousal jointly optimise sensory processing and performance. |
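Waschke et al. express local desynchronization as time-series entropy of the EEG. As a rough illustration of that idea only (the function and parameters below are my own; the paper's actual entropy measure and closed-loop pipeline are not reproduced here), normalised permutation entropy separates a regular signal from an irregular one:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of a 1-D signal (0 = regular, 1 = random).

    One common way to quantify time-series irregularity; the paper's exact
    entropy measure may differ."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # Count how often each ordinal pattern of `order` samples occurs
        motif = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[motif] = counts.get(motif, 0) + 1
    probs = np.array(list(counts.values())) / n
    return float(-np.sum(probs * np.log(probs)) / log(factorial(order)))

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 1000)
pe_regular = permutation_entropy(np.sin(2 * np.pi * t))     # predictable rhythm
pe_random = permutation_entropy(rng.standard_normal(1000))  # irregular signal
print(pe_regular < pe_random)
```

A desynchronized, irregular trace yields entropy near 1, a rhythmic trace much lower, which is the kind of single-trial state variable such a closed-loop setup could track.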
Chad C Williams; Mitchel Kappen; Cameron D Hassall; Bruce Wright; Olave E Krigolson Thinking theta and alpha: Mechanisms of intuitive and analytical reasoning Journal Article NeuroImage, 189 , pp. 574–580, 2019. @article{Williams2019, title = {Thinking theta and alpha: Mechanisms of intuitive and analytical reasoning}, author = {Chad C Williams and Mitchel Kappen and Cameron D Hassall and Bruce Wright and Olave E Krigolson}, doi = {10.1016/j.neuroimage.2019.01.048}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {189}, pages = {574--580}, publisher = {Elsevier Ltd}, abstract = {Humans have a unique ability to engage in different modes of thinking. Intuitive thinking (coined System 1, see Kahneman, 2011) is fast, automatic, and effortless whereas analytical thinking (coined System 2) is slow, contemplative, and effortful. We extend seminal pupillometry research examining these modes of thinking by using electroencephalography (EEG) to decipher their respective underlying neural mechanisms. We demonstrate that System 1 thinking is characterized by an increase in parietal alpha EEG power reflecting autonomic access to long-term memory and a release of attentional resources whereas System 2 thinking is characterized by an increase in frontal theta EEG power indicative of the engagement of cognitive control and working memory processes. Consider our results in terms of an example - a child may need cognitive control and working memory when contemplating a mathematics problem yet an adult can drive a car with little to no attention by drawing on easily accessed memories. Importantly, the unravelling of intuitive and analytical thinking mechanisms and their neural signatures will provide insight as to how different modes of thinking drive our everyday lives.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Humans have a unique ability to engage in different modes of thinking. 
Intuitive thinking (coined System 1, see Kahneman, 2011) is fast, automatic, and effortless whereas analytical thinking (coined System 2) is slow, contemplative, and effortful. We extend seminal pupillometry research examining these modes of thinking by using electroencephalography (EEG) to decipher their respective underlying neural mechanisms. We demonstrate that System 1 thinking is characterized by an increase in parietal alpha EEG power reflecting autonomic access to long-term memory and a release of attentional resources whereas System 2 thinking is characterized by an increase in frontal theta EEG power indicative of the engagement of cognitive control and working memory processes. Consider our results in terms of an example - a child may need cognitive control and working memory when contemplating a mathematics problem yet an adult can drive a car with little to no attention by drawing on easily accessed memories. Importantly, the unravelling of intuitive and analytical thinking mechanisms and their neural signatures will provide insight as to how different modes of thinking drive our everyday lives. |
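Williams et al. index the two modes of thinking by frontal theta and parietal alpha EEG power. A minimal sketch of how band power can be estimated from a single simulated channel with Welch's method (illustrative only; the band edges, sampling rate, and function name are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Mean power spectral density within a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    lo, hi = band
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

# Simulated 10 s single-channel "EEG" at 250 Hz: a 10 Hz alpha rhythm + noise
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

theta = band_power(eeg, fs, (4, 8))    # theta band (System 2 marker)
alpha = band_power(eeg, fs, (8, 13))   # alpha band (System 1 marker)
print(alpha > theta)
```

With the simulated 10 Hz rhythm, alpha-band power dominates theta-band power, which is the kind of contrast the study tracks across reasoning conditions.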
Jing Zhu; Ying Wang; Rong La; Jiawei Zhan; Junhong Niu; Shuai Zeng; Xiping Hu Multimodal mild depression recognition based on EEG-EM synchronization acquisition network Journal Article IEEE Access, 7 , pp. 28196–28210, 2019. @article{Zhu2019b, title = {Multimodal mild depression recognition based on EEG-EM synchronization acquisition network}, author = {Jing Zhu and Ying Wang and Rong La and Jiawei Zhan and Junhong Niu and Shuai Zeng and Xiping Hu}, doi = {10.1109/ACCESS.2019.2901950}, year = {2019}, date = {2019-01-01}, journal = {IEEE Access}, volume = {7}, pages = {28196--28210}, publisher = {IEEE}, abstract = {In this paper, we used an electroencephalography (EEG)-eye movement (EM) synchronization acquisition network to simultaneously record both EEG and EM physiological signals of mild depression patients and normal controls during free viewing. Then, we consider a multimodal feature fusion method that can best discriminate between mild depression and normal control subjects as a step toward achieving our long-term aim of developing an objective and effective multimodal system that assists doctors during diagnosis and monitoring of mild depression. Based on the multimodal denoising autoencoder, we use two feature fusion strategies (feature fusion and hidden layer fusion) for fusion of the EEG and EM signals to improve the recognition performance of classifiers for mild depression. Our experimental results indicate that the EEG-EM synchronization acquisition network ensures that the recorded EM and EEG data streams are synchronized with millisecond precision, and both fusion methods can improve the mild depression recognition accuracy, thus demonstrating the complementary nature of the modalities. 
Compared with the unimodal classification approach that uses only EEG or EM, the feature fusion method slightly improved the recognition accuracy by 1.88%, while the hidden layer fusion method significantly improved the classification rate by up to 7.36%. In particular, the highest classification accuracy achieved in this paper was 83.42%. These results indicate that the multimodal deep learning approaches with input data using a combination of EEG and EM signals are promising in achieving real-time monitoring and identification of mild depression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In this paper, we used an electroencephalography (EEG)-eye movement (EM) synchronization acquisition network to simultaneously record both EEG and EM physiological signals of mild depression patients and normal controls during free viewing. Then, we consider a multimodal feature fusion method that can best discriminate between mild depression and normal control subjects as a step toward achieving our long-term aim of developing an objective and effective multimodal system that assists doctors during diagnosis and monitoring of mild depression. Based on the multimodal denoising autoencoder, we use two feature fusion strategies (feature fusion and hidden layer fusion) for fusion of the EEG and EM signals to improve the recognition performance of classifiers for mild depression. Our experimental results indicate that the EEG-EM synchronization acquisition network ensures that the recorded EM and EEG data streams are synchronized with millisecond precision, and both fusion methods can improve the mild depression recognition accuracy, thus demonstrating the complementary nature of the modalities. 
Compared with the unimodal classification approach that uses only EEG or EM, the feature fusion method slightly improved the recognition accuracy by 1.88%, while the hidden layer fusion method significantly improved the classification rate by up to 7.36%. In particular, the highest classification accuracy achieved in this paper was 83.42%. These results indicate that the multimodal deep learning approaches with input data using a combination of EEG and EM signals are promising in achieving real-time monitoring and identification of mild depression. |
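Zhu et al. compare feature-level fusion against hidden-layer fusion in a multimodal denoising autoencoder. The simpler of the two, feature-level fusion, amounts to normalising each modality's per-trial feature vector and concatenating them; a minimal sketch under invented, illustrative feature dimensions (the paper's actual features and autoencoder are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-trial features (dimensions are illustrative, not the
# paper's): 16 EEG features and 8 eye-movement features per trial
n_trials = 40
eeg_feats = rng.standard_normal((n_trials, 16))
em_feats = rng.standard_normal((n_trials, 8))

def zscore(x):
    # Normalise each feature so neither modality dominates the fused vector
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Feature-level fusion: z-score each modality, then concatenate per trial
fused = np.concatenate([zscore(eeg_feats), zscore(em_feats)], axis=1)
print(fused.shape)
```

Hidden-layer fusion instead trains a separate encoder per modality and merges their hidden representations, which is where the paper reports the larger accuracy gain.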
Nuno Alexandre De Sá Teixeira; Gianfranco Bosco; Sergio Delle Monache; Francesco Lacquaniti The role of cortical areas hMT/V5+ and TPJ on the magnitude of representational momentum and representational gravity: A transcranial magnetic stimulation study Journal Article Experimental Brain Research, 237 (12), pp. 3375–3390, 2019. @article{DeSaTeixeira2019, title = {The role of cortical areas hMT/V5+ and TPJ on the magnitude of representational momentum and representational gravity: A transcranial magnetic stimulation study}, author = {Nuno Alexandre {De Sá Teixeira} and Gianfranco Bosco and Sergio {Delle Monache} and Francesco Lacquaniti}, doi = {10.1007/s00221-019-05683-z}, year = {2019}, date = {2019-01-01}, journal = {Experimental Brain Research}, volume = {237}, number = {12}, pages = {3375--3390}, publisher = {Springer Berlin Heidelberg}, abstract = {The perceived vanishing location of a moving target is systematically displaced forward, in the direction of motion—representational momentum—, and downward, in the direction of gravity—representational gravity. Despite a wealth of research on the factors that modulate these phenomena, little is known regarding their neurophysiological substrates. The present experiment aims to explore which role is played by cortical areas hMT/V5+, linked to the processing of visual motion, and TPJ, thought to support the functioning of an internal model of gravity, in modulating both effects. Participants were required to perform a standard spatial localization task while the activity of the right hMT/V5+ or TPJ sites was selectively disrupted with an offline continuous theta-burst stimulation (cTBS) protocol, interspersed with control blocks with no stimulation. Eye movements were recorded during all spatial localizations. Results revealed an increase in representational gravity contingent on the disruption of the activity of hMT/V5+ and, conversely, some evidence suggested a bigger representational momentum when TPJ was stimulated. Furthermore, stimulation of hMT/V5+ led to a decreased ocular overshoot and to a time-dependent downward drift of gaze location. 
These outcomes suggest that a reciprocal balance between perceived kinematics and anticipated dynamics might modulate these spatial localization responses, compatible with a push–pull mechanism.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The perceived vanishing location of a moving target is systematically displaced forward, in the direction of motion—representational momentum—, and downward, in the direction of gravity—representational gravity. Despite a wealth of research on the factors that modulate these phenomena, little is known regarding their neurophysiological substrates. The present experiment aims to explore which role is played by cortical areas hMT/V5+, linked to the processing of visual motion, and TPJ, thought to support the functioning of an internal model of gravity, in modulating both effects. Participants were required to perform a standard spatial localization task while the activity of the right hMT/V5+ or TPJ sites was selectively disrupted with an offline continuous theta-burst stimulation (cTBS) protocol, interspersed with control blocks with no stimulation. Eye movements were recorded during all spatial localizations. Results revealed an increase in representational gravity contingent on the disruption of the activity of hMT/V5+ and, conversely, some evidence suggested a bigger representational momentum when TPJ was stimulated. Furthermore, stimulation of hMT/V5+ led to a decreased ocular overshoot and to a time-dependent downward drift of gaze location. These outcomes suggest that a reciprocal balance between perceived kinematics and anticipated dynamics might modulate these spatial localization responses, compatible with a push–pull mechanism. |
Luis Aguado; Karisa B Parkington; Teresa Dieguez-Risco; José A Hinojosa; Roxane J Itier Joint modulation of facial expression processing by contextual congruency and task demands Journal Article Brain Sciences, 9 , pp. 1–20, 2019. @article{Aguado2019, title = {Joint modulation of facial expression processing by contextual congruency and task demands}, author = {Luis Aguado and Karisa B Parkington and Teresa Dieguez-Risco and José A Hinojosa and Roxane J Itier}, doi = {10.3390/brainsci9050116}, year = {2019}, date = {2019-01-01}, journal = {Brain Sciences}, volume = {9}, pages = {1--20}, abstract = {Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. 
These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions. |
Máté Aller; Uta Noppeney To integrate or not to integrate: Temporal dynamics of hierarchical Bayesian causal inference Journal Article PLoS Biology, 17 (4), pp. 1–31, 2019. @article{Aller2019, title = {To integrate or not to integrate: Temporal dynamics of hierarchical Bayesian causal inference}, author = {Máté Aller and Uta Noppeney}, doi = {10.1371/journal.pbio.3000210}, year = {2019}, date = {2019-01-01}, journal = {PLoS Biology}, volume = {17}, number = {4}, pages = {1--31}, abstract = {To form a percept of the environment, the brain needs to solve the binding problem—inferring whether signals come from a common cause and are integrated or come from independent causes and are segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian causal inference; but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task, we show that the brain accomplishes Bayesian causal inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals weighted by their bottom-up sensory reliabilities and top-down task relevance into spatial priority maps that guide behavioural responses. As predicted by Bayesian causal inference, these spatial priority maps take into account the brain's uncertainty about the world's causal structure and flexibly arbitrate between sensory integration and segregation. 
The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian causal inference, a statistical computation, which is crucial for effective interactions with the environment.}, keywords = {}, pubstate = {published}, tppubtype = {article} } To form a percept of the environment, the brain needs to solve the binding problem—inferring whether signals come from a common cause and are integrated or come from independent causes and are segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian causal inference; but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task, we show that the brain accomplishes Bayesian causal inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals weighted by their bottom-up sensory reliabilities and top-down task relevance into spatial priority maps that guide behavioural responses. As predicted by Bayesian causal inference, these spatial priority maps take into account the brain's uncertainty about the world's causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian causal inference, a statistical computation, which is crucial for effective interactions with the environment. |
Roy Amit; Dekel Abeles; Marisa Carrasco; Shlomit Yuval-Greenberg Oculomotor inhibition reflects temporal expectations Journal Article NeuroImage, 184 , pp. 279–292, 2019. @article{Amit2019a, title = {Oculomotor inhibition reflects temporal expectations}, author = {Roy Amit and Dekel Abeles and Marisa Carrasco and Shlomit Yuval-Greenberg}, doi = {10.1016/j.neuroimage.2018.09.026}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {184}, pages = {279--292}, abstract = {The accurate extraction of signals out of noisy environments is a major challenge of the perceptual system. Forming temporal expectations and continuously matching them with perceptual input can facilitate this process. In humans, temporal expectations are typically assessed using behavioral measures, which provide only retrospective but no real-time estimates during target anticipation, or by using electrophysiological measures, which require extensive preprocessing and are difficult to interpret. Here we show a new correlate of temporal expectations based on oculomotor behavior. Observers performed an orientation-discrimination task on a central grating target, while their gaze position and EEG were monitored. In each trial, a cue preceded the target by a varying interval (“foreperiod”). In separate blocks, the cue was either predictive or non-predictive regarding the timing of the target. Results showed that saccades and blinks were inhibited more prior to an anticipated regular target than a less-anticipated irregular one. This consistent oculomotor inhibition effect enabled a trial-by-trial classification according to interval-regularity. Additionally, in the regular condition the slope of saccade-rate and drift were shallower for longer than shorter foreperiods, indicating their adjustment according to temporal expectations. Comparing the sensitivity of this oculomotor marker with those of other common predictability markers (e.g. 
alpha-suppression) showed that it is a sensitive marker for cue-related anticipation. In contrast, temporal changes in conditional probabilities (hazard-rate) modulated alpha-suppression more than cue-related anticipation. We conclude that pre-target oculomotor inhibition is a correlate of temporal predictions induced by cue-target associations, whereas alpha-suppression is more sensitive to conditional probabilities across time.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The accurate extraction of signals out of noisy environments is a major challenge of the perceptual system. Forming temporal expectations and continuously matching them with perceptual input can facilitate this process. In humans, temporal expectations are typically assessed using behavioral measures, which provide only retrospective but no real-time estimates during target anticipation, or by using electrophysiological measures, which require extensive preprocessing and are difficult to interpret. Here we show a new correlate of temporal expectations based on oculomotor behavior. Observers performed an orientation-discrimination task on a central grating target, while their gaze position and EEG were monitored. In each trial, a cue preceded the target by a varying interval (“foreperiod”). In separate blocks, the cue was either predictive or non-predictive regarding the timing of the target. Results showed that saccades and blinks were inhibited more prior to an anticipated regular target than a less-anticipated irregular one. This consistent oculomotor inhibition effect enabled a trial-by-trial classification according to interval-regularity. Additionally, in the regular condition the slope of saccade-rate and drift were shallower for longer than shorter foreperiods, indicating their adjustment according to temporal expectations. Comparing the sensitivity of this oculomotor marker with those of other common predictability markers (e.g. 
alpha-suppression) showed that it is a sensitive marker for cue-related anticipation. In contrast, temporal changes in conditional probabilities (hazard-rate) modulated alpha-suppression more than cue-related anticipation. We conclude that pre-target oculomotor inhibition is a correlate of temporal predictions induced by cue-target associations, whereas alpha-suppression is more sensitive to conditional probabilities across time. |
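Amit et al.'s oculomotor marker is a drop in saccade (and blink) rate just before an anticipated target. A toy sketch of the underlying measure, pre-target saccade rate per trial (the window length and trial data below are invented for illustration, not the paper's values):

```python
import numpy as np

def pretarget_saccade_rate(saccade_times, window=(-0.5, 0.0)):
    """Saccades per second in a window relative to target onset (time 0).

    saccade_times: saccade onsets in seconds for one trial, negative values
    preceding the target. The 500 ms window is an arbitrary illustrative choice."""
    lo, hi = window
    n = np.sum((saccade_times >= lo) & (saccade_times < hi))
    return float(n / (hi - lo))

# Invented example trials
regular_trial = np.array([-1.8, -1.2, -0.9])          # saccades stop early
irregular_trial = np.array([-1.7, -1.1, -0.4, -0.1])  # saccades continue

r_reg = pretarget_saccade_rate(regular_trial)
r_irr = pretarget_saccade_rate(irregular_trial)
print(r_reg < r_irr)
```

A lower pre-target rate on temporally predictable trials is what enables the trial-by-trial classification by interval regularity that the abstract describes.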
Ayelet Arazi; Yaffa Yeshurun; Ilan Dinstein Neural variability is quenched by attention Journal Article The Journal of Neuroscience, 39 (30), pp. 5975–5985, 2019. @article{Arazi2019, title = {Neural variability is quenched by attention}, author = {Ayelet Arazi and Yaffa Yeshurun and Ilan Dinstein}, doi = {10.1523/JNEUROSCI.0355-19.2019}, year = {2019}, date = {2019-01-01}, journal = {The Journal of Neuroscience}, volume = {39}, number = {30}, pages = {5975--5985}, abstract = {Attention can be subdivided into several components, including alertness and spatial attention. It is believed that the behavioral benefits of attention, such as increased accuracy and faster reaction times, are generated by an increase in neural activity and a decrease in neural variability, which enhance the signal-to-noise ratio of task-relevant neural populations. However, empirical evidence regarding attention-related changes in neural variability in humans is extremely rare. Here we used EEG to demonstrate that trial-by-trial neural variability was reduced by visual cues that modulated alertness and spatial attention. Reductions in neural variability were specific to the visual system and larger in the contralateral hemisphere of the attended visual field. Subjects with higher initial levels of neural variability and larger decreases in variability exhibited greater behavioral benefits from attentional cues. These findings demonstrate that both alertness and spatial attention modulate neural variability and highlight the importance of reducing/quenching neural variability for attaining the behavioral benefits of attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Attention can be subdivided into several components, including alertness and spatial attention. 
It is believed that the behavioral benefits of attention, such as increased accuracy and faster reaction times, are generated by an increase in neural activity and a decrease in neural variability, which enhance the signal-to-noise ratio of task-relevant neural populations. However, empirical evidence regarding attention-related changes in neural variability in humans is extremely rare. Here we used EEG to demonstrate that trial-by-trial neural variability was reduced by visual cues that modulated alertness and spatial attention. Reductions in neural variability were specific to the visual system and larger in the contralateral hemisphere of the attended visual field. Subjects with higher initial levels of neural variability and larger decreases in variability exhibited greater behavioral benefits from attentional cues. These findings demonstrate that both alertness and spatial attention modulate neural variability and highlight the importance of reducing/quenching neural variability for attaining the behavioral benefits of attention. |
Ryszard Auksztulewicz; Nicholas E Myers; Jan W Schnupp; Anna C Nobre Rhythmic temporal expectation boosts neural activity by increasing neural gain Journal Article The Journal of Neuroscience, 39 (49), pp. 9806–9817, 2019. @article{Auksztulewicz2019, title = {Rhythmic temporal expectation boosts neural activity by increasing neural gain}, author = {Ryszard Auksztulewicz and Nicholas E Myers and Jan W Schnupp and Anna C Nobre}, doi = {10.1523/JNEUROSCI.0925-19.2019}, year = {2019}, date = {2019-01-01}, journal = {The Journal of Neuroscience}, volume = {39}, number = {49}, pages = {9806--9817}, abstract = {Temporal orienting improves sensory processing, akin to other top–down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top–down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. 
The effect of rhythmic expectation on distractor decoding was linked to gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Temporal orienting improves sensory processing, akin to other top–down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top–down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. The effect of rhythmic expectation on distractor decoding was linked to gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points. |
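Auksztulewicz et al. fit decoding-derived tuning curves with separate gain and sharpness parameters. A hedged sketch of that kind of fit on synthetic data, using an assumed Gaussian parameterisation (not the paper's exact model or data):

```python
import numpy as np
from scipy.optimize import curve_fit

def tuning_curve(x, gain, sharpness, baseline):
    """Gaussian channel response over feature distance x; this
    parameterisation is an assumption, not the paper's exact model."""
    return baseline + gain * np.exp(-0.5 * (x * sharpness) ** 2)

# Hypothetical decoding profile: response vs. distance from the presented
# auditory frequency, averaged over trials, with a little noise added
x = np.linspace(-2, 2, 9)
rng = np.random.default_rng(2)
observed = tuning_curve(x, gain=1.0, sharpness=1.5, baseline=0.1)
observed = observed + 0.02 * rng.standard_normal(x.size)

# Fit separate gain and sharpness parameters, mirroring the paper's logic
(gain, sharpness, baseline), _ = curve_fit(tuning_curve, x, observed, p0=[1.0, 1.0, 0.0])
print(f"gain={gain:.2f}, sharpness={abs(sharpness):.2f}")
```

Comparing the fitted gain across rhythmic and jittered conditions, while sharpness stays constant, is the signature the abstract attributes to increased neural sensitivity.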
Nathan Caruana; Genevieve McArthur The mind minds minds: The effect of intentional stance on the neural encoding of joint attention Journal Article Cognitive, Affective and Behavioral Neuroscience, 19 (6), pp. 1479–1491, 2019. @article{Caruana2019a, title = {The mind minds minds: The effect of intentional stance on the neural encoding of joint attention}, author = {Nathan Caruana and Genevieve McArthur}, doi = {10.3758/s13415-019-00734-y}, year = {2019}, date = {2019-01-01}, journal = {Cognitive, Affective and Behavioral Neuroscience}, volume = {19}, number = {6}, pages = {1479--1491}, publisher = {Cognitive, Affective, & Behavioral Neuroscience}, abstract = {Recent neuroimaging studies have observed that the neural processing of social cues from a virtual reality character appears to be affected by "intentional stance" (i.e., attributing mental states, agency, and "humanness"). However, this effect could also be explained by individual differences or perceptual effects resulting from the design of these studies. The current study used a new design that measured centro-parietal P250, P350, and N170 event-related potentials (ERPs) in 20 healthy adults while they initiated gaze-related joint attention with a virtual character (“Alan”) in two conditions. In one condition, they were told that Alan was controlled by a human; in the other, they were told that he was controlled by a computer. When participants believed Alan was human, his congruent gaze shifts, which resulted in joint attention, generated significantly larger P250 ERPs than his incongruent gaze shifts. In contrast, his incongruent gaze shifts triggered significantly larger increases in P350 ERPs than his congruent gaze shifts. These findings support previous studies suggesting that intentional stance affects the neural processing of social cues from a virtual character. 
The outcomes also suggest the use of the P250 and P350 ERPs as objective indices of social engagement during the design of socially approachable robots and virtual agents.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Jing Chen; Matteo Valsecchi; Karl R Gegenfurtner Saccadic suppression measured by steady-state visual evoked potentials Journal Article Journal of Neurophysiology, 122 (1), pp. 251–258, 2019. @article{Chen2019e, title = {Saccadic suppression measured by steady-state visual evoked potentials}, author = {Jing Chen and Matteo Valsecchi and Karl R Gegenfurtner}, doi = {10.1152/jn.00712.2018}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neurophysiology}, volume = {122}, number = {1}, pages = {251--258}, abstract = {Visual sensitivity is severely impaired during the execution of saccadic eye movements. This phenomenon has been extensively characterized in human psychophysics and nonhuman primate single-neuron studies, but a physiological characterization in humans is less established. Here, we used a method based on steady-state visually evoked potential (SSVEP), an oscillatory brain response to periodic visual stimulation, to examine how saccades affect visual sensitivity. Observers made horizontal saccades back and forth, while horizontal black-and-white gratings flickered at 5-30 Hz in the background. We analyzed EEG epochs with a length of 0.3 s either centered at saccade onset (saccade epochs) or centered at fixations half a second before the saccade (fixation epochs). Compared with fixation epochs, saccade epochs showed a broadband power increase, which most likely resulted from saccade-related EEG activity. The execution of saccades, however, led to an average reduction of 57% in the SSVEP amplitude at the stimulation frequency. This result provides additional evidence for an active saccadic suppression in the early visual cortex in humans. Compared with previous functional MRI and EEG studies, an advantage of this approach lies in its capability to trace the temporal dynamics of neural activity throughout the time course of a saccade.
In contrast to previous electrophysiological studies in nonhuman primates, we did not find any evidence for postsaccadic enhancement, even though simulation results show that our method would have been able to detect it. We conclude that SSVEP is a useful technique to investigate the neural correlates of visual perception during saccadic eye movements in humans.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Tim Cornelissen; Jona Sassenhagen; Melissa Le-Hoa Võ Improving free-viewing fixation-related EEG potentials with continuous-time regression Journal Article Journal of Neuroscience Methods, 313, pp. 77–94, 2019. @article{Cornelissen2019, title = {Improving free-viewing fixation-related EEG potentials with continuous-time regression}, author = {Tim Cornelissen and Jona Sassenhagen and Melissa Le-Hoa V{õ}}, doi = {10.1016/j.jneumeth.2018.12.010}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience Methods}, volume = {313}, pages = {77--94}, publisher = {Elsevier}, abstract = {Background: In the analysis of combined ET-EEG data, there are several issues with estimating FRPs by averaging. Neural responses associated with fixations will likely overlap with one another in the EEG recording and neural responses change as a function of eye movement characteristics. Especially in tasks that do not constrain eye movements in any way, these issues can become confounds. New method: Here, we propose the use of regression-based estimates as an alternative to averaging. Multiple regression can disentangle different influences on the EEG and correct for overlap. It thereby accounts for potential confounds in a way that averaging cannot. Specifically, we test the applicability of the rERP framework, as proposed by Smith and Kutas (2015b), (2017), or Sassenhagen (2018) to combined eye tracking and EEG data from a visual search and a scene memorization task. Results: Results show that the method successfully estimates eye movement related confounds in real experimental data, so that these potential confounds can be accounted for when estimating experimental effects. Comparison with existing methods: The rERP method successfully corrects for overlapping neural responses in instances where averaging does not. As a consequence, baselining can be applied without risking distortions.
By estimating a known experimental effect, we show that rERPs provide an estimate with less variance and more accuracy than averaged FRPs. The method therefore provides a practically feasible and favorable alternative to averaging. Conclusions: We conclude that regression based ERPs provide novel opportunities for estimating fixation related EEG in free-viewing experiments.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Peter de Lissa; Genevieve McArthur; Stefan Hawelka; Romina Palermo; Yatin Mahajan; Federica Degno; Florian Hutzler Peripheral preview abolishes N170 face-sensitivity at fixation: Using fixation-related potentials to investigate dynamic face processing Journal Article Visual Cognition, 27 (9-10), pp. 740–759, 2019. @article{Lissa2019, title = {Peripheral preview abolishes N170 face-sensitivity at fixation: Using fixation-related potentials to investigate dynamic face processing}, author = {Peter de Lissa and Genevieve McArthur and Stefan Hawelka and Romina Palermo and Yatin Mahajan and Federica Degno and Florian Hutzler}, doi = {10.1080/13506285.2019.1676855}, year = {2019}, date = {2019-01-01}, journal = {Visual Cognition}, volume = {27}, number = {9-10}, pages = {740--759}, abstract = {The N170 ERP peak has been found to be consistently larger in response to the presentation of faces than to other objects, yet it is not clear whether this face-sensitive N170 is also elicited during fixations made subsequent to the initial presentation. To investigate this question, the current study utilised Event and Fixation-Related Potentials in two experiments, time-locking brain potentials to the presentation of faces and objects (watches) images in participants' peripheral vision, and to their first fixations on the images. Experiment 1 found that a face-sensitive N170 was elicited by the onset of images but not by a subsequent fixation on the images, and that face inversion did not modulate N170 beyond presentation. Experiment 2 found that disrupting the structure of the peripheral preview (phase-scrambling) led to a face-sensitive N170 at fixation onsets on the intact-images. Interestingly, N170 amplitudes for both faces and objects were significantly enhanced after the peripheral preview was phase-scrambled, suggesting that the N170 in part reflects a category-detection process that is elicited once when an image structure is viewed. 
These results indicate that neural processing during fixations will be significantly modulated when they are immediately preceded by peripheral previews, and is not specific to faces.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Federica Degno; Otto Loberg; Chuanli Zang; Manman Zhang; Nick Donnelly; Simon P Liversedge Parafoveal previews and lexical frequency in natural reading: Evidence from eye movements and fixation-related potentials. Journal Article Journal of Experimental Psychology: General, 148 (3), pp. 453–474, 2019. @article{Degno2019a, title = {Parafoveal previews and lexical frequency in natural reading: Evidence from eye movements and fixation-related potentials.}, author = {Federica Degno and Otto Loberg and Chuanli Zang and Manman Zhang and Nick Donnelly and Simon P Liversedge}, doi = {10.1037/xge0000494}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: General}, volume = {148}, number = {3}, pages = {453--474}, abstract = {Participants' eye movements and electroencephalogram (EEG) signal were recorded as they read sentences displayed according to the gaze-contingent boundary paradigm. Two target words in each sentence were manipulated for lexical frequency (high vs. low frequency) and parafoveal preview of each target word (identical vs. string of random letters vs. string of Xs). Eye movement data revealed visual parafoveal-on-foveal (PoF) effects, as well as foveal visual and orthographic preview effects and word frequency effects. Fixation-related potentials (FRPs) showed visual and orthographic PoF effects as well as foveal visual and orthographic preview effects. Our results replicated the early preview positivity effect (Dimigen, Kliegl, & Sommer, 2012) in the X-string preview condition, and revealed different neural correlates associated with a preview comprised of a string of random letters relative to a string of Xs. The former effects seem likely to reflect difficulty associated with the integration of parafoveal and foveal information, as well as feature overlap, while the latter reflect inhibition, and potentially disruption, to processing underlying reading. 
Interestingly, and consistent with Kretzschmar, Schlesewsky, and Staub (2015), no frequency effect was reflected in the FRP measures. The findings provide insight into the neural correlates of parafoveal processing and written word recognition in reading and demonstrate the value of utilizing ecologically valid paradigms to study well established phenomena that occur as text is read naturally.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Federica Degno; Otto Loberg; Chuanli Zang; Manman Zhang; Nick Donnelly; Simon P Liversedge A co-registration investigation of inter-word spacing and parafoveal preview: Eye movements and fixation-related potentials Journal Article PLoS ONE, 14 (12), pp. 1–23, 2019. @article{Degno2019b, title = {A co-registration investigation of inter-word spacing and parafoveal preview: Eye movements and fixation-related potentials}, author = {Federica Degno and Otto Loberg and Chuanli Zang and Manman Zhang and Nick Donnelly and Simon P Liversedge}, doi = {10.1371/journal.pone.0225819}, year = {2019}, date = {2019-01-01}, journal = {PLoS ONE}, volume = {14}, number = {12}, pages = {1--23}, abstract = {Participants' eye movements (EMs) and EEG signal were simultaneously recorded to examine foveal and parafoveal processing during sentence reading. All the words in the sentence were manipulated for inter-word spacing (intact spaces vs. spaces replaced by a random letter) and parafoveal preview (identical preview vs. random letter string preview). We observed disruption for unspaced text and invalid preview conditions in both EMs and fixation-related potentials (FRPs). Unspaced and invalid preview conditions received longer reading times than spaced and valid preview conditions. In addition, the FRP data showed that unspaced previews disrupted reading in earlier time windows of analysis, compared to string preview conditions. Moreover, the effect of parafoveal preview was greater for spaced relative to unspaced conditions, in both EMs and FRPs. These findings replicate well-established preview effects, provide novel insight into the neural correlates of reading with and without inter-word spacing and suggest that spatial selection precedes lexical processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Christ Devia; Rocio Mayol-Troncoso; Javiera Parrini; Gricel Orellana; Aida Ruiz; Pedro E Maldonado; Jose Ignacio Egaña EEG classification during scene free-viewing for schizophrenia detection Journal Article IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27 (6), pp. 1193–1199, 2019. @article{Devia2019, title = {EEG classification during scene free-viewing for schizophrenia detection}, author = {Christ Devia and Rocio Mayol-Troncoso and Javiera Parrini and Gricel Orellana and Aida Ruiz and Pedro E Maldonado and Jose Ignacio Ega{ñ}a}, doi = {10.1109/TNSRE.2019.2913799}, year = {2019}, date = {2019-01-01}, journal = {IEEE Transactions on Neural Systems and Rehabilitation Engineering}, volume = {27}, number = {6}, pages = {1193--1199}, abstract = {Currently, the diagnosis of schizophrenia is made solely based on interviews and behavioral observations by a trained psychiatrist. Technologies such as electroencephalography (EEG) are used for differential diagnosis and not to support the psychiatrist's positive diagnosis. Here, we show the potential of EEG recordings as biomarkers of the schizophrenia syndrome. We recorded EEG while schizophrenia patients freely viewed natural scenes, and we analyzed the average EEG activity locked to the image onset. We found significant differences between patients and healthy controls in occipital areas approximately 500 ms after image onset. These differences were used to train a classifier to discriminate the schizophrenia patients from the controls. The best classifier had 81% sensitivity for the detection of patients and specificity of 59% for the detection of controls, with an overall accuracy of 71%. 
These results indicate that EEG signals from a free-viewing paradigm discriminate patients from healthy controls and have the potential to become a tool for the psychiatrist to support the positive diagnosis of schizophrenia.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Rasa Gulbinaite; Diane H M Roozendaal; Rufin VanRullen Attention differentially modulates the amplitude of resonance frequencies in the visual cortex Journal Article NeuroImage, 203 , pp. 1–17, 2019. @article{Gulbinaite2019, title = {Attention differentially modulates the amplitude of resonance frequencies in the visual cortex}, author = {Rasa Gulbinaite and Diane H M Roozendaal and Rufin VanRullen}, doi = {10.1016/j.neuroimage.2019.116146}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {203}, pages = {1--17}, abstract = {Rhythmic visual stimuli (flicker) elicit rhythmic brain responses at the frequency of the stimulus, and attention generally enhances these oscillatory brain responses (steady state visual evoked potentials, SSVEPs). Although SSVEP responses have been tested for flicker frequencies up to 100 Hz [Herrmann, 2001], effects of attention on SSVEP amplitude have only been reported for lower frequencies (up to ~30 Hz), with no systematic comparison across a wide, finely sampled frequency range. Does attention modulate SSVEP amplitude at higher flicker frequencies (gamma band, 30–80 Hz), and is attentional modulation constant across frequencies? By isolating SSVEP responses from the broadband EEG signal using a multivariate spatiotemporal source separation method, we demonstrate that flicker in the alpha and gamma bands elicit strongest and maximally phase stable brain responses (resonance), on which the effect of attention is opposite: positive for gamma and negative for alpha. 
Finding subject-specific gamma resonance frequency and a positive attentional modulation of gamma-band SSVEPs points to the untapped potential of flicker as a non-invasive tool for studying the causal effects of interactions between visual gamma-band rhythmic stimuli and endogenous gamma oscillations on perception and attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Nicole Hakim; Kirsten C S Adam; Eren Gunseli; Edward Awh; Edward K Vogel Dissecting the neural focus of attention reveals distinct processes for spatial attention and object-based storage in visual working memory Journal Article Psychological Science, 30 (4), pp. 526–540, 2019. @article{Hakim2019, title = {Dissecting the neural focus of attention reveals distinct processes for spatial attention and object-based storage in visual working memory}, author = {Nicole Hakim and Kirsten C S Adam and Eren Gunseli and Edward Awh and Edward K Vogel}, doi = {10.1177/0956797619830384}, year = {2019}, date = {2019-01-01}, journal = {Psychological Science}, volume = {30}, number = {4}, pages = {526--540}, abstract = {Complex cognition relies on both on-line representations in working memory (WM), said to reside in the focus of attention, and passive off-line representations of related information. Here, we dissected the focus of attention by showing that distinct neural signals index the on-line storage of objects and sustained spatial attention. We recorded electroencephalogram (EEG) activity during two tasks that employed identical stimulus displays but varied the relative demands for object storage and spatial attention. We found distinct delay-period signatures for an attention task (which required only spatial attention) and a WM task (which invoked both spatial attention and object storage). Although both tasks required active maintenance of spatial information, only the WM task elicited robust contralateral delay activity that was sensitive to mnemonic load. Thus, we argue that the focus of attention is maintained via a collaboration between distinct processes for covert spatial orienting and object-based storage.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Nicole Hakim; Tobias Feldmann-Wüstefeld; Edward Awh; Edward K Vogel Perturbing neural representations of working memory with task-irrelevant interruption Journal Article Journal of Cognitive Neuroscience, 32 (3), pp. 558–569, 2019. @article{Hakim2019a, title = {Perturbing neural representations of working memory with task-irrelevant interruption}, author = {Nicole Hakim and Tobias Feldmann-Wüstefeld and Edward Awh and Edward K Vogel}, doi = {10.1162/jocn_a_01481}, year = {2019}, date = {2019-01-01}, journal = {Journal of Cognitive Neuroscience}, volume = {32}, number = {3}, pages = {558--569}, abstract = {Working memory maintains information so that it can be used in complex cognitive tasks. A key challenge for this system is to maintain relevant information in the face of task-irrelevant perturbations. Across two experiments, we investigated the impact of task-irrelevant interruptions on neural representations of working memory. We recorded EEG activity in humans while they performed a working memory task. On a subset of trials, we interrupted participants with salient but task-irrelevant objects. To track the impact of these task-irrelevant interruptions on neural representations of working memory, we measured two well-characterized, temporally sensitive EEG markers that reflect active, prioritized working memory representations: the contralateral delay activity and lateralized alpha power (8–12 Hz). After interruption, we found that contralateral delay activity amplitude momentarily sustained but was gone by the end of the trial. Lateralized alpha power was immediately influenced by the interrupters but recovered by the end of the trial. This suggests that dissociable neural processes contribute to the maintenance of working memory information and that brief irrelevant onsets disrupt two distinct online aspects of working memory. 
In addition, we found that task expectancy modulated the timing and magnitude of how these two neural signals responded to task-irrelevant interruptions, suggesting that the brain's response to task-irrelevant interruption is shaped by task context.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Working memory maintains information so that it can be used in complex cognitive tasks. A key challenge for this system is to maintain relevant information in the face of task-irrelevant perturbations. Across two experiments, we investigated the impact of task-irrelevant interruptions on neural representations of working memory. We recorded EEG activity in humans while they performed a working memory task. On a subset of trials, we interrupted participants with salient but task-irrelevant objects. To track the impact of these task-irrelevant interruptions on neural representations of working memory, we measured two well-characterized, temporally sensitive EEG markers that reflect active, prioritized working memory representations: the contralateral delay activity and lateralized alpha power (8–12 Hz). After interruption, we found that contralateral delay activity amplitude momentarily sustained but was gone by the end of the trial. Lateralized alpha power was immediately influenced by the interrupters but recovered by the end of the trial. This suggests that dissociable neural processes contribute to the maintenance of working memory information and that brief irrelevant onsets disrupt two distinct online aspects of working memory. In addition, we found that task expectancy modulated the timing and magnitude of how these two neural signals responded to task-irrelevant interruptions, suggesting that the brain's response to task-irrelevant interruption is shaped by task context. |
Qiming Han; Huan Luo Visual crowding involves delayed frontoparietal response and enhanced top-down modulation Journal Article European Journal of Neuroscience, 50 (6), pp. 2931–2941, 2019. @article{Han2019a, title = {Visual crowding involves delayed frontoparietal response and enhanced top-down modulation}, author = {Qiming Han and Huan Luo}, doi = {10.1111/ejn.14401}, year = {2019}, date = {2019-01-01}, journal = {European Journal of Neuroscience}, volume = {50}, number = {6}, pages = {2931--2941}, abstract = {Crowding, the disrupted recognition of a peripheral target in the presence of nearby flankers, sets a fundamental limit on peripheral vision perception. Debates persist on whether the limit occurs at early visual cortices or is induced by top-down modulation, leaving the neural mechanism for visual crowding largely unclear. To resolve the debate, it is crucial to extract the neural signals elicited by the target from that by the target-flanker clutter, with high temporal resolution. To achieve this purpose, here we employed a temporal response function (TRF) approach to dissociate target-specific response from the overall electroencephalograph (EEG) recordings when the target was presented with (crowded) or without flankers (uncrowded) while subjects were performing a discrimination task on the peripherally presented target. Our results demonstrated two components in the target-specific contrast-tracking TRF response—an early component (100–170 ms) in occipital channels and a late component (210–450 ms) in frontoparietal channels. The late frontoparietal component, which was delayed in time under the crowded condition, was correlated with target discrimination performance, suggesting its involvement in visual crowding. Granger causality analysis further revealed stronger top-down modulation on the target stimulus under the crowded condition. 
Taken together, our findings support that crowding is associated with a top-down process which modulates the low-level sensory processing and delays the behavioral-relevant response in the high-level region.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Crowding, the disrupted recognition of a peripheral target in the presence of nearby flankers, sets a fundamental limit on peripheral vision perception. Debates persist on whether the limit occurs at early visual cortices or is induced by top-down modulation, leaving the neural mechanism for visual crowding largely unclear. To resolve the debate, it is crucial to extract the neural signals elicited by the target from that by the target-flanker clutter, with high temporal resolution. To achieve this purpose, here we employed a temporal response function (TRF) approach to dissociate target-specific response from the overall electroencephalograph (EEG) recordings when the target was presented with (crowded) or without flankers (uncrowded) while subjects were performing a discrimination task on the peripherally presented target. Our results demonstrated two components in the target-specific contrast-tracking TRF response—an early component (100–170 ms) in occipital channels and a late component (210–450 ms) in frontoparietal channels. The late frontoparietal component, which was delayed in time under the crowded condition, was correlated with target discrimination performance, suggesting its involvement in visual crowding. Granger causality analysis further revealed stronger top-down modulation on the target stimulus under the crowded condition. Taken together, our findings support that crowding is associated with a top-down process which modulates the low-level sensory processing and delays the behavioral-relevant response in the high-level region. |
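The temporal response function (TRF) approach used in the Han & Luo study is, at its core, a regularized regression of the EEG onto time-lagged copies of the stimulus. Below is a minimal, illustrative Python sketch of that idea (not the authors' pipeline; the function name, lag count, and ridge parameter are invented for the example):

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, ridge=1.0):
    """Estimate a temporal response function by ridge-regularized
    regression of the EEG onto time-lagged copies of the stimulus."""
    n = len(stimulus)
    # Build the lagged design matrix: column k holds the stimulus
    # delayed by k samples.
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Closed-form ridge solution: (X'X + lambda*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)
    return w

# Toy check: an "EEG" channel that is the stimulus delayed by 3 samples
# should yield a TRF peaking at lag 3.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
eeg = np.roll(stim, 3)
eeg[:3] = 0.0
trf = estimate_trf(stim, eeg, n_lags=10, ridge=1e-3)
print(int(np.argmax(np.abs(trf))))  # prints 3
```

The recovered lag-3 peak is the toy analogue of the early and late TRF components the study localizes in occipital and frontoparietal channels.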
Maximilian F A Hauser; Stefanie Heba; Tobias Schmidt-Wilcke; Martin Tegenthoff; Denise Manahan-Vaughan Cerebellar-hippocampal processing in passive perception of visuospatial change: An ego- and allocentric axis? Journal Article Human Brain Mapping, pp. 1–14, 2019. @article{Hauser2019, title = {Cerebellar-hippocampal processing in passive perception of visuospatial change: An ego- and allocentric axis?}, author = {Maximilian F A Hauser and Stefanie Heba and Tobias Schmidt-Wilcke and Martin Tegenthoff and Denise Manahan-Vaughan}, doi = {10.1002/hbm.24865}, year = {2019}, date = {2019-01-01}, journal = {Human Brain Mapping}, pages = {1--14}, abstract = {In addition to its role in visuospatial navigation and the generation of spatial representations, in recent years, the hippocampus has been proposed to support perceptual processes. This is especially the case where high-resolution details, in the form of fine-grained relationships between features such as angles between components of a visual scene, are involved. An unresolved question is how, in the visual domain, perspective-changes are differentiated from allocentric changes to these perceived feature relationships, both of which may be argued to involve the hippocampus. We conducted functional magnetic resonance imaging of the brain response (corroborated through separate event-related potential source-localization) in a passive visuospatial oddball-paradigm to examine to what extent the hippocampus and other brain regions process changes in perspective, or configuration of abstract, three-dimensional structures. We observed activation of the left superior parietal cortex during perspective shifts, and right anterior hippocampus in configuration-changes. Strikingly, we also found the cerebellum to differentiate between the two, in a way that appeared tightly coupled to hippocampal processing. 
These results point toward a relationship between the cerebellum and the hippocampus that occurs during perception of changes in visuospatial information that has previously only been reported with regard to visuospatial navigation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In addition to its role in visuospatial navigation and the generation of spatial representations, in recent years, the hippocampus has been proposed to support perceptual processes. This is especially the case where high-resolution details, in the form of fine-grained relationships between features such as angles between components of a visual scene, are involved. An unresolved question is how, in the visual domain, perspective-changes are differentiated from allocentric changes to these perceived feature relationships, both of which may be argued to involve the hippocampus. We conducted functional magnetic resonance imaging of the brain response (corroborated through separate event-related potential source-localization) in a passive visuospatial oddball-paradigm to examine to what extent the hippocampus and other brain regions process changes in perspective, or configuration of abstract, three-dimensional structures. We observed activation of the left superior parietal cortex during perspective shifts, and right anterior hippocampus in configuration-changes. Strikingly, we also found the cerebellum to differentiate between the two, in a way that appeared tightly coupled to hippocampal processing. These results point toward a relationship between the cerebellum and the hippocampus that occurs during perception of changes in visuospatial information that has previously only been reported with regard to visuospatial navigation. |
Piril Hepsomali; Julie A Hadwin; Simon P Liversedge; Federica Degno; Matthew Garner The impact of cognitive load on processing efficiency and performance effectiveness in anxiety: Evidence from event-related potentials and pupillary responses Journal Article Experimental Brain Research, 237 (4), pp. 897–909, 2019. @article{Hepsomali2019, title = {The impact of cognitive load on processing efficiency and performance effectiveness in anxiety: Evidence from event-related potentials and pupillary responses}, author = {Piril Hepsomali and Julie A Hadwin and Simon P Liversedge and Federica Degno and Matthew Garner}, doi = {10.1007/s00221-018-05466-y}, year = {2019}, date = {2019-01-01}, journal = {Experimental Brain Research}, volume = {237}, number = {4}, pages = {897--909}, publisher = {Springer Berlin Heidelberg}, abstract = {Anxiety has been associated with poor attentional control, as reflected in lowered performance on experimental measures of executive attention and inhibitory control. Recent conceptualisations of anxiety propose that individuals who report elevated anxiety symptoms worry about performance and will exert greater cognitive effort to complete tasks well, particularly when cognitive demands are high. Across two experiments, we examined the effect of anxiety on task performance and across two load conditions using (1) measures of inhibitory control (behavioural reaction times and eye-movement responses) and (2) task effort with pupillary and electrocortical markers of effort (CNV) and inhibitory control (N2). Experiment 1 used an oculomotor-delayed-response task that manipulated load by increasing delay duration to create a high load, relative to a low load, condition. Experiment 2 used a Go/No-Go task and load was manipulated by decreasing the No-Go probabilities (i.e., 20% No-Go in the high load condition and 50% No-Go in the low load condition). Experiment 1 showed individuals with high (vs. low) anxiety made more antisaccade errors across load conditions, and made more effort during the high load condition, as evidenced by greater frontal CNV and increased pupillary responses. 
In Experiment 2, individuals with high anxiety showed increased effort (irrespective of cognitive load), as characterised by larger pupillary responses. In addition, N2 amplitudes were sensitive to load only in individuals with low anxiety. Evidence of reduced performance effectiveness and efficiency across electrophysiological, pupillary, and oculomotor systems in anxiety provides some support for neurocognitive models of frontocortical attentional dysfunction in anxiety.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Anxiety has been associated with poor attentional control, as reflected in lowered performance on experimental measures of executive attention and inhibitory control. Recent conceptualisations of anxiety propose that individuals who report elevated anxiety symptoms worry about performance and will exert greater cognitive effort to complete tasks well, particularly when cognitive demands are high. Across two experiments, we examined the effect of anxiety on task performance and across two load conditions using (1) measures of inhibitory control (behavioural reaction times and eye-movement responses) and (2) task effort with pupillary and electrocortical markers of effort (CNV) and inhibitory control (N2). Experiment 1 used an oculomotor-delayed-response task that manipulated load by increasing delay duration to create a high load, relative to a low load, condition. Experiment 2 used a Go/No-Go task and load was manipulated by decreasing the No-Go probabilities (i.e., 20% No-Go in the high load condition and 50% No-Go in the low load condition). Experiment 1 showed individuals with high (vs. low) anxiety made more antisaccade errors across load conditions, and made more effort during the high load condition, as evidenced by greater frontal CNV and increased pupillary responses. In Experiment 2, individuals with high anxiety showed increased effort (irrespective of cognitive load), as characterised by larger pupillary responses. 
In addition, N2 amplitudes were sensitive to load only in individuals with low anxiety. Evidence of reduced performance effectiveness and efficiency across electrophysiological, pupillary, and oculomotor systems in anxiety provides some support for neurocognitive models of frontocortical attentional dysfunction in anxiety. |
Jan Herding; Simon Ludwig; Alexander von Lautz; Bernhard Spitzer; Felix Blankenburg Centro-parietal EEG potentials index subjective evidence and confidence during perceptual decision making Journal Article NeuroImage, 201 , pp. 1–11, 2019. @article{Herding2019, title = {Centro-parietal EEG potentials index subjective evidence and confidence during perceptual decision making}, author = {Jan Herding and Simon Ludwig and Alexander von Lautz and Bernhard Spitzer and Felix Blankenburg}, doi = {10.1016/j.neuroimage.2019.116011}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {201}, pages = {1--11}, abstract = {Recent studies suggest that a centro-parietal positivity (CPP) in the EEG signal tracks the absolute (unsigned) strength of accumulated evidence for choices that require the integration of noisy sensory input. Here, we investigated whether the CPP might also reflect the evidence for decisions based on a quantitative comparison between two sequentially presented stimuli (a signed quantity). We recorded EEG while participants decided whether the latter of two vibrotactile frequencies was higher or lower than the former in six variants of this task (n = 116). To account for biases in sequential comparisons, we applied a behavioral model based on Bayesian inference that estimated subjectively perceived frequency differences. Immediately after the second stimulus, parietal ERPs reflected the signed value of subjectively perceived differences and afterwards their absolute value. Strikingly, the modulation by signed difference was evident in trials without any objective evidence for either choice and correlated with choice-selective premotor beta band amplitudes. Modulations by the absolute strength of subjectively perceived evidence – a direct indicator of task difficulty – exhibited all features of statistical decision confidence. 
Together, our data suggest that parietal EEG signals first index subjective evidence, and later include a measure of confidence in the context of perceptual decision making.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent studies suggest that a centro-parietal positivity (CPP) in the EEG signal tracks the absolute (unsigned) strength of accumulated evidence for choices that require the integration of noisy sensory input. Here, we investigated whether the CPP might also reflect the evidence for decisions based on a quantitative comparison between two sequentially presented stimuli (a signed quantity). We recorded EEG while participants decided whether the latter of two vibrotactile frequencies was higher or lower than the former in six variants of this task (n = 116). To account for biases in sequential comparisons, we applied a behavioral model based on Bayesian inference that estimated subjectively perceived frequency differences. Immediately after the second stimulus, parietal ERPs reflected the signed value of subjectively perceived differences and afterwards their absolute value. Strikingly, the modulation by signed difference was evident in trials without any objective evidence for either choice and correlated with choice-selective premotor beta band amplitudes. Modulations by the absolute strength of subjectively perceived evidence – a direct indicator of task difficulty – exhibited all features of statistical decision confidence. Together, our data suggest that parietal EEG signals first index subjective evidence, and later include a measure of confidence in the context of perceptual decision making. |
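The abstract does not specify the Herding et al. Bayesian behavioral model, but a common minimal formalization of such sequential-comparison biases is the contraction-bias model, in which memory of the first stimulus is pulled toward the mean of the stimulus set. The sketch below is purely illustrative (the shrinkage weight and all names are assumptions, not the authors' model):

```python
def perceived_difference(f1, f2, prior_mean, shrink=0.7):
    """Contraction-bias account of two-interval comparisons: the
    remembered first frequency f1 is shrunk toward the prior mean of
    the stimulus set, so the subjectively perceived difference
    f2 - f1' can deviate from the objective difference f2 - f1."""
    f1_perceived = shrink * f1 + (1 - shrink) * prior_mean
    return f2 - f1_perceived

# With equal frequencies (no objective evidence for either choice),
# a first stimulus above the prior mean is remembered as lower,
# biasing the comparison toward "second stimulus higher".
print(perceived_difference(f1=28.0, f2=28.0, prior_mean=22.0) > 0)  # True
```

A model of this form yields nonzero subjective evidence even on objectively ambiguous trials, which is the kind of quantity the study relates to parietal ERP amplitudes.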
Taylor Hornung; Wen Hsuan Chan; Ralph Axel Müller; Jeanne Townsend; Brandon Keehn Dopaminergic hypo-activity and reduced theta-band power in autism spectrum disorder: A resting-state EEG study Journal Article International Journal of Psychophysiology, 146 , pp. 101–106, 2019. @article{Hornung2019, title = {Dopaminergic hypo-activity and reduced theta-band power in autism spectrum disorder: A resting-state EEG study}, author = {Taylor Hornung and Wen Hsuan Chan and Ralph Axel Müller and Jeanne Townsend and Brandon Keehn}, doi = {10.1016/j.ijpsycho.2019.08.012}, year = {2019}, date = {2019-01-01}, journal = {International Journal of Psychophysiology}, volume = {146}, pages = {101--106}, publisher = {Elsevier}, abstract = {Background: Prior studies using a variety of methodologies have reported inconsistent dopamine (DA) findings in individuals with autism spectrum disorder (ASD), ranging from dopaminergic hypo- to hyper-activity. Theta-band power derived from scalp-recorded electroencephalography (EEG), which may be associated with dopamine levels in frontal cortex, has also been shown to be atypical in ASD. The present study examined spontaneous eye-blink rate (EBR), an indirect, non-invasive measure of central dopaminergic activity, and theta power in children with ASD to determine: 1) whether ASD may be associated with atypical DA levels, and 2) whether dopaminergic dysfunction may be associated with aberrant theta-band activation. Method: Participants included thirty-two children with ASD and thirty-two age-, IQ-, and sex-matched typically developing (TD) children. Electroencephalography and eye-tracking data were acquired while participants completed an eyes-open resting-state session. Blinks were counted and EBR was determined by dividing blink frequency by session duration and theta power (4–7.5 Hz) was extracted from midline leads. Results: Eye-blink rate and theta-band activity were significantly reduced in children with ASD as compared to their TD peers. 
For all participants, greater midline theta power was associated with increased EBR (related to higher DA levels). Conclusions: These results suggest that ASD may be associated with dopaminergic hypo-activity, and that this may contribute to atypical theta-band power. Lastly, EBR may be a useful tool to non-invasively index dopamine levels in ASD and could potentially have many clinical applications, including selecting treatment options and monitoring treatment response.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Prior studies using a variety of methodologies have reported inconsistent dopamine (DA) findings in individuals with autism spectrum disorder (ASD), ranging from dopaminergic hypo- to hyper-activity. Theta-band power derived from scalp-recorded electroencephalography (EEG), which may be associated with dopamine levels in frontal cortex, has also been shown to be atypical in ASD. The present study examined spontaneous eye-blink rate (EBR), an indirect, non-invasive measure of central dopaminergic activity, and theta power in children with ASD to determine: 1) whether ASD may be associated with atypical DA levels, and 2) whether dopaminergic dysfunction may be associated with aberrant theta-band activation. Method: Participants included thirty-two children with ASD and thirty-two age-, IQ-, and sex-matched typically developing (TD) children. Electroencephalography and eye-tracking data were acquired while participants completed an eyes-open resting-state session. Blinks were counted and EBR was determined by dividing blink frequency by session duration and theta power (4–7.5 Hz) was extracted from midline leads. Results: Eye-blink rate and theta-band activity were significantly reduced in children with ASD as compared to their TD peers. For all participants, greater midline theta power was associated with increased EBR (related to higher DA levels). 
Conclusions: These results suggest that ASD may be associated with dopaminergic hypo-activity, and that this may contribute to atypical theta-band power. Lastly, EBR may be a useful tool to non-invasively index dopamine levels in ASD and could potentially have many clinical applications, including selecting treatment options and monitoring treatment response. |
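The two measures in the Hornung et al. abstract are straightforward to compute: eye-blink rate is blink count divided by session duration, and theta power is spectral power in the 4–7.5 Hz band. A minimal Python sketch under those stated definitions (not the authors' code; a simple FFT periodogram stands in for whatever spectral estimator they used):

```python
import numpy as np

def eye_blink_rate(n_blinks, session_seconds):
    """Spontaneous eye-blink rate in blinks per minute: blink count
    divided by session duration, as described in the abstract."""
    return n_blinks / (session_seconds / 60.0)

def theta_power(signal, fs, band=(4.0, 7.5)):
    """Mean power in the theta band from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Toy checks: 45 blinks in a 3-minute session is 15 blinks/min, and a
# 6 Hz sinusoid carries more theta power than a 20 Hz one.
fs = 250
t = np.arange(0, 10, 1 / fs)
print(eye_blink_rate(n_blinks=45, session_seconds=180))  # prints 15.0
print(theta_power(np.sin(2 * np.pi * 6 * t), fs) >
      theta_power(np.sin(2 * np.pi * 20 * t), fs))       # prints True
```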
Robert Jagiello; Ulrich Pomper; Makoto Yoneya; Sijia Zhao; Maria Chait Rapid brain responses to familiar vs. unfamiliar music-an EEG and pupillometry study Journal Article Scientific Reports, 9 , pp. 1–13, 2019. @article{Jagiello2019, title = {Rapid brain responses to familiar vs. unfamiliar music-an EEG and pupillometry study}, author = {Robert Jagiello and Ulrich Pomper and Makoto Yoneya and Sijia Zhao and Maria Chait}, doi = {10.1038/s41598-019-51759-9}, year = {2019}, date = {2019-01-01}, journal = {Scientific Reports}, volume = {9}, pages = {1--13}, abstract = {Human listeners exhibit marked sensitivity to familiar music, perhaps most readily revealed by popular "name that tune" games, in which listeners often succeed in recognizing a familiar song based on extremely brief presentation. In this work, we used electroencephalography (EEG) and pupillometry to reveal the temporal signatures of the brain processes that allow differentiation between a familiar, well liked, and unfamiliar piece of music. In contrast to previous work, which has quantified gradual changes in pupil diameter (the so-called "pupil dilation response"), here we focus on the occurrence of pupil dilation events. This approach is substantially more sensitive in the temporal domain and allowed us to tap early activity with the putative salience network. Participants (N = 10) passively listened to snippets (750 ms) of a familiar, personally relevant and, an acoustically matched, unfamiliar song, presented in random order. A group of control participants (N = 12), who were unfamiliar with all of the songs, was also tested. We reveal a rapid differentiation between snippets from familiar and unfamiliar songs: Pupil responses showed greater dilation rate to familiar music from 100-300 ms post-stimulus-onset, consistent with a faster activation of the autonomic salience network. 
Brain responses measured with EEG showed a later differentiation between familiar and unfamiliar music from 350 ms post onset. Remarkably, the cluster pattern identified in the EEG response is very similar to that commonly found in the classic old/new memory retrieval paradigms, suggesting that the recognition of brief, randomly presented, music snippets, draws on similar processes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Human listeners exhibit marked sensitivity to familiar music, perhaps most readily revealed by popular "name that tune" games, in which listeners often succeed in recognizing a familiar song based on extremely brief presentation. In this work, we used electroencephalography (EEG) and pupillometry to reveal the temporal signatures of the brain processes that allow differentiation between a familiar, well liked, and unfamiliar piece of music. In contrast to previous work, which has quantified gradual changes in pupil diameter (the so-called "pupil dilation response"), here we focus on the occurrence of pupil dilation events. This approach is substantially more sensitive in the temporal domain and allowed us to tap early activity with the putative salience network. Participants (N = 10) passively listened to snippets (750 ms) of a familiar, personally relevant and, an acoustically matched, unfamiliar song, presented in random order. A group of control participants (N = 12), who were unfamiliar with all of the songs, was also tested. We reveal a rapid differentiation between snippets from familiar and unfamiliar songs: Pupil responses showed greater dilation rate to familiar music from 100-300 ms post-stimulus-onset, consistent with a faster activation of the autonomic salience network. Brain responses measured with EEG showed a later differentiation between familiar and unfamiliar music from 350 ms post onset. 
Remarkably, the cluster pattern identified in the EEG response is very similar to that commonly found in the classic old/new memory retrieval paradigms, suggesting that the recognition of brief, randomly presented, music snippets, draws on similar processes. |
Woojae Jeong; Seolmin Kim; Yee Joon Kim; Joonyeol Lee Motion direction representation in multivariate electroencephalography activity for smooth pursuit eye movements Journal Article NeuroImage, 202 , pp. 1–10, 2019. @article{Jeong2019, title = {Motion direction representation in multivariate electroencephalography activity for smooth pursuit eye movements}, author = {Woojae Jeong and Seolmin Kim and Yee Joon Kim and Joonyeol Lee}, doi = {10.1016/j.neuroimage.2019.116160}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {202}, pages = {1--10}, publisher = {Elsevier Ltd}, abstract = {Visually-guided smooth pursuit eye movements are composed of initial open-loop and later steady-state periods. Feedforward sensory information dominates the motor behavior during the open-loop pursuit, and a more complex feedback loop regulates the steady-state pursuit. To understand the neural representations of motion direction during open-loop and steady-state smooth pursuits, we recorded electroencephalography (EEG) responses from human observers while they tracked random-dot kinematograms as pursuit targets. We estimated population direction tuning curves from multivariate EEG activity using an inverted encoding model. We found significant direction tuning curves as early as about 60 ms from stimulus onset. Direction tuning responses were generalized to later times during the open-loop smooth pursuit, but they became more dynamic during the later steady-state pursuit. The encoding quality of retinal motion direction information estimated from the early direction tuning curves was predictive of trial-by-trial variation in initial pursuit directions. 
These results suggest that the movement directions of open-loop smooth pursuit are guided by the representation of the retinal motion present in the multivariate EEG activity.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visually-guided smooth pursuit eye movements are composed of initial open-loop and later steady-state periods. Feedforward sensory information dominates the motor behavior during the open-loop pursuit, and a more complex feedback loop regulates the steady-state pursuit. To understand the neural representations of motion direction during open-loop and steady-state smooth pursuits, we recorded electroencephalography (EEG) responses from human observers while they tracked random-dot kinematograms as pursuit targets. We estimated population direction tuning curves from multivariate EEG activity using an inverted encoding model. We found significant direction tuning curves as early as about 60 ms from stimulus onset. Direction tuning responses were generalized to later times during the open-loop smooth pursuit, but they became more dynamic during the later steady-state pursuit. The encoding quality of retinal motion direction information estimated from the early direction tuning curves was predictive of trial-by-trial variation in initial pursuit directions. These results suggest that the movement directions of open-loop smooth pursuit are guided by the representation of the retinal motion present in the multivariate EEG activity. |
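The inverted encoding model used by Jeong et al. fits a forward model from hypothetical direction-tuned channels to sensor data, then inverts it to reconstruct channel responses on held-out data. A minimal, illustrative Python sketch on simulated data (not the authors' implementation; the basis shape, channel count, and noise level are invented for the example):

```python
import numpy as np

def basis(directions, centers):
    """Half-wave-rectified, exponentiated cosine basis over direction,
    a common channel shape in inverted encoding models."""
    d = np.deg2rad(directions[:, None] - centers[None, :])
    return np.maximum(np.cos(d / 2), 0) ** 7

rng = np.random.default_rng(1)
centers = np.arange(0, 360, 45.0)            # 8 direction channels
train_dirs = rng.uniform(0, 360, 200)
C_train = basis(train_dirs, centers)         # trials x channels

# Simulated sensor data: 32 "electrodes" with random channel weights.
W = rng.standard_normal((32, len(centers)))
B_train = C_train @ W.T + 0.1 * rng.standard_normal((200, 32))

# Training: estimate weights from data; testing: invert the estimated
# forward model to recover channel responses for a new trial.
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0].T
B_test = basis(np.array([90.0]), centers) @ W.T
C_hat = np.linalg.lstsq(W_hat, B_test.T, rcond=None)[0].T

# The reconstructed tuning curve should peak at the 90-degree channel.
print(centers[np.argmax(C_hat[0])])  # prints 90.0
```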
Jianrong Jia; Fang Fang; Huan Luo Selective spatial attention involves two alpha-band components associated with distinct spatiotemporal and functional characteristics Journal Article NeuroImage, 199 , pp. 228–236, 2019. @article{Jia2019, title = {Selective spatial attention involves two alpha-band components associated with distinct spatiotemporal and functional characteristics}, author = {Jianrong Jia and Fang Fang and Huan Luo}, doi = {10.1016/j.neuroimage.2019.05.079}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {199}, pages = {228--236}, publisher = {Elsevier Ltd}, abstract = {Attention is crucial for efficiently coordinating resources over multiple objects in a visual scene. Recently, a growing number of studies suggest that attention is implemented through a temporal organization process during which resources are dynamically allocated over a multitude of objects, yet the associated neural evidence, particularly in low-level sensory areas, is still limited. Here we used EEG recordings in combination with a temporal response function (TRF) approach to examine the spatiotemporal characteristics of neuronal impulse response in covert selective attention. We demonstrate two distinct alpha-band components – one in post-central parietal area and one in contralateral occipital area – that are involved in coordinating neural representations of attended and unattended stimuli. Specifically, consistent with previous findings, the central alpha-band component showed enhanced activities for unattended versus attended stimuli within the first 200 ms temporal lag of TRF response, suggesting its inhibitory function in attention. In contrast, the contralateral occipital component displayed relatively earlier activation for the attended than unattended one in the TRF response. Furthermore, the central component but not the occipital component was correlated with attentional behavioral performance. 
Finally, the parietal area exerted directional influences on the occipital activity through alpha-band rhythm. Taken together, spatial attention involves two hierarchically organized alpha-band components that are associated with distinct spatiotemporal characteristics and presumably play different functions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Attention is crucial for efficiently coordinating resources over multiple objects in a visual scene. Recently, a growing number of studies suggest that attention is implemented through a temporal organization process during which resources are dynamically allocated over a multitude of objects, yet the associated neural evidence, particularly in low-level sensory areas, is still limited. Here we used EEG recordings in combination with a temporal response function (TRF) approach to examine the spatiotemporal characteristics of neuronal impulse response in covert selective attention. We demonstrate two distinct alpha-band components – one in post-central parietal area and one in contralateral occipital area – that are involved in coordinating neural representations of attended and unattended stimuli. Specifically, consistent with previous findings, the central alpha-band component showed enhanced activities for unattended versus attended stimuli within the first 200 ms temporal lag of TRF response, suggesting its inhibitory function in attention. In contrast, the contralateral occipital component displayed relatively earlier activation for the attended than unattended one in the TRF response. Furthermore, the central component but not the occipital component was correlated with attentional behavioral performance. Finally, the parietal area exerted directional influences on the occipital activity through alpha-band rhythm. 
Taken together, spatial attention involves two hierarchically organized alpha-band components that are associated with distinct spatiotemporal characteristics and presumably play different functions. |
Han-Gue Jo; Thilo Kellermann; Conrad Baumann; Junji Ito; Barbara Schulte Holthausen; Frank Schneider; Sonja Grün; Ute Habel Distinct modes of top-down cognitive processing in the ventral visual cortex Journal Article NeuroImage, 193 , pp. 201–213, 2019. @article{Jo2019, title = {Distinct modes of top-down cognitive processing in the ventral visual cortex}, author = {Han-Gue Jo and Thilo Kellermann and Conrad Baumann and Junji Ito and Barbara {Schulte Holthausen} and Frank Schneider and Sonja Grün and Ute Habel}, doi = {10.1016/j.neuroimage.2019.02.068}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {193}, pages = {201--213}, publisher = {Elsevier Ltd}, abstract = {Top-down cognitive control leads to changes in the sensory processing of the brain. In visual perception such changes can take place in the ventral visual cortex altering the functional asymmetry in forward and backward connections. Here we used fixation-related evoked responses of EEG measurement and dynamic causal modeling to examine hierarchical forward-backward asymmetry, while twenty-six healthy adults performed cognitive tasks that require different types of top-down cognitive control (memorizing or searching visual objects embedded in a natural scene image). The generative model revealed an enhanced asymmetry toward forward connections during memorizing, whereas enhanced backward connections were found during searching. This task-dependent modulation of forward and backward connections suggests two distinct modes of top-down cognitive processing in cortical networks. The alteration in forward-backward asymmetry might underlie the functional role in the cognitive control of visual information processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Top-down cognitive control leads to changes in the sensory processing of the brain. 
In visual perception such changes can take place in the ventral visual cortex altering the functional asymmetry in forward and backward connections. Here we used fixation-related evoked responses of EEG measurement and dynamic causal modeling to examine hierarchical forward-backward asymmetry, while twenty-six healthy adults performed cognitive tasks that require different types of top-down cognitive control (memorizing or searching visual objects embedded in a natural scene image). The generative model revealed an enhanced asymmetry toward forward connections during memorizing, whereas enhanced backward connections were found during searching. This task-dependent modulation of forward and backward connections suggests two distinct modes of top-down cognitive processing in cortical networks. The alteration in forward-backward asymmetry might underlie the functional role in the cognitive control of visual information processing. |
Louisa Kulke Neural mechanisms of overt attention shifts to emotional faces Journal Article Neuroscience, 418 , pp. 59–68, 2019. @article{Kulke2019, title = {Neural mechanisms of overt attention shifts to emotional faces}, author = {Louisa Kulke}, doi = {10.1016/j.neuroscience.2019.08.023}, year = {2019}, date = {2019-01-01}, journal = {Neuroscience}, volume = {418}, pages = {59--68}, publisher = {Elsevier Ltd}, abstract = {Emotional faces draw attention and eye-movements towards them. However, the neural mechanisms of attention have mainly been investigated during fixation, which is uncommon in everyday life where people move their eyes to shift attention to faces. Therefore, the current study combined eye-tracking and Electroencephalography (EEG) to measure neural mechanisms of overt attention shifts to faces with happy, neutral and angry expressions, allowing participants to move their eyes freely towards the stimuli. Saccade latencies towards peripheral faces did not differ depending on expression and early neural response (P1) amplitudes and latencies were unaffected. However, the later occurring Early Posterior Negativity (EPN) was significantly larger for emotional than for neutral faces. This response appears after saccades towards the faces. Therefore, emotion modulations only occurred after an overt shift of gaze towards the stimulus had already been completed. Visual saliency rather than emotional content may therefore drive early saccades, while later top-down processes reflect emotion processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Emotional faces draw attention and eye-movements towards them. However, the neural mechanisms of attention have mainly been investigated during fixation, which is uncommon in everyday life where people move their eyes to shift attention to faces. 
Therefore, the current study combined eye-tracking and Electroencephalography (EEG) to measure neural mechanisms of overt attention shifts to faces with happy, neutral and angry expressions, allowing participants to move their eyes freely towards the stimuli. Saccade latencies towards peripheral faces did not differ depending on expression and early neural response (P1) amplitudes and latencies were unaffected. However, the later occurring Early Posterior Negativity (EPN) was significantly larger for emotional than for neutral faces. This response appears after saccades towards the faces. Therefore, emotion modulations only occurred after an overt shift of gaze towards the stimulus had already been completed. Visual saliency rather than emotional content may therefore drive early saccades, while later top-down processes reflect emotion processing. |
Ya Li; Yonghui Wang; Sheng Li Recurrent processing of contour integration in the human visual cortex as revealed by fMRI-guided TMS Journal Article Cerebral Cortex, 29 (1), pp. 17–26, 2019. @article{Li2019h, title = {Recurrent processing of contour integration in the human visual cortex as revealed by fMRI-guided TMS}, author = {Ya Li and Yonghui Wang and Sheng Li}, doi = {10.1093/cercor/bhx296}, year = {2019}, date = {2019-01-01}, journal = {Cerebral Cortex}, volume = {29}, number = {1}, pages = {17--26}, abstract = {Contour integration is a critical step in visual perception because it groups discretely local elements into perceptually global contours. Previous investigations have suggested that striate and extrastriate visual areas are involved in this mid-level processing of visual perception. However, the temporal dynamics of these areas in the human brain during contour integration is less understood. The present study used functional magnetic resonance imaging-guided transcranial magnetic stimulation (TMS) to briefly disrupt 1 of 2 visual areas (V1/V2 and V3B) and examined the causal contributions of these areas to contour detection. The results demonstrated that the earliest critical time window at which behavioral detection performance was impaired by TMS pulses differed between V1/V2 and V3B. The first critical window of V3B (90-110 ms after stimulus onset) was earlier than that of V1/V2 (120-140 ms after stimulus onset), thus indicating that feedback connection from higher to lower area was necessary for complete contour integration. These results suggested that the fine processing of contour-related information in V1/V2 follows the generation of a coarse template in the higher visual areas, such as V3B. 
Our findings provide direct causal evidence that a recurrent mechanism is necessary for the integration of contours from cluttered background in the human brain.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Contour integration is a critical step in visual perception because it groups discretely local elements into perceptually global contours. Previous investigations have suggested that striate and extrastriate visual areas are involved in this mid-level processing of visual perception. However, the temporal dynamics of these areas in the human brain during contour integration is less understood. The present study used functional magnetic resonance imaging-guided transcranial magnetic stimulation (TMS) to briefly disrupt 1 of 2 visual areas (V1/V2 and V3B) and examined the causal contributions of these areas to contour detection. The results demonstrated that the earliest critical time window at which behavioral detection performance was impaired by TMS pulses differed between V1/V2 and V3B. The first critical window of V3B (90-110 ms after stimulus onset) was earlier than that of V1/V2 (120-140 ms after stimulus onset), thus indicating that feedback connection from higher to lower area was necessary for complete contour integration. These results suggested that the fine processing of contour-related information in V1/V2 follows the generation of a coarse template in the higher visual areas, such as V3B. Our findings provide direct causal evidence that a recurrent mechanism is necessary for the integration of contours from cluttered background in the human brain. |
Otto Loberg; Jarkko Hautala; Jarmo A Hämäläinen; Paavo H T Leppänen Influence of reading skill and word length on fixation-related brain activity in school-aged children during natural reading Journal Article Vision Research, 165 , pp. 109–122, 2019. @article{Loberg2019, title = {Influence of reading skill and word length on fixation-related brain activity in school-aged children during natural reading}, author = {Otto Loberg and Jarkko Hautala and Jarmo A Hämäläinen and Paavo H T Leppänen}, doi = {10.1016/j.visres.2019.07.008}, year = {2019}, date = {2019-01-01}, journal = {Vision Research}, volume = {165}, pages = {109--122}, publisher = {Elsevier}, abstract = {Word length is one of the main determinants of eye movements during reading and has been shown to influence slow readers more strongly than typical readers. The influence of word length on reading in individuals with different reading skill levels has been shown in separate eye-tracking and electroencephalography studies. However, the influence of reading difficulty on cortical correlates of word length effect during natural reading is unknown. To investigate how reading skill is related to brain activity during natural reading, we performed an exploratory analysis on our data set from a previous study, where slow reading (N = 27) and typically reading (N = 65) 12-to-13.5-year-old children read sentences while co-registered ET-EEG was recorded. We extracted fixation-related potentials (FRPs) from the sentences using the linear deconvolution approach. We examined standard eye-movement variables and deconvoluted FRP estimates: intercept of the response, categorical effect of first fixation versus additional fixation and continuous effect of word length. We replicated the pattern of stronger word length effect in eye movements for slow readers. 
We found a difference between typical readers and slow readers in the FRP intercept, which contains activity that is common to all fixations, within a fixation time-window of 50–300 ms. For both groups, the word length effect was present in brain activity during additional fixations; however, this effect was not different between groups. This suggests that stronger word length effect in the eye movements of slow readers might be mainly due to re-fixations, which are more probable due to the lower efficiency of visual processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Word length is one of the main determinants of eye movements during reading and has been shown to influence slow readers more strongly than typical readers. The influence of word length on reading in individuals with different reading skill levels has been shown in separate eye-tracking and electroencephalography studies. However, the influence of reading difficulty on cortical correlates of word length effect during natural reading is unknown. To investigate how reading skill is related to brain activity during natural reading, we performed an exploratory analysis on our data set from a previous study, where slow reading (N = 27) and typically reading (N = 65) 12-to-13.5-year-old children read sentences while co-registered ET-EEG was recorded. We extracted fixation-related potentials (FRPs) from the sentences using the linear deconvolution approach. We examined standard eye-movement variables and deconvoluted FRP estimates: intercept of the response, categorical effect of first fixation versus additional fixation and continuous effect of word length. We replicated the pattern of stronger word length effect in eye movements for slow readers. We found a difference between typical readers and slow readers in the FRP intercept, which contains activity that is common to all fixations, within a fixation time-window of 50–300 ms. 
For both groups, the word length effect was present in brain activity during additional fixations; however, this effect was not different between groups. This suggests that stronger word length effect in the eye movements of slow readers might be mainly due to re-fixations, which are more probable due to the lower efficiency of visual processing. |
Mary H MacLean; Tom Bullock; Barry Giesbrecht Dual process coding of recalled locations in human oscillatory brain activity Journal Article The Journal of Neuroscience, 39 (34), pp. 6737–6750, 2019. @article{MacLean2019, title = {Dual process coding of recalled locations in human oscillatory brain activity}, author = {Mary H MacLean and Tom Bullock and Barry Giesbrecht}, doi = {10.1523/JNEUROSCI.0059-19.2019}, year = {2019}, date = {2019-01-01}, journal = {The Journal of Neuroscience}, volume = {39}, number = {34}, pages = {6737--6750}, abstract = {A mental representation of the location of an object can be constructed using sensory information selected from the environment and information stored internally. Human electrophysiological evidence indicates that behaviorally relevant locations, regardless of the source of sensory information, are represented in alpha-band oscillations suggesting a shared process. Here, we present evidence from human subjects of either sex for two distinct alpha-band-based processes that separately support the representation of location, exploiting sensory evidence sampled either externally or internally.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A mental representation of the location of an object can be constructed using sensory information selected from the environment and information stored internally. Human electrophysiological evidence indicates that behaviorally relevant locations, regardless of the source of sensory information, are represented in alpha-band oscillations suggesting a shared process. Here, we present evidence from human subjects of either sex for two distinct alpha-band-based processes that separately support the representation of location, exploiting sensory evidence sampled either externally or internally. |
Sarah D McCrackin; Roxane J Itier Perceived gaze direction differentially affects discrimination of facial emotion, attention, and gender - An ERP study Journal Article Frontiers in Neuroscience, 13 , pp. 1–14, 2019. @article{McCrackin2019, title = {Perceived gaze direction differentially affects discrimination of facial emotion, attention, and gender - An ERP study}, author = {Sarah D McCrackin and Roxane J Itier}, doi = {10.3389/fnins.2019.00517}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Neuroscience}, volume = {13}, pages = {1--14}, abstract = {The perception of eye-gaze is thought to be a key component of our everyday social interactions. While the neural correlates of direct and averted gaze processing have been investigated, there is little consensus about how these gaze directions may be processed differently as a function of the task being performed. In a within-subject design, we examined how perception of direct and averted gaze affected performance on tasks requiring participants to use directly available facial cues to infer the individuals' emotional state (emotion discrimination), direction of attention (attention discrimination) and gender (gender discrimination). Neural activity was recorded throughout the three tasks using EEG, and ERPs time-locked to face onset were analyzed. Participants were most accurate at discriminating emotions with direct gaze faces, but most accurate at discriminating attention with averted gaze faces, while gender discrimination was not affected by gaze direction. At the neural level, direct and averted gaze elicited different patterns of activation depending on the task over frontal sites, from approximately 220-290 ms. More positive amplitudes were seen for direct than averted gaze in the emotion discrimination task. In contrast, more positive amplitudes were seen for averted gaze than for direct gaze in the gender discrimination task. 
These findings are among the first direct evidence that perceived gaze direction modulates neural activity differently depending on task demands, and that at the behavioral level, specific gaze directions functionally overlap with emotion and attention discrimination, precursors to more elaborated theory of mind processes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The perception of eye-gaze is thought to be a key component of our everyday social interactions. While the neural correlates of direct and averted gaze processing have been investigated, there is little consensus about how these gaze directions may be processed differently as a function of the task being performed. In a within-subject design, we examined how perception of direct and averted gaze affected performance on tasks requiring participants to use directly available facial cues to infer the individuals' emotional state (emotion discrimination), direction of attention (attention discrimination) and gender (gender discrimination). Neural activity was recorded throughout the three tasks using EEG, and ERPs time-locked to face onset were analyzed. Participants were most accurate at discriminating emotions with direct gaze faces, but most accurate at discriminating attention with averted gaze faces, while gender discrimination was not affected by gaze direction. At the neural level, direct and averted gaze elicited different patterns of activation depending on the task over frontal sites, from approximately 220-290 ms. More positive amplitudes were seen for direct than averted gaze in the emotion discrimination task. In contrast, more positive amplitudes were seen for averted gaze than for direct gaze in the gender discrimination task. 
These findings are among the first direct evidence that perceived gaze direction modulates neural activity differently depending on task demands, and that at the behavioral level, specific gaze directions functionally overlap with emotion and attention discrimination, precursors to more elaborated theory of mind processes. |
Jana Annina Müller; Dorothea Wendt; Birger Kollmeier; Stefan Debener; Thomas Brand Effect of speech rate on neural tracking of speech Journal Article Frontiers in Psychology, 10 , pp. 1–15, 2019. @article{Mueller2019b, title = {Effect of speech rate on neural tracking of speech}, author = {Jana Annina Müller and Dorothea Wendt and Birger Kollmeier and Stefan Debener and Thomas Brand}, doi = {10.3389/fpsyg.2019.00449}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--15}, abstract = {Speech comprehension requires effort in demanding listening situations. Selective attention may be required for focusing on a specific talker in a multi-talker environment, may enhance effort by requiring additional cognitive resources, and is known to enhance the neural representation of the attended talker in the listener's neural response. The aim of the study was to investigate the relation of listening effort, as quantified by subjective effort ratings and pupil dilation, and neural speech tracking during sentence recognition. Task demands were varied using sentences with varying levels of linguistic complexity and using two different speech rates in a picture-matching paradigm with 20 normal-hearing listeners. The participants' task was to match the acoustically presented sentence with a picture presented before the acoustic stimulus. Afterwards they rated their perceived effort on a categorical effort scale. During each trial, pupil dilation (as an indicator of listening effort) and electroencephalogram (as an indicator of neural speech tracking) were recorded. Neither measure was significantly affected by linguistic complexity. However, speech rate showed a strong influence on subjectively rated effort, pupil dilation, and neural tracking. The neural tracking analysis revealed a shorter latency for faster sentences, which may reflect a neural adaptation to the rate of the input. 
No relation was found between neural tracking and listening effort, even though both measures were clearly influenced by speech rate. This is probably due to factors that influence both measures differently. Consequently, the amount of listening effort is not clearly represented in the neural tracking.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Speech comprehension requires effort in demanding listening situations. Selective attention may be required for focusing on a specific talker in a multi-talker environment, may enhance effort by requiring additional cognitive resources, and is known to enhance the neural representation of the attended talker in the listener's neural response. The aim of the study was to investigate the relation of listening effort, as quantified by subjective effort ratings and pupil dilation, and neural speech tracking during sentence recognition. Task demands were varied using sentences with varying levels of linguistic complexity and using two different speech rates in a picture-matching paradigm with 20 normal-hearing listeners. The participants' task was to match the acoustically presented sentence with a picture presented before the acoustic stimulus. Afterwards they rated their perceived effort on a categorical effort scale. During each trial, pupil dilation (as an indicator of listening effort) and electroencephalogram (as an indicator of neural speech tracking) were recorded. Neither measure was significantly affected by linguistic complexity. However, speech rate showed a strong influence on subjectively rated effort, pupil dilation, and neural tracking. The neural tracking analysis revealed a shorter latency for faster sentences, which may reflect a neural adaptation to the rate of the input. No relation was found between neural tracking and listening effort, even though both measures were clearly influenced by speech rate. This is probably due to factors that influence both measures differently. 
Consequently, the amount of listening effort is not clearly represented in the neural tracking. |
Aisling E O'Sullivan; Chantelle Y Lim; Edmund C Lalor Look at me when I'm talking to you: Selective attention at a multisensory cocktail party can be decoded using stimulus reconstruction and alpha power modulations Journal Article European Journal of Neuroscience, 50 (8), pp. 3282–3295, 2019. @article{OSullivan2019, title = {Look at me when I'm talking to you: Selective attention at a multisensory cocktail party can be decoded using stimulus reconstruction and alpha power modulations}, author = {Aisling E O'Sullivan and Chantelle Y Lim and Edmund C Lalor}, doi = {10.1111/ejn.14425}, year = {2019}, date = {2019-01-01}, journal = {European Journal of Neuroscience}, volume = {50}, number = {8}, pages = {3282--3295}, abstract = {Recent work using electroencephalography has applied stimulus reconstruction techniques to identify the attended speaker in a cocktail party environment. The success of these approaches has been primarily based on the ability to detect cortical tracking of the acoustic envelope at the scalp level. However, most studies have ignored the effects of visual input, which is almost always present in naturalistic scenarios. In this study, we investigated the effects of visual input on envelope-based cocktail party decoding in two multisensory cocktail party situations: (a) Congruent AV—facing the attended speaker while ignoring another speaker represented by the audio-only stream and (b) Incongruent AV (eavesdropping)—attending the audio-only speaker while looking at the unattended speaker. We trained and tested decoders for each condition separately and found that we can successfully decode attention to congruent audiovisual speech and can also decode attention when listeners were eavesdropping, i.e., looking at the face of the unattended talker. In addition to this, we found alpha power to be a reliable measure of attention to the visual speech. Using parieto-occipital alpha power, we found that we can distinguish whether subjects are attending or ignoring the speaker's face. 
Considering the practical applications of these methods, we demonstrate that with only six near-ear electrodes we can successfully determine the attended speech. This work extends the current framework for decoding attention to speech to more naturalistic scenarios, and in doing so provides additional neural measures which may be incorporated to improve decoding accuracy.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent work using electroencephalography has applied stimulus reconstruction techniques to identify the attended speaker in a cocktail party environment. The success of these approaches has been primarily based on the ability to detect cortical tracking of the acoustic envelope at the scalp level. However, most studies have ignored the effects of visual input, which is almost always present in naturalistic scenarios. In this study, we investigated the effects of visual input on envelope-based cocktail party decoding in two multisensory cocktail party situations: (a) Congruent AV—facing the attended speaker while ignoring another speaker represented by the audio-only stream and (b) Incongruent AV (eavesdropping)—attending the audio-only speaker while looking at the unattended speaker. We trained and tested decoders for each condition separately and found that we can successfully decode attention to congruent audiovisual speech and can also decode attention when listeners were eavesdropping, i.e., looking at the face of the unattended talker. In addition to this, we found alpha power to be a reliable measure of attention to the visual speech. Using parieto-occipital alpha power, we found that we can distinguish whether subjects are attending or ignoring the speaker's face. Considering the practical applications of these methods, we demonstrate that with only six near-ear electrodes we can successfully determine the attended speech. 
This work extends the current framework for decoding attention to speech to more naturalistic scenarios, and in doing so provides additional neural measures which may be incorporated to improve decoding accuracy. |
Karisa B Parkington; Roxane J Itier From eye to face: The impact of face outline, feature number, and feature saliency on the early neural response to faces Journal Article Brain Research, 1722 , pp. 1–14, 2019. @article{Parkington2019, title = {From eye to face: The impact of face outline, feature number, and feature saliency on the early neural response to faces}, author = {Karisa B Parkington and Roxane J Itier}, doi = {10.1016/j.brainres.2019.146343}, year = {2019}, date = {2019-01-01}, journal = {Brain Research}, volume = {1722}, pages = {1--14}, abstract = {The LIFTED model of early face perception postulates that the face-sensitive N170 event-related potential may reflect underlying neural inhibition mechanisms which serve to regulate holistic and featural processing. It remains unclear, however, what specific factors impact these neural inhibition processes. Here, N170 peak responses were recorded whilst adults maintained fixation on a single eye using a gaze-contingent paradigm, and the presence/absence of a face outline, as well as the number and type of parafoveal features within the outline, were manipulated. N170 amplitudes and latencies were reduced when a single eye was fixated within a face outline compared to fixation on the same eye in isolation, demonstrating that the simple presence of a face outline is sufficient to elicit a shift towards a more face-like neural response. A monotonic decrease in the N170 amplitude and latency was observed with increasing numbers of parafoveal features, and the type of feature(s) present in parafovea further modulated this early face response. These results support the idea of neural inhibition exerted by parafoveal features onto the foveated feature as a function of the number, and possibly the nature, of parafoveal features. 
Specifically, the results suggest the use of a feature saliency framework (eyes > mouth > nose) at the neural level, such that the parafoveal eye may play a role in down-regulating the response to the other eye (in fovea) more so than the nose or the mouth. These results confirm the importance of parafoveal features and the face outline in the neural inhibition mechanism, and provide further support for a feature saliency mechanism guiding early face perception.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The LIFTED model of early face perception postulates that the face-sensitive N170 event-related potential may reflect underlying neural inhibition mechanisms which serve to regulate holistic and featural processing. It remains unclear, however, what specific factors impact these neural inhibition processes. Here, N170 peak responses were recorded whilst adults maintained fixation on a single eye using a gaze-contingent paradigm, and the presence/absence of a face outline, as well as the number and type of parafoveal features within the outline, were manipulated. N170 amplitudes and latencies were reduced when a single eye was fixated within a face outline compared to fixation on the same eye in isolation, demonstrating that the simple presence of a face outline is sufficient to elicit a shift towards a more face-like neural response. A monotonic decrease in the N170 amplitude and latency was observed with increasing numbers of parafoveal features, and the type of feature(s) present in parafovea further modulated this early face response. These results support the idea of neural inhibition exerted by parafoveal features onto the foveated feature as a function of the number, and possibly the nature, of parafoveal features. 
Specifically, the results suggest the use of a feature saliency framework (eyes > mouth > nose) at the neural level, such that the parafoveal eye may play a role in down-regulating the response to the other eye (in fovea) more so than the nose or the mouth. These results confirm the importance of parafoveal features and the face outline in the neural inhibition mechanism, and provide further support for a feature saliency mechanism guiding early face perception. |
Nathan M Petro; Nina N Thigpen; Steven Garcia; Maeve R Boylan; Andreas Keil Pre-target alpha power predicts the speed of cued target discrimination Journal Article NeuroImage, 189 , pp. 878–885, 2019. @article{Petro2019, title = {Pre-target alpha power predicts the speed of cued target discrimination}, author = {Nathan M Petro and Nina N Thigpen and Steven Garcia and Maeve R Boylan and Andreas Keil}, doi = {10.1016/j.neuroimage.2019.01.066}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {189}, pages = {878--885}, abstract = {The human visual system selects information from dense and complex streams of spatiotemporal input. This selection process is aided by prior knowledge of the features, location, and temporal proximity of an upcoming stimulus. In the laboratory, this knowledge is often conveyed by cues, preceding a task-relevant target stimulus. Response speed in cued selection tasks varies within and across participants and is often thought to index efficient selection of a cued feature, location, or moment in time. The present study used a reverse correlation approach to identify neural predictors of efficient target discrimination: Participants identified the orientation of a sinusoidal grating, which was presented in one hemifield following the presentation of bilateral visual cues that carried temporal but not spatial information about the target. Across different analytic approaches, faster target responses were predicted by larger alpha power preceding the target. These results suggest that heightened pre-target alpha power during a cue period may index a state that is beneficial for subsequent target processing. 
Our findings are broadly consistent with models that emphasize capacity sharing across time, as well as models that link alpha oscillations to temporal predictions regarding upcoming events.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Ulrich Pomper; Thomas Ditye; Ulrich Ansorge Contralateral delay activity during temporal order memory Journal Article Neuropsychologia, 129 , pp. 104–116, 2019. @article{Pomper2019, title = {Contralateral delay activity during temporal order memory}, author = {Ulrich Pomper and Thomas Ditye and Ulrich Ansorge}, doi = {10.1016/j.neuropsychologia.2019.03.012}, year = {2019}, date = {2019-01-01}, journal = {Neuropsychologia}, volume = {129}, pages = {104--116}, abstract = {In everyday life, we constantly need to remember the temporal sequence of visual events over short periods of time, for example, when making sense of others' actions or watching a movie. While there is increasing knowledge available on neural mechanisms underlying visual working memory (VWM) regarding the identity and spatial location of objects, less is known about how the brain encodes and retains information on temporal sequences. Here, we investigate whether the contralateral-delay activity (CDA), a well-studied electroencephalographic (EEG) component associated with VWM of object identity, also reflects the encoding and retention of temporal order. In two independent experiments, we presented participants with a sequence of four or six images, followed by a 1 s retention period. Participants judged temporal order by indicating whether a subsequently presented probe image was originally displayed during the first or the second half of the sequence. As a main novel result, we report the emergence of a contralateral negativity already following the presentation of the first item of the sequence, which increases over the course of a trial with every presented item, up to a limit of four items. We further observed no differences in the CDA during the temporal-order task compared to one obtained during a task concerning the spatial location of the presented items. 
Since the characteristics of the CDA appear to be highly similar between different encoded feature dimensions and increase as additional items are being encoded, we suggest this component might be a general characteristic of various types of VWM.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
2018 |
Eleanor J Cole; Nick E Barraclough; Peter G Enticott Investigating mirror system (MS) activity in adults with ASD when inferring others' intentions using both TMS and EEG Journal Article Journal of Autism and Developmental Disorders, 48 (7), pp. 2350–2367, 2018. @article{Cole2018, title = {Investigating mirror system (MS) activity in adults with ASD when inferring others' intentions using both TMS and EEG}, author = {Eleanor J Cole and Nick E Barraclough and Peter G Enticott}, doi = {10.1007/s10803-018-3492-2}, year = {2018}, date = {2018-07-01}, journal = {Journal of Autism and Developmental Disorders}, volume = {48}, number = {7}, pages = {2350--2367}, publisher = {Springer US}, abstract = {ASD is associated with mentalizing deficits that may correspond with atypical mirror system (MS) activation. We investigated MS activity in adults with and without ASD when inferring others' intentions using TMS-induced motor evoked potentials (MEPs) and mu suppression measured by EEG. Autistic traits were measured for all participants. Our EEG data show that high levels of autistic traits predicted reduced right mu (8–10 Hz) suppression when mentalizing. Higher left mu (8–10 Hz) suppression was associated with superior mentalizing performances. Eye-tracking and TMS data showed no differences associated with autistic traits. Our data suggest ASD is associated with reduced right MS activity when mentalizing, TMS-induced MEPs and mu suppression measure different aspects of MS functioning and the MS is directly involved in inferring intentions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Kivilcim Afacan-Seref; Natalie A Steinemann; Annabelle Blangero; Simon P Kelly Dynamic interplay of value and sensory information in high-speed decision making Journal Article Current Biology, 28 (5), pp. 795–802, 2018. @article{Afacan-Seref2018, title = {Dynamic interplay of value and sensory information in high-speed decision making}, author = {Kivilcim Afacan-Seref and Natalie A Steinemann and Annabelle Blangero and Simon P Kelly}, doi = {10.1016/j.cub.2018.01.071}, year = {2018}, date = {2018-03-01}, journal = {Current Biology}, volume = {28}, number = {5}, pages = {795--802}, abstract = {In dynamic environments, split-second sensorimotor decisions must be prioritized according to potential payoffs to maximize overall rewards. The impact of relative value on deliberative perceptual judgments has been examined extensively [1–6], but relatively little is known about value-biasing mechanisms in the common situation where physical evidence is strong but the time to act is severely limited. In prominent decision models, a noisy but statistically stationary representation of sensory evidence is integrated over time to an action-triggering bound, and value-biases are affected by starting the integrator closer to the more valuable bound. Here, we show significant departures from this account for humans making rapid sensory-instructed action choices. Behavior was best explained by a simple model in which the evidence representation—and hence, rate of accumulation—is itself biased by value and is non-stationary, increasing over the short decision time frame. Because the value bias initially dominates, the model uniquely predicts a dynamic ‘‘turn-around'' effect on low-value cues, where the accumulator first launches toward the incorrect action but is then re-routed to the correct one. This was clearly exhibited in electrophysiological signals reflecting motor preparation and evidence accumulation. 
Finally, we construct an extended model that implements this dynamic effect through plausible sensory neural response modulations and demonstrate the correspondence between decision signal dynamics simulated from a behavioral fit of that model and the empirical decision signals. Our findings suggest that value and sensory information can exert simultaneous and dynamically countervailing influences on the trajectory of the accumulation-to-bound process, driving rapid, sensory-guided actions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Joshua D Cosman; Kaleb A Lowe; Wolf Zinke; Geoffrey F Woodman; Jeffrey D Schall Prefrontal control of visual distraction Journal Article Current Biology, 28 (3), pp. 414–420, 2018. @article{Cosman2018, title = {Prefrontal control of visual distraction}, author = {Joshua D Cosman and Kaleb A Lowe and Wolf Zinke and Geoffrey F Woodman and Jeffrey D Schall}, doi = {10.1016/j.cub.2017.12.023}, year = {2018}, date = {2018-02-01}, journal = {Current Biology}, volume = {28}, number = {3}, pages = {414--420}, abstract = {Avoiding distraction by conspicuous but irrelevant stimuli is critical to accomplishing daily tasks. Regions of prefrontal cortex control attention by enhancing the representation of task-relevant information in sensory cortex, which can be measured in modulation of both single neurons and event-related electrical potentials (ERPs) on the cranial surface [1, 2]. When irrelevant information is particularly conspicuous, it can distract attention and interfere with the selection of behaviorally relevant information. Such distraction can be minimized via top-down control [3–5], but the cognitive and neural mechanisms giving rise to this control over distraction remain uncertain and debated [6–9]. Bridging neurophysiology to electrophysiology, we simultaneously recorded neurons in prefrontal cortex and ERPs over extrastriate visual cortex to track the processing of salient distractors during a visual search task. Critically, when the salient distractor was successfully ignored, but not otherwise, we observed robust suppression of salient distractor representations. Like target selection, the distractor suppression was observed in prefrontal cortex before it appeared over extrastriate cortical areas. Furthermore, all prefrontal neurons that showed suppression of the task-irrelevant distractor also contributed to selecting the target. 
This suggests a common prefrontal mechanism is responsible for both selecting task-relevant and suppressing task-irrelevant information in sensory cortex. Taken together, our results resolve a long-standing debate over the mechanisms that prevent distraction, and provide the first evidence directly linking suppressed neural firing in prefrontal cortex with surface ERP measures of distractor suppression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Jeffrey S Bedwell; Christopher C Spencer; Chi C Chan; Pamela D Butler; Pejman Sehatpour; Joseph Schmidt The P1 visual-evoked potential, red light, and transdiagnostic psychiatric symptoms Journal Article Brain Research, 1687 , pp. 144–154, 2018. @article{Bedwell2018, title = {The P1 visual-evoked potential, red light, and transdiagnostic psychiatric symptoms}, author = {Jeffrey S Bedwell and Christopher C Spencer and Chi C Chan and Pamela D Butler and Pejman Sehatpour and Joseph Schmidt}, doi = {10.1016/j.brainres.2018.03.002}, year = {2018}, date = {2018-01-01}, journal = {Brain Research}, volume = {1687}, pages = {144--154}, publisher = {Elsevier B.V.}, abstract = {A reduced P1 visual-evoked potential amplitude has been reported across several psychiatric disorders, including schizophrenia-spectrum, bipolar, and depressive disorders. In addition, a difference in P1 amplitude change to a red background compared to its opponent color, green, has been found in schizophrenia-spectrum samples. The current study examined whether specific psychiatric symptoms that related to these P1 abnormalities in earlier studies would be replicated when using a broad transdiagnostic sample. The final sample consisted of 135 participants: 26 with bipolar disorders, 25 with schizophrenia-spectrum disorders, 19 with unipolar depression, 62 with no current psychiatric disorder, and 3 with disorders in other categories. Low (8%) and high (64%) contrast check arrays were presented on gray, green, and red background conditions during electroencephalogram, while an eye tracker monitored visual fixation on the stimuli. Linear regressions across the entire sample (N = 135) found that greater severity of both clinician-rated and self-reported delusions/magical thinking correlated with a reduced P1 amplitude on the low contrast gray (neutral) background condition. 
In addition, across the entire sample, higher self-reported constricted affect was associated with a larger decrease in P1 amplitude (averaged across contrast conditions) to the red, compared to green, background. All relationships remained statistically significant after covarying for diagnostic class, suggesting that they are relatively transdiagnostic in nature. These findings indicate that early visual processing abnormalities may be more directly related to specific transdiagnostic symptoms such as delusions and constricted affect rather than specific psychiatric diagnoses or broad symptom factor scales.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Adam Borowicz Using a multichannel Wiener filter to remove eye-blink artifacts from EEG data Journal Article Biomedical Signal Processing and Control, 45 , pp. 246–255, 2018. @article{Borowicz2018, title = {Using a multichannel Wiener filter to remove eye-blink artifacts from EEG data}, author = {Adam Borowicz}, doi = {10.1016/j.bspc.2018.05.012}, year = {2018}, date = {2018-01-01}, journal = {Biomedical Signal Processing and Control}, volume = {45}, pages = {246--255}, publisher = {Elsevier Ltd}, abstract = {This paper presents a novel method for removing ocular artifacts from EEG recordings. The proposed approach is based on time-domain linear filtering. Instead of directly estimating the artifact-free signal, we propose to obtain the eye-blink signal first, using a multichannel Wiener filter (MWF) and a small subset of the frontal electrodes, so that extra EOG sensors are unnecessary. Then, the estimate of the eye-blink signal is subtracted from the noisy EEG signal in accordance with principles of regression analysis. We have performed numerical simulations so as to compare our approach to the independent component analysis (ICA) that is commonly used in EEG enhancement. Our experiments show that the MWF-based approach can perform better than the ICA in terms of eye-blink cancellation and signal distortions. Besides that, the proposed approach is conceptually simpler and better suited to real-time applications.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
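The two-step idea in the Borowicz abstract (estimate the blink time course from a few frontal channels with a multichannel Wiener filter, then regress it out of every EEG channel) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the covariance-difference artifact estimate, the eigenvector-based blink reference, and the function names are assumptions of this sketch.

```python
import numpy as np

def estimate_blink(frontal, blink_mask):
    """Estimate a blink time course from frontal channels (channels x samples).

    blink_mask is a boolean array marking samples that contain blinks; the
    artifact covariance is approximated as the excess covariance of blink
    segments over blink-free segments (a common MWF-style assumption).
    """
    Y = frontal - frontal.mean(axis=1, keepdims=True)
    R_y = (Y @ Y.T) / Y.shape[1]          # overall covariance
    Yc = Y[:, ~blink_mask]
    R_v = (Yc @ Yc.T) / Yc.shape[1]       # blink-free covariance
    R_d = R_y - R_v                       # artifact covariance estimate
    w = np.linalg.eigh(R_d)[1][:, -1]     # dominant artifact direction
    return w @ Y                          # scalar blink reference

def subtract_blink(eeg, blink):
    """Regress every channel on the blink estimate and subtract the fit."""
    X = eeg - eeg.mean(axis=1, keepdims=True)
    b = blink - blink.mean()
    coeffs = (X @ b) / (b @ b)            # per-channel least squares
    return eeg - np.outer(coeffs, b)
```

The sign and scale of the eigenvector reference are arbitrary, but the per-channel regression absorbs both, so only the blink's time course matters.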
Hoseok Choi; Seho Lee; Jeyeon Lee; Kyeongran Min; Seokbeen Lim; Jinsick Park; Kyoung ha Ahn; In Young Kim; Kyoung-Min Lee; Dong Pyo Jang Long-term evaluation and feasibility study of the insulated screw electrode for ECoG recording Journal Article Journal of Neuroscience Methods, 308 , pp. 261–268, 2018. @article{Choi2018, title = {Long-term evaluation and feasibility study of the insulated screw electrode for ECoG recording}, author = {Hoseok Choi and Seho Lee and Jeyeon Lee and Kyeongran Min and Seokbeen Lim and Jinsick Park and Kyoung ha Ahn and In Young Kim and Kyoung-Min Lee and Dong Pyo Jang}, doi = {10.1016/j.jneumeth.2018.06.027}, year = {2018}, date = {2018-01-01}, journal = {Journal of Neuroscience Methods}, volume = {308}, pages = {261--268}, publisher = {Elsevier}, abstract = {Background: A screw-shaped electrode can offer a compromise between signal quality and invasiveness. However, the standard screw electrode can be vulnerable to electrical noise while directly contact with the skull or skin, and the feasibility and stability for chronic implantation in primate have not been fully evaluated. New Method: We designed a novel screw electrocorticogram (ECoG) electrode composed of three parts: recording electrode, insulator, and nut. The recording electrode was made of titanium with high biocompatibility and high electrical conductivity. Zirconia is used for insulator and nut to prevent electrical noise. Result: In computer simulations, the screw ECoG with insulator showed a significantly higher performance in signal acquisition compared to the condition without insulator. In a non-human primate, using screw ECoG, clear visual-evoked potential (VEP) waveforms were obtained, VEP components were reliably maintained, and the electrode's impedance was stable during the whole evaluation period. Moreover, it showed higher SNR and wider frequency band compared to the electroencephalogram (EEG). 
We also observed the screw ECoG has a higher sensitivity that captures different responses on various stimuli than the EEG. Comparison: The screw ECoG showed reliable electrical characteristic and biocompatibility for three months, that shows great promise for chronic implants. These results contrasted with previous reports that general screw electrode was only applicable for acute applications. Conclusion: The suggested electrode can offer whole-brain monitoring with high signal quality and minimal invasiveness. The screw ECoG can be used to provide more in-depth understanding, not only relationship between functional networks and cognitive behavior, but also pathomechanisms in brain diseases.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Thérèse Collins; Pierre O Jacquet TMS over posterior parietal cortex disrupts trans-saccadic visual stability Journal Article Brain Stimulation, 11 (2), pp. 390–399, 2018. @article{Collins2018, title = {TMS over posterior parietal cortex disrupts trans-saccadic visual stability}, author = {Thér{è}se Collins and Pierre O Jacquet}, doi = {10.1016/j.brs.2017.11.019}, year = {2018}, date = {2018-01-01}, journal = {Brain Stimulation}, volume = {11}, number = {2}, pages = {390--399}, abstract = {Background: Saccadic eye movements change the retinal location of visual objects, but we do not experience the visual world as constantly moving, we perceive it as seamless and stable. This visual stability may be achieved by an internal or efference copy of each saccade that, combined with the retinal information, allows the visual system to cancel out or ignore the self-caused retinal motion. Objective: The current study investigated the underlying brain mechanisms responsible for visual stability in humans with online transcranial magnetic stimulation (TMS). Methods: We used two classic tasks that measure efference copy: the double-step task and the in-flight displacement task. The double-step task requires subjects to make two memory-guided saccades, the second of which depends on an accurate internal copy of the first. The in-flight displacement task requires subjects to report the relative location of a (possibly displaced) target across a saccade. In separate experimental sessions, subjects participated in each task while we delivered online 3-pulse TMS over frontal eye fields (FEF), posterior parietal cortex, or vertex. TMS was contingent on saccade execution. Results: Second saccades were not disrupted in the double-step task, but surprisingly, TMS over FEF modified the metrics of the ongoing saccade. Spatiotopic performance in the in-flight displacement task was altered following TMS over parietal cortex, but not FEF or vertex. 
Conclusion: These results suggest that TMS disrupted eye-centered position coding in the parietal cortex. Trans-saccadic correspondence, and visual stability, may therefore causally depend on parietal maps.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Gerard Derosiere; Pierre-Alexandre Klein; Sylvie Nozaradan; Alexandre Zénon; André Mouraux; Julie Duque Visuomotor correlates of conflict expectation in the context of motor decisions Journal Article The Journal of Neuroscience, 38 (44), pp. 9486–9504, 2018. @article{Derosiere2018, title = {Visuomotor correlates of conflict expectation in the context of motor decisions}, author = {Gerard Derosiere and Pierre-Alexandre Klein and Sylvie Nozaradan and Alexandre Zénon and André Mouraux and Julie Duque}, doi = {10.1523/jneurosci.0623-18.2018}, year = {2018}, date = {2018-01-01}, journal = {The Journal of Neuroscience}, volume = {38}, number = {44}, pages = {9486--9504}, abstract = {Many behaviors require choosing between conflicting options competing against each other in visuomotor areas. Such choices can benefit from top-down control processes engaging frontal areas in advance of conflict when it is anticipated. Yet, very little is known about how this proactive control system shapes the visuomotor competition. Here, we used electroencephalography in human subjects (male and female) to identify the visual and motor correlates of conflict expectation in a version of the Eriksen Flanker task that required left or right responses according to the direction of a central target arrow surrounded by congruent or incongruent (conflicting) flankers. Visual conflict was either highly expected (it occurred in 80% of trials; mostly incongruent blocks) or very unlikely (20% of trials; mostly congruent blocks). We evaluated selective attention in the visual cortex by recording target- and flanker-related steady-state visual-evoked potentials (SSVEPs) and probed action selection by measuring response-locked potentials (RLPs) in the motor cortex. Conflict expectation enhanced accuracy in incongruent trials, but this improvement occurred at the cost of speed in congruent trials.
Intriguingly, this behavioral adjustment occurred while visuomotor activity was less finely tuned: target-related SSVEPs were smaller while flanker-related SSVEPs were higher in mostly incongruent blocks than in mostly congruent blocks, and incongruent trials were associated with larger RLPs in the ipsilateral (nonselected) motor cortex. Hence, our data suggest that conflict expectation recruits control processes that augment the tolerance for inappropriate visuomotor activations (rather than processes that downregulate their amplitude), allowing for overflow activity to occur without having it turn into the selection of an incorrect response.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Many behaviors require choosing between conflicting options competing against each other in visuomotor areas. Such choices can benefit from top-down control processes engaging frontal areas in advance of conflict when it is anticipated. Yet, very little is known about how this proactive control system shapes the visuomotor competition. Here, we used electroencephalography in human subjects (male and female) to identify the visual and motor correlates of conflict expectation in a version of the Eriksen Flanker task that required left or right responses according to the direction of a central target arrow surrounded by congruent or incongruent (conflicting) flankers. Visual conflict was either highly expected (it occurred in 80% of trials; mostly incongruent blocks) or very unlikely (20% of trials; mostly congruent blocks). We evaluated selective attention in the visual cortex by recording target- and flanker-related steady-state visual-evoked potentials (SSVEPs) and probed action selection by measuring response-locked potentials (RLPs) in the motor cortex. Conflict expectation enhanced accuracy in incongruent trials, but this improvement occurred at the cost of speed in congruent trials. 
Intriguingly, this behavioral adjustment occurred while visuomotor activity was less finely tuned: target-related SSVEPs were smaller while flanker-related SSVEPs were higher in mostly incongruent blocks than in mostly congruent blocks, and incongruent trials were associated with larger RLPs in the ipsilateral (nonselected) motor cortex. Hence, our data suggest that conflict expectation recruits control processes that augment the tolerance for inappropriate visuomotor activations (rather than processes that downregulate their amplitude), allowing for overflow activity to occur without having it turn into the selection of an incorrect response. |
Grace Edwards; Rufin VanRullen; Patrick Cavanagh Decoding trans-saccadic memory Journal Article The Journal of Neuroscience, 38 (5), pp. 1114–1123, 2018. @article{Edwards2018, title = {Decoding trans-saccadic memory}, author = {Grace Edwards and Rufin VanRullen and Patrick Cavanagh}, doi = {10.1523/jneurosci.0854-17.2017}, year = {2018}, date = {2018-01-01}, journal = {The Journal of Neuroscience}, volume = {38}, number = {5}, pages = {1114--1123}, abstract = {We examine whether peripheral information at a planned saccade target affects immediate post-saccadic processing at the fovea on saccade landing. Current neuroimaging research suggests that pre-saccadic stimulation has a late effect on post-saccadic processing, in contrast to the early effect seen in behavioral studies. Human participants (both male and female) were instructed to saccade toward a face or a house that, on different trials, remained the same, changed, or disappeared during the saccade. We used a multivariate pattern analysis (MVPA) of electroencephalography (EEG) data to decode face versus house processing directly after the saccade. The classifier was trained on separate trials without a saccade, where a house or face was presented at the fovea. When the saccade target remained the same across the saccade, we could reliably decode the target 123 ms after saccade offset. In contrast, when the target was changed during the saccade, the new target was decoded at a later time-point, 151 ms after saccade offset. The "same" condition advantage suggests that congruent pre-saccadic information facilitates processing of the post-saccadic stimulus compared to incongruent information. Finally, the saccade target could be decoded above chance even when it had been removed during the saccade, albeit with a slower time-course (162 ms) and poorer signal strength. 
These findings indicate that information about the (peripheral) pre-saccadic stimulus is transferred across the saccade so that it becomes quickly available and influences processing at its expected, new retinal position (the fovea).}, keywords = {}, pubstate = {published}, tppubtype = {article} } We examine whether peripheral information at a planned saccade target affects immediate post-saccadic processing at the fovea on saccade landing. Current neuroimaging research suggests that pre-saccadic stimulation has a late effect on post-saccadic processing, in contrast to the early effect seen in behavioral studies. Human participants (both male and female) were instructed to saccade toward a face or a house that, on different trials, remained the same, changed, or disappeared during the saccade. We used a multivariate pattern analysis (MVPA) of electroencephalography (EEG) data to decode face versus house processing directly after the saccade. The classifier was trained on separate trials without a saccade, where a house or face was presented at the fovea. When the saccade target remained the same across the saccade, we could reliably decode the target 123 ms after saccade offset. In contrast, when the target was changed during the saccade, the new target was decoded at a later time-point, 151 ms after saccade offset. The "same" condition advantage suggests that congruent pre-saccadic information facilitates processing of the post-saccadic stimulus compared to incongruent information. Finally, the saccade target could be decoded above chance even when it had been removed during the saccade, albeit with a slower time-course (162 ms) and poorer signal strength. These findings indicate that information about the (peripheral) pre-saccadic stimulus is transferred across the saccade so that it becomes quickly available and influences processing at its expected, new retinal position (the fovea). |
Hagar Gelbard-Sagiv; Efrat Magidov; Haggai Sharon; Talma Hendler Noradrenaline modulates visual perception and late visually evoked activity Journal Article Current Biology, 28 , pp. 2239–2249, 2018. @article{Gelbard-Sagiv2018, title = {Noradrenaline modulates visual perception and late visually evoked activity}, author = {Hagar Gelbard-Sagiv and Efrat Magidov and Haggai Sharon and Talma Hendler}, doi = {10.1016/j.cub.2018.05.051}, year = {2018}, date = {2018-01-01}, journal = {Current Biology}, volume = {28}, pages = {2239--2249}, abstract = {An identical sensory stimulus may or may not be incorporated into perceptual experience, depending on the behavioral and cognitive state of the organism. What determines whether a sensory stimulus will be perceived? While different behavioral and cognitive states may share a similar profile of electrophysiology, metabolism, and early sensory responses, neuromodulation is often different and therefore may constitute a key mechanism enabling perceptual awareness. Specifically, noradrenaline improves sensory responses, correlates with orienting toward behaviorally relevant stimuli, and is markedly reduced during sleep, while experience is largely "disconnected" from external events. Despite correlative evidence hinting at a relationship between noradrenaline and perception, causal evidence remains absent. Here, we pharmacologically down- and upregulated noradrenaline signaling in healthy volunteers using clonidine and reboxetine in double-blind placebo-controlled experiments, testing the effects on perceptual abilities and visually evoked electroencephalography (EEG) and fMRI responses. We found that detection sensitivity, discrimination accuracy, and subjective visibility change in accordance with noradrenaline (NE) levels, whereas decision bias (criterion) is not affected. 
Similarly, noradrenaline increases the consistency of EEG visually evoked potentials, while lower noradrenaline levels delay response components around 200 ms. Furthermore, blood-oxygen-level-dependent (BOLD) fMRI activations in high-order visual cortex selectively vary along with noradrenaline signaling. Taken together, these results point to noradrenaline as a key factor causally linking visual awareness to external world events.}, keywords = {}, pubstate = {published}, tppubtype = {article} } An identical sensory stimulus may or may not be incorporated into perceptual experience, depending on the behavioral and cognitive state of the organism. What determines whether a sensory stimulus will be perceived? While different behavioral and cognitive states may share a similar profile of electrophysiology, metabolism, and early sensory responses, neuromodulation is often different and therefore may constitute a key mechanism enabling perceptual awareness. Specifically, noradrenaline improves sensory responses, correlates with orienting toward behaviorally relevant stimuli, and is markedly reduced during sleep, while experience is largely "disconnected" from external events. Despite correlative evidence hinting at a relationship between noradrenaline and perception, causal evidence remains absent. Here, we pharmacologically down- and upregulated noradrenaline signaling in healthy volunteers using clonidine and reboxetine in double-blind placebo-controlled experiments, testing the effects on perceptual abilities and visually evoked electroencephalography (EEG) and fMRI responses. We found that detection sensitivity, discrimination accuracy, and subjective visibility change in accordance with noradrenaline (NE) levels, whereas decision bias (criterion) is not affected. Similarly, noradrenaline increases the consistency of EEG visually evoked potentials, while lower noradrenaline levels delay response components around 200 ms. 
Furthermore, blood-oxygen-level-dependent (BOLD) fMRI activations in high-order visual cortex selectively vary along with noradrenaline signaling. Taken together, these results point to noradrenaline as a key factor causally linking visual awareness to external world events. |
Marcello Giannini; David M Alexander; Andrey R Nikolaev; Cees van Leeuwen Large-scale traveling waves in EEG activity following eye movement Journal Article Brain Topography, 31 (4), pp. 608–622, 2018. @article{Giannini2018, title = {Large-scale traveling waves in EEG activity following eye movement}, author = {Marcello Giannini and David M Alexander and Andrey R Nikolaev and Cees van Leeuwen}, doi = {10.1007/s10548-018-0622-2}, year = {2018}, date = {2018-01-01}, journal = {Brain Topography}, volume = {31}, number = {4}, pages = {608--622}, publisher = {Springer US}, abstract = {In spontaneous, stimulus-evoked, and eye-movement evoked EEG, the oscillatory signal shows large scale, dynamically organized patterns of phase. We investigated eye-movement evoked patterns in free-viewing conditions. Participants viewed photographs of natural scenes in anticipation of a memory test. From 200 ms intervals following saccades, we estimated the EEG phase gradient over the entire scalp, and the wave activity, i.e. the goodness of fit of a wave model involving a phase gradient assumed to be smooth over the scalp. In frequencies centered at 6.5 Hz, large-scale phase organization occurred, peaking around 70 ms after fixation onset and taking the form of a traveling wave. According to the wave gradient, most of the time the wave spreads in a posterior-inferior to anterior-superior direction. In these directions, the gradients depended on the size and direction of the saccade. Wave propagation velocity decreased in the course of the fixation, particularly in the interval from 50 to 150 ms after fixation onset. This interval corresponds to the fixation-related lambda activity, which reflects early perceptual processes following fixation onset. We conclude that lambda activity has a prominent traveling wave component. 
This component consists of a short-term whole-head phase pattern of specific direction and velocity, which may reflect feedforward propagation of visual information at fixation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In spontaneous, stimulus-evoked, and eye-movement evoked EEG, the oscillatory signal shows large scale, dynamically organized patterns of phase. We investigated eye-movement evoked patterns in free-viewing conditions. Participants viewed photographs of natural scenes in anticipation of a memory test. From 200 ms intervals following saccades, we estimated the EEG phase gradient over the entire scalp, and the wave activity, i.e. the goodness of fit of a wave model involving a phase gradient assumed to be smooth over the scalp. In frequencies centered at 6.5 Hz, large-scale phase organization occurred, peaking around 70 ms after fixation onset and taking the form of a traveling wave. According to the wave gradient, most of the time the wave spreads in a posterior-inferior to anterior-superior direction. In these directions, the gradients depended on the size and direction of the saccade. Wave propagation velocity decreased in the course of the fixation, particularly in the interval from 50 to 150 ms after fixation onset. This interval corresponds to the fixation-related lambda activity, which reflects early perceptual processes following fixation onset. We conclude that lambda activity has a prominent traveling wave component. This component consists of a short-term whole-head phase pattern of specific direction and velocity, which may reflect feedforward propagation of visual information at fixation. |
Julia Habicht; Mareike Finke; Tobias Neher Auditory acclimatization to bilateral hearing aids: Effects on sentence-in-noise processing times and speech-evoked potentials Journal Article Ear and Hearing, 39 (1), pp. 161–171, 2018. @article{Habicht2018, title = {Auditory acclimatization to bilateral hearing aids: Effects on sentence-in-noise processing times and speech-evoked potentials}, author = {Julia Habicht and Mareike Finke and Tobias Neher}, doi = {10.1097/AUD.0000000000000476}, year = {2018}, date = {2018-01-01}, journal = {Ear and Hearing}, volume = {39}, number = {1}, pages = {161--171}, abstract = {Objectives: Using a longitudinal design, the present study sought to substantiate indications from two previous cross-sectional studies that hearing aid (HA) experience leads to improved speech processing abilities as quantified using eye-gaze measurements. Another aim was to explore potential concomitant changes in event-related potentials (ERPs) to speech stimuli. Design: Groups of elderly novice (novHA) and experienced (expHA) HA users matched in terms of age and working memory capacity participated. The novHA users were acclimatized to bilateral HA fittings for up to 24 weeks. The expHA users continued to use their own HAs during the same period. The participants' speech processing abilities were assessed after 0 weeks (novHA: N = 16; expHA: N = 14), 12 weeks (novHA: N = 16; expHA: N = 14), and 24 weeks (N = 10 each). To that end, an eye-tracking paradigm was used for estimating how quickly the participants could grasp the meaning of sentences presented against background noise together with two similar pictures that either correctly or incorrectly depicted the meaning conveyed by the sentences (the “processing time”). Additionally, ERPs were measured with an active oddball paradigm requiring the participants to categorize word stimuli as living (targets) or nonliving (nontargets) entities. 
For all measurements, the stimuli were spectrally shaped according to individual real-ear insertion gains and presented via earphones. Results: Concerning the processing times, no changes across time were found for the expHA group. After 0 weeks of HA use, the novHA group had significantly longer (poorer) processing times than the expHA group, consistent with previous findings. After 24 weeks, a significant mean improvement of ~30% was observed for the novHA users, leading to a performance comparable with that of the expHA group. Concerning the ERPs, no changes across time were found. Conclusions: The results from this exploratory study are consistent with the view that auditory acclimatization to HAs positively impacts speech comprehension in noise. Further research is needed to substantiate them.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Objectives: Using a longitudinal design, the present study sought to substantiate indications from two previous cross-sectional studies that hearing aid (HA) experience leads to improved speech processing abilities as quantified using eye-gaze measurements. Another aim was to explore potential concomitant changes in event-related potentials (ERPs) to speech stimuli. Design: Groups of elderly novice (novHA) and experienced (expHA) HA users matched in terms of age and working memory capacity participated. The novHA users were acclimatized to bilateral HA fittings for up to 24 weeks. The expHA users continued to use their own HAs during the same period. The participants' speech processing abilities were assessed after 0 weeks (novHA: N = 16; expHA: N = 14), 12 weeks (novHA: N = 16; expHA: N = 14), and 24 weeks (N = 10 each). 
To that end, an eye-tracking paradigm was used for estimating how quickly the participants could grasp the meaning of sentences presented against background noise together with two similar pictures that either correctly or incorrectly depicted the meaning conveyed by the sentences (the “processing time”). Additionally, ERPs were measured with an active oddball paradigm requiring the participants to categorize word stimuli as living (targets) or nonliving (nontargets) entities. For all measurements, the stimuli were spectrally shaped according to individual real-ear insertion gains and presented via earphones. Results: Concerning the processing times, no changes across time were found for the expHA group. After 0 weeks of HA use, the novHA group had significantly longer (poorer) processing times than the expHA group, consistent with previous findings. After 24 weeks, a significant mean improvement of ~30% was observed for the novHA users, leading to a performance comparable with that of the expHA group. Concerning the ERPs, no changes across time were found. Conclusions: The results from this exploratory study are consistent with the view that auditory acclimatization to HAs positively impacts speech comprehension in noise. Further research is needed to substantiate them. |
Simone G Heideman; Freek van Ede; Anna C Nobre Temporal alignment of anticipatory motor cortical beta lateralisation in hidden visual-motor sequences Journal Article European Journal of Neuroscience, 48 (8), pp. 2684–2695, 2018. @article{Heideman2018b, title = {Temporal alignment of anticipatory motor cortical beta lateralisation in hidden visual-motor sequences}, author = {Simone G Heideman and Freek van Ede and Anna C Nobre}, doi = {10.1111/ejn.13700}, year = {2018}, date = {2018-01-01}, journal = {European Journal of Neuroscience}, volume = {48}, number = {8}, pages = {2684--2695}, abstract = {Performance improves when participants respond to events that are structured in repeating sequences, suggesting that learning can lead to proactive anticipatory preparation. Whereas most sequence-learning studies have emphasised spatial structure, most sequences also contain a prominent temporal structure. We used MEG to investigate spatial and temporal anticipatory neural dynamics in a modified serial reaction time (SRT) task. Performance and brain activity were compared between blocks with learned spatial-temporal sequences and blocks with new sequences. After confirming a strong behavioural benefit of spatial-temporal predictability, we show lateralisation of beta oscillations in anticipation of the response associated with the upcoming target location and show that this also aligns to the expected timing of these forthcoming events. This effect was found both when comparing between repeated (learned) and new (unlearned) sequences, as well as when comparing targets that were expected after short vs. long intervals within the repeated (learned) sequence. 
Our findings suggest that learning of spatial-temporal structure leads to proactive and dynamic modulation of motor cortical excitability in anticipation of both the location and timing of events that are relevant to guide action.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Performance improves when participants respond to events that are structured in repeating sequences, suggesting that learning can lead to proactive anticipatory preparation. Whereas most sequence-learning studies have emphasised spatial structure, most sequences also contain a prominent temporal structure. We used MEG to investigate spatial and temporal anticipatory neural dynamics in a modified serial reaction time (SRT) task. Performance and brain activity were compared between blocks with learned spatial-temporal sequences and blocks with new sequences. After confirming a strong behavioural benefit of spatial-temporal predictability, we show lateralisation of beta oscillations in anticipation of the response associated with the upcoming target location and show that this also aligns to the expected timing of these forthcoming events. This effect was found both when comparing between repeated (learned) and new (unlearned) sequences, as well as when comparing targets that were expected after short vs. long intervals within the repeated (learned) sequence. Our findings suggest that learning of spatial-temporal structure leads to proactive and dynamic modulation of motor cortical excitability in anticipation of both the location and timing of events that are relevant to guide action. |
Jenni Heikkilä; Kaisa Tiippana; Otto Loberg; Paavo H T Leppänen Neural processing of congruent and incongruent audiovisual speech in school-age children and adults Journal Article Language Learning, 68 , pp. 58–79, 2018. @article{Heikkilae2018, title = {Neural processing of congruent and incongruent audiovisual speech in school-age children and adults}, author = {Jenni Heikkilä and Kaisa Tiippana and Otto Loberg and Paavo H T Leppänen}, doi = {10.1111/lang.12266}, year = {2018}, date = {2018-01-01}, journal = {Language Learning}, volume = {68}, pages = {58--79}, abstract = {Seeing articulatory gestures enhances speech perception. Perception of auditory speech can even be changed by incongruent visual gestures, which is known as the McGurk effect (e.g., dubbing a voice saying /mi/ onto a face articulating /ni/, observers often hear /ni/). In children, the McGurk effect is weaker than in adults, but no previous knowledge exists about the neural-level correlates of the McGurk effect in school-age children. Using brain event-related potentials, we investigated change detection responses to congruent and incongruent audiovisual speech in school-age children and adults. We used an oddball paradigm with a congruent audiovisual /mi/ as the standard stimulus and a congruent audiovisual /ni/ or McGurk A/mi/V/ni/ as the deviant stimulus. In adults, a similar change detection response was elicited by both deviant stimuli. In children, change detection responses differed between the congruent and the McGurk stimulus. This reflects a maturational difference in the influence of visual stimuli on auditory processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Seeing articulatory gestures enhances speech perception. Perception of auditory speech can even be changed by incongruent visual gestures, which is known as the McGurk effect (e.g., dubbing a voice saying /mi/ onto a face articulating /ni/, observers often hear /ni/). 
In children, the McGurk effect is weaker than in adults, but no previous knowledge exists about the neural-level correlates of the McGurk effect in school-age children. Using brain event-related potentials, we investigated change detection responses to congruent and incongruent audiovisual speech in school-age children and adults. We used an oddball paradigm with a congruent audiovisual /mi/ as the standard stimulus and a congruent audiovisual /ni/ or McGurk A/mi/V/ni/ as the deviant stimulus. In adults, a similar change detection response was elicited by both deviant stimuli. In children, change detection responses differed between the congruent and the McGurk stimulus. This reflects a maturational difference in the influence of visual stimuli on auditory processing. |
Hannah Hiebel; Anja Ischebeck; Clemens Brunner; Andrey R Nikolaev; Margit Höfler; Christof Körner Target probability modulates fixation-related potentials in visual search Journal Article Biological Psychology, 138 , pp. 199–210, 2018. @article{Hiebel2018, title = {Target probability modulates fixation-related potentials in visual search}, author = {Hannah Hiebel and Anja Ischebeck and Clemens Brunner and Andrey R Nikolaev and Margit Höfler and Christof Körner}, doi = {10.1016/j.biopsycho.2018.09.007}, year = {2018}, date = {2018-01-01}, journal = {Biological Psychology}, volume = {138}, pages = {199--210}, publisher = {Elsevier}, abstract = {This study investigated the influence of target probability on the neural response to target detection in free viewing visual search. Participants were asked to indicate the number of targets (one or two) among distractors in a visual search task while EEG and eye movements were co-registered. Target probability was manipulated by varying the set size of the displays between 10, 22, and 30 items. Fixation-related potentials time-locked to first target fixations revealed a pronounced P300 at the centro-parietal cortex with larger amplitudes for set sizes 22 and 30 than for set size 10. With increasing set size, more distractor fixations preceded the detection of the target, resulting in a decreased target probability and, consequently, a larger P300. For distractors, no increase of P300 amplitude with set size was observed. The findings suggest that set size specifically affects target but not distractor processing in overt serial visual search.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigated the influence of target probability on the neural response to target detection in free viewing visual search. Participants were asked to indicate the number of targets (one or two) among distractors in a visual search task while EEG and eye movements were co-registered. 
Target probability was manipulated by varying the set size of the displays between 10, 22, and 30 items. Fixation-related potentials time-locked to first target fixations revealed a pronounced P300 at the centro-parietal cortex with larger amplitudes for set sizes 22 and 30 than for set size 10. With increasing set size, more distractor fixations preceded the detection of the target, resulting in a decreased target probability and, consequently, a larger P300. For distractors, no increase of P300 amplitude with set size was observed. The findings suggest that set size specifically affects target but not distractor processing in overt serial visual search. |
Rinat Hilo-Merkovich; Marisa Carrasco; Shlomit Yuval-Greenberg Task performance in covert, but not overt, attention correlates with early laterality of visual evoked potentials Journal Article Neuropsychologia, 119 , pp. 330–339, 2018. @article{Hilo-Merkovich2018, title = {Task performance in covert, but not overt, attention correlates with early laterality of visual evoked potentials}, author = {Rinat Hilo-Merkovich and Marisa Carrasco and Shlomit Yuval-Greenberg}, doi = {10.1016/j.neuropsychologia.2018.08.012}, year = {2018}, date = {2018-01-01}, journal = {Neuropsychologia}, volume = {119}, pages = {330--339}, publisher = {Elsevier Ltd}, abstract = {Attention affects visual perception at target locations via the amplification of stimulus signal strength, perceptual performance and perceived contrast. Behavioral and neural correlates of attention can be observed when attention is both covertly and overtly oriented (with or without accompanying eye movements). Previous studies have demonstrated that at the grand-average level, lateralization of event-related potentials (ERP) is associated with attentional facilitation at cued, relative to un-cued locations. Yet, the correspondence between ERP lateralization and behavior has not been established at the single-subject level. Specifically, it is an open question whether inter-individual differences in the neural manifestation of attentional orienting can predict differences in perception. Here, we addressed this question by examining the correlation between ERP lateralization and visual sensitivity at attended locations. Participants were presented with a cue indicating where a low-contrast grating patch target would appear, following a delay of varying durations. During this delay, while participants were waiting for the target to appear, a task-irrelevant checkerboard probe was presented briefly and bilaterally. ERP was measured relative to the onset of this probe. 
In separate blocks, participants were requested to report detection of a low-contrast target either by making a fast eye-movement toward the target (overt orienting), or by pressing a button (covert orienting). Results show that in the covert orienting condition, ERP lateralization of individual participants was positively correlated with their mean visual sensitivity for the target. However, no such correlation was found in the overt orienting condition. We conclude that ERP lateralization of individual participants can predict their performance on a covert, but not an overt, target detection task.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Attention affects visual perception at target locations via the amplification of stimulus signal strength, perceptual performance and perceived contrast. Behavioral and neural correlates of attention can be observed when attention is both covertly and overtly oriented (with or without accompanying eye movements). Previous studies have demonstrated that at the grand-average level, lateralization of event-related potentials (ERP) is associated with attentional facilitation at cued, relative to un-cued locations. Yet, the correspondence between ERP lateralization and behavior has not been established at the single-subject level. Specifically, it is an open question whether inter-individual differences in the neural manifestation of attentional orienting can predict differences in perception. Here, we addressed this question by examining the correlation between ERP lateralization and visual sensitivity at attended locations. Participants were presented with a cue indicating where a low-contrast grating patch target would appear, following a delay of varying durations. During this delay, while participants were waiting for the target to appear, a task-irrelevant checkerboard probe was presented briefly and bilaterally. ERP was measured relative to the onset of this probe. 
In separate blocks, participants were requested to report detection of a low-contrast target either by making a fast eye-movement toward the target (overt orienting), or by pressing a button (covert orienting). Results show that in the covert orienting condition, ERP lateralization of individual participants was positively correlated with their mean visual sensitivity for the target. However, no such correlation was found in the overt orienting condition. We conclude that ERP lateralization of individual participants can predict their performance on a covert, but not an overt, target detection task. |
Nora Hollenstein; Jonathan Rotsztejn; Marius Troendle; Andreas Pedroni; Ce Zhang; Nicolas Langer Data descriptor: ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading Journal Article Scientific Data, 5 , pp. 1–13, 2018. @article{Hollenstein2018, title = {Data descriptor: ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading}, author = {Nora Hollenstein and Jonathan Rotsztejn and Marius Troendle and Andreas Pedroni and Ce Zhang and Nicolas Langer}, doi = {10.1038/sdata.2018.291}, year = {2018}, date = {2018-01-01}, journal = {Scientific Data}, volume = {5}, pages = {1--13}, publisher = {The Author(s)}, abstract = {We present the Zurich Cognitive Language Processing Corpus (ZuCo), a dataset combining electroencephalography (EEG) and eye-tracking recordings from subjects reading natural sentences. ZuCo includes high-density EEG and eye-tracking data of 12 healthy adult native English speakers, each reading natural English text for 4–6 hours. The recordings span two normal reading tasks and one task-specific reading task, resulting in a dataset that encompasses EEG and eye-tracking data of 21,629 words in 1107 sentences and 154,173 fixations. We believe that this dataset represents a valuable resource for natural language processing (NLP). The EEG and eye-tracking signals lend themselves to train improved machine-learning models for various tasks, in particular for information extraction tasks such as entity and relation extraction and sentiment analysis. Moreover, this dataset is useful for advancing research into the human reading and language understanding process at the level of brain activity and eye-movement.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We present the Zurich Cognitive Language Processing Corpus (ZuCo), a dataset combining electroencephalography (EEG) and eye-tracking recordings from subjects reading natural sentences. 
ZuCo includes high-density EEG and eye-tracking data of 12 healthy adult native English speakers, each reading natural English text for 4–6 hours. The recordings span two normal reading tasks and one task-specific reading task, resulting in a dataset that encompasses EEG and eye-tracking data of 21,629 words in 1107 sentences and 154,173 fixations. We believe that this dataset represents a valuable resource for natural language processing (NLP). The EEG and eye-tracking signals lend themselves to train improved machine-learning models for various tasks, in particular for information extraction tasks such as entity and relation extraction and sentiment analysis. Moreover, this dataset is useful for advancing research into the human reading and language understanding process at the level of brain activity and eye-movement. |
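Datasets like ZuCo expose word-level reading measures derived from raw fixations. A hedged sketch of that aggregation step, turning a fixation sequence into per-word features an NLP model might consume; the data layout and field names here are invented for illustration and do not reflect the released ZuCo file format.

```python
from collections import defaultdict

def word_reading_measures(fixations):
    """fixations: list of (word_index, duration_ms) tuples in temporal order.
    Returns {word_index: {"n_fix": count, "total_ms": summed duration}}."""
    measures = defaultdict(lambda: {"n_fix": 0, "total_ms": 0})
    for word_index, duration_ms in fixations:
        measures[word_index]["n_fix"] += 1
        measures[word_index]["total_ms"] += duration_ms
    return dict(measures)

# Word 0 is fixated twice (a regression back to it), words 1 and 2 once each.
demo = word_reading_measures([(0, 180), (1, 210), (0, 150), (2, 240)])
```

The same indexing scheme can then be used to pair each word's reading measures with its EEG epoch.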
Anthony J Ries; David Slayback; Jon Touryan The fixation-related lambda response: Effects of saccade magnitude, spatial frequency, and ocular artifact removal Journal Article International Journal of Psychophysiology, 134 , pp. 1–8, 2018. @article{Ries2018b, title = {The fixation-related lambda response: Effects of saccade magnitude, spatial frequency, and ocular artifact removal}, author = {Anthony J Ries and David Slayback and Jon Touryan}, doi = {10.1016/j.ijpsycho.2018.09.004}, year = {2018}, date = {2018-01-01}, journal = {International Journal of Psychophysiology}, volume = {134}, pages = {1--8}, publisher = {Elsevier}, abstract = {Fixation-related potentials (FRPs) enable examination of electrophysiological signatures of visual perception under naturalistic conditions, providing a neural snapshot of the fixated scene. The most prominent FRP component, commonly referred to as the lambda response, is a large deflection over occipital electrodes (O1, Oz, O2) peaking 80–100 ms post fixation, reflecting afferent input to visual cortex. The lambda response is affected by bottom-up stimulus features and the size of the preceding saccade; however, prior research has not adequately controlled for these influences in free viewing paradigms. The current experiment (N = 16, 1 female) addresses these concerns by systematically manipulating spatial frequency in a free-viewing task requiring a range of saccade sizes. Given the close temporal proximity between saccade related activity and the onset of the lambda response, we evaluate how removing independent components (IC) associated with ocular motion artifacts affects lambda response amplitude. Our results indicate that removing ocular artifact ICs based on the covariance with gaze position did not significantly affect the amplitude of this occipital potential. 
Moreover, the results showed that spatial frequency and saccade magnitude each produce significant effects on lambda amplitude, where amplitude decreased with increasing spatial frequency and increased as a function of saccade size for small and medium-sized saccades. The amplitude differences between spatial frequencies were maintained across all saccade magnitudes suggesting these effects are produced from distinctly different and uncorrelated mechanisms. The current results will inform future analyses of the lambda potential in natural scenes where saccade magnitudes and spatial frequencies ultimately vary.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Fixation-related potentials (FRPs) enable examination of electrophysiological signatures of visual perception under naturalistic conditions, providing a neural snapshot of the fixated scene. The most prominent FRP component, commonly referred to as the lambda response, is a large deflection over occipital electrodes (O1, Oz, O2) peaking 80–100 ms post fixation, reflecting afferent input to visual cortex. The lambda response is affected by bottom-up stimulus features and the size of the preceding saccade; however, prior research has not adequately controlled for these influences in free viewing paradigms. The current experiment (N = 16, 1 female) addresses these concerns by systematically manipulating spatial frequency in a free-viewing task requiring a range of saccade sizes. Given the close temporal proximity between saccade related activity and the onset of the lambda response, we evaluate how removing independent components (IC) associated with ocular motion artifacts affects lambda response amplitude. Our results indicate that removing ocular artifact ICs based on the covariance with gaze position did not significantly affect the amplitude of this occipital potential. 
Moreover, the results showed that spatial frequency and saccade magnitude each produce significant effects on lambda amplitude, where amplitude decreased with increasing spatial frequency and increased as a function of saccade size for small and medium-sized saccades. The amplitude differences between spatial frequencies were maintained across all saccade magnitudes suggesting these effects are produced from distinctly different and uncorrelated mechanisms. The current results will inform future analyses of the lambda potential in natural scenes where saccade magnitudes and spatial frequencies ultimately vary. |
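The artifact-removal step in this entry flags independent components whose time courses covary with gaze position. A simplified stdlib sketch of that idea; the correlation criterion, threshold, and toy signals are illustrative, not the paper's exact procedure, and constant channels are not handled.

```python
from math import sqrt

def corr(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def flag_ocular_ics(ic_activations, gaze_x, gaze_y, threshold=0.8):
    """Indices of ICs strongly correlated with either gaze channel."""
    flagged = []
    for i, ic in enumerate(ic_activations):
        if max(abs(corr(ic, gaze_x)), abs(corr(ic, gaze_y))) >= threshold:
            flagged.append(i)
    return flagged

gaze_x = [0, 1, 2, 3, 4, 5]
gaze_y = [0, 0, 1, 1, 2, 2]
ics = [
    [0, 1, 2, 3, 4, 5],     # tracks horizontal gaze -> ocular artifact
    [1, -1, 1, -1, 1, -1],  # unrelated oscillation -> likely neural
]
bad = flag_ocular_ics(ics, gaze_x, gaze_y)
```

The flagged components would then be zeroed out before back-projecting the remaining ICs to channel space.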
Naoyuki Sato; Hiroaki Mizuhara Successful encoding during natural reading is associated with fixation-related potentials and large-scale network deactivation Journal Article eNeuro, 5 (5), pp. 1–12, 2018. @article{Sato2018, title = {Successful encoding during natural reading is associated with fixation-related potentials and large-scale network deactivation}, author = {Naoyuki Sato and Hiroaki Mizuhara}, doi = {10.1523/eneuro.0122-18.2018}, year = {2018}, date = {2018-01-01}, journal = {eNeuro}, volume = {5}, number = {5}, pages = {1--12}, abstract = {Reading literature (e.g., an entire book) is an enriching experience that qualitatively differs from reading a single sentence; however, the brain dynamics of such context-dependent memory remain unclear. This study aimed to elucidate mnemonic neural dynamics during natural reading of literature by performing electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI). Brain activities of human participants recruited on campus were correlated with their subsequent memory, which was quantified by semantic correlation between the read text and reports subsequently written by them based on state-of-the-art natural language processing procedures. The results of the EEG data analysis showed a significant positive relationship between subsequent memory and fixation-related EEG. Sentence-length and paragraph-length mnemonic processes were associated with N1-P2 and P3 fixation-related potential (FRP) components and fixation-related $\theta$-band (4–8 Hz) EEG power, respectively. In contrast, the results of fMRI analysis showed a significant negative relationship between subsequent memory and blood oxygenation level-dependent (BOLD) activation. Sentence-length and paragraph-length mnemonic processes were associated with networks of regions forming part of the salience network and the default mode network (DMN), respectively. 
Taken together with the EEG results, these memory-related deactivations in the salience network and the DMN were thought to reflect the reading of sentences characterized by low mnemonic load and the suppression of task-irrelevant thoughts, respectively. It was suggested that the context-dependent mnemonic process during literature reading requires large-scale network deactivation, which might reflect coordination of a range of voluntary processes during reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Reading literature (e.g., an entire book) is an enriching experience that qualitatively differs from reading a single sentence; however, the brain dynamics of such context-dependent memory remain unclear. This study aimed to elucidate mnemonic neural dynamics during natural reading of literature by performing electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI). Brain activities of human participants recruited on campus were correlated with their subsequent memory, which was quantified by semantic correlation between the read text and reports subsequently written by them based on state-of-the-art natural language processing procedures. The results of the EEG data analysis showed a significant positive relationship between subsequent memory and fixation-related EEG. Sentence-length and paragraph-length mnemonic processes were associated with N1-P2 and P3 fixation-related potential (FRP) components and fixation-related θ-band (4–8 Hz) EEG power, respectively. In contrast, the results of fMRI analysis showed a significant negative relationship between subsequent memory and blood oxygenation level-dependent (BOLD) activation. Sentence-length and paragraph-length mnemonic processes were associated with networks of regions forming part of the salience network and the default mode network (DMN), respectively. 
Taken together with the EEG results, these memory-related deactivations in the salience network and the DMN were thought to reflect the reading of sentences characterized by low mnemonic load and the suppression of task-irrelevant thoughts, respectively. It was suggested that the context-dependent mnemonic process during literature reading requires large-scale network deactivation, which might reflect coordination of a range of voluntary processes during reading. |
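The subsequent-memory measure in this entry rests on scoring semantic similarity between the read text and a participant's written report. As a stand-in for the more sophisticated NLP procedures the authors used, a bag-of-words cosine similarity conveys the idea:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine similarity between word-count vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)          # missing words count as 0
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# A verbatim report of the read text scores 1.0; unrelated text scores 0.0.
score = cosine_similarity("the fox jumped over the dog",
                          "the fox jumped over the dog")
```

In the study, higher text-report similarity served as the "remembered" label against which EEG and fMRI signals were regressed.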
Brian Scally; Melanie Rose Burke; David Bunce; Jean Francois Delvenne Visual and visuomotor interhemispheric transfer time in older adults Journal Article Neurobiology of Aging, 65 , pp. 69–76, 2018. @article{Scally2018, title = {Visual and visuomotor interhemispheric transfer time in older adults}, author = {Brian Scally and Melanie Rose Burke and David Bunce and Jean Francois Delvenne}, doi = {10.1016/j.neurobiolaging.2018.01.005}, year = {2018}, date = {2018-01-01}, journal = {Neurobiology of Aging}, volume = {65}, pages = {69--76}, publisher = {Elsevier Inc}, abstract = {Older adults typically experience reductions in the structural integrity of the anterior channels of the corpus callosum. Despite preserved structural integrity in central and posterior channels, many studies have reported that interhemispheric transfer, a function attributed to these regions, is detrimentally affected by aging. In this study, we use a constrained event-related potential analysis in the theta and alpha frequency bands to determine whether interhemispheric transfer is affected in older adults. The crossed-uncrossed difference and lateralized visual evoked potentials were used to assess interhemispheric transfer in young (18–27) and older adults (63–80). We observed no differences in the crossed-uncrossed difference measure between young and older groups. Older adults appeared to have elongated transfer in the theta band potentials, but this effect was driven by shortened contralateral peak latencies, rather than delayed ipsilateral latencies. In the alpha band, there was a trend toward quicker transfer in older adults. 
We conclude that older adults do not experience elongated interhemispheric transfer in the visuomotor or visual domains and that these functions are likely attributed to posterior sections of the corpus callosum, which are unaffected by aging.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Older adults typically experience reductions in the structural integrity of the anterior channels of the corpus callosum. Despite preserved structural integrity in central and posterior channels, many studies have reported that interhemispheric transfer, a function attributed to these regions, is detrimentally affected by aging. In this study, we use a constrained event-related potential analysis in the theta and alpha frequency bands to determine whether interhemispheric transfer is affected in older adults. The crossed-uncrossed difference and lateralized visual evoked potentials were used to assess interhemispheric transfer in young (18–27) and older adults (63–80). We observed no differences in the crossed-uncrossed difference measure between young and older groups. Older adults appeared to have elongated transfer in the theta band potentials, but this effect was driven by shortened contralateral peak latencies, rather than delayed ipsilateral latencies. In the alpha band, there was a trend toward quicker transfer in older adults. We conclude that older adults do not experience elongated interhemispheric transfer in the visuomotor or visual domains and that these functions are likely attributed to posterior sections of the corpus callosum, which are unaffected by aging. |
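The crossed-uncrossed difference (CUD) used in this entry is a behavioral estimate of interhemispheric transfer time: mean reaction time when stimulus and responding hand are on opposite sides (crossed) minus the mean when they are on the same side (uncrossed). The reaction times below are invented for illustration.

```python
def crossed_uncrossed_difference(crossed_rts, uncrossed_rts):
    """CUD in the same units as the input reaction times (here, ms)."""
    return (sum(crossed_rts) / len(crossed_rts)
            - sum(uncrossed_rts) / len(uncrossed_rts))

# Toy trials: crossed responses are on average 4 ms slower.
cud = crossed_uncrossed_difference([310, 305, 315], [306, 301, 311])
```

A CUD of a few milliseconds is the classic Poffenberger-style estimate; the study found no CUD difference between young and older groups.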
K Seeliger; M Fritsche; U Güçlü; S Schoenmakers; J M Schoffelen; S E Bosch; M A J van Gerven Convolutional neural network-based encoding and decoding of visual object recognition in space and time Journal Article NeuroImage, 180 , pp. 253–266, 2018. @article{Seeliger2018, title = {Convolutional neural network-based encoding and decoding of visual object recognition in space and time}, author = {K Seeliger and M Fritsche and U Gü{ç}lü and S Schoenmakers and J M Schoffelen and S E Bosch and M A J van Gerven}, doi = {10.1016/j.neuroimage.2017.07.018}, year = {2018}, date = {2018-01-01}, journal = {NeuroImage}, volume = {180}, pages = {253--266}, publisher = {Elsevier Ltd}, abstract = {Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. 
We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy. |
Adam C Snyder; Deepa Issar; Matthew A Smith What does scalp electroencephalogram coherence tell us about long-range cortical networks? Journal Article European Journal of Neuroscience, pp. 1–16, 2018. @article{Snyder2018, title = {What does scalp electroencephalogram coherence tell us about long-range cortical networks?}, author = {Adam C Snyder and Deepa Issar and Matthew A Smith}, doi = {10.1111/ejn.13840}, year = {2018}, date = {2018-01-01}, journal = {European Journal of Neuroscience}, pages = {1--16}, abstract = {Long-range interactions between cortical areas are undoubtedly a key to the computational power of the brain. For healthy human subjects, the premier method for measuring brain activity on fast timescales is electroencephalography (EEG), and coherence between EEG signals is often used to assay functional connectivity between different brain regions. However, the nature of the underlying brain activity that is reflected in EEG coherence is currently the realm of speculation, because seldom have EEG signals been recorded simultaneously with intracranial recordings near cell bodies in multiple brain areas. Here, we take the early steps towards narrowing this gap in our understanding of EEG coherence by measuring local field potentials with microelectrode arrays in two brain areas (extrastriate visual area V4 and dorsolateral prefrontal cortex) simultaneously with EEG at the nearby scalp in rhesus macaque monkeys. Although we found inter-area coherence at both scales of measurement, we did not find that scalp-level coherence was reliably related to coherence between brain areas measured intracranially on a trial-to-trial basis, even though scalp-level EEG was related to other important features of neural oscillations, such as trial-to-trial variability in overall amplitudes. 
This suggests that caution must be exercised when interpreting EEG coherence effects, and new theories must be devised about what aspects of neural activity long-range coherence in the EEG reflects.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Long-range interactions between cortical areas are undoubtedly a key to the computational power of the brain. For healthy human subjects, the premier method for measuring brain activity on fast timescales is electroencephalography (EEG), and coherence between EEG signals is often used to assay functional connectivity between different brain regions. However, the nature of the underlying brain activity that is reflected in EEG coherence is currently the realm of speculation, because seldom have EEG signals been recorded simultaneously with intracranial recordings near cell bodies in multiple brain areas. Here, we take the early steps towards narrowing this gap in our understanding of EEG coherence by measuring local field potentials with microelectrode arrays in two brain areas (extrastriate visual area V4 and dorsolateral prefrontal cortex) simultaneously with EEG at the nearby scalp in rhesus macaque monkeys. Although we found inter-area coherence at both scales of measurement, we did not find that scalp-level coherence was reliably related to coherence between brain areas measured intracranially on a trial-to-trial basis, even though scalp-level EEG was related to other important features of neural oscillations, such as trial-to-trial variability in overall amplitudes. This suggests that caution must be exercised when interpreting EEG coherence effects, and new theories must be devised about what aspects of neural activity long-range coherence in the EEG reflects. |
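The coherence measure discussed in this entry is usually computed as magnitude-squared coherence with Welch-style segment averaging. A numpy sketch with synthetic signals (rectangular windows and no segment overlap, for brevity):

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence |Pxy|^2 / (Pxx * Pyy),
    averaged over non-overlapping segments of length nperseg."""
    nseg = len(x) // nperseg
    pxx = pyy = pxy = 0.0
    for k in range(nseg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        fx, fy = np.fft.rfft(x[seg]), np.fft.rfft(y[seg])
        pxx = pxx + np.abs(fx) ** 2
        pyy = pyy + np.abs(fy) ** 2
        pxy = pxy + fx * np.conj(fy)
    return np.abs(pxy) ** 2 / (pxx * pyy)

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)
y = rng.standard_normal(2048)
c_self = msc(x, x)   # a signal is perfectly coherent with itself
c_indep = msc(x, y)  # independent noise: low average coherence
```

Note that without averaging over segments (or trials), single-segment coherence is identically 1, which is one reason interpreting coherence estimates demands care.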
Noam Tal; Shlomit Yuval-Greenberg Reducing saccadic artifacts and confounds in brain imaging studies through experimental design Journal Article Psychophysiology, 55 (11), pp. 1–20, 2018. @article{Tal2018, title = {Reducing saccadic artifacts and confounds in brain imaging studies through experimental design}, author = {Noam Tal and Shlomit Yuval-Greenberg}, doi = {10.1111/psyp.13215}, year = {2018}, date = {2018-01-01}, journal = {Psychophysiology}, volume = {55}, number = {11}, pages = {1--20}, abstract = {Saccades constitute a major source of artifacts and confounds in brain imaging studies. Whereas some artifacts can be removed by omitting segments of data, saccadic artifacts cannot typically be eliminated by this method because of their high occurrence rate even during fixation (1–3 per second). Some saccadic artifacts can be alleviated by offline-correction algorithms, but these methods leave nonnegligible residuals and cannot mitigate the saccade-related visual activity. Here, we propose a novel yet simple approach for diminishing saccadic artifacts and confounds through experimental design. We suggest that specific tasks can lead to substantially fewer saccade occurrences around the time of stimulus presentation, starting from slightly before its onset and lasting for a few hundred milliseconds. In three experiments, we compared the frequency and size of saccades in a variety of tasks. Results of Experiment 1 showed that a foveal change-detection task reduced the number and sizes of saccades, relative to a parafoveal orientation-discrimination task. Experiment 2 replicated this finding with a parafoveal object recognition task. Experiment 3 showed that both foveal and parafoveal continuous change detection tasks induced fewer and smaller saccades than a discrete orientation-discrimination task. We conclude that adding a foveal or a parafoveal continuous task reduces saccades' number and size. 
This would lead to better artifact correction and enable the omission of contaminated data segments. This study may be the first step toward developing saccade-free experimental designs.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Saccades constitute a major source of artifacts and confounds in brain imaging studies. Whereas some artifacts can be removed by omitting segments of data, saccadic artifacts cannot typically be eliminated by this method because of their high occurrence rate even during fixation (1–3 per second). Some saccadic artifacts can be alleviated by offline-correction algorithms, but these methods leave nonnegligible residuals and cannot mitigate the saccade-related visual activity. Here, we propose a novel yet simple approach for diminishing saccadic artifacts and confounds through experimental design. We suggest that specific tasks can lead to substantially fewer saccade occurrences around the time of stimulus presentation, starting from slightly before its onset and lasting for a few hundred milliseconds. In three experiments, we compared the frequency and size of saccades in a variety of tasks. Results of Experiment 1 showed that a foveal change-detection task reduced the number and sizes of saccades, relative to a parafoveal orientation-discrimination task. Experiment 2 replicated this finding with a parafoveal object recognition task. Experiment 3 showed that both foveal and parafoveal continuous change detection tasks induced fewer and smaller saccades than a discrete orientation-discrimination task. We conclude that adding a foveal or a parafoveal continuous task reduces saccades' number and size. This would lead to better artifact correction and enable the omission of contaminated data segments. This study may be the first step toward developing saccade-free experimental designs. |
Nina N Thigpen; Forest L Gruss; Steven Garcia; David R Herring; Andreas Keil What does the dot-probe task measure? A reverse correlation analysis of electrocortical activity Journal Article Psychophysiology, 55 (6), pp. 1–16, 2018. @article{Thigpen2018b, title = {What does the dot-probe task measure? A reverse correlation analysis of electrocortical activity}, author = {Nina N Thigpen and Forest L Gruss and Steven Garcia and David R Herring and Andreas Keil}, doi = {10.1111/psyp.13058}, year = {2018}, date = {2018-01-01}, journal = {Psychophysiology}, volume = {55}, number = {6}, pages = {1--16}, abstract = {The dot-probe task is considered a gold standard for assessing the intrinsic attentive selection of one of two lateralized visual cues, measured by the response time to a subsequent, lateralized response probe. However, this task has recently been associated with poor reliability and conflicting results. To resolve these discrepancies, we tested the underlying assumption of the dot-probe task, that fast probe responses index heightened cue selection, using an electrophysiological measure of selective attention. Specifically, we used a reverse correlation approach in combination with frequency-tagged steady-state visual evoked potentials (ssVEPs). Twenty-one participants completed a modified dot-probe task in which each member of a pair of lateralized face cues, varying in emotional expression (angry-angry, neutral-angry, neutral-neutral), flickered at one of two frequencies (15 or 20 Hz), to evoke ssVEPs. One cue was then replaced by a response probe, and participants indicated the probe orientation (0° or 90°). We analyzed the ssVEP evoked by the cues as a function of response speed to the subsequent probe (i.e., a reverse correlation analysis). Electrophysiological measures of cue processing varied with probe hemifield location: Faster responses to left probes were associated with weak amplification of the preceding left cue, apparent only in a median split analysis. 
By contrast, faster responses to right probes were systematically and parametrically predicted by diminished visuocortical selection of the preceding right cue. Together, these findings highlight the poor validity of the dot-probe task, in terms of quantifying intrinsic, nondirected attentive selection irrespective of probe/cue location.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The dot-probe task is considered a gold standard for assessing the intrinsic attentive selection of one of two lateralized visual cues, measured by the response time to a subsequent, lateralized response probe. However, this task has recently been associated with poor reliability and conflicting results. To resolve these discrepancies, we tested the underlying assumption of the dot-probe task, that fast probe responses index heightened cue selection, using an electrophysiological measure of selective attention. Specifically, we used a reverse correlation approach in combination with frequency-tagged steady-state visual evoked potentials (ssVEPs). Twenty-one participants completed a modified dot-probe task in which each member of a pair of lateralized face cues, varying in emotional expression (angry-angry, neutral-angry, neutral-neutral), flickered at one of two frequencies (15 or 20 Hz), to evoke ssVEPs. One cue was then replaced by a response probe, and participants indicated the probe orientation (0° or 90°). We analyzed the ssVEP evoked by the cues as a function of response speed to the subsequent probe (i.e., a reverse correlation analysis). Electrophysiological measures of cue processing varied with probe hemifield location: Faster responses to left probes were associated with weak amplification of the preceding left cue, apparent only in a median split analysis. By contrast, faster responses to right probes were systematically and parametrically predicted by diminished visuocortical selection of the preceding right cue. 
Together, these findings highlight the poor validity of the dot-probe task, in terms of quantifying intrinsic, nondirected attentive selection irrespective of probe/cue location. |
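Frequency tagging, as used in this study, assigns each cue its own flicker frequency, so the amplitude of the EEG spectrum at that frequency indexes how strongly that cue is being processed. A numpy sketch with synthetic data; the 15 and 20 Hz values come from the abstract, everything else (sampling rate, amplitudes) is illustrative.

```python
import numpy as np

def ssvep_amplitude(signal, fs, freq):
    """Amplitude of the FFT bin nearest the tagging frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

fs, dur = 500, 2.0                       # 500 Hz sampling, 2 s epoch
t = np.arange(int(fs * dur)) / fs
# Synthetic EEG: the 15 Hz cue is processed more strongly than the 20 Hz cue.
eeg = 1.5 * np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
amp15 = ssvep_amplitude(eeg, fs, 15.0)
amp20 = ssvep_amplitude(eeg, fs, 20.0)
```

The reverse correlation analysis then sorts such per-trial tag amplitudes by the subsequent probe response time.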
Nathalie Van Humbeeck; Radha Nila Meghanathan; Johan Wagemans; Cees van Leeuwen; Andrey R Nikolaev Presaccadic EEG activity predicts visual saliency in free-viewing contour integration Journal Article Psychophysiology, 55 (12), pp. 1–21, 2018. @article{VanHumbeeck2018, title = {Presaccadic EEG activity predicts visual saliency in free-viewing contour integration}, author = {Nathalie {Van Humbeeck} and Radha Nila Meghanathan and Johan Wagemans and Cees van Leeuwen and Andrey R Nikolaev}, doi = {10.1111/psyp.13267}, year = {2018}, date = {2018-01-01}, journal = {Psychophysiology}, volume = {55}, number = {12}, pages = {1--21}, abstract = {While viewing a scene, the eyes are attracted to salient stimuli. We set out to identify the brain signals controlling this process. In a contour integration task, in which participants searched for a collinear contour in a field of randomly oriented Gabor elements, a previously established model was applied to calculate a visual saliency value for each fixation location. We studied brain activity related to the modeled saliency values, using coregistered eye tracking and EEG. To disentangle EEG signals reflecting salience in free viewing from overlapping EEG responses to sequential eye movements, we applied generalized additive mixed modeling (GAMM) to single epochs of saccade‐related EEG. We found that, when saliency at the next fixation location was high, amplitude of the presaccadic EEG activity was low. Since presaccadic activity reflects covert attention to the saccade target, our results indicate that larger attentional effort is needed for selecting less salient saccade targets than more salient ones. This effect was prominent in contour‐present conditions (half of the trials), but ambiguous in the contour‐absent condition. Presaccadic EEG activity may thus be indicative of bottom‐up factors in saccade guidance. 
The results underscore the utility of GAMM for EEG–eye movement coregistration research.}, keywords = {}, pubstate = {published}, tppubtype = {article} } While viewing a scene, the eyes are attracted to salient stimuli. We set out to identify the brain signals controlling this process. In a contour integration task, in which participants searched for a collinear contour in a field of randomly oriented Gabor elements, a previously established model was applied to calculate a visual saliency value for each fixation location. We studied brain activity related to the modeled saliency values, using coregistered eye tracking and EEG. To disentangle EEG signals reflecting salience in free viewing from overlapping EEG responses to sequential eye movements, we applied generalized additive mixed modeling (GAMM) to single epochs of saccade‐related EEG. We found that, when saliency at the next fixation location was high, amplitude of the presaccadic EEG activity was low. Since presaccadic activity reflects covert attention to the saccade target, our results indicate that larger attentional effort is needed for selecting less salient saccade targets than more salient ones. This effect was prominent in contour‐present conditions (half of the trials), but ambiguous in the contour‐absent condition. Presaccadic EEG activity may thus be indicative of bottom‐up factors in saccade guidance. The results underscore the utility of GAMM for EEG–eye movement coregistration research. |
Martin Völker; Lukas D J Fiederer; Sofie Berberich; Jiří Hammer; Joos Behncke; Pavel Kršek; Martin Tomášek; Petr Marusič; Peter C Reinacher; Volker A Coenen; Moritz Helias; Andreas Schulze-Bonhage; Wolfram Burgard; Tonio Ball The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG Journal Article NeuroImage, 173 (2018), pp. 564–579, 2018. @article{Voelker2018, title = {The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG}, author = {Martin Völker and Lukas D J Fiederer and Sofie Berberich and Jiří Hammer and Joos Behncke and Pavel Kršek and Martin Tomášek and Petr Marusi{č} and Peter C Reinacher and Volker A Coenen and Moritz Helias and Andreas Schulze-Bonhage and Wolfram Burgard and Tonio Ball}, doi = {10.1016/j.neuroimage.2018.01.059}, year = {2018}, date = {2018-01-01}, journal = {NeuroImage}, volume = {173}, number = {2018}, pages = {564--579}, abstract = {Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. 
This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. 
On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. |
Andreas Widmann; Erich Schröger; Nicole Wetzel Emotion lies in the eye of the listener: Emotional arousal to novel sounds is reflected in the sympathetic contribution to the pupil dilation response and the P3 Journal Article Biological Psychology, 133 , pp. 10–17, 2018. @article{Widmann2018, title = {Emotion lies in the eye of the listener: Emotional arousal to novel sounds is reflected in the sympathetic contribution to the pupil dilation response and the P3}, author = {Andreas Widmann and Erich Schröger and Nicole Wetzel}, doi = {10.1016/j.biopsycho.2018.01.010}, year = {2018}, date = {2018-01-01}, journal = {Biological Psychology}, volume = {133}, pages = {10--17}, publisher = {Elsevier}, abstract = {Novel sounds in the auditory oddball paradigm elicit a biphasic dilation of the pupil (PDR) and P3a as well as novelty P3 event-related potentials (ERPs). The biphasic PDR has been hypothesized to reflect the relaxation of the iris sphincter muscle due to parasympathetic inhibition and the constriction of the iris dilator muscle due to sympathetic activation. We measured the PDR and the P3 to neutral and to emotionally arousing negative novels in dark and moderate lighting conditions. By means of principal component analysis (PCA) of the PDR data we extracted two components: the early one was absent in darkness and, thus, presumably reflects parasympathetic inhibition, whereas the late component occurred in darkness and light and presumably reflects sympathetic activation. Importantly, only this sympathetic late component was enhanced for emotionally arousing (as compared to neutral) sounds supporting the hypothesis that emotional arousal specifically activates the sympathetic nervous system. In the ERPs we observed P3a and novelty P3 in response to novel sounds. Both components were enhanced for emotionally arousing (as compared to neutral) novels.
Our results demonstrate that sympathetic and parasympathetic contributions to the PDR can be separated and link emotional arousal to sympathetic nervous system activation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Novel sounds in the auditory oddball paradigm elicit a biphasic dilation of the pupil (PDR) and P3a as well as novelty P3 event-related potentials (ERPs). The biphasic PDR has been hypothesized to reflect the relaxation of the iris sphincter muscle due to parasympathetic inhibition and the constriction of the iris dilator muscle due to sympathetic activation. We measured the PDR and the P3 to neutral and to emotionally arousing negative novels in dark and moderate lighting conditions. By means of principal component analysis (PCA) of the PDR data we extracted two components: the early one was absent in darkness and, thus, presumably reflects parasympathetic inhibition, whereas the late component occurred in darkness and light and presumably reflects sympathetic activation. Importantly, only this sympathetic late component was enhanced for emotionally arousing (as compared to neutral) sounds supporting the hypothesis that emotional arousal specifically activates the sympathetic nervous system. In the ERPs we observed P3a and novelty P3 in response to novel sounds. Both components were enhanced for emotionally arousing (as compared to neutral) novels. Our results demonstrate that sympathetic and parasympathetic contributions to the PDR can be separated and link emotional arousal to sympathetic nervous system activation. |
Tommy J Wilson; Michael J Gray; Jan Willem Van Klinken; Melissa Kaczmarczyk; John J Foxe Macronutrient composition of a morning meal and the maintenance of attention throughout the morning Journal Article Nutritional Neuroscience, 21 (10), pp. 729–743, 2018. @article{Wilson2018b, title = {Macronutrient composition of a morning meal and the maintenance of attention throughout the morning}, author = {Tommy J Wilson and Michael J Gray and Jan Willem {Van Klinken} and Melissa Kaczmarczyk and John J Foxe}, doi = {10.1080/1028415X.2017.1347998}, year = {2018}, date = {2018-01-01}, journal = {Nutritional Neuroscience}, volume = {21}, number = {10}, pages = {729--743}, publisher = {Taylor & Francis}, abstract = {At present, the impact of macronutrient composition and nutrient intake on sustained attention in adults is unclear, although some prior work suggests that nutritive interventions that engender slow, steady glucose availability support sustained attention after consumption. A separate line of evidence suggests that nutrient consumption may alter electroencephalographic markers of neurophysiological activity, including neural oscillations in the alpha-band (8-14 Hz), which are known to be richly interconnected with the allocation of attention. It is here investigated whether morning ingestion of foodstuffs with differing macronutrient compositions might differentially impact the allocation of sustained attention throughout the day as indexed by both behavior and the deployment of attention-related alpha-band activity. METHODS: Twenty-four adult participants were recruited into a three-day study with a cross-over design that employed a previously validated sustained attention task (the Spatial CTET). 
On each experimental day, subjects consumed one of three breakfasts with differing carbohydrate availabilities (oatmeal, cornflakes, and water) and completed blocks of the Spatial CTET throughout the morning while behavioral performance, subjective metrics of hunger/fullness, and electroencephalographic (EEG) measurements of alpha oscillatory activity were recorded. RESULTS: Although behavior and electrophysiological metrics changed over the course of the day, no differences in their trajectories were observed as a function of breakfast condition. However, subjective metrics of hunger/fullness revealed that caloric interventions (oatmeal and cornflakes) reduced hunger across the experimental day with respect to the non-caloric, volume-matched control (water). Yet, no differences in hunger/fullness were observed between the oatmeal and cornflakes interventions. CONCLUSION: Observation of a relationship between macronutrient intervention and sustained attention (if one exists) will require further standardization of empirical investigations to aid in the synthesis and replicability of results. In addition, continued implementation of neurophysiological markers in this domain is encouraged, as they often produce nuanced insight into cognition even in the absence of overt behavioral changes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } At present, the impact of macronutrient composition and nutrient intake on sustained attention in adults is unclear, although some prior work suggests that nutritive interventions that engender slow, steady glucose availability support sustained attention after consumption. A separate line of evidence suggests that nutrient consumption may alter electroencephalographic markers of neurophysiological activity, including neural oscillations in the alpha-band (8-14 Hz), which are known to be richly interconnected with the allocation of attention. 
It is here investigated whether morning ingestion of foodstuffs with differing macronutrient compositions might differentially impact the allocation of sustained attention throughout the day as indexed by both behavior and the deployment of attention-related alpha-band activity. METHODS: Twenty-four adult participants were recruited into a three-day study with a cross-over design that employed a previously validated sustained attention task (the Spatial CTET). On each experimental day, subjects consumed one of three breakfasts with differing carbohydrate availabilities (oatmeal, cornflakes, and water) and completed blocks of the Spatial CTET throughout the morning while behavioral performance, subjective metrics of hunger/fullness, and electroencephalographic (EEG) measurements of alpha oscillatory activity were recorded. RESULTS: Although behavior and electrophysiological metrics changed over the course of the day, no differences in their trajectories were observed as a function of breakfast condition. However, subjective metrics of hunger/fullness revealed that caloric interventions (oatmeal and cornflakes) reduced hunger across the experimental day with respect to the non-caloric, volume-matched control (water). Yet, no differences in hunger/fullness were observed between the oatmeal and cornflakes interventions. CONCLUSION: Observation of a relationship between macronutrient intervention and sustained attention (if one exists) will require further standardization of empirical investigations to aid in the synthesis and replicability of results. In addition, continued implementation of neurophysiological markers in this domain is encouraged, as they often produce nuanced insight into cognition even in the absence of overt behavioral changes. |
Seref Can Gurel; Miguel Castelo-Branco; Alexander T Sack; Felix Duecker Assessing the functional role of frontal eye fields in voluntary and reflexive saccades using continuous theta burst stimulation Journal Article Frontiers in Neuroscience, 12 , pp. 1–11, 2018. @article{Gurel2018, title = {Assessing the functional role of frontal eye fields in voluntary and reflexive saccades using continuous theta burst stimulation}, author = {Seref Can Gurel and Miguel Castelo-Branco and Alexander T Sack and Felix Duecker}, doi = {10.3389/fnins.2018.00944}, year = {2018}, date = {2018-01-01}, journal = {Frontiers in Neuroscience}, volume = {12}, pages = {1--11}, abstract = {The frontal eye fields (FEFs) are core nodes of the oculomotor system contributing to saccade planning, control, and execution. Here, we aimed to reveal hemispheric asymmetries between left and right FEF in both voluntary and reflexive saccades toward horizontal and vertical targets. To this end, we applied fMRI-guided continuous theta burst stimulation (cTBS) over either left or right FEF and assessed the consequences of this disruption on saccade latencies. Using a fully counterbalanced within-subject design, we measured saccade latencies before and after the application of cTBS in eighteen healthy volunteers. In general, saccade latencies on both tasks were susceptible to our experimental manipulations, that is, voluntary saccades were slower than reflexive saccades, and downward saccades were slower than upward saccades. Contrary to our expectations, we failed to reveal any TMS-related effects on saccade latencies, and Bayesian analyses provided strong support in favor of a TMS null result for both tasks. Keeping in mind the interpretative challenges of null results, we discuss possible explanations for this absence of behavioral TMS effects, focusing on methodological differences compared to previous studies (task parameters and online vs. offline TMS interventions). 
We also speculate about what our results might reveal about the functional role of FEF.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The frontal eye fields (FEFs) are core nodes of the oculomotor system contributing to saccade planning, control, and execution. Here, we aimed to reveal hemispheric asymmetries between left and right FEF in both voluntary and reflexive saccades toward horizontal and vertical targets. To this end, we applied fMRI-guided continuous theta burst stimulation (cTBS) over either left or right FEF and assessed the consequences of this disruption on saccade latencies. Using a fully counterbalanced within-subject design, we measured saccade latencies before and after the application of cTBS in eighteen healthy volunteers. In general, saccade latencies on both tasks were susceptible to our experimental manipulations, that is, voluntary saccades were slower than reflexive saccades, and downward saccades were slower than upward saccades. Contrary to our expectations, we failed to reveal any TMS-related effects on saccade latencies, and Bayesian analyses provided strong support in favor of a TMS null result for both tasks. Keeping in mind the interpretative challenges of null results, we discuss possible explanations for this absence of behavioral TMS effects, focusing on methodological differences compared to previous studies (task parameters and online vs. offline TMS interventions). We also speculate about what our results might reveal about the functional role of FEF. |
George L Malcolm; Edward H Silson; Jennifer R Henry; Chris I Baker Transcranial magnetic stimulation to the occipital place area biases gaze during scene viewing Journal Article Frontiers in Human Neuroscience, 12 , pp. 1–13, 2018. @article{Malcolm2018, title = {Transcranial magnetic stimulation to the occipital place area biases gaze during scene viewing}, author = {George L Malcolm and Edward H Silson and Jennifer R Henry and Chris I Baker}, doi = {10.3389/fnhum.2018.00189}, year = {2018}, date = {2018-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {12}, pages = {1--13}, abstract = {We can understand viewed scenes and extract task-relevant information within a few hundred milliseconds. This process is generally supported by three cortical regions that show selectivity for scene images: parahippocampal place area (PPA), medial place area (MPA) and occipital place area (OPA). Prior studies have focused on the visual information each region is responsive to, usually within the context of recognition or navigation. Here, we move beyond these tasks to investigate gaze allocation during scene viewing. Eye movements rely on a scene's visual representation to direct saccades, and thus foveal vision. In particular, we focus on the contribution of OPA, which is: (i) located in occipito-parietal cortex, likely feeding information into parts of the dorsal pathway critical for eye movements; and (ii) contains strong retinotopic representations of the contralateral visual field. Participants viewed scene images for 1034 ms while their eye movements were recorded. On half of the trials, a 500 ms train of five transcranial magnetic stimulation (TMS) pulses was applied to the participant's cortex, starting at scene onset. TMS was applied to the right hemisphere over either OPA or the occipital face area (OFA), which also exhibits a contralateral visual field bias but shows selectivity for face stimuli. 
Participants generally made an overall left-to-right, top-to-bottom pattern of eye movements across all conditions. When TMS was applied to OPA, there was an increased saccade latency for eye movements toward the contralateral relative to the ipsilateral visual field after the final TMS pulse (400 ms). Additionally, TMS to the OPA biased fixation positions away from the contralateral side of the scene compared to the control condition, while the OFA group showed no such effect. There was no effect on horizontal saccade amplitudes. These combined results suggest that OPA might serve to represent local scene information that can then be utilized by visuomotor control networks to guide gaze allocation in natural scenes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We can understand viewed scenes and extract task-relevant information within a few hundred milliseconds. This process is generally supported by three cortical regions that show selectivity for scene images: parahippocampal place area (PPA), medial place area (MPA) and occipital place area (OPA). Prior studies have focused on the visual information each region is responsive to, usually within the context of recognition or navigation. Here, we move beyond these tasks to investigate gaze allocation during scene viewing. Eye movements rely on a scene's visual representation to direct saccades, and thus foveal vision. In particular, we focus on the contribution of OPA, which is: (i) located in occipito-parietal cortex, likely feeding information into parts of the dorsal pathway critical for eye movements; and (ii) contains strong retinotopic representations of the contralateral visual field. Participants viewed scene images for 1034 ms while their eye movements were recorded. On half of the trials, a 500 ms train of five transcranial magnetic stimulation (TMS) pulses was applied to the participant's cortex, starting at scene onset.
TMS was applied to the right hemisphere over either OPA or the occipital face area (OFA), which also exhibits a contralateral visual field bias but shows selectivity for face stimuli. Participants generally made an overall left-to-right, top-to-bottom pattern of eye movements across all conditions. When TMS was applied to OPA, there was an increased saccade latency for eye movements toward the contralateral relative to the ipsilateral visual field after the final TMS pulse (400 ms). Additionally, TMS to the OPA biased fixation positions away from the contralateral side of the scene compared to the control condition, while the OFA group showed no such effect. There was no effect on horizontal saccade amplitudes. These combined results suggest that OPA might serve to represent local scene information that can then be utilized by visuomotor control networks to guide gaze allocation in natural scenes. |
James Mathew; Frederic R Danion Ups and downs in catch-up saccades following single-pulse TMS-methodological considerations Journal Article PLoS ONE, 13 (10), pp. 1–14, 2018. @article{Mathew2018a, title = {Ups and downs in catch-up saccades following single-pulse TMS-methodological considerations}, author = {James Mathew and Frederic R Danion}, doi = {10.1371/journal.pone.0205208}, year = {2018}, date = {2018-01-01}, journal = {PLoS ONE}, volume = {13}, number = {10}, pages = {1--14}, abstract = {Transcranial magnetic stimulation (TMS) can interfere with smooth pursuit or with saccades initiated from a fixed position toward a fixed target, but little is known about the effect of TMS on catch-up saccade made to assist smooth pursuit. Here we explored the effect of TMS on catch-up saccades by means of a situation in which the moving target was driven by an external agent, or moved by the participants' hand, a condition known to decrease the occurrence of catch-up saccade. Two sites of stimulation were tested, the vertex and M1 hand area. Compared to conditions with no TMS, we found a consistent modulation of saccadic activity after TMS such that it decreased at 40-100ms, strongly resumed at 100-160ms, and then decreased at 200-300ms. Despite this modulatory effect, the accuracy of catch-up saccade was maintained, and the mean saccadic activity over the 0-300ms period remained unchanged. Those findings are discussed in the context of studies showing that single-pulse TMS can induce widespread effects on neural oscillations as well as perturbations in the latency of saccades during reaction time protocols. 
At a more general level, despite challenges and interpretational limitations making uncertain the origin of this modulatory effect, our study provides direct evidence that TMS over presumably non-oculomotor regions interferes with the initiation of catch-up saccades, and thus offers methodological considerations for future studies that wish to investigate the underlying neural circuitry of catch-up saccades using TMS.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Transcranial magnetic stimulation (TMS) can interfere with smooth pursuit or with saccades initiated from a fixed position toward a fixed target, but little is known about the effect of TMS on catch-up saccade made to assist smooth pursuit. Here we explored the effect of TMS on catch-up saccades by means of a situation in which the moving target was driven by an external agent, or moved by the participants' hand, a condition known to decrease the occurrence of catch-up saccade. Two sites of stimulation were tested, the vertex and M1 hand area. Compared to conditions with no TMS, we found a consistent modulation of saccadic activity after TMS such that it decreased at 40-100ms, strongly resumed at 100-160ms, and then decreased at 200-300ms. Despite this modulatory effect, the accuracy of catch-up saccade was maintained, and the mean saccadic activity over the 0-300ms period remained unchanged. Those findings are discussed in the context of studies showing that single-pulse TMS can induce widespread effects on neural oscillations as well as perturbations in the latency of saccades during reaction time protocols. 
At a more general level, despite challenges and interpretational limitations making uncertain the origin of this modulatory effect, our study provides direct evidence that TMS over presumably non-oculomotor regions interferes with the initiation of catch-up saccades, and thus offers methodological considerations for future studies that wish to investigate the underlying neural circuitry of catch-up saccades using TMS. |
Denis Pélisson; Ouazna Habchi; Muriel T N Panouillères; Charles Hernoux; Alessandro Farnè A cortical substrate for the long-term memory of saccadic eye movements calibration Journal Article NeuroImage, 179 , pp. 348–356, 2018. @article{Pelisson2018, title = {A cortical substrate for the long-term memory of saccadic eye movements calibration}, author = {Denis Pélisson and Ouazna Habchi and Muriel T N Panouill{è}res and Charles Hernoux and Alessandro Farn{è}}, doi = {10.1016/j.neuroimage.2018.06.051}, year = {2018}, date = {2018-01-01}, journal = {NeuroImage}, volume = {179}, pages = {348--356}, abstract = {How movements are continuously adapted to physiological and environmental changes is a fundamental question in systems neuroscience. While many studies have elucidated the mechanisms which underlie short-term sensorimotor adaptation (∼10–30 min), how these motor memories are maintained over the longer term (>3–5 days), and thanks to which neural systems, is virtually unknown. Here, we examine in healthy human participants whether the temporo-parietal junction (TPJ) is causally involved in the induction and/or the retention of saccadic eye movements' adaptation. Single-pulse transcranial magnetic stimulation (spTMS) was applied while subjects performed a ∼15 min size-decrease adaptation task of leftward reactive saccades. A TMS pulse was delivered over the TPJ in the right hemisphere (rTPJ) in each trial either 30, 60, 90 or 120 msec (in 4 separate adaptation sessions) after the saccade onset. In two control groups of subjects, the same adaptation procedure was achieved either alone (No-TMS) or combined with spTMS applied over the vertex (SHAM-TMS). While the timing of spTMS over the rTPJ did not significantly affect the speed and immediate after-effect of adaptation, we found that the amount of adaptation retention measured 10 days later was markedly larger (42%) than in both the No-TMS (21%) and the SHAM-TMS (11%) control groups.
These results demonstrate for the first time that the cerebral cortex is causally involved in maintaining long-term oculomotor memories.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How movements are continuously adapted to physiological and environmental changes is a fundamental question in systems neuroscience. While many studies have elucidated the mechanisms which underlie short-term sensorimotor adaptation (∼10–30 min), how these motor memories are maintained over the longer term (>3–5 days), and thanks to which neural systems, is virtually unknown. Here, we examine in healthy human participants whether the temporo-parietal junction (TPJ) is causally involved in the induction and/or the retention of saccadic eye movements' adaptation. Single-pulse transcranial magnetic stimulation (spTMS) was applied while subjects performed a ∼15 min size-decrease adaptation task of leftward reactive saccades. A TMS pulse was delivered over the TPJ in the right hemisphere (rTPJ) in each trial either 30, 60, 90 or 120 msec (in 4 separate adaptation sessions) after the saccade onset. In two control groups of subjects, the same adaptation procedure was achieved either alone (No-TMS) or combined with spTMS applied over the vertex (SHAM-TMS). While the timing of spTMS over the rTPJ did not significantly affect the speed and immediate after-effect of adaptation, we found that the amount of adaptation retention measured 10 days later was markedly larger (42%) than in both the No-TMS (21%) and the SHAM-TMS (11%) control groups. These results demonstrate for the first time that the cerebral cortex is causally involved in maintaining long-term oculomotor memories. |
Leyla Isik; Jedediah Singer; Joseph R Madsen; Nancy Kanwisher; Gabriel Kreiman What is changing when: Decoding visual information in movies from human intracranial recordings Journal Article NeuroImage, 180 , pp. 147–159, 2018. @article{Isik2018, title = {What is changing when: Decoding visual information in movies from human intracranial recordings}, author = {Leyla Isik and Jedediah Singer and Joseph R Madsen and Nancy Kanwisher and Gabriel Kreiman}, doi = {10.1016/j.neuroimage.2017.08.027}, year = {2018}, date = {2018-01-01}, journal = {NeuroImage}, volume = {180}, pages = {147--159}, publisher = {Elsevier Ltd}, abstract = {The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy to natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. 
In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy to natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision. |
Roxane J Itier; Frank Preston Increased early sensitivity to eyes in mouthless faces: In support of the LIFTED model of early face processing Journal Article Brain Topography, 31 (6), pp. 972–984, 2018. @article{Itier2018, title = {Increased early sensitivity to eyes in mouthless faces: In support of the LIFTED model of early face processing}, author = {Roxane J Itier and Frank Preston}, doi = {10.1007/s10548-018-0663-6}, year = {2018}, date = {2018-01-01}, journal = {Brain Topography}, volume = {31}, number = {6}, pages = {972--984}, publisher = {Springer US}, abstract = {The N170 ERP component is a central neural marker of early face perception usually thought to reflect holistic processing. However, it is also highly sensitive to eyes presented in isolation and to fixation on the eyes within a full face. The lateral inhibition face template and eye detector (LIFTED) model (Nemrodov et al. in NeuroImage 97:81–94, 2014) integrates these views by proposing a neural inhibition mechanism that perceptually glues features into a whole, in parallel to the activity of an eye detector that accounts for the eye sensitivity. The LIFTED model was derived from a large number of results obtained with intact and eyeless faces presented upright and inverted. The present study provided a control condition to the original design by replacing eyeless with mouthless faces, hereby enabling testing of specific predictions derived from the model. Using the same gaze-contingent approach, we replicated the N170 eye sensitivity regardless of face orientation. Furthermore, when eyes were fixated in upright faces, the N170 was larger for mouthless compared to intact faces, while inverted mouthless faces elicited smaller amplitude than intact inverted faces when fixation was on the mouth and nose.
The results are largely in line with the LIFTED model, in particular with the idea of an inhibition mechanism involved in holistic processing of upright faces and the lack of such inhibition in processing inverted faces. Some modifications to the original model are also proposed based on these results.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The N170 ERP component is a central neural marker of early face perception usually thought to reflect holistic processing. However, it is also highly sensitive to eyes presented in isolation and to fixation on the eyes within a full face. The lateral inhibition face template and eye detector (LIFTED) model (Nemrodov et al. in NeuroImage 97:81–94, 2014) integrates these views by proposing a neural inhibition mechanism that perceptually glues features into a whole, in parallel to the activity of an eye detector that accounts for the eye sensitivity. The LIFTED model was derived from a large number of results obtained with intact and eyeless faces presented upright and inverted. The present study provided a control condition to the original design by replacing eyeless with mouthless faces, hereby enabling testing of specific predictions derived from the model. Using the same gaze-contingent approach, we replicated the N170 eye sensitivity regardless of face orientation. Furthermore, when eyes were fixated in upright faces, the N170 was larger for mouthless compared to intact faces, while inverted mouthless faces elicited smaller amplitude than intact inverted faces when fixation was on the mouth and nose. The results are largely in line with the LIFTED model, in particular with the idea of an inhibition mechanism involved in holistic processing of upright faces and the lack of such inhibition in processing inverted faces. Some modifications to the original model are also proposed based on these results. |
Peiqing Jin; Jiajie Zou; Tao Zhou; Nai Ding Eye activity tracks task-relevant structures during speech and auditory sequence perception Journal Article Nature Communications, 9 (1), pp. 1–15, 2018. @article{Jin2018a, title = {Eye activity tracks task-relevant structures during speech and auditory sequence perception}, author = {Peiqing Jin and Jiajie Zou and Tao Zhou and Nai Ding}, doi = {10.1038/s41467-018-07773-y}, year = {2018}, date = {2018-01-01}, journal = {Nature Communications}, volume = {9}, number = {1}, pages = {1--15}, publisher = {Springer US}, abstract = {The sensory and motor systems jointly contribute to complex behaviors, but whether motor systems are involved in high-order perceptual tasks such as speech and auditory comprehension remains debated. Here, we show that ocular muscle activity is synchronized to mentally constructed sentences during speech listening, in the absence of any sentence-related visual or prosodic cue. Ocular tracking of sentences is observed in the vertical electrooculogram (EOG), whether the eyes are open or closed, and in eye blinks measured by eyetracking. Critically, the phase of sentence-tracking ocular activity is strongly modulated by temporal attention, i.e., which word in a sentence is attended. Ocular activity also tracks high-level structures in non-linguistic auditory and visual sequences, and captures rapid fluctuations in temporal attention. Ocular tracking of non-visual rhythms possibly reflects global neural entrainment to task-relevant temporal structures across sensory and motor areas, which could serve to implement temporal attention and coordinate cortical networks.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The sensory and motor systems jointly contribute to complex behaviors, but whether motor systems are involved in high-order perceptual tasks such as speech and auditory comprehension remains debated. 
Here, we show that ocular muscle activity is synchronized to mentally constructed sentences during speech listening, in the absence of any sentence-related visual or prosodic cue. Ocular tracking of sentences is observed in the vertical electrooculogram (EOG), whether the eyes are open or closed, and in eye blinks measured by eyetracking. Critically, the phase of sentence-tracking ocular activity is strongly modulated by temporal attention, i.e., which word in a sentence is attended. Ocular activity also tracks high-level structures in non-linguistic auditory and visual sequences, and captures rapid fluctuations in temporal attention. Ocular tracking of non-visual rhythms possibly reflects global neural entrainment to task-relevant temporal structures across sensory and motor areas, which could serve to implement temporal attention and coordinate cortical networks. |
Juan E Kamienkowski; Alexander Varatharajah; Mariano Sigman; Matias J Ison Parsing a mental program: Fixation-related brain signatures of unitary operations and routines in natural visual search Journal Article NeuroImage, 183 , pp. 73–86, 2018. @article{Kamienkowski2018a, title = {Parsing a mental program: Fixation-related brain signatures of unitary operations and routines in natural visual search}, author = {Juan E Kamienkowski and Alexander Varatharajah and Mariano Sigman and Matias J Ison}, doi = {10.1016/j.neuroimage.2018.08.010}, year = {2018}, date = {2018-01-01}, journal = {NeuroImage}, volume = {183}, pages = {73--86}, publisher = {Elsevier Ltd}, abstract = {Visual search involves a sequence or routine of unitary operations (i.e. fixations) embedded in a larger mental global program. The process can indeed be seen as a program based on a while loop (while the target is not found), a conditional construct (whether the target is matched or not based on specific recognition algorithms) and a decision making step to determine the position of the next searched location based on existent evidence. Recent developments in our ability to co-register brain scalp potentials (EEG) during free eye movements have allowed investigating brain responses related to fixations (fixation-Related Potentials; fERPs), including the identification of sensory and cognitive local EEG components linked to individual fixations. However, the way in which the mental program guiding the search unfolds has not yet been investigated. We performed an EEG and eye tracking co-registration experiment in which participants searched for a target face in natural images of crowds. Here we show how unitary steps of the program are encoded by specific local target detection signatures and how the positioning of each unitary operation within the global search program can be pinpointed by changes in the EEG signal amplitude as well as the signal power in different frequency bands. 
By simultaneously studying brain signatures of unitary operations and those occurring during the sequence of fixations, our study sheds light into how local and global properties are combined in implementing visual routines in natural tasks.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual search involves a sequence or routine of unitary operations (i.e. fixations) embedded in a larger mental global program. The process can indeed be seen as a program based on a while loop (while the target is not found), a conditional construct (whether the target is matched or not based on specific recognition algorithms) and a decision making step to determine the position of the next searched location based on existent evidence. Recent developments in our ability to co-register brain scalp potentials (EEG) during free eye movements have allowed investigating brain responses related to fixations (fixation-Related Potentials; fERPs), including the identification of sensory and cognitive local EEG components linked to individual fixations. However, the way in which the mental program guiding the search unfolds has not yet been investigated. We performed an EEG and eye tracking co-registration experiment in which participants searched for a target face in natural images of crowds. Here we show how unitary steps of the program are encoded by specific local target detection signatures and how the positioning of each unitary operation within the global search program can be pinpointed by changes in the EEG signal amplitude as well as the signal power in different frequency bands. By simultaneously studying brain signatures of unitary operations and those occurring during the sequence of fixations, our study sheds light into how local and global properties are combined in implementing visual routines in natural tasks. |
Hause Lin; Blair Saunders; Cendri A Hutcherson; Michael Inzlicht Midfrontal theta and pupil dilation parametrically track subjective conflict (but also surprise) during intertemporal choice Journal Article NeuroImage, 172 , pp. 838–852, 2018. @article{Lin2018b, title = {Midfrontal theta and pupil dilation parametrically track subjective conflict (but also surprise) during intertemporal choice}, author = {Hause Lin and Blair Saunders and Cendri A Hutcherson and Michael Inzlicht}, doi = {10.1016/j.neuroimage.2017.10.055}, year = {2018}, date = {2018-01-01}, journal = {NeuroImage}, volume = {172}, pages = {838--852}, publisher = {Elsevier Ltd}, abstract = {Many everyday choices are based on personal, subjective preferences. When choosing between two options, we often feel conflicted, especially when trading off costs and benefits occurring at different times (e.g., saving for later versus spending now). Although previous work has investigated the neurophysiological basis of conflict during inhibitory control tasks, less is known about subjective conflict resulting from competing subjective preferences. In this pre-registered study, we investigated subjective conflict during intertemporal choice, whereby participants chose between smaller immediate versus larger delayed rewards (e.g., $15 today vs. $22 in 30 days). We used economic modeling to parametrically vary eleven different levels of conflict, and recorded EEG data and pupil dilation. Midfrontal theta power, derived from EEG, correlated with pupil responses, and our results suggest that these signals track different gradations of subjective conflict. Unexpectedly, both signals were also maximally enhanced when decisions were surprisingly easy. Therefore, these signals may track events requiring increased attention and adaptive shifts in behavioral responses, with subjective conflict being only one type of such event. 
Our results suggest that the neural systems underlying midfrontal theta and pupil responses interact when weighing costs and benefits during intertemporal choice. Thus, understanding these interactions might elucidate how individuals resolve self-control conflicts.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Many everyday choices are based on personal, subjective preferences. When choosing between two options, we often feel conflicted, especially when trading off costs and benefits occurring at different times (e.g., saving for later versus spending now). Although previous work has investigated the neurophysiological basis of conflict during inhibitory control tasks, less is known about subjective conflict resulting from competing subjective preferences. In this pre-registered study, we investigated subjective conflict during intertemporal choice, whereby participants chose between smaller immediate versus larger delayed rewards (e.g., $15 today vs. $22 in 30 days). We used economic modeling to parametrically vary eleven different levels of conflict, and recorded EEG data and pupil dilation. Midfrontal theta power, derived from EEG, correlated with pupil responses, and our results suggest that these signals track different gradations of subjective conflict. Unexpectedly, both signals were also maximally enhanced when decisions were surprisingly easy. Therefore, these signals may track events requiring increased attention and adaptive shifts in behavioral responses, with subjective conflict being only one type of such event. Our results suggest that the neural systems underlying midfrontal theta and pupil responses interact when weighing costs and benefits during intertemporal choice. Thus, understanding these interactions might elucidate how individuals resolve self-control conflicts. |
Anna Marzecová; Antonio Schettino; Andreas Widmann; Iria SanMiguel; Sonja A Kotz; Erich Schröger Attentional gain is modulated by probabilistic feature expectations in a spatial cueing task: ERP evidence Journal Article Scientific Reports, 8 (1), pp. 1–14, 2018. @article{Marzecova2018, title = {Attentional gain is modulated by probabilistic feature expectations in a spatial cueing task: ERP evidence}, author = {Anna Marzecová and Antonio Schettino and Andreas Widmann and Iria SanMiguel and Sonja A Kotz and Erich Schröger}, doi = {10.1038/s41598-017-18347-1}, year = {2018}, date = {2018-01-01}, journal = {Scientific Reports}, volume = {8}, number = {1}, pages = {1--14}, abstract = {Several theoretical and empirical studies suggest that attention and perceptual expectations influence perception in an interactive manner, whereby attentional gain is enhanced for predicted stimuli. The current study assessed whether attention and perceptual expectations interface when they are fully orthogonal, i.e., each of them relates to different stimulus features. We used a spatial cueing task with block-wise spatial attention cues that directed attention to either left or right visual field, in which Gabor gratings of either predicted (more likely) or unpredicted (less likely) orientation were presented. The lateralised posterior N1pc component was additively influenced by attention and perceptual expectations. Bayesian analysis showed no reliable evidence for the interactive effect of attention and expectations on the N1pc amplitude. However, attention and perceptual expectations interactively influenced the frontally distributed anterior N1 component (N1a). The attention effect (i.e., enhanced N1a amplitude in the attended compared to the unattended condition) was observed only for the gratings of predicted orientation, but not in the unpredicted condition. 
These findings suggest that attention and perceptual expectations interactively influence visual processing within 200 ms after stimulus onset and such joint influence may lead to enhanced endogenous attentional control in the dorsal fronto-parietal attention network.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Several theoretical and empirical studies suggest that attention and perceptual expectations influence perception in an interactive manner, whereby attentional gain is enhanced for predicted stimuli. The current study assessed whether attention and perceptual expectations interface when they are fully orthogonal, i.e., each of them relates to different stimulus features. We used a spatial cueing task with block-wise spatial attention cues that directed attention to either left or right visual field, in which Gabor gratings of either predicted (more likely) or unpredicted (less likely) orientation were presented. The lateralised posterior N1pc component was additively influenced by attention and perceptual expectations. Bayesian analysis showed no reliable evidence for the interactive effect of attention and expectations on the N1pc amplitude. However, attention and perceptual expectations interactively influenced the frontally distributed anterior N1 component (N1a). The attention effect (i.e., enhanced N1a amplitude in the attended compared to the unattended condition) was observed only for the gratings of predicted orientation, but not in the unpredicted condition. These findings suggest that attention and perceptual expectations interactively influence visual processing within 200 ms after stimulus onset and such joint influence may lead to enhanced endogenous attentional control in the dorsal fronto-parietal attention network. |
Sarah D McCrackin; Roxane J Itier Is it about me? Time-course of self-relevance and valence effects on the perception of neutral faces with direct and averted gaze Journal Article Biological Psychology, 135 , pp. 47–64, 2018. @article{McCrackin2018, title = {Is it about me? Time-course of self-relevance and valence effects on the perception of neutral faces with direct and averted gaze}, author = {Sarah D McCrackin and Roxane J Itier}, doi = {10.1016/j.biopsycho.2018.03.003}, year = {2018}, date = {2018-01-01}, journal = {Biological Psychology}, volume = {135}, pages = {47--64}, publisher = {Elsevier}, abstract = {Most face processing research has investigated how we perceive faces presented by themselves, but we view faces every day within a rich social context. Recent ERP research has demonstrated that context cues, including self-relevance and valence, impact electrocortical and emotional responses to neutral faces. However, the time-course of these effects is still unclear, and it is unknown whether these effects interact with the face gaze direction, a cue that inherently contains self-referential information and triggers emotional responses. We primed direct and averted gaze neutral faces (gaze manipulation) with contextual sentences that contained positive or negative opinions (valence manipulation) about the participants or someone else (self-relevance manipulation). In each trial, participants rated how positive or negative, and how affectively aroused, the face made them feel. Eye-tracking ensured sentence reading and face fixation while ERPs were recorded to face presentations. Faces put into self-relevant contexts were more arousing than those in other-relevant contexts, and elicited ERP differences from 150 to 750 ms post-face, encompassing EPN and LPP components. Self-relevance interacted with valence at both the behavioural and ERP level starting 150 ms post-face. 
Finally, faces put into positive, self-referential contexts elicited different N170 ERP amplitudes depending on gaze direction. Behaviourally, direct gaze elicited more positive valence ratings than averted gaze during positive, self-referential contexts. Thus, self-relevance and valence contextual cues impact visual perception of neutral faces and interact with gaze direction during the earliest stages of face processing. The results highlight the importance of studying face processing within contexts mimicking the complexities of real world interactions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Most face processing research has investigated how we perceive faces presented by themselves, but we view faces every day within a rich social context. Recent ERP research has demonstrated that context cues, including self-relevance and valence, impact electrocortical and emotional responses to neutral faces. However, the time-course of these effects is still unclear, and it is unknown whether these effects interact with the face gaze direction, a cue that inherently contains self-referential information and triggers emotional responses. We primed direct and averted gaze neutral faces (gaze manipulation) with contextual sentences that contained positive or negative opinions (valence manipulation) about the participants or someone else (self-relevance manipulation). In each trial, participants rated how positive or negative, and how affectively aroused, the face made them feel. Eye-tracking ensured sentence reading and face fixation while ERPs were recorded to face presentations. Faces put into self-relevant contexts were more arousing than those in other-relevant contexts, and elicited ERP differences from 150 to 750 ms post-face, encompassing EPN and LPP components. Self-relevance interacted with valence at both the behavioural and ERP level starting 150 ms post-face. 
Finally, faces put into positive, self-referential contexts elicited different N170 ERP amplitudes depending on gaze direction. Behaviourally, direct gaze elicited more positive valence ratings than averted gaze during positive, self-referential contexts. Thus, self-relevance and valence contextual cues impact visual perception of neutral faces and interact with gaze direction during the earliest stages of face processing. The results highlight the importance of studying face processing within contexts mimicking the complexities of real world interactions. |
Andrey R Nikolaev; Radha Nila Meghanathan; Cees van Leeuwen Refixation control in free viewing: A specialized mechanism divulged by eye-movement related brain activity. Journal Article Journal of neurophysiology, pp. 2311–2324, 2018. @article{Nikolaev2018, title = {Refixation control in free viewing: A specialized mechanism divulged by eye-movement related brain activity.}, author = {Andrey R Nikolaev and Radha Nila Meghanathan and Cees van Leeuwen}, doi = {10.1152/jn.00121.2018}, year = {2018}, date = {2018-01-01}, journal = {Journal of neurophysiology}, pages = {2311--2324}, abstract = {In free viewing, the eyes return to previously visited locations rather frequently, even though the attentional and memory-related processes controlling eye-movement show a strong anti-refixation bias. To overcome this bias, a special refixation triggering mechanism may have to be recruited. We probed the neural evidence for such a mechanism by combining eye tracking with EEG recording. A distinctive signal associated with refixation planning was observed in the EEG during the presaccadic interval: the presaccadic potential was reduced in amplitude prior to a refixation, compared to normal fixations. The result offers direct evidence for a special refixation mechanism that operates in the saccade planning stage of eye-movement control. Once the eyes have landed on the revisited location, acquisition of visual information proceeds indistinguishably from ordinary fixations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In free viewing, the eyes return to previously visited locations rather frequently, even though the attentional and memory-related processes controlling eye-movement show a strong anti-refixation bias. To overcome this bias, a special refixation triggering mechanism may have to be recruited. We probed the neural evidence for such a mechanism by combining eye tracking with EEG recording. 
A distinctive signal associated with refixation planning was observed in the EEG during the presaccadic interval: the presaccadic potential was reduced in amplitude prior to a refixation, compared to normal fixations. The result offers direct evidence for a special refixation mechanism that operates in the saccade planning stage of eye-movement control. Once the eyes have landed on the revisited location, acquisition of visual information proceeds indistinguishably from ordinary fixations. |
Karisa B Parkington; Roxane J Itier One versus two eyes makes a difference! Early face perception is modulated by featural fixation and feature context Journal Article Cortex, 109 , pp. 35–49, 2018. @article{Parkington2018, title = {One versus two eyes makes a difference! Early face perception is modulated by featural fixation and feature context}, author = {Karisa B Parkington and Roxane J Itier}, doi = {10.1016/j.cortex.2018.08.025}, year = {2018}, date = {2018-01-01}, journal = {Cortex}, volume = {109}, pages = {35--49}, abstract = {The N170 event-related potential component is an early marker of face perception that is particularly sensitive to isolated eye regions and to eye fixations within a face. Here, this eye sensitivity was tested further by measuring the N170 to isolated facial features and to the same features fixated within a face, using a gaze-contingent procedure. The neural response to single isolated eyes and eye regions (two eyes) was also compared. Pixel intensity and contrast were controlled at the global (image) and local (featural) levels. Consistent with previous findings, larger N170 amplitudes were elicited when the left or right eye was fixated within a face, compared to the mouth or nose, demonstrating that the N170 eye sensitivity reflects higher-order perceptual processes and not merely low-level perceptual effects. The N170 was also largest and most delayed for isolated features, compared to equivalent fixations within a face. Specifically, mouth fixation yielded the largest amplitude difference, and nose fixation yielded the largest latency difference between these two contexts, suggesting the N170 may reflect a complex interplay between holistic and featural processes. Critically, eye regions elicited consistently larger and shorter N170 responses compared to single eyes, with enhanced responses for contralateral eye content, irrespective of eye or nasion fixation. 
These results confirm the importance of the eyes in early face perception, and provide novel evidence of an increased sensitivity to the presence of two symmetric eyes compared to only one eye, consistent with a neural eye region detector rather than an eye detector per se.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The N170 event-related potential component is an early marker of face perception that is particularly sensitive to isolated eye regions and to eye fixations within a face. Here, this eye sensitivity was tested further by measuring the N170 to isolated facial features and to the same features fixated within a face, using a gaze-contingent procedure. The neural response to single isolated eyes and eye regions (two eyes) was also compared. Pixel intensity and contrast were controlled at the global (image) and local (featural) levels. Consistent with previous findings, larger N170 amplitudes were elicited when the left or right eye was fixated within a face, compared to the mouth or nose, demonstrating that the N170 eye sensitivity reflects higher-order perceptual processes and not merely low-level perceptual effects. The N170 was also largest and most delayed for isolated features, compared to equivalent fixations within a face. Specifically, mouth fixation yielded the largest amplitude difference, and nose fixation yielded the largest latency difference between these two contexts, suggesting the N170 may reflect a complex interplay between holistic and featural processes. Critically, eye regions elicited consistently larger and shorter N170 responses compared to single eyes, with enhanced responses for contralateral eye content, irrespective of eye or nasion fixation. 
These results confirm the importance of the eyes in early face perception, and provide novel evidence of an increased sensitivity to the presence of two symmetric eyes compared to only one eye, consistent with a neural eye region detector rather than an eye detector per se. |
2017 |
Roxane J Itier; Karly N Neath-Tavares Effects of task demands on the early neural processing of fearful and happy facial expressions Journal Article Brain Research, 1663 , pp. 38–50, 2017. @article{Itier2017, title = {Effects of task demands on the early neural processing of fearful and happy facial expressions}, author = {Roxane J Itier and Karly N Neath-Tavares}, doi = {10.1016/j.brainres.2017.03.013}, year = {2017}, date = {2017-01-01}, journal = {Brain Research}, volume = {1663}, pages = {38--50}, publisher = {Elsevier B.V.}, abstract = {Task demands shape how we process environmental stimuli but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination, an explicit emotion discrimination and an oddball detection task, the most studied tasks in the field. Using an eye tracker, fixation on the face nose was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Task demands shape how we process environmental stimuli but their impact on the early neural processing of facial expressions remains unclear. 
In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination, an explicit emotion discrimination and an oddball detection task, the most studied tasks in the field. Using an eye tracker, fixation on the face nose was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. 
Syaheed B Jabar; Alex Filipowicz; Britt Anderson Tuned by experience: How orientation probability modulates early perceptual processing Journal Article Vision Research, 138 , pp. 86–96, 2017. @article{Jabar2017a, title = {Tuned by experience: How orientation probability modulates early perceptual processing}, author = {Syaheed B Jabar and Alex Filipowicz and Britt Anderson}, doi = {10.1016/j.visres.2017.07.008}, year = {2017}, date = {2017-01-01}, journal = {Vision Research}, volume = {138}, pages = {86--96}, publisher = {Elsevier Ltd}, abstract = {Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive ‘P300' component which might be related to either surprise or decision-making. However, the early ‘C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Probable stimuli are more often and more quickly detected. 
While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive ‘P300' component which might be related to either surprise or decision-making. However, the early ‘C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. |
Jianrong Jia; Ling Liu; Fang Fang; Huan Luo Sequential sampling of visual objects during sustained attention Journal Article PLoS Biology, 15 (6), pp. 1–19, 2017. @article{Jia2017b, title = {Sequential sampling of visual objects during sustained attention}, author = {Jianrong Jia and Ling Liu and Fang Fang and Huan Luo}, doi = {10.1371/journal.pbio.2001903}, year = {2017}, date = {2017-01-01}, journal = {PLoS Biology}, volume = {15}, number = {6}, pages = {1--19}, abstract = {In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether the dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. 
Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanism in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. The results suggest that selective attention, in addition to the classically posited attentional “focus,” involves a dynamic mechanism for monitoring all objects outside of the focus. Our findings also suggest that attention implements a space (object)-to-time transformation by acting as a series of concatenating attentional chunks that operate on 1 object at a time.}, keywords = {}, pubstate = {published}, tppubtype = {article} }