EyeLink Cognitive Publications

All EyeLink cognitive and perception research publications through 2019 (with some early 2020 articles) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we have missed any EyeLink cognitive or perception article, please email us!

All EyeLink cognitive and perception publications are also available for download and import into reference management software as a single BibTeX (.bib) file.
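For example, here is a minimal Python sketch of working with the downloaded file: it tallies entries per publication year and filters titles by a keyword, using only the standard library. The filename eyelink-cognitive.bib is a placeholder, and the regular expressions simply assume fields formatted like the entries below (e.g., year = {2020}).

import re
from collections import Counter

# Read the downloaded BibTeX file (hypothetical filename).
with open("eyelink-cognitive.bib", encoding="utf-8") as f:
    bib = f.read()

# Tally entries per publication year, newest first.
year_counts = Counter(re.findall(r"year\s*=\s*\{(\d{4})\}", bib))
for year, n in sorted(year_counts.items(), reverse=True):
    print(f"{year}: {n} entries")

# List titles containing a keyword, e.g., "visual search".
query = "visual search"
for title in re.findall(r"title\s*=\s*\{([^{}]*)\}", bib):
    if query in title.lower():
        print(title)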

 

4375 entries (page 1 of 44)

2020

Aave Hannus; Harold Bekkering; Frans W Cornelissen

Preview of partial stimulus information in search prioritizes features and conjunctions, not locations (Journal Article)

Attention, Perception, & Psychophysics, 82, pp. 140–152, 2020.


@article{Hannus2020,
title = {Preview of partial stimulus information in search prioritizes features and conjunctions, not locations},
author = {Aave Hannus and Harold Bekkering and Frans W Cornelissen},
doi = {10.3758/s13414-019-01841-1},
year = {2020},
date = {2020-09-01},
journal = {Attention, Perception, & Psychophysics},
volume = {82},
pages = {140--152},
publisher = {Springer Science and Business Media LLC},
abstract = {Visual search often requires combining information on distinct visual features such as color and orientation, but how the visual system does this is not fully understood. To better understand this, we showed observers a brief preview of part of a search stimulus—either its color or orientation—before they performed a conjunction search task. Our experimental questions were (1) whether observers would use such previews to prioritize either potential target locations or features, and (2) which neural mechanisms might underlie the observed effects. In two experiments, participants searched for a prespecified target in a display consisting of bar elements, each combining one of two possible colors and one of two possible orientations. Participants responded by making an eye movement to the selected bar. In our first experiment, we found that a preview consisting of colored bars with identical orientation improved saccadic target selection performance, while a preview of oriented gray bars substantially decreased performance. In a follow-up experiment, we found that previews consisting of discs of the same color as the bars (and thus without orientation information) hardly affected performance. Thus, performance improved only when the preview combined color and (noninformative) orientation information. Previews apparently result in a prioritization of features and conjunctions rather than of spatial locations (in the latter case, all previews should have had similar effects). Our results thus also indicate that search for, and prioritization of, combinations involve conjunctively tuned neural mechanisms. These probably reside at the level of the primary visual cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maxi Becker; Tobias Sommer; Simone Kühn

Verbal insight revisited: fMRI evidence for early processing in bilateral insulae for solutions with AHA! experience shortly after trial onset (Journal Article)

Human Brain Mapping, 41 (1), pp. 30–45, 2020.


@article{Becker2020,
title = {Verbal insight revisited: fMRI evidence for early processing in bilateral insulae for solutions with AHA! experience shortly after trial onset},
author = {Maxi Becker and Tobias Sommer and Simone Kühn},
doi = {10.1002/hbm.24785},
year = {2020},
date = {2020-09-01},
journal = {Human Brain Mapping},
volume = {41},
number = {1},
pages = {30--45},
abstract = {In insight problem solving, solutions with AHA! experience have been assumed to be the consequence of restructuring of a problem which usually takes place shortly before the solution. However, evidence from priming studies suggests that solutions with AHA! are not spontaneously generated during the solution process but already relate to prior subliminal processing. We test this hypothesis by conducting an fMRI study using a modified compound remote associates paradigm which incorporates semantic priming. We observe stronger brain activity in bilateral anterior insulae already shortly after trial onset in problems that were later solved with than without AHA!. This early activity was independent of semantic priming but may be related to other lexical properties of attended words helping to reduce the amount of solutions to look for. In contrast, there was more brain activity in bilateral anterior insulae during solutions that were solved without than with AHA!. This timing (after trial start/during solution) × solution experience (with/without AHA!) interaction was significant. The results suggest that (a) solutions accompanied with AHA! relate to early solution-relevant processing and (b) both solution experiences differ in timing when solution-relevant processing takes place. In this context, we discuss the potential role of the anterior insula as part of the salience network involved in problem solving by allocating attentional resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Joshua Zonca; Giorgio Coricelli; Luca Polonio

Gaze data reveal individual differences in relational representation processes (Journal Article)

Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (2), pp. 257–279, 2020.


@article{Zonca2020,
title = {Gaze data reveal individual differences in relational representation processes},
author = {Joshua Zonca and Giorgio Coricelli and Luca Polonio},
doi = {10.1037/xlm0000723},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {46},
number = {2},
pages = {257--279},
publisher = {American Psychological Association Inc.},
abstract = {In our everyday life, we often need to anticipate the potential occurrence of events and their consequences. In this context, the way we represent contingencies can determine our ability to adapt to the environment. However, it is not clear how agents encode and organize available knowledge about the future to react to possible states of the world. In the present study, we investigated the process of contingency representation with three eye-tracking experiments. In Experiment 1, we introduced a novel relational-inference task in which participants had to learn and represent conditional rules regulating the occurrence of interdependent future events. A cluster analysis on early gaze data revealed the existence of 2 distinct types of encoders. A group of (sophisticated) participants built exhaustive contingency models that explicitly linked states with each of their potential consequences. Another group of (unsophisticated) participants simply learned binary conditional rules without exploring the underlying relational complexity. Analyses of individual cognitive measures revealed that cognitive reflection is associated with the emergence of either sophisticated or unsophisticated representation behavior. In Experiment 2, we observed that unsophisticated participants switched toward the sophisticated strategy after having received information about its existence, suggesting that representation behavior was modulated by strategy generation mechanisms. In Experiment 3, we showed that the heterogeneity in representation strategy emerges also in conditional reasoning with verbal sequences, indicating the existence of a general disposition in building either sophisticated or unsophisticated models of contingencies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Anke Weidmann; Laura Richert; Maximilian Bernecker; Miriam Knauss; Kathlen Priebe; Benedikt Reuter; Martin Bohus; Meike Müller-Engelmann; Thomas Fydrich

Dwelling on verbal but not pictorial threat cues: An eye-tracking study with adult survivors of childhood interpersonal violence (Journal Article)

Psychological Trauma: Theory, Research, Practice, and Policy, 12 (1), pp. 46–54, 2020.


@article{Weidmann2020,
title = {Dwelling on verbal but not pictorial threat cues: An eye-tracking study with adult survivors of childhood interpersonal violence},
author = {Anke Weidmann and Laura Richert and Maximilian Bernecker and Miriam Knauss and Kathlen Priebe and Benedikt Reuter and Martin Bohus and Meike Müller-Engelmann and Thomas Fydrich},
doi = {10.1037/tra0000424},
year = {2020},
date = {2020-01-01},
journal = {Psychological Trauma: Theory, Research, Practice, and Policy},
volume = {12},
number = {1},
pages = {46--54},
abstract = {Objective: Previous studies have found evidence of an attentional bias for trauma-related stimuli in posttraumatic stress disorder (PTSD) using eye-tracking (ET) technology. However, it is unclear whether findings for PTSD after traumatic events in adulthood can be transferred to PTSD after interpersonal trauma in childhood. The latter is often accompanied by more complex symptom features, including, for example, affective dysregulation, and has not yet been studied using ET. The aim of this study was to explore which components of attention are biased in adult victims of childhood trauma with PTSD compared to those without PTSD. Method: Female participants with (n = 27) or without (n = 27) PTSD who had experienced interpersonal violence in childhood or adolescence watched different trauma-related stimuli (Experiment 1: words, Experiment 2: facial expressions). We analyzed whether trauma-related stimuli were primarily detected (vigilance bias) and/or dwelled on longer (maintenance bias) compared to stimuli of other emotional qualities. Results: For trauma-related words, there was evidence of a maintenance bias but not of a vigilance bias. For trauma-related facial expressions, there was no evidence of any bias. Conclusions: At present, an attentional bias to trauma-related stimuli cannot be considered as robust in PTSD following trauma in childhood compared to that of PTSD following trauma in adulthood. The findings are discussed with respect to difficulties attributing effects specifically to PTSD in this highly comorbid though understudied population.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maximilian Stefani; Marian Sauter; Wolfgang Mack

Delayed disengagement from irrelevant fixation items: Is it generally functional? (Journal Article)

Attention, Perception, & Psychophysics, pp. 1–18, 2020.


@article{Stefani2020,
title = {Delayed disengagement from irrelevant fixation items: Is it generally functional?},
author = {Maximilian Stefani and Marian Sauter and Wolfgang Mack},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
pages = {1--18},
publisher = {Attention, Perception, & Psychophysics},
abstract = {In a circular visual search paradigm, the disengagement of attention is automatically delayed when a fixated but irrelevant center item shares features of the target item. Additionally, if mismatching letters are presented on these items, response times (RTs) are slowed further, while matching letters evoke faster responses (Wright, Boot, & Brockmole, 2015a). This is interpreted as a functional reason of the delayed disengagement effect in terms of deeper processing of the fixation item. The purpose of the present study was the generalization of these findings to unfamiliar symbols and to linear instead of circular layouts. Experiments 1 and 2 replicated the functional delayed disengagement effect with letters and symbols. In Experiment 3, the search layout was changed from circular to linear and only saccades from left to right had to be performed. We did not find supportive data for the proposed functional nature of the effect. In Experiments 4 and 5, we tested whether the unidirectional saccade decision, a potential blurring by adjacent items, or a lack of statistical power was the cause of the diminished effects in Experiment 3. With increased sample sizes, the delayed disengagement effect as well as its functional underpinning were now observed consistently. Taken together, our results support prior assumptions that delayed disengagement effects are functionally rooted in a deeper processing of the fixation items. They also generalize to unfamiliar symbols and linear display layouts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Deirdre A Robertson; Peter D Lunn

The effect of spatial location of calorie information on choice, consumption and eye movements (Journal Article)

Appetite, 144, pp. 1–10, 2020.


@article{Robertson2020,
title = {The effect of spatial location of calorie information on choice, consumption and eye movements},
author = {Deirdre A Robertson and Peter D Lunn},
doi = {10.1016/j.appet.2019.104446},
year = {2020},
date = {2020-01-01},
journal = {Appetite},
volume = {144},
pages = {1--10},
abstract = {We manipulated the presence and spatial location of calorie labels on menus while tracking eye movements. A novel “lab-in-the-field” experimental design allowed eye movements to be recorded while participants chose lunch from a menu, unaware that their choice was part of a study. Participants exposed to calorie information ordered 93 fewer calories (11%) relative to a control group who saw no calorie labels. The difference in number of calories consumed was greater still. The impact was strongest when calorie information was displayed just to the right of the price, in an equivalent font. The effects were mediated by knowledge of the amount of calories in the meal, implying that calorie posting led to more informed decision-making. There was no impact on enjoyment of the meal. The eye-tracking data suggested that the spatial arrangement altered individuals' search strategies while viewing the menu. This research suggests that the spatial location of calories on menus may be an important consideration when designing calorie posting legislation and policy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Johannes Rennig; Kira Wegner-Clemens; Michael S Beauchamp

Face viewing behavior predicts multisensory gain during speech perception (Journal Article)

Psychonomic Bulletin & Review, 27, pp. 70–77, 2020.


@article{Rennig2020,
title = {Face viewing behavior predicts multisensory gain during speech perception},
author = {Johannes Rennig and Kira Wegner-Clemens and Michael S Beauchamp},
doi = {10.3758/s13423-019-01665-y},
year = {2020},
date = {2020-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {27},
pages = {70--77},
publisher = {Psychonomic Bulletin & Review},
abstract = {Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy from 2–58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3–98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth equated to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


A Pressigout; K Dore-Mazars

How does number magnitude influence temporal and spatial parameters of eye movements? (Journal Article)

Experimental Brain Research, 238, pp. 101–109, 2020.


@article{Pressigout2020,
title = {How does number magnitude influence temporal and spatial parameters of eye movements?},
author = {A Pressigout and K Dore-Mazars},
doi = {10.1007/s00221-019-05701-0},
year = {2020},
date = {2020-01-01},
journal = {Experimental Brain Research},
volume = {238},
pages = {101--109},
publisher = {Springer Berlin Heidelberg},
abstract = {The influence of numerical processing on individuals' behavior is now well documented. The spatial representation of numbers on a left-to-right mental line (i.e., SNARC effect) has been shown to have sensorimotor consequences, the majority of studies being mainly concerned with its impact on the response times. Its impact on the motor programming stage remains less documented, although swiping movement amplitudes have recently been shown to be modulated by number magnitude. Regarding saccadic eye movements, the few available studies have not provided clear-cut conclusions. They showed that spatial–numerical associations modulated ocular drifts, but not the amplitude of memory-guided saccades. Because these studies held saccadic coordinates constant, which might have masked potential numerical effects, we examined whether spontaneous saccadic eye movements (with no saccadic target) could reflect numerical effects. Participants were asked to look either to the left or to the right side of an empty screen to estimate the magnitude (< or > 5) of a centrally presented digit. Latency data confirmed the presence of the classical SNARC and distance effects. More critically, saccade amplitude reflected a numerical effect: participants' saccades were longer for digits far from the standard (1 and 9) and were shorter for digits close to it (4 and 6). Our results suggest that beyond response times, kinematic parameters also offer valuable information for the understanding of the link between numerical cognition and motor programming.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Salome Pedrett; Lea Kaspar; Andrea Frick

Understanding of object rotation between two and three years of age (Journal Article)

Developmental Psychology, 56 (2), pp. 261–274, 2020.


@article{Pedrett2020,
title = {Understanding of object rotation between two and three years of age},
author = {Salome Pedrett and Lea Kaspar and Andrea Frick},
doi = {10.1037/dev0000871},
year = {2020},
date = {2020-01-01},
journal = {Developmental Psychology},
volume = {56},
number = {2},
pages = {261--274},
abstract = {Toddlers' understanding of object rotation was investigated using a multimethod approach. Participants were 44 toddlers between 22 and 38 months of age. In an eye-tracking task, they observed a shape that rotated and disappeared briefly behind an occluder. In an object-fitting task, they rotated wooden blocks and fit them through apertures. Results of the eye-tracking task showed that with increasing age, the toddlers encoded the visible rotation using a more complex eye-movement pattern, increasingly combining tracking movements with gaze shifts to the pivot point. During occlusion, anticipatory looks to the location where the shape would reappear increased with age, whereas looking back to the location where the shape had just disappeared decreased. This suggests that, with increasing age, the toddlers formed a clearer mental representation about the object and its rotational movement. In the object-fitting task, the toddlers succeeded more with increasing age and also rotated the wooden blocks more often correctly before they tried to insert them. Importantly, these preadjustments correlated with anticipatory eye movements, suggesting that both measures tap the same underlying understanding of object rotation. The findings yield new insights into the relation between tasks using looking times and behavioral measures as dependent variables and thus may help to clarify performance differences that have previously been observed in studies with infants and young children.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kinan Muhammed; Edwin Dalmaijer; Sanjay Manohar; Masud Husain

Voluntary modulation of saccadic peak velocity associated with individual differences in motivation (Journal Article)

Cortex, 122, pp. 198–212, 2020.


@article{Muhammed2020,
title = {Voluntary modulation of saccadic peak velocity associated with individual differences in motivation},
author = {Kinan Muhammed and Edwin Dalmaijer and Sanjay Manohar and Masud Husain},
doi = {10.1016/j.cortex.2018.12.001},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {198--212},
publisher = {Elsevier Ltd},
abstract = {Saccadic peak velocity increases in a stereotyped manner with the amplitude of eye movements. This relationship, known as the main sequence, has classically been considered to be fixed, although several recent studies have demonstrated that velocity can be modulated to some extent by external incentives. However, the ability to voluntarily control saccadic velocity and its association with motivation has yet to be investigated. Here, in three separate experimental paradigms, we measured the effects of incentivisation on saccadic velocity, reaction time and preparatory pupillary changes in 53 young healthy participants. In addition, the ability to voluntarily modulate saccadic velocity with and without incentivisation was assessed. Participants varied in their ability to increase and decrease the velocity of their saccades when instructed to do so. This effect correlated with motivation level across participants, and was further modulated by addition of monetary reward and avoidance of loss. The findings show that a degree of voluntary control of saccadic velocity is possible in some individuals, and that the ability to modulate peak velocity is associated with intrinsic levels of motivation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Anna Kosovicheva; Peter J Bex

What color was it? A psychophysical paradigm for tracking subjective progress in continuous tasks (Journal Article)

Perception, 49 (1), pp. 21–38, 2020.


@article{Kosovicheva2020,
title = {What color was it? A psychophysical paradigm for tracking subjective progress in continuous tasks},
author = {Anna Kosovicheva and Peter J Bex},
doi = {10.1177/0301006619886247},
year = {2020},
date = {2020-01-01},
journal = {Perception},
volume = {49},
number = {1},
pages = {21--38},
abstract = {When making a sequence of fixations, how does the timing of visual experience compare with the timing of fixation onsets? Previous studies have tracked shifts of attention or perceived gaze direction using self-report methods. We used a similar method, a dynamic color technique, to measure subjective timing in continuous tasks involving fixation sequences. Does the time that observers report reading a word coincide with their fixation on it, or is there an asynchrony, and does this relationship depend on the observer's task? Observers read sentences that continuously changed in hue and identified the color of a word at the time that they read it using a color palette. We compared responses with a nonreading condition, where observers reproduced their fixations, but viewed nonword stimuli. Results showed a delay between the color of stimuli at fixation onset and the reported color during perception. For nonword tasks, the delay was constant. However, in the reading task, the delay was larger for earlier compared with later words in the sentence. Our results offer a new method for measuring awareness or subjective progress within fixation sequences, which can be extended to other continuous tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Johan Hulleman; Kristofer Lund; Paul A Skarratt

Medium versus difficult visual search: How a quantitative change in the functional visual field leads to a qualitative difference in performance (Journal Article)

Attention, Perception, & Psychophysics, 82, pp. 118–139, 2020.


@article{Hulleman2020,
title = {Medium versus difficult visual search: How a quantitative change in the functional visual field leads to a qualitative difference in performance},
author = {Johan Hulleman and Kristofer Lund and Paul A Skarratt},
doi = {10.3758/s13414-019-01787-4},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
volume = {82},
pages = {118--139},
abstract = {The dominant theories of visual search assume that search is a process involving comparisons of individual items against a target description that is based on the properties of the target in isolation. Here, we present four experiments that demonstrate that this holds true only in difficult search. In medium search it seems that the relation between the target and neighbouring items is also part of the target description. We used two sets of oriented lines to construct the search items. The cardinal set contained horizontal and vertical lines, the diagonal set contained left diagonal and right diagonal lines. In all experiments, participants knew the identity of the target and the line set used to construct it. In difficult search this knowledge allowed performance to improve in displays where only half of the search items came from the same line set as the target (50% eligibility), relative to displays where all items did (100% eligibility). However, in medium search, performance was actually poorer for 50% eligibility, especially on target-absent trials. This opposite effect of ineligible items in medium search and difficult search is hard to reconcile with theories based on individual items. It is more in line with theories that conceive search as a sequence of fixations where the number of items processed during a fixation depends on the difficulty of the search task: When search is medium, multiple items are processed per fixation. But when search is difficult, only a single item is processed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


James E Hoffman; Minwoo Kim; Matt Taylor; Kelsey Holiday

Emotional capture during emotion-induced blindness is not automatic (Journal Article)

Cortex, 122, pp. 140–158, 2020.


@article{Hoffman2020,
title = {Emotional capture during emotion-induced blindness is not automatic},
author = {James E Hoffman and Minwoo Kim and Matt Taylor and Kelsey Holiday},
doi = {10.1016/j.cortex.2019.03.013},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {140--158},
publisher = {Elsevier Ltd},
abstract = {The present research used behavioral and event-related brain potentials (ERP) measures to determine whether emotional capture is automatic in the emotion-induced blindness (EIB) paradigm. The first experiment varied the priority of performing two concurrent tasks: identifying a negative or neutral picture appearing in a rapid serial visual presentation (RSVP) stream of pictures and multiple object tracking (MOT). Results showed that increased attention to the MOT task resulted in decreased accuracy for identifying both negative and neutral target pictures accompanied by decreases in the amplitude of the P3b component. In contrast, the early posterior negativity (EPN) component elicited by negative pictures was unaffected by variations in attention. Similarly, there was a decrement in MOT performance for dual-task versus single task conditions but no effect of picture type (negative vs neutral) on MOT accuracy, which isn't consistent with automatic emotional capture of attention. However, the MOT task might simply be insensitive to brief interruptions of attention. The second experiment used a more sensitive reaction time (RT) measure to examine this possibility. Results showed that RT to discriminate a gap appearing in a tracked object was delayed by the simultaneous appearance of to-be-ignored distractor pictures even though MOT performance was once again unaffected by the distractor. Importantly, the RT delay was the same for both negative and neutral distractors suggesting that capture was driven by physical salience rather than emotional salience of the distractors. Despite this lack of emotional capture, the EPN component, which is thought to reflect emotional capture, was still present. We suggest that the EPN doesn't reflect capture but rather downstream effects of attention, including object recognition. These results show that capture by emotional pictures in EIB can be suppressed when attention is engaged in another difficult task. The results have important implications for understanding capture effects in EIB.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Rachael Gwinn; Ian Krajbich

Attitudes and attention (Journal Article)

Journal of Experimental Social Psychology, 86, pp. 1–8, 2020.


@article{Gwinn2020,
title = {Attitudes and attention},
author = {Rachael Gwinn and Ian Krajbich},
doi = {10.1016/j.jesp.2019.103892},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Social Psychology},
volume = {86},
pages = {1--8},
abstract = {Attitudes play a vital role in our everyday decisions. However, it is unclear how various dimensions of attitudes affect the choice process, for instance the way that people allocate attention between alternatives. In this study we investigated these questions using eye-tracking and a two alternative forced food-choice task after measuring subjective values (attitude extremity) and their accompanying accessibility, certainty, and stability. Understanding this basic decision-making process is key if we are to gain insight on how to combat societal problems like obesity and other issues related to diet. We found that participants allocated more attention to items with lower attitude accessibility, but tended to choose items with higher attitude accessibility. Higher attitude certainty and stability had no effects on attention, but led to more attitude-consistent choices. These results imply that people are not simply choosing in line with their subjective values but are affected by other aspects of their attitudes. In addition, our attitude accessibility results indicate that more attention is not always beneficial.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ian Donovan; Ying Joey Zhou; Marisa Carrasco

In search of exogenous feature-based attention (Journal Article)

Attention, Perception, & Psychophysics, 82 (1), pp. 312–329, 2020.


@article{Donovan2020,
title = {In search of exogenous feature-based attention},
author = {Ian Donovan and Ying Joey Zhou and Marisa Carrasco},
doi = {10.3758/s13414-019-01815-3},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
volume = {82},
number = {1},
pages = {312--329},
abstract = {Visual attention prioritizes the processing of sensory information at specific spatial locations (spatial attention; SA) or with specific feature values (feature-based attention; FBA). SA is well characterized in terms of behavior, brain activity, and temporal dynamics, for both top-down (endogenous) and bottom-up (exogenous) spatial orienting. FBA has been thoroughly studied in terms of top-down endogenous orienting, but much less is known about the potential of bottom-up exogenous influences of FBA. Here, in four experiments, we adapted a procedure used in two previous studies that reported exogenous FBA effects, with the goal of replicating and expanding on these findings, especially regarding its temporal dynamics. Unlike the two previous studies, we did not find significant effects of exogenous FBA. This was true (1) whether accuracy or RT was prioritized as the main measure, (2) with precues presented peripherally or centrally, (3) with cue-to-stimulus ISIs of varying durations, (4) with four or eight possible target locations, (5) at different meridians, (6) with either brief or long stimulus presentations, (7) and with either fixation contingent or noncontingent stimulus displays. In the last experiment, a postexperiment participant questionnaire indicated that only a small subset of participants, who mistakenly believed the irrelevant color of the precue indicated which stimulus was the target, exhibited benefits for valid exogenous FBA precues. Overall, we conclude that with the protocol used in the studies reporting exogenous FBA, the exogenous stimulus-driven influence of FBA is elusive at best, and that FBA is primarily a top-down, goal-driven process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sabrina Michelle Di Lonardo; Matthew G Huebner; Katherine Newman; Jo-Anne LeFevre

Fixated in unfamiliar territory: Mapping estimates across typical and atypical number lines (Journal Article)

Quarterly Journal of Experimental Psychology, 73 (2), pp. 279–294, 2020.

@article{DiLonardo2020,
title = {Fixated in unfamiliar territory: Mapping estimates across typical and atypical number lines},
author = {Sabrina Michelle {Di Lonardo} and Matthew G Huebner and Katherine Newman and Jo-Anne LeFevre},
doi = {10.1177/1747021819881631},
year = {2020},
date = {2020-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {73},
number = {2},
pages = {279--294},
abstract = {Adults (N = 72) estimated the location of target numbers on number lines that varied in numerical range (i.e., typical range 0–10,000 or atypical range 0–7,000) and spatial orientation (i.e., the 0 endpoint on the left [traditional] or on the right [reversed]). Eye-tracking data were used to assess strategy use. Participants made meaningful first fixations on the line, with fixations occurring around the origin for low target numbers and around the midpoint and endpoint for high target numbers. On traditional direction number lines, participants used left-to-right scanning and showed a leftward bias; these effects were reduced for the reverse direction number lines. Participants made fixations around the midpoint for both ranges but were less accurate when estimating target numbers around the midpoint on the 7,000-range number line. Thus, participants are using the internal benchmark (i.e., midpoint) to guide estimates on atypical range number lines, but they have difficulty calculating the midpoint, leading to less accurate estimates. In summary, both range and direction influenced strategy use and accuracy, suggesting that both numerical and spatial processes influence number line estimation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
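
For readers who want to reproduce the accuracy analysis in a study like this, percent absolute error (PAE) is a standard accuracy measure in the number-line estimation literature; the sketch below is an illustration of that measure, not the authors' code, and the trial values are hypothetical.

def percent_absolute_error(estimate, target, line_max, line_min=0):
    # PAE = |estimate - target| / numerical range * 100
    return abs(estimate - target) / (line_max - line_min) * 100

# Hypothetical trial: the target 3,500 is placed where 4,200 sits
# on a 0-7,000 line -> 700 / 7,000 * 100 = 10% error
print(percent_absolute_error(estimate=4200, target=3500, line_max=7000))  # 10.0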

  • doi:10.1177/1747021819881631

Xianglan Chen; Hulin Ren; Yamin Liu; Bendegul Okumus; Anil Bilgihan

Attention to Chinese menus with metaphorical or metonymic names: An eye movement lab experiment Journal Article

International Journal of Hospitality Management, 84 , pp. 1–10, 2020.

@article{Chen2020,
title = {Attention to Chinese menus with metaphorical or metonymic names: An eye movement lab experiment},
author = {Xianglan Chen and Hulin Ren and Yamin Liu and Bendegul Okumus and Anil Bilgihan},
doi = {10.1016/J.IJHM.2019.05.001},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Hospitality Management},
volume = {84},
pages = {1--10},
publisher = {Pergamon},
abstract = {Food is as cultural as it is practical, and names of dishes accordingly have cultural nuances. Menus serve as communication tools between restaurants and their guests, representing the culinary philosophy of the chefs and proprietors involved. The purpose of this experimental lab study is to compare differences of attention paid to textual and pictorial elements of menus with metaphorical and/or metonymic names. Eye movement technology was applied in a 2 × 3 between-subject experiment (n = 40), comparing the strength of visual metaphors (e.g., images of menu items on the menu) and direct textual names in Chinese and English with regard to guests' willingness to purchase the dishes in question. Post-test questionnaires were also employed to assess participants' attitudes toward menu designs. Study results suggest that visual metaphors are more efficient when reflecting a product's strength. Images are shown to positively influence consumers' expectations of taste and enjoyment, garnering the most attention under all six conditions studied here, and constitute the most effective format when Chinese-only names are present. The textual claim increases perception of the strength of menu items along with purchase intention. Metaphorical dish names with bilingual (i.e., Chinese and English) names hold the greatest appeal. This result can be interpreted from the perspective of grounded cognition theory, which suggests that situated simulations and re-enactment of perceptual, motor, and affective processes can support abstract thought. The lab results and survey provide specific theoretical and managerial implications with regard to translating names of Chinese dishes to attract customers' attention to specific menu items.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/J.IJHM.2019.05.001

Soazig Casteau; Daniel T Smith

Covert attention beyond the range of eye-movements: Evidence for a dissociation between exogenous and endogenous orienting Journal Article

Cortex, 122 , pp. 170–186, 2020.

@article{Casteau2020a,
title = {Covert attention beyond the range of eye-movements: Evidence for a dissociation between exogenous and endogenous orienting},
author = {Soazig Casteau and Daniel T Smith},
doi = {10.1016/j.cortex.2018.11.007},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {170--186},
publisher = {Elsevier Ltd},
abstract = {The relationship between covert shifts of attention and the oculomotor system has been the subject of numerous studies. A widely held view, known as Premotor Theory, is that covert attention depends upon activation of the oculomotor system. However, recent work has argued that Premotor Theory is only true for covert, exogenous orienting of attention and that covert endogenous orienting is largely independent of the oculomotor system. To address this issue we examined how endogenous and exogenous covert orienting of attention was affected when stimuli were presented at a location outside the range of saccadic eye movements. Results from Experiment 1 showed that exogenous covert orienting was abolished when stimuli were presented beyond the range of saccadic eye movements, but preserved when stimuli were presented within this range. In contrast, in Experiment 2 endogenous covert orienting was preserved when stimuli appeared beyond the saccadic range. Finally, Experiment 3 confirmed the observations of Experiments 1 and 2. Our results demonstrate that exogenous, covert orienting is limited to the range of overt saccadic eye movements, whereas covert endogenous orienting is not. These results are consistent with a weak, exogenous-only version of Premotor Theory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cortex.2018.11.007

Soazig Casteau; Daniel T Smith

On the link between attentional search and the oculomotor system: Is preattentive search restricted to the range of eye movements? Journal Article

Attention, Perception, & Psychophysics, pp. 1–15, 2020.

@article{Casteau2020,
title = {On the link between attentional search and the oculomotor system: Is preattentive search restricted to the range of eye movements?},
author = {Soazig Casteau and Daniel T Smith},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
pages = {1--15},
publisher = {Springer Science and Business Media LLC},
abstract = {It has been proposed that covert visual search can be fast, efficient, and stimulus driven, particularly when the target is defined by a salient single feature, or slow, inefficient, and effortful when the target is defined by a nonsalient conjunction of features. This distinction between fast, stimulus-driven orienting and slow, effortful orienting can be related to the distinction between exogenous spatial attention and endogenous spatial attention. Several studies have shown that exogenous, covert orienting is limited to the range of saccadic eye movements, whereas covert endogenous orienting is independent of the range of saccadic eye movements. The current study examined whether covert visual search is affected in a similar way. Experiment 1 showed that covert visual search for feature singletons was impaired when stimuli were presented beyond the range of saccadic eye movements, whereas conjunction search was unaffected by array position. Experiment 2 replicated and extended this effect by measuring search times at 6 eccentricities. The impairment in covert feature search emerged only when stimuli crossed the effective oculomotor range and remained stable for locations further into the periphery, ruling out the possibility that the results of Experiment 1 were due to a failure to fully compensate for the effects of cortical magnification. The findings are interpreted in terms of biased competition and oculomotor theories of spatial attention. It is concluded that, as with covert exogenous orienting, biological constraints on overt orienting in the oculomotor system constrain covert, preattentive search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christopher D D Cabrall; Riender Happee; Joost C F De Winter

Prediction of effort and eye movement measures from driving scene components Journal Article

Transportation Research Part F: Traffic Psychology and Behaviour, 68 , pp. 187–197, 2020.

@article{Cabrall2020,
title = {Prediction of effort and eye movement measures from driving scene components},
author = {Christopher D D Cabrall and Riender Happee and Joost C F {De Winter}},
doi = {10.1016/j.trf.2019.11.001},
year = {2020},
date = {2020-01-01},
journal = {Transportation Research Part F: Traffic Psychology and Behaviour},
volume = {68},
pages = {187--197},
publisher = {Elsevier Ltd},
abstract = {For transitions of control in automated vehicles, driver monitoring systems (DMS) may need to discern task difficulty and driver preparedness. Such DMS require models that relate driving scene components, driver effort, and eye measurements. Across two sessions, 15 participants enacted receiving control within 60 randomly ordered dashcam videos (3-second duration) with variations in visible scene components: road curve angle, road surface area, road users, symbols, infrastructure, and vegetation/trees while their eyes were measured for pupil diameter, fixation duration, and saccade amplitude. The subjective measure of effort and the objective measure of saccade amplitude evidenced the highest correlations (r = 0.34 and r = 0.42, respectively) with the scene component of road curve angle. In person-specific regression analyses combining all visual scene components as predictors, average predictive correlations ranged between 0.49 and 0.58 for subjective effort and between 0.36 and 0.49 for saccade amplitude, depending on cross-validation techniques of generalization and repetition. In conclusion, the present regression equations establish quantifiable relations between visible driving scene components with both subjective effort and objective eye movement measures. In future DMS, such knowledge can help inform road-facing and driver-facing cameras to jointly establish the readiness of would-be drivers ahead of receiving control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
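
The person-specific regressions described above combine the scene components as predictors and evaluate them with cross-validated predictive correlations. A minimal sketch of that general approach, on simulated data with scikit-learn; this is not the authors' pipeline, and the data layout is an assumption.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((60, 6))                          # 60 videos x 6 scene components (simulated)
y = X @ rng.random(6) + rng.normal(0, 0.1, 60)   # simulated effort ratings, one participant

# Cross-validated predictions, then the predictive correlation r(y, y_hat)
y_hat = cross_val_predict(LinearRegression(), X, y,
                          cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"predictive r = {np.corrcoef(y, y_hat)[0, 1]:.2f}")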

  • doi:10.1016/j.trf.2019.11.001

John Brand; Travis D Masterson; Jennifer A Emond; Reina Lansigan; Diane Gilbert-diamond

Measuring attentional bias to food cues in young children using a visual search task: An eye-tracking study Journal Article

Appetite, 148 , pp. 1–7, 2020.

@article{Brand2020,
title = {Measuring attentional bias to food cues in young children using a visual search task: An eye-tracking study},
author = {John Brand and Travis D Masterson and Jennifer A Emond and Reina Lansigan and Diane Gilbert-diamond},
doi = {10.1016/j.appet.2020.104610},
year = {2020},
date = {2020-01-01},
journal = {Appetite},
volume = {148},
pages = {1--7},
publisher = {Elsevier},
abstract = {Objective: Attentional bias to food cues may be a risk factor for childhood obesity, yet there are few paradigms to measure such biases in young children. Therefore, the present work introduces an eye-tracking visual search task to measure attentional bias in young children. Methods: Fifty-one 3-6-year-olds played a game to find a target cartoon character among food (experimental condition) or toy (control condition) distractors. Children completed the experimental and toy conditions on two separate visits in randomized order. Behavioral (response latencies) and eye-tracking measures (time to first fixation, initial gaze duration, cumulative gaze duration) of attention to food and toy cues were computed. Regressions were used to test for attentional bias to food versus toy cues, and whether attentional bias to food cues was related to current BMI z-score. Results: Children spent more cumulative time looking at food versus toy distractors and took longer to locate the target when searching through food versus toy distractors. Faster fixation on the first food versus toy distractor was associated with higher BMI z-scores. Conclusions: Using a game-based paradigm employing eye-tracking, we found a behavioral attentional bias to food vs. toy distractors in young children. Further, attentional bias to food cues was associated with current BMI z-score.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
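
As a rough illustration of the kind of bias scores such regressions take as input, the sketch below computes simple food-minus-toy difference scores from per-trial gaze measures; the trial values and the difference-score definitions are hypothetical, not taken from the paper.

def mean(xs):
    return sum(xs) / len(xs)

# Cumulative gaze duration (ms) on distractors, per trial (hypothetical)
food_gaze = [850, 920, 780]
toy_gaze  = [610, 700, 650]
gaze_bias = mean(food_gaze) - mean(toy_gaze)     # > 0: longer looking at food

# Time to first fixation (ms): shorter latency means faster capture,
# so the bias is computed as toy minus food
food_ttff = [420, 380, 455]
toy_ttff  = [510, 470, 530]
latency_bias = mean(toy_ttff) - mean(food_ttff)  # > 0: food fixated sooner
print(gaze_bias, latency_bias)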

  • doi:10.1016/j.appet.2020.104610

Judith Bek; Ellen Poliakoff; Karen Lander

Measuring emotion recognition by people with Parkinson's disease using eye-tracking with dynamic facial expressions Journal Article

Journal of Neuroscience Methods, 331 , pp. 1–7, 2020.

@article{Bek2020,
title = {Measuring emotion recognition by people with Parkinson's disease using eye-tracking with dynamic facial expressions},
author = {Judith Bek and Ellen Poliakoff and Karen Lander},
doi = {10.1016/j.jneumeth.2019.108524},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience Methods},
volume = {331},
pages = {1--7},
abstract = {Background: Motion is an important cue to emotion recognition, and it has been suggested that we recognise emotions via internal simulation of others' expressions. There is a reduction of facial expression in Parkinson's disease (PD), which may influence the ability to use motion to recognise emotions in others. However, the majority of previous work in PD has used only static expressions. Moreover, few studies have used eye-tracking to explore emotion processing in PD. New method: We measured accuracy and eye movements in people with PD and healthy controls when identifying emotions from both static and dynamic facial expressions. Results: The groups did not differ overall in emotion recognition accuracy, but motion significantly increased recognition only in the control group. Participants made fewer and longer fixations when viewing dynamic expressions, and interest area analysis revealed increased gaze to the mouth region and decreased gaze to the eyes for dynamic stimuli, although the latter was specific to the control group. Comparison with existing methods: Ours is the first study to directly compare recognition of static and dynamic emotional expressions in PD using eye-tracking, revealing subtle differences between groups that may otherwise be undetected. Conclusions: It is feasible and informative to use eye-tracking with dynamic expressions to investigate emotion recognition in PD. Our findings suggest that people with PD may differ from healthy older adults in how they utilise motion during facial emotion recognition. Nonetheless, gaze patterns indicate some effects of motion on emotional processing, highlighting the need for further investigation in this area.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.jneumeth.2019.108524

Valerie M Beck; Timothy J Vickery

Oculomotor capture reveals trial-by-trial neural correlates of attentional guidance by contents of visual working memory Journal Article

Cortex, 122 , pp. 159–169, 2020.

@article{Beck2020,
title = {Oculomotor capture reveals trial-by-trial neural correlates of attentional guidance by contents of visual working memory},
author = {Valerie M Beck and Timothy J Vickery},
doi = {10.1016/j.cortex.2018.09.017},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {159--169},
publisher = {Elsevier Ltd},
abstract = {Evidence from attentional and oculomotor capture, contingent capture, and other paradigms suggests that mechanisms supporting human visual working memory (VWM) and visual attention are intertwined. Features held in VWM bias guidance toward matching items even when those features are task irrelevant. However, the neural basis of this interaction is underspecified. Prior examinations using fMRI have primarily relied on coarse comparisons across experimental conditions that produce varying amounts of capture. To examine the neural dynamics of attentional capture on a trial-by-trial basis, we applied an oculomotor paradigm that produced discrete measures of capture. On each trial, subjects were shown a memory item, followed by a blank retention interval, then a saccade target that appeared to the left or right. On some trials, an irrelevant distractor appeared above or below fixation. Once the saccade target was fixated, subjects completed a forced-choice memory test. Critically, either the target or distractor could match the feature held in VWM. Although task irrelevant, this manipulation produced differences in behavior: participants were more likely to saccade first to an irrelevant VWM-matching distractor compared with a non-matching distractor – providing a discrete measure of capture. We replicated this finding while recording eye movements and scanning participants' brains using fMRI. To examine the neural basis of oculomotor capture, we separately modeled the retention interval for capture and non-capture trials within the distractor-match condition. We found that frontal activity, including anterior cingulate cortex and superior frontal gyrus regions, differentially predicted subsequent oculomotor capture by a memory-matching distractor. Other regions previously implicated as involved in attentional capture by VWM-matching items showed no differential activity across capture and non-capture trials, even at a liberal threshold. Our findings demonstrate the power of trial-by-trial analyses of oculomotor capture as a means to examine the underlying relationship between VWM and attentional guidance systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cortex.2018.09.017

Yasaman Bagherzadeh; Daniel Baldauf; Dimitrios Pantazis; Robert Desimone

Alpha synchrony and the neurofeedback control of spatial attention Journal Article

Neuron, 105 , pp. 1–11, 2020.

@article{Bagherzadeh2020,
title = {Alpha synchrony and the neurofeedback control of spatial attention},
author = {Yasaman Bagherzadeh and Daniel Baldauf and Dimitrios Pantazis and Robert Desimone},
doi = {10.1016/j.neuron.2019.11.001},
year = {2020},
date = {2020-01-01},
journal = {Neuron},
volume = {105},
pages = {1--11},
publisher = {Elsevier Inc.},
abstract = {Decreases in alpha synchronization are correlated with enhanced attention, whereas alpha increases are correlated with inattention. However, correlation is not causality, and synchronization may be a byproduct of attention rather than a cause. To test for a causal role of alpha synchrony in attention, we used MEG neurofeedback to train subjects to manipulate the ratio of alpha power over the left versus right parietal cortex. We found that a comparable alpha asymmetry developed over the visual cortex. The alpha training led to corresponding asymmetrical changes in visually evoked responses to probes presented in the two hemifields during training. Thus, reduced alpha was associated with enhanced sensory processing. Testing after training showed a persistent bias in attention in the expected directions. The results support the proposal that alpha synchrony plays a causal role in modulating attention and visual processing, and alpha training could be used for testing hypotheses about synchrony.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
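
The neurofeedback signal described above is a left/right alpha power ratio. Below is a minimal sketch of one common way to compute an alpha asymmetry index from two sensor time series on simulated signals; the band limits, index definition, and sensor selection are assumptions, not the authors' implementation.

import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 600                                      # sampling rate (Hz), hypothetical
rng = np.random.default_rng(1)
left, right = rng.normal(size=(2, fs * 10))   # stand-ins for left/right parietal sensors

def alpha_power(x, fs, band=(8, 13)):
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    keep = (f >= band[0]) & (f <= band[1])
    return trapezoid(psd[keep], f[keep])      # integrate PSD over the alpha band

L, R = alpha_power(left, fs), alpha_power(right, fs)
print(f"asymmetry = {(L - R) / (L + R):+.3f}")  # > 0: more alpha on the left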

  • doi:10.1016/j.neuron.2019.11.001

Nicolai D Ayasse; Arthur Wingfield

Anticipatory baseline pupil diameter is sensitive to differences in hearing thresholds Journal Article

Frontiers in Psychology, 10 , pp. 1–7, 2020.

@article{Ayasse2020,
title = {Anticipatory baseline pupil diameter is sensitive to differences in hearing thresholds},
author = {Nicolai D Ayasse and Arthur Wingfield},
doi = {10.3389/fpsyg.2019.02947},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {10},
pages = {1--7},
abstract = {Task-evoked changes in pupil dilation have long been used as a physiological index of cognitive effort. Unlike this response, which is measured during or after an experimental trial, the baseline pupil dilation (BPD) is a measure taken prior to an experimental trial. As such, it is considered to reflect an individual's arousal level in anticipation of an experimental trial. We report data for 68 participants, ages 18 to 89, whose hearing acuity ranged from normal hearing to a moderate hearing loss, tested over a series of 160 trials on an auditory sentence comprehension task. Results showed that BPDs progressively declined over the course of the experimental trials, with participants with poorer pure tone detection thresholds showing a steeper rate of decline than those with better thresholds. Data showed this slope difference to be due to participants with poorer hearing having larger BPDs than those with better hearing at the start of the experiment, but with their BPDs approaching those of the better-hearing participants by the end of the 160 trials. A finding of increasing response accuracy over trials was seen as inconsistent with a fatigue or reduced task engagement account of the diminishing BPDs. Rather, the present results imply BPD as reflecting a heightened arousal level in poorer-hearing participants in anticipation of a task that demands accurate speech perception, a concern that dissipates over trials with task success. These data taken with others suggest that the baseline pupillary response may not reflect a single construct.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3389/fpsyg.2019.02947

Ramina Adam; Kevin Johnston; Ravi S Menon; Stefan Everling

Functional reorganization during the recovery of contralesional target selection deficits after prefrontal cortex lesions in macaque monkeys Journal Article

NeuroImage, 207 , pp. 1–17, 2020.

@article{Adam2020,
title = {Functional reorganization during the recovery of contralesional target selection deficits after prefrontal cortex lesions in macaque monkeys},
author = {Ramina Adam and Kevin Johnston and Ravi S Menon and Stefan Everling},
doi = {10.1016/j.neuroimage.2019.116339},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {207},
pages = {1--17},
publisher = {Elsevier Ltd},
abstract = {Visual extinction has been characterized by the failure to respond to a visual stimulus in the contralesional hemifield when presented simultaneously with an ipsilesional stimulus (Corbetta and Shulman, 2011). Unilateral damage to the macaque frontoparietal cortex commonly leads to deficits in contralesional target selection that resemble visual extinction. Recently, we showed that macaque monkeys with unilateral lesions in the caudal prefrontal cortex (PFC) exhibited contralesional target selection deficits that recovered over 2–4 months (Adam et al., 2019). Here, we investigated the longitudinal changes in functional connectivity (FC) of the frontoparietal network after a small or large right caudal PFC lesion in four macaque monkeys. We collected ultra-high field resting-state fMRI at 7-T before the lesion and at weeks 1–16 post-lesion and compared the functional data with behavioural performance on a free-choice saccade task. We found that the pattern of frontoparietal network FC changes depended on lesion size, such that the recovery of contralesional extinction was associated with an initial increase in network FC that returned to baseline in the two small lesion monkeys, whereas FC continued to increase throughout recovery in the two monkeys with a larger lesion. We also found that the FC between contralesional dorsolateral PFC and ipsilesional parietal cortex correlated with behavioural recovery and that the contralesional dorsolateral PFC showed increasing degree centrality with the frontoparietal network. These findings suggest that both the contralesional and ipsilesional hemispheres play an important role in the recovery of function. Importantly, optimal compensation after large PFC lesions may require greater recruitment of distant and intact areas of the frontoparietal network, whereas recovery from smaller lesions was supported by a normalization of the functional network.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
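
Degree centrality, as used above, summarizes how strongly one region is connected to the rest of a functional network. Here is a rough sketch of the generic computation from region time series on simulated data; the threshold-free weighted variant shown is an illustrative choice, not the authors' pipeline.

import numpy as np

rng = np.random.default_rng(2)
ts = rng.normal(size=(200, 12))       # 200 time points x 12 regions (simulated)
fc = np.corrcoef(ts, rowvar=False)    # 12 x 12 functional-connectivity matrix

np.fill_diagonal(fc, 0)               # drop self-connections
degree = np.abs(fc).sum(axis=0)       # weighted degree centrality per region
print(f"seed-region degree = {degree[0]:.2f}")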

  • doi:10.1016/j.neuroimage.2019.116339

2019

Ying Joey Zhou; Alexis Pérez-Bellido; Saskia Haegens; Floris P de Lange

Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.

@article{Zhou2019c,
title = {Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study},
author = {Ying Joey Zhou and Alexis Pérez-Bellido and Saskia Haegens and Floris P de Lange},
doi = {10.1162/jocn_a_01511},
year = {2019},
date = {2019-12-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Perceptual expectations can change how a visual stimulus is perceived. Recent studies have shown mixed results in terms of whether expectations modulate sensory representations. Here, we used a statistical learning paradigm to study the temporal characteristics of perceptual expectations. We presented participants with pairs of object images organized in a predictive manner and then recorded their brain activity with magnetoencephalography while they viewed expected and unexpected image pairs on the subsequent day. We observed stronger alpha-band (7–14 Hz) activity in response to unexpected compared with expected object images. Specifically, the alpha-band modulation occurred as early as the onset of the stimuli and was most pronounced in left occipito-temporal cortex. Given that the differential response to expected versus unexpected stimuli occurred in sensory regions early in time, our results suggest that expectations modulate perceptual decision-making by changing the sensory response elicited by the stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01511

Sijia Zhao; Maria Chait; Fred Dick; Peter Dayan; Shigeto Furukawa; Hsin-I Liao

Pupil-linked phasic arousal evoked by violation but not emergence of regularity within rapid sound sequences Journal Article

Nature Communications, 10 , pp. 1–16, 2019.

@article{Zhao2019b,
title = {Pupil-linked phasic arousal evoked by violation but not emergence of regularity within rapid sound sequences},
author = {Sijia Zhao and Maria Chait and Fred Dick and Peter Dayan and Shigeto Furukawa and Hsin-I Liao},
doi = {10.1038/s41467-019-12048-1},
year = {2019},
date = {2019-12-01},
journal = {Nature Communications},
volume = {10},
pages = {1--16},
publisher = {Springer Science and Business Media LLC},
abstract = {The ability to track the statistics of our surroundings is a key computational challenge. A prominent theory proposes that the brain monitors for unexpected uncertainty: events which deviate substantially from model predictions, indicating model failure. Norepinephrine is thought to play a key role in this process by serving as an interrupt signal, initiating model-resetting. However, the evidence comes from paradigms where participants actively monitored stimulus statistics. To determine whether norepinephrine routinely reports the statistical structure of our surroundings, even when not behaviourally relevant, we used rapid tone-pip sequences that contained salient pattern-changes associated with abrupt structural violations vs. emergence of regular structure. Phasic pupil dilations (PDR) were monitored to assess norepinephrine. We reveal a remarkable specificity: when not behaviourally relevant, only abrupt structural violations evoke a PDR. The results demonstrate that norepinephrine tracks unexpected uncertainty on rapid time scales relevant to sensory signals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
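
The phasic pupil dilation response above is conventionally expressed relative to a pre-event baseline. A minimal sketch of that baseline correction on a simulated pupil trace follows; the window lengths and sampling rate are assumptions, not the authors' parameters.

import numpy as np

fs = 1000                                    # eye-tracker sampling rate (Hz), hypothetical
rng = np.random.default_rng(3)
pupil = rng.normal(3.0, 0.01, 10 * fs)       # simulated pupil diameter trace (mm)
event = 5 * fs                               # event onset (sample index)
pupil[event:event + 2 * fs] += 0.05          # inject a dilation for the demo

baseline = pupil[event - fs // 2:event].mean()   # mean over 500 ms pre-event
pdr = pupil[event:event + 2 * fs] - baseline     # baseline-corrected 2 s response
print(f"peak PDR = {pdr.max():.3f} mm above baseline")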

  • doi:10.1038/s41467-019-12048-1

Felicia Zhang; Sagi Jaffe-Dax; Robert C Wilson; Lauren L Emberson

Prediction in infants and adults: A pupillometry study Journal Article

Developmental Science, 22 , pp. 1–9, 2019.

@article{Zhang2019g,
title = {Prediction in infants and adults: A pupillometry study},
author = {Felicia Zhang and Sagi Jaffe-Dax and Robert C Wilson and Lauren L Emberson},
doi = {10.1111/desc.12780},
year = {2019},
date = {2019-12-01},
journal = {Developmental Science},
volume = {22},
pages = {1--9},
publisher = {John Wiley & Sons, Ltd (10.1111)},
abstract = {Adults use both bottom-up sensory inputs and top-down signals to generate predictions about future sensory inputs. Infants have also been shown to make predictions with simple stimuli and recent work has suggested top-down processing is available early in infancy. However, it is unknown whether this indicates that top-down prediction is an ability that is continuous across the lifespan or whether an infant's ability to predict is different from an adult's, qualitatively or quantitatively. We employed pupillometry to provide a direct comparison of prediction abilities across these disparate age groups. Pupil dilation response (PDR) was measured in 6-month-olds and adults as they completed an identical implicit learning task designed to help learn associations between sounds and pictures. We found significantly larger PDR for visual omission trials (i.e. trials that violated participants' predictions without the presentation of new stimuli to control for bottom-up signals) compared to visual present trials (i.e. trials that confirmed participants' predictions) in both age groups. Furthermore, a computational learning model that is closely linked to prediction error (Rescorla-Wagner model) demonstrated similar learning trajectories suggesting a continuity of predictive capacity and learning across the two age groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
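
The Rescorla-Wagner model mentioned above has a compact textbook form: associative strength V is updated by a learning rate times the prediction error, V <- V + alpha * (lambda - V). The single-cue sketch below illustrates that rule; the parameter values are arbitrary, not fitted values from the paper.

def rescorla_wagner(outcomes, alpha=0.3, v0=0.0):
    # Trial-by-trial associative strength for outcomes (1 = paired, 0 = omitted)
    v, trajectory = v0, []
    for lam in outcomes:
        v += alpha * (lam - v)    # prediction error (lam - v) drives learning
        trajectory.append(round(v, 3))
    return trajectory

# Association grows over repeated sound-picture pairings, dips after an omission:
print(rescorla_wagner([1, 1, 1, 1, 0, 1]))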

  • doi:10.1111/desc.12780

Hiroshi Ueda; Naotoshi Abekawa; Sho Ito; Hiroaki Gomi

Distinct temporal developments of visual motion and position representations for multi-stream visuomotor coordination Journal Article

Scientific Reports, 9 , pp. 1–6, 2019.

@article{Ueda2019,
title = {Distinct temporal developments of visual motion and position representations for multi-stream visuomotor coordination},
author = {Hiroshi Ueda and Naotoshi Abekawa and Sho Ito and Hiroaki Gomi},
doi = {10.1038/s41598-019-48535-0},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--6},
publisher = {Nature Publishing Group},
abstract = {A fundamental but controversial question in the information coding of moving visual targets is which signal, ‘motion' or ‘position', is employed in the brain to produce quick motor reactions. The prevailing theory assumed that visually guided reaching is always driven via a target position representation influenced by various motion signals (e.g., target texture and surroundings). To rigorously examine this theory, we manipulated the nature of the influence of internal texture motion on the position representation of the target in reaching correction tasks. By focusing on the difference in illusory position shift of targets with soft and hard edges, we succeeded in extracting the temporal development of an indirect effect ascribed only to changes in position representation. Our data revealed that the onset of the indirect effect is significantly slower than the adjustment onset itself. This evidence indicates multi-stream processing in visuomotor control: a fast and direct contribution of visual motion for quick action initiation, and a relatively slow contribution of position representation, updated by relevant motion signals, for continuous action regulation. This distinctive visuomotor mechanism would be crucial in successfully interacting with time-varying environments in the real world.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-019-48535-0

Martin Szinte; Michael Puntiroli; Heiner Deubel

The spread of presaccadic attention depends on the spatial configuration of the visual scene Journal Article

Scientific Reports, 9 , pp. 1–11, 2019.

@article{Szinte2019,
title = {The spread of presaccadic attention depends on the spatial configuration of the visual scene},
author = {Martin Szinte and Michael Puntiroli and Heiner Deubel},
doi = {10.1038/s41598-019-50541-1},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--11},
publisher = {Nature Publishing Group},
abstract = {When preparing a saccade, attentional resources are focused at the saccade target and its immediate vicinity. Here we show that this does not hold true when saccades are prepared toward a recently extinguished target. We obtained detailed maps of orientation sensitivity when participants prepared a saccade toward a target that either remained on the screen or disappeared before the eyes moved. We found that attention was mainly focused on the immediate surround of the visible target and spread to more peripheral locations as a function of the distance from the cue and the delay between the target's disappearance and the saccade. Interestingly, this spread was not accompanied with a spread of the saccade endpoint. These results suggest that presaccadic attention and saccade programming are two distinct processes that can be dissociated as a function of their interaction with the spatial configuration of the visual scene.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-019-50541-1

Katarzyna Stachowiak-Szymczak; Paweł Korpal

Interpreting accuracy and visual processing of numbers in professional and student interpreters: An eye-tracking study Journal Article

Across Languages and Cultures, 20 (2), pp. 235–251, 2019.

@article{Stachowiak-Szymczak2019,
title = {Interpreting accuracy and visual processing of numbers in professional and student interpreters: An eye-tracking study},
author = {Katarzyna Stachowiak-Szymczak and Pawe{ł} Korpal},
doi = {10.1556/084.2019.20.2.5},
year = {2019},
date = {2019-12-01},
journal = {Across Languages and Cultures},
volume = {20},
number = {2},
pages = {235--251},
abstract = {Simultaneous interpreting is a cognitively demanding task, based on performing several activities concurrently (Gile 1995; Seeber 2011). While multitasking itself is challenging, there are numerous tasks which make interpreting even more difficult, such as rendering of numbers and proper names, or dealing with a speaker's strong accent (Gile 2009). Among these, number interpreting is cognitively taxing since numerical data cannot be derived from the context and it needs to be rendered in a word-to-word manner (Mazza 2001). In our study, we aimed to examine cognitive load involved in number interpreting and to verify whether access to visual materials in the form of slides increases number interpreting accuracy in simultaneous interpreting performed by professional interpreters (N = 26) and interpreting trainees (N = 22). We used a remote EyeLink 1000+ eye-tracker to measure fixation count, mean fixation duration, and gaze time. The participants interpreted two short speeches from English into Polish, both containing 10 numerals. Slides were provided for one of the presentations. Our results show that novices are characterised by longer fixations and they provide a less accurate interpretation than professional interpreters. In addition, access to slides increases number interpreting accuracy. The results obtained might be a valuable contribution to studies on visual processing in simultaneous interpreting, number interpreting as a competence, as well as interpreter training.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1556/084.2019.20.2.5
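
The three gaze measures reported in this study (fixation count, mean fixation duration, and gaze time) are simple aggregates over fixation events. Below is a minimal Python sketch of how such measures are typically computed from a parsed fixation report; the event fields and interest-area labels are hypothetical, not the authors' pipeline:

from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float   # fixation onset, ms
    end_ms: float     # fixation offset, ms
    area: str         # interest-area label, e.g., "numeral" (hypothetical)

def gaze_measures(fixations, area=None):
    """Fixation count, mean fixation duration, and gaze time (summed
    duration), optionally restricted to one interest area."""
    sel = [f for f in fixations if area is None or f.area == area]
    durs = [f.end_ms - f.start_ms for f in sel]
    if not durs:
        return {"count": 0, "mean_duration_ms": 0.0, "gaze_time_ms": 0.0}
    return {"count": len(durs),
            "mean_duration_ms": sum(durs) / len(durs),
            "gaze_time_ms": sum(durs)}

fixs = [Fixation(0, 220, "numeral"), Fixation(250, 430, "slide"),
        Fixation(470, 780, "numeral")]
print(gaze_measures(fixs, area="numeral"))
# {'count': 2, 'mean_duration_ms': 265.0, 'gaze_time_ms': 530.0}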

Michele Scaltritti; Aliaksei Miniukovich; Paola Venuti; Remo Job; Antonella De Angeli; Simone Sulpizio

Investigating effects of typographic variables on webpage reading through eye movements Journal Article

Scientific Reports, 9 , pp. 1–12, 2019.

@article{Scaltritti2019,
title = {Investigating effects of typographic variables on webpage reading through eye movements},
author = {Michele Scaltritti and Aliaksei Miniukovich and Paola Venuti and Remo Job and Antonella {De Angeli} and Simone Sulpizio},
doi = {10.1038/s41598-019-49051-x},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--12},
publisher = {Nature Publishing Group},
abstract = {Webpage reading is ubiquitous in daily life. As Web technologies allow for a large variety of layouts and visual styles, the many formatting options may lead to poor design choices, including low readability. This research capitalizes on the existing readability guidelines for webpage design to outline several visuo-typographic variables and explore their effect on eye movements during webpage reading. Participants included children and adults, and for both groups typical readers and readers with dyslexia were considered. Actual webpages, rather than artificial ones, served as stimuli. This allowed us to test multiple typographic variables in combination and in their typical ranges rather than in possibly unrealistic configurations. Several typographic variables displayed a significant effect on eye movements and reading performance. The effect was mostly homogeneous across the four groups, with a few exceptions. Besides supporting the notion that a few empirically-driven adjustments to the texts' visual appearance can facilitate reading across different populations, the results also highlight the challenge of making digital texts accessible to readers with dyslexia. Theoretically, the results highlight the importance of low-level visual factors, corroborating the emphasis of recent psychological models on visual attention and crowding in reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-019-49051-x

Mohsen Rakhshan; Vivian Lee; Emily Chu; Lauren Harris; Lillian Laiks; Peyman Khorsand; Alireza Soltani

Influence of expected reward on temporal order judgment Journal Article

Journal of Cognitive Neuroscience, pp. 1–17, 2019.

@article{Rakhshan2019,
title = {Influence of expected reward on temporal order judgment},
author = {Mohsen Rakhshan and Vivian Lee and Emily Chu and Lauren Harris and Lillian Laiks and Peyman Khorsand and Alireza Soltani},
doi = {10.1162/jocn_a_01516},
year = {2019},
date = {2019-12-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--17},
publisher = {MIT Press - Journals},
abstract = {Perceptual decision-making has been shown to be influenced by reward expected from alternative options or actions, but the underlying neural mechanisms are currently unknown. More specifically, it is debated whether reward effects are mediated through changes in sensory processing, later stages of decision-making, or both. To address this question, we conducted two experiments in which human participants made saccades to what they perceived to be either the first or second of two visually identical but asynchronously presented targets while we manipulated expected reward from correct and incorrect responses on each trial. By comparing reward-induced bias in target selection (i.e., reward bias) during the two experiments, we determined whether reward caused changes in sensory or decision-making processes. We found similar reward biases in the two experiments, indicating that reward information mainly influenced later stages of decision-making. Moreover, the observed reward biases were independent of the individual's sensitivity to sensory signals. This suggests that reward effects were determined heuristically via modulation of decision-making processes instead of sensory processing. To further explain our findings and uncover plausible neural mechanisms, we simulated our experiments with a cortical network model and tested alternative mechanisms for how reward could exert its influence. We found that our experimental observations are more compatible with reward-dependent input to the output layer of the decision circuit. Together, our results suggest that, during a temporal judgment task, reward exerts its influence via changing later stages of decision-making (i.e., response bias) rather than early sensory processing (i.e., perceptual bias).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01516
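
The reward bias quantified in this study amounts to a horizontal shift of the psychometric function for temporal order judgments under asymmetric reward. One generic way to estimate such a shift (not the authors' analysis code) is to fit a cumulative Gaussian per reward condition and compare the points of subjective simultaneity (PSS); the SOAs and response proportions below are synthetic placeholders:

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    # P("right target first") as a function of SOA (ms);
    # positive SOA = right target physically first.
    return norm.cdf((soa - pss) / sigma)

soas = np.array([-90., -60., -30., 0., 30., 60., 90.])
# Synthetic proportions of "right first" reports per reward condition:
p_right_rewarded = np.array([0.06, 0.15, 0.38, 0.62, 0.83, 0.94, 0.98])
p_left_rewarded  = np.array([0.02, 0.07, 0.22, 0.45, 0.70, 0.88, 0.96])

(pss_r, _), _ = curve_fit(psychometric, soas, p_right_rewarded, p0=[0., 30.])
(pss_l, _), _ = curve_fit(psychometric, soas, p_left_rewarded,  p0=[0., 30.])
print(f"reward bias (PSS shift) = {pss_l - pss_r:.1f} ms")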

Victoria I Nicholls; Geraldine Jean-Charles; Junpeng Lao; Peter de Lissa; Roberto Caldara; Sebastien Miellet

Developing attentional control in naturalistic dynamic road crossing situations Journal Article

Scientific Reports, 9 , pp. 1–10, 2019.

@article{Nicholls2019,
title = {Developing attentional control in naturalistic dynamic road crossing situations},
author = {Victoria I Nicholls and Geraldine Jean-Charles and Junpeng Lao and Peter de Lissa and Roberto Caldara and Sebastien Miellet},
doi = {10.1038/s41598-019-39737-7},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--10},
publisher = {Nature Publishing Group},
abstract = {In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we aimed to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic, and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities with or without pedestrians. 5–10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children look mainly at the vehicles' appearing point, which is an optimal location to sample diagnostic information for the task. In contrast, 5–10 y/os look more at socially relevant stimuli and attend to moving vehicles further down the trajectory when the traffic density is high. Critically, 5–10 y/o children also make an increased number of crossing decisions compared to 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road crossing task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-019-39737-7

Hsin-Hung Li; Jasmine Pan; Marisa Carrasco

Presaccadic attention improves or impairs performance by enhancing sensitivity to higher spatial frequencies Journal Article

Scientific Reports, 9 , pp. 1–9, 2019.

@article{Li2019a,
title = {Presaccadic attention improves or impairs performance by enhancing sensitivity to higher spatial frequencies},
author = {Hsin-Hung Li and Jasmine Pan and Marisa Carrasco},
doi = {10.1038/s41598-018-38262-3},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--9},
publisher = {Nature Publishing Group},
abstract = {Right before we move our eyes, visual performance and neural responses for the saccade target are enhanced. This effect, presaccadic attention, is considered to prioritize the saccade target and to enhance behavioral performance for the saccade target. Recent evidence has shown that presaccadic attention modulates the processing of feature information. Hitherto, it remains unknown whether presaccadic modulations on feature information are flexible, to improve performance for the task at hand, or automatic, so that they alter the featural representation similarly regardless of the task. Using a masking procedure, here we report that presaccadic attention can either improve or impair performance depending on the spatial frequency content of the visual input. These counterintuitive modulations were significant at a time window right before saccade onset. Furthermore, merely deploying covert attention within the same temporal interval without preparing a saccade did not affect performance. This study reveals that presaccadic attention not only prioritizes the saccade target, but also automatically modifies its featural representation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-018-38262-3

Louise Kauffmann; Carole Peyrin; Alan Chauvin; Léa Entzmann; Camille Breuil; Nathalie Guyader

Face perception influences the programming of eye movements Journal Article

Scientific Reports, 9 , pp. 1–14, 2019.

@article{Kauffmann2019,
title = {Face perception influences the programming of eye movements},
author = {Louise Kauffmann and Carole Peyrin and Alan Chauvin and Léa Entzmann and Camille Breuil and Nathalie Guyader},
doi = {10.1038/s41598-018-36510-0},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--14},
publisher = {Nature Publishing Group},
abstract = {Previous studies have shown that face stimuli elicit extremely fast and involuntary saccadic responses toward them, relative to other categories of visual stimuli. In the present study, we further investigated to what extent face stimuli influence the programming and execution of saccades by examining their amplitude. We performed two experiments using a saccadic choice task: two images (one with a face, one with a vehicle) were simultaneously displayed in the left and right visual fields of participants who had to initiate a saccade toward the image (Experiment 1) or toward a cross in the image (Experiment 2) containing a target stimulus (a face or a vehicle). Results revealed shorter saccades toward vehicle than face targets, even if participants were explicitly asked to perform their saccades toward a specific location (Experiment 2). Furthermore, error saccades had smaller amplitude than correct saccades. Further analyses showed that error saccades were interrupted in mid-flight to initiate a concurrently-programmed corrective saccade. Overall, these data suggest that the content of visual stimuli can influence the programming of saccade amplitude, and that efficient online correction of saccades can be performed during the saccadic choice task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-018-36510-0

Chun-Ting Hsu; Roy Clariana; Benjamin Schloss; Ping Li

Neurocognitive signatures of naturalistic reading of scientific texts: A fixation-related fMRI study Journal Article

Scientific Reports, 9 , pp. 1–16, 2019.

@article{Hsu2019,
title = {Neurocognitive signatures of naturalistic reading of scientific texts: A fixation-related fMRI study},
author = {Chun-Ting Hsu and Roy Clariana and Benjamin Schloss and Ping Li},
doi = {10.1038/s41598-019-47176-7},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--16},
publisher = {Nature Publishing Group},
abstract = {How do students gain scientific knowledge while reading expository text? This study examines the underlying neurocognitive basis of textual knowledge structure and individual readers' cognitive differences and reading habits, including the influence of text and reader characteristics, on outcomes of scientific text comprehension. By combining fixation-related fMRI and multiband data acquisition, the study is among the first to consider self-paced naturalistic reading inside the MRI scanner. Our results revealed the underlying neurocognitive patterns associated with information integration of different time scales during text reading, and significant individual differences due to the interaction between text characteristics (e.g., optimality of the textual knowledge structure) and reader characteristics (e.g., electronic device use habits). Individual differences impacted the amount of neural resources deployed for multitasking and information integration for constructing the underlying scientific mental models based on the text being read. Our findings have significant implications for understanding science reading in a population that is increasingly dependent on electronic devices.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-019-47176-7
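
Fixation-related fMRI of the kind described here typically models each fixation onset as an impulse convolved with a hemodynamic response function (HRF) before entering the GLM design matrix. Below is a minimal sketch with a double-gamma HRF; the onsets, sampling grid, and TR are assumptions for illustration only:

import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak_shape=6.0, under_shape=16.0, ratio=1/6):
    # Positive response minus a scaled late undershoot (canonical-style HRF).
    h = gamma.pdf(t, peak_shape) - ratio * gamma.pdf(t, under_shape)
    return h / h.max()

dt = 0.1                                  # model resolution, s
t_run = np.arange(0.0, 300.0, dt)         # a 5-minute run (assumed)
fixation_onsets = np.array([12.3, 12.8, 13.4, 45.0, 46.1])  # s (illustrative)

impulses = np.zeros_like(t_run)
impulses[np.round(fixation_onsets / dt).astype(int)] = 1.0

hrf = double_gamma_hrf(np.arange(0.0, 32.0, dt))
regressor = np.convolve(impulses, hrf)[:len(t_run)]
TR = 2.0                                  # assumed repetition time, s
design_column = regressor[::int(TR / dt)] # resample to scan times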

Praghajieeth Raajhen Santhana Gopalan; Otto Loberg; Jarmo Arvid Hämäläinen; Paavo H T Leppänen

Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test Journal Article

Scientific Reports, 9 , pp. 1–13, 2019.

@article{Gopalan2019,
title = {Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test},
author = {Praghajieeth Raajhen Santhana Gopalan and Otto Loberg and Jarmo Arvid Hämäläinen and Paavo H T Leppänen},
doi = {10.1038/s41598-018-36947-3},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--13},
publisher = {Nature Publishing Group},
abstract = {Attention-related processes include three functional sub-components: alerting, orienting, and inhibition. We investigated these components using EEG-based, brain event-related potentials and their neuronal source activations during the Attention Network Test in typically developing school-aged children. Participants were asked to detect the swimming direction of the centre fish in a group of five fish. The target stimulus was either preceded by a cue (centre, double, or spatial) or no cue. An EEG using 128 electrodes was recorded for 83 children aged 12–13 years. RTs showed significant effects across all three sub-components of attention. Alerting and orienting (responses to double vs non-cued target stimulus and spatially vs centre-cued target stimulus, respectively) resulted in larger N1 amplitude, whereas inhibition (responses to incongruent vs congruent target stimulus) resulted in larger P3 amplitude. Neuronal source activation for the alerting effect was localized in the right anterior temporal and bilateral occipital lobes, for the orienting effect bilaterally in the occipital lobe, and for the inhibition effect in the medial prefrontal cortex and left anterior temporal lobe. Neuronal sources of ERPs revealed that sub-processes related to the attention network are different in children as compared to earlier adult fMRI studies, which was not evident from scalp ERPs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-018-36947-3
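
The N1 and P3 effects reported here are conventional mean-amplitude measures taken over component time windows. A generic sketch for an epochs array shaped (trials, channels, samples) follows; the sampling rate, epoch start, channel indices, and windows are illustrative assumptions, not the study's parameters:

import numpy as np

FS = 500            # sampling rate, Hz (assumed)
EPOCH_START = -0.2  # epoch onset relative to stimulus, s (assumed)

def mean_amplitude(epochs, t0, t1, channel):
    """Mean voltage in [t0, t1) seconds post-stimulus at one channel."""
    s0 = int(round((t0 - EPOCH_START) * FS))
    s1 = int(round((t1 - EPOCH_START) * FS))
    return float(epochs[:, channel, s0:s1].mean())

rng = np.random.default_rng(1)
epochs = rng.normal(0.0, 5.0, size=(83, 128, 600))  # 83 children, 128 ch, 1.2 s
n1 = mean_amplitude(epochs, 0.08, 0.14, channel=70) # N1 window (assumed)
p3 = mean_amplitude(epochs, 0.30, 0.50, channel=62) # P3 window (assumed)
print(f"N1 = {n1:.2f} µV, P3 = {p3:.2f} µV")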

Daniel S Ferreira; Geraldo L B Ramalho; Débora Torres; Alessandra H G Tobias; Mariana T Rezende; Fátima N S Medeiros; Andrea G C Bianchi; Cláudia M Carneiro; Daniela M Ushizima

Saliency-driven system models for cell analysis with deep learning Journal Article

Computer Methods and Programs in Biomedicine, 182 , pp. 1–13, 2019.

@article{Ferreira2019,
title = {Saliency-driven system models for cell analysis with deep learning},
author = {Daniel S Ferreira and Geraldo L B Ramalho and Débora Torres and Alessandra H G Tobias and Mariana T Rezende and Fátima N S Medeiros and Andrea G C Bianchi and Cláudia M Carneiro and Daniela M Ushizima},
doi = {10.1016/j.cmpb.2019.105053},
year = {2019},
date = {2019-12-01},
journal = {Computer Methods and Programs in Biomedicine},
volume = {182},
pages = {1--13},
publisher = {Elsevier BV},
abstract = {Background and objectives: Saliency refers to the visual perception quality that makes objects in a scene stand out from others and attract attention. While computational saliency models can simulate the expert's visual attention, there is little evidence about how these models perform when used to predict the cytopathologist's eye fixations. Saliency models may be the key to instrumenting fast object detection on large Pap smear slides under real noisy conditions, artifacts, and cell occlusions. This paper describes how our computational schemes retrieve regions of interest (ROI) of clinical relevance using visual attention models. We also compare the performance of different computed saliency models as part of cell screening tasks, aiming to design computer-aided diagnosis systems that support cytopathologists. Method: We record eye fixation maps from cytopathologists at work, and compare them with 13 different saliency prediction algorithms, including deep learning. We develop cell-specific convolutional neural networks (CNN) to investigate the impact of bottom-up and top-down factors on saliency prediction from real routine exams. By combining the eye tracking data from pathologists with computed saliency models, we assess the algorithms' reliability in identifying clinically relevant cells. Results: The proposed cell-specific CNN model outperforms all other saliency prediction methods, particularly regarding the number of false positives. Our algorithm also detects the most clinically relevant cells, which are among the three top salient regions, with accuracy above 98% for all diseases, except carcinoma (87%). Bottom-up methods performed satisfactorily, with saliency maps that enabled ROI detection above 75% for carcinoma and 86% for other pathologies. Conclusions: ROI extraction using our saliency prediction methods enabled ranking the most relevant clinical areas within the image, a viable data reduction strategy to guide automatic analyses of Pap smear slides. Top-down factors for saliency prediction on cell images increase the accuracy of the estimated maps, while bottom-up algorithms proved to be useful for predicting the cytopathologist's eye fixations depending on parameters such as the numbers of false positives and negatives. Our contributions are a comparison of 13 state-of-the-art saliency models against cytopathologists' visual attention, and a method that associates the most conspicuous regions with clinically relevant cells.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cmpb.2019.105053
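
Agreement between a computed saliency map and recorded fixations, the comparison at the heart of this paper, is commonly scored with metrics such as normalized scanpath saliency (NSS): z-score the map, then average it at the fixated pixels. A generic sketch with synthetic inputs, not the authors' evaluation code:

import numpy as np

def nss(saliency_map, fix_rows, fix_cols):
    """Normalized scanpath saliency: mean z-scored saliency at fixations."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(z[fix_rows, fix_cols].mean())

rng = np.random.default_rng(0)
smap = rng.random((480, 640))          # stand-in for a model's saliency map
rows = rng.integers(0, 480, size=50)   # stand-in fixation coordinates
cols = rng.integers(0, 640, size=50)
print(f"NSS = {nss(smap, rows, cols):.3f}")  # near 0 for an uninformative map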

Felicity Deamer; Ellen Palmer; Quoc C Vuong; Nicol Ferrier; Andreas Finkelmeyer; Wolfram Hinzen; Stuart Watson

Non-literal understanding and psychosis: Metaphor comprehension in individuals with a diagnosis of schizophrenia Journal Article

Schizophrenia Research: Cognition, 18 , pp. 1–8, 2019.

@article{Deamer2019,
title = {Non-literal understanding and psychosis: Metaphor comprehension in individuals with a diagnosis of schizophrenia},
author = {Felicity Deamer and Ellen Palmer and Quoc C Vuong and Nicol Ferrier and Andreas Finkelmeyer and Wolfram Hinzen and Stuart Watson},
doi = {10.1016/J.SCOG.2019.100159},
year = {2019},
date = {2019-12-01},
journal = {Schizophrenia Research: Cognition},
volume = {18},
pages = {1--8},
publisher = {Elsevier},
abstract = {Previous studies suggest that understanding of non-literal expressions, and in particular metaphors, can be impaired in people with schizophrenia, although it is not clear why. We explored metaphor comprehension capacity using a novel picture selection paradigm; we compared task performance between people with schizophrenia and healthy comparator subjects, and we further examined the relationships between the ability to interpret figurative expressions non-literally and performance on a number of other cognitive tasks. Eye-tracking was used to examine task strategy. We showed that even when IQ, years of education, and capacities for theory of mind and associative learning are factored in as covariates, patients are significantly more likely to interpret metaphorical expressions literally, despite eye-tracking findings suggesting that patients are following the same interpretation strategy as healthy controls. Inhibitory control deficits are likely to be one of multiple factors contributing to the poorer performance of our schizophrenia group on the metaphor trials of the picture selection task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/J.SCOG.2019.100159

Garvin Brod; Jasmin Breitwieser

Lighting the wick in the candle of learning: Generating a prediction stimulates curiosity Journal Article

Science of Learning, 4 , pp. 1–7, 2019.

@article{Brod2019,
title = {Lighting the wick in the candle of learning: Generating a prediction stimulates curiosity},
author = {Garvin Brod and Jasmin Breitwieser},
doi = {10.1038/s41539-019-0056-y},
year = {2019},
date = {2019-12-01},
journal = {Science of Learning},
volume = {4},
pages = {1--7},
abstract = {Curiosity stimulates learning. We tested whether curiosity itself can be stimulated—not by extrinsic rewards but by an intrinsic desire to know whether a prediction holds true. Participants performed a numerical-facts learning task in which they had to generate either a prediction or an example before rating their curiosity and seeing the correct answer. More facts received high-curiosity ratings in the prediction condition, which indicates that generating predictions stimulated curiosity. In turn, high curiosity, compared with low curiosity, was associated with better memory for the correct answer. Concurrent pupillary data revealed that higher curiosity was associated with larger pupil dilation during anticipation of the correct answer. Pupil dilation was further enhanced when participants generated a prediction rather than an example, both during anticipation of the correct answer and in response to seeing it. These results suggest that generating a prediction stimulates curiosity by increasing the relevance of the knowledge gap.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41539-019-0056-y
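
Pupil effects such as the anticipation-window dilation described here are conventionally expressed relative to a short pre-event baseline. A minimal subtractive baseline correction in Python follows; the sampling rate and window lengths are assumptions, not the study's settings:

import numpy as np

FS = 500          # tracker sampling rate, Hz (assumed)
BASELINE_S = 0.2  # pre-event baseline window, s (assumed)
WINDOW_S = 2.0    # anticipation window after event onset, s (assumed)

def mean_dilation(trace, event_idx):
    """Mean baseline-corrected pupil size in the anticipation window."""
    base = trace[event_idx - int(BASELINE_S * FS):event_idx].mean()
    resp = trace[event_idx:event_idx + int(WINDOW_S * FS)]
    return float((resp - base).mean())

t = np.arange(0.0, 4.0, 1.0 / FS)
trace = 1000.0 + 30.0 * np.clip(t - 1.0, 0.0, None)  # synthetic ramp, a.u.
print(f"dilation = {mean_dilation(trace, event_idx=int(1.0 * FS)):.1f} a.u.")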

Hanna Brinkmann; Louis Williams; Eugene McSorley; Raphael Rosenberg

Does ‘action viewing' really exist? Perceived dynamism and viewing behaviour Journal Article

Art and Perception, pp. 1–22, 2019.

@article{Brinkmann2019,
title = {Does ‘action viewing' really exist? Perceived dynamism and viewing behaviour},
author = {Hanna Brinkmann and Louis Williams and Eugene McSorley and Raphael Rosenberg},
doi = {10.1163/22134913-20191128},
year = {2019},
date = {2019-12-01},
journal = {Art and Perception},
pages = {1--22},
publisher = {Brill},
abstract = {Throughout the 20th century, there have been many different forms of abstract painting. While works by some artists, e.g., Piet Mondrian, are usually described as static, others are described as dynamic, such as Jackson Pollock's ‘action paintings'. Art historians have assumed that beholders not only conceptualise such differences in depicted dynamics but also mirror these in their viewing behaviour. In an interdisciplinary eye-tracking study, we tested this concept through investigating both the localisation of fixations (polyfocal viewing) and the average duration of fixations as well as saccade velocity, duration and path curvature. We showed 30 different abstract paintings to 40 participants — 20 laypeople and 20 experts (art students) — and used self-reporting to investigate the perceived dynamism of each painting and its relationship with (a) the average number and duration of fixations, (b) the average number, duration and velocity of saccades as well as the amplitude and curvature area of saccade paths, and (c) pleasantness and familiarity ratings. We found that the average number of fixations and saccades, saccade velocity, and pleasantness ratings increase with an increase in perceived dynamism ratings. Meanwhile the saccade duration decreased with an increase in perceived dynamism. Additionally, the analysis showed that experts gave higher dynamic ratings compared to laypeople and were more familiar with the artworks. These results indicate that there is a correlation between perceived dynamism in abstract painting and viewing behaviour — something that has long been assumed by art historians but had never been empirically supported.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1163/22134913-20191128

Rotem Botvinik-Nezer; Roni Iwanir; Felix Holzmeister; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Anna Dreber; Colin F Camerer; Russell A Poldrack; Tom Schonberg

fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study Journal Article

Scientific Data, 6 , pp. 1–9, 2019.

@article{Botvinik-Nezer2019,
title = {fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study},
author = {Rotem Botvinik-Nezer and Roni Iwanir and Felix Holzmeister and Jürgen Huber and Magnus Johannesson and Michael Kirchler and Anna Dreber and Colin F Camerer and Russell A Poldrack and Tom Schonberg},
doi = {10.1038/s41597-019-0113-7},
year = {2019},
date = {2019-12-01},
journal = {Scientific Data},
volume = {6},
pages = {1--9},
publisher = {Nature Publishing Group},
abstract = {There is an ongoing debate about the replicability of neuroimaging research. It was suggested that one of the main reasons for the high rate of false positive results is the many degrees of freedom researchers have during data analysis. In the Neuroimaging Analysis Replication and Prediction Study (NARPS), we aim to provide the first scientific evidence on the variability of results across analysis teams in neuroscience. We collected fMRI data from 108 participants during two versions of the mixed gambles task, which is often used to study decision-making under risk. For each participant, the dataset includes an anatomical (T1 weighted) scan and fMRI as well as behavioral data from four runs of the task. The dataset is shared through OpenNeuro and is formatted according to the Brain Imaging Data Structure (BIDS) standard. Data pre-processed with fMRIprep and quality control reports are also publicly shared. This dataset can be used to study decision-making under risk and to test replicability and interpretability of previous results in the field.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41597-019-0113-7

Angela Bartolo; Caroline Claisse; Fabrizia Gallo; Laurent Ott; Adriana Sampaio; Jean-Louis Nandrino

Gestures convey different physiological responses when performed toward and away from the body Journal Article

Scientific Reports, 9 , pp. 1–10, 2019.

@article{Bartolo2019,
title = {Gestures convey different physiological responses when performed toward and away from the body},
author = {Angela Bartolo and Caroline Claisse and Fabrizia Gallo and Laurent Ott and Adriana Sampaio and Jean-Louis Nandrino},
doi = {10.1038/s41598-019-49318-3},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--10},
publisher = {Nature Publishing Group},
abstract = {We assessed the sympathetic and parasympathetic activation associated with the observation of Pantomime (i.e. the mime of the use of a tool) and Intransitive gestures (i.e. expressive) performed toward (e.g. a comb and “thinking”) and away from the body (e.g. key and “come here”) in a group of healthy participants while both pupil dilation (N = 31) and heart rate variability (N = 33; HF-HRV) were recorded. Large pupil dilation was observed in both Pantomime and Intransitive gestures toward the body, whereas an increase in vagal suppression was observed in Intransitive gestures away from the body but not in those toward the body. Our results suggest that the space where people act when performing a gesture has an impact on the physiological responses of the observer in relation to the type of social communicative information that the gesture direction conveys, from a more intimate one (toward the body) to a more interactive one (away from the body).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-019-49318-3
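
HF-HRV, the vagally mediated index recorded in this study, is conventionally computed as the spectral power of the RR-interval series in the 0.15–0.40 Hz band. A generic recipe (not the authors' pipeline): resample the tachogram onto an even grid, estimate its spectrum, and integrate the HF band. All parameters and the RR series below are illustrative:

import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hf_power(rr_ms, fs=4.0):
    """HF-HRV: power of the RR series in 0.15-0.40 Hz (ms^2)."""
    beat_t = np.cumsum(rr_ms) / 1000.0                 # beat times, s
    grid = np.arange(beat_t[0], beat_t[-1], 1.0 / fs)  # even resampling grid
    rr_even = interp1d(beat_t, rr_ms, kind="cubic")(grid)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    band = (f >= 0.15) & (f < 0.40)
    return float(pxx[band].sum() * (f[1] - f[0]))      # integrate the band

# Synthetic RR series: ~850 ms beats with 0.25 Hz respiratory-like modulation.
n = 300
beat_t = np.cumsum(np.full(n, 850.0)) / 1000.0
rr = 850.0 + 25.0 * np.sin(2 * np.pi * 0.25 * beat_t)
print(f"HF power = {hf_power(rr):.0f} ms^2")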

Ariana R Andrei; Sorin Pojoga; Roger Janz; Valentin Dragoi

Integration of cortical population signals for visual perception Journal Article

Nature Communications, 10 (1), pp. 1–13, 2019.

@article{Andrei2019,
title = {Integration of cortical population signals for visual perception},
author = {Ariana R Andrei and Sorin Pojoga and Roger Janz and Valentin Dragoi},
doi = {10.1038/s41467-019-11736-2},
year = {2019},
date = {2019-12-01},
journal = {Nature Communications},
volume = {10},
number = {1},
pages = {1--13},
publisher = {Nature Publishing Group},
abstract = {Visual stimuli evoke heterogeneous responses across nearby neural populations. These signals must be locally integrated to contribute to perception, but the principles underlying this process are unknown. Here, we exploit the systematic organization of orientation preference in macaque primary visual cortex (V1) and perform causal manipulations to examine the limits of signal integration. Optogenetic stimulation and visual stimuli are used to simultaneously drive two neural populations with overlapping receptive fields. We report that optogenetic stimulation raises firing rates uniformly across conditions, but improves the detection of visual stimuli only when activating cells that are preferentially-tuned to the visual stimulus. Further, we show that changes in correlated variability are exclusively present when the optogenetically and visually-activated populations are functionally-proximal, suggesting that correlation changes represent a hallmark of signal integration. Our results demonstrate that information from functionally-proximal neurons is pooled for perception, but functionally-distal signals remain independent. Primary visual cortical neurons exhibit diverse responses to visual stimuli yet how these signals are integrated during visual perception is not well understood. Here, the authors show that optogenetic stimulation of neurons situated near the visually‐driven population leads to improved orientation detection in monkeys through changes in correlated variability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41467-019-11736-2

Jacob A Westerberg; Alexander Maier; Geoffrey F Woodman; Jeffrey D Schall

Performance monitoring during visual priming Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.

@article{Westerberg2019a,
title = {Performance monitoring during visual priming},
author = {Jacob A Westerberg and Alexander Maier and Geoffrey F Woodman and Jeffrey D Schall},
doi = {10.1162/jocn_a_01499},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Repetitive performance of single-feature (efficient or pop-out) visual search improves RTs and accuracy. This phenomenon, known as priming of pop-out, has been demonstrated in both humans and macaque monkeys. We investigated the relationship between performance monitoring and priming of pop-out. Neuronal activity in the supplementary eye field (SEF) contributes to performance monitoring and to the generation of performance monitoring signals in the EEG. To determine whether priming depends on performance monitoring, we investigated spiking activity in SEF as well as the concurrent EEG of two monkeys performing a priming of pop-out task. We found that SEF spiking did not modulate with priming. Surprisingly, concurrent EEG did covary with priming. Together, these results suggest that performance monitoring contributes to priming of pop-out. However, this performance monitoring seems not mediated by SEF. This dissociation suggests that EEG indices of performance monitoring arise from multiple, functionally distinct neural generators.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01499

Antonia F Ten Brink; Jasper H Fabius; Nick A Weaver; Tanja C W Nijboer; Stefan Van der Stigchel

Trans-saccadic memory after right parietal brain damage Journal Article

Cortex, 120 , pp. 284–297, 2019.

@article{TenBrink2019a,
title = {Trans-saccadic memory after right parietal brain damage},
author = {Antonia F {Ten Brink} and Jasper H Fabius and Nick A Weaver and Tanja C W Nijboer and Stefan {Van der Stigchel}},
doi = {10.1016/j.cortex.2019.06.006},
year = {2019},
date = {2019-11-01},
journal = {Cortex},
volume = {120},
pages = {284--297},
publisher = {Elsevier BV},
abstract = {INTRODUCTION: Spatial remapping, the process of updating information across eye movements, is an important mechanism for trans-saccadic perception. The right posterior parietal cortex (PPC) is a region that has been associated most strongly with spatial remapping. The aim of the project was to investigate the effect of damage to the right PPC on direction specific trans-saccadic memory. We compared trans-saccadic memory performance for central items that had to be remembered while making a left- versus rightward eye movement, or for items that were remapped within the left versus right visual field. METHODS: We included 9 stroke patients with unilateral right PPC lesions and 31 healthy control subjects. Participants memorized the location of a briefly presented item, had to make one saccade (either towards the left or right, or upward or downward), and subsequently had to decide in what direction the probe had shifted. We used a staircase to adjust task difficulty (i.e., the distance between the memory item and probe). Bayesian repeated measures ANOVAs were used to compare left versus right eye movements and items in the left versus right visual field. RESULTS: In both conditions, patients with right PPC damage showed worse trans-saccadic memory performance compared to healthy control subjects (for the condition with left- and rightward gaze shifts},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cortex.2019.06.006

Mariya E Manahova; Eelke Spaak; Floris P de Lange

Familiarity increases processing speed in the visual system Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.

Abstract | Links | BibTeX

@article{Manahova2019,
title = {Familiarity increases processing speed in the visual system},
author = {Mariya E Manahova and Eelke Spaak and Floris P de Lange},
doi = {10.1162/jocn_a_01507},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Familiarity with a stimulus leads to an attenuated neural response to the stimulus. Alongside this attenuation, recent studies have also observed a truncation of stimulus-evoked activity for familiar visual input. One proposed function of this truncation is to rapidly put neurons in a state of readiness to respond to new input. Here, we examined this hypothesis by presenting human participants with target stimuli that were embedded in rapid streams of familiar or novel distractor stimuli at different speeds of presentation, while recording brain activity using magnetoencephalography and measuring behavioral performance. We investigated the temporal and spatial dynamics of signal truncation and whether this phenomenon bears relationship to participants' ability to categorize target items within a visual stream. Behaviorally, target categorization performance was markedly better when the target was embedded within familiar distractors, and this benefit became more pronounced with increasing speed of presentation. Familiar distractors showed a truncation of neural activity in the visual system. This truncation was strongest for the fastest presentation speeds and peaked in progressively more anterior cortical regions as presentation speeds became slower. Moreover, the neural response evoked by the target was stronger when this target was preceded by familiar distractors. Taken together, these findings demonstrate that item familiarity results in a truncated neural response, is associated with stronger processing of relevant target information, and leads to superior perceptual performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01507

Moreno I Coco; Antje Nuthmann; Olaf Dimigen

Fixation-related brain potentials during semantic integration of object–scene information Journal Article

Journal of Cognitive Neuroscience, pp. 1–19, 2019.

Abstract | Links | BibTeX

@article{Coco2019,
title = {Fixation-related brain potentials during semantic integration of object–scene information},
author = {Moreno I Coco and Antje Nuthmann and Olaf Dimigen},
doi = {10.1162/jocn_a_01504},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--19},
publisher = {MIT Press - Journals},
abstract = {In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01504

Nahid Zokaei; Alexander G Board; Sanjay G Manohar; Anna C Nobre

Modulation of the pupillary response by the content of visual working memory Journal Article

Proceedings of the National Academy of Sciences, 116 (45), pp. 22802–22810, 2019.

Abstract | Links | BibTeX

@article{Zokaei2019,
title = {Modulation of the pupillary response by the content of visual working memory},
author = {Nahid Zokaei and Alexander G Board and Sanjay G Manohar and Anna C Nobre},
doi = {10.1073/pnas.1909959116},
year = {2019},
date = {2019-10-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {116},
number = {45},
pages = {22802--22810},
abstract = {Studies of selective attention during perception have revealed modulation of the pupillary response according to the brightness of task-relevant (attended) vs. -irrelevant (unattended) stimuli within a visual display. As a strong test of top-down modulation of the pupil response by selective attention, we asked whether changes in pupil diameter follow internal shifts of attention to memoranda of visual stimuli of different brightness maintained in working memory, in the absence of any visual stimulation. Across 3 studies, we reveal dilation of the pupil when participants orient attention to the memorandum of a dark grating relative to that of a bright grating. The effect occurs even when the attention-orienting cue is independent of stimulus brightness, and even when stimulus brightness is merely incidental and not required for the working-memory task of judging stimulus orientation. Furthermore, relative dilation and constriction of the pupil occurred dynamically and followed the changing temporal expectation that 1 or the other stimulus would be probed across the retention delay. The results provide surprising and consistent evidence that pupil responses are under top-down control by cognitive factors, even when there is no direct adaptive gain for such modulation, since no visual stimuli were presented or anticipated. The results also strengthen the view of sensory recruitment during working memory, suggesting even activation of sensory receptors. The thought-provoking corollary to our findings is that the pupils provide a reliable measure of what is in the focus of mind, thus giving a different meaning to old proverbs about the eyes being a window to the mind.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1073/pnas.1909959116

Toby Wise; Jochen Michely; Peter Dayan; Raymond J Dolan

A computational account of threat-related attentional bias Journal Article

PLOS Computational Biology, 15 (10), pp. 1–21, 2019.

Abstract | Links | BibTeX

@article{Wise2019,
title = {A computational account of threat-related attentional bias},
author = {Toby Wise and Jochen Michely and Peter Dayan and Raymond J Dolan},
editor = {Michael Browning},
doi = {10.1371/journal.pcbi.1007341},
year = {2019},
date = {2019-10-01},
journal = {PLOS Computational Biology},
volume = {15},
number = {10},
pages = {1--21},
abstract = {Visual selective attention acts as a filter on perceptual information, facilitating learning and inference about important events in an agent's environment. A role for visual attention in reward-based decisions has previously been demonstrated, but it remains unclear how visual attention is recruited during aversive learning, particularly when learning about multiple stimuli concurrently. This question is of particular importance in psychopathology, where enhanced attention to threat is a putative feature of pathological anxiety. Using an aversive reversal learning task that required subjects to learn, and exploit, predictions about multiple stimuli, we show that the allocation of visual attention is influenced significantly by aversive value but not by uncertainty. Moreover, this relationship is bidirectional in that attention biases value updates for attended stimuli, resulting in heightened value estimates. Our findings have implications for understanding biased attention in psychopathology and support a role for learning in the expression of threat-related attentional biases in anxiety.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pcbi.1007341

Oryah C Lancry-Dayan; Ganit Kupershmidt; Yoni Pertzov

Been there, seen that, done that: Modification of visual exploration across repeated exposures Journal Article

Journal of Vision, 19 (12), pp. 1–16, 2019.

Abstract | Links | BibTeX

@article{Lancry-Dayan2019,
title = {Been there, seen that, done that: Modification of visual exploration across repeated exposures},
author = {Oryah C Lancry-Dayan and Ganit Kupershmidt and Yoni Pertzov},
doi = {10.1167/19.12.2},
year = {2019},
date = {2019-10-01},
journal = {Journal of Vision},
volume = {19},
number = {12},
pages = {1--16},
abstract = {The underlying factors that determine gaze position are a central topic in visual cognitive research. Traditionally, studies emphasized the interaction between the low-level properties of an image and gaze position. Later studies examined the influence of the semantic properties of an image. These studies explored gaze behavior during a single presentation, thus ignoring the impact of familiarity. Sparse evidence suggested that across repetitive exposures, gaze exploration attenuates but the correlation between gaze position and the low-level features of the image remains stable. However, these studies neglected two fundamental issues: (a) repeated scenes are displayed later in the testing session, such that exploration attenuation could be a result of lethargy, and (b) even if these effects are related to familiarity, are they based on a verbatim familiarity with the image, or on high-level familiarity with the gist of the scene? We investigated these issues by exposing participants to a sequence of images, some of them repeated across blocks. We found fewer, longer fixations as familiarity increased, along with shorter saccades and decreased gaze allocation towards semantically meaningful regions. These effects could not be ascribed to tonic fatigue, since they did not manifest for images that changed across blocks. Moreover, there was no attenuation of gaze behavior when participants observed a flipped version of the familiar images, suggesting that gist familiarity is not sufficient for eliciting these effects. These findings contribute to the literature on memory-guided gaze behavior and provide novel insights into the mechanism underlying the visual exploration of familiar environments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/19.12.2

Christoph Huber-Huber; Antimo Buonocore; Olaf Dimigen; Clayton Hickey; David Melcher

The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing Journal Article

NeuroImage, 200 , pp. 344–362, 2019.

Abstract | Links | BibTeX

@article{Huber-Huber2019,
title = {The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing},
author = {Christoph Huber-Huber and Antimo Buonocore and Olaf Dimigen and Clayton Hickey and David Melcher},
doi = {10.1016/j.neuroimage.2019.06.059},
year = {2019},
date = {2019-10-01},
journal = {NeuroImage},
volume = {200},
pages = {344--362},
publisher = {Academic Press Inc.},
abstract = {The world appears stable despite saccadic eye-movements. One possible explanation for this phenomenon is that the visual system predicts upcoming input across saccadic eye-movements based on peripheral preview of the saccadic target. We tested this idea using concurrent electroencephalography (EEG) and eye-tracking. Participants made cued saccades to peripheral upright or inverted face stimuli that changed orientation (invalid preview) or maintained orientation (valid preview) while the saccade was completed. Experiment 1 demonstrated better discrimination performance and a reduced fixation-locked N170 component (fN170) with valid than with invalid preview, demonstrating integration of pre- and post-saccadic information. Moreover, the early fixation-related potentials (FRP) showed a preview face inversion effect suggesting that some pre-saccadic input was represented in the brain until around 170 ms post fixation-onset. Experiment 2 replicated Experiment 1 and manipulated the proportion of valid and invalid trials to test whether the preview effect reflects context-based prediction across trials. A whole-scalp Bayes factor analysis showed that this manipulation did not alter the fN170 preview effect but did influence the face inversion effect before the saccade. The pre-saccadic inversion effect declined earlier in the mostly invalid block than in the mostly valid block, which is consistent with the notion of pre-saccadic expectations. In addition, in both studies, we found strong evidence for an interaction between the pre-saccadic preview stimulus and the post-saccadic target as early as 50 ms (Experiment 2) or 90 ms (Experiment 1) into the new fixation. These findings suggest that visual stability may involve three temporal stages: prediction about the saccadic target, integration of pre-saccadic and post-saccadic information at around 50-90 ms post fixation onset, and post-saccadic facilitation of rapid categorization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuroimage.2019.06.059

Jarkko Hautala; Otto Loberg; Najla Azaiez; Sara Taskinen; Simon P Tiffin-Richards; Paavo H T Leppänen

What information should I look for again? Attentional difficulties distracts reading of task assignments Journal Article

Learning and Individual Differences, 75 , pp. 1–12, 2019.

Abstract | Links | BibTeX

@article{Hautala2019,
title = {What information should I look for again? Attentional difficulties distracts reading of task assignments},
author = {Jarkko Hautala and Otto Loberg and Najla Azaiez and Sara Taskinen and Simon P Tiffin-Richards and Paavo H T Leppänen},
doi = {10.1016/j.lindif.2019.101775},
year = {2019},
date = {2019-10-01},
journal = {Learning and Individual Differences},
volume = {75},
pages = {1--12},
publisher = {Elsevier BV},
abstract = {This large-scale eye-movement study (N=164) investigated how students read short task assignments to complete information search problems and how their cognitive resources are associated with this reading behavior. These cognitive resources include information searching subskills, prior knowledge, verbal memory, reading fluency, and attentional difficulties. In this study, the task assignments consisted of four sentences. The first and last sentences provided context, while the second or third sentence was the relevant or irrelevant sentence under investigation. The results of a linear mixed-model and latent change score analyses showed the ubiquitous influence of reading fluency on first-pass eye movement measures, and the effects of sentence relevancy on making more and longer reinspections and look-backs to the relevant than irrelevant sentence. In addition, the look-backs to the relevant sentence were associated with better information search subskills. Students with attentional difficulties made substantially fewer look-backs specifically to the relevant sentence. These results provide evidence that selective look-backs are used as an important index of comprehension monitoring independent of reading fluency. In this framework, slow reading fluency was found to be associated with laborious decoding but with intact comprehension monitoring, whereas attention difficulty was associated with intact decoding but with deficiency in comprehension monitoring.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.lindif.2019.101775

Teresa Fasshauer; Andreas Sprenger; Karen Silling; Johanna Elisa Silberg; Anne Vosseler; Seiko Minoshita; Shinji Satoh; Michael Dorr; Katja Koelkebeck; Rebekka Lencer

Visual exploration of emotional faces in schizophrenia using masks from the Japanese Noh theatre Journal Article

Neuropsychologia, 133 , pp. 1–10, 2019.

Abstract | Links | BibTeX

@article{Fasshauer2019,
title = {Visual exploration of emotional faces in schizophrenia using masks from the Japanese Noh theatre},
author = {Teresa Fasshauer and Andreas Sprenger and Karen Silling and Johanna Elisa Silberg and Anne Vosseler and Seiko Minoshita and Shinji Satoh and Michael Dorr and Katja Koelkebeck and Rebekka Lencer},
doi = {10.1016/j.neuropsychologia.2019.107193},
year = {2019},
date = {2019-10-01},
journal = {Neuropsychologia},
volume = {133},
pages = {1--10},
abstract = {Studying eye movements during visual exploration is widely used to investigate visual information processing in schizophrenia. Here, we used masks from the Japanese Noh theatre to study visual exploration behavior during an emotional face recognition task and a brightness evaluation control task using the same stimuli. Eye movements were recorded in 25 patients with schizophrenia and 25 age-matched healthy controls while participants explored seven photos of Japanese Noh masks tilted to seven different angles. Additionally, participants were assessed on seven upright binary black and white pictures of these Noh masks (Mooney-like pictures), seven Upside-down pictures (180° upside-down turned Mooneys), and seven Neutral pictures. Participants either had to indicate whether they had recognized a face and its emotional expression, or they had to evaluate the brightness of the picture (total N=56 trials). We observed a clear effect of inclination angle of Noh masks on emotional ratings (p < 0.001) and visual exploration behavior in both groups. Controls made larger saccades than patients when not being able to recognize a face in upside-down Mooney pictures (p < 0.01). Patients also made smaller saccades when exploring pictures for brightness (p < 0.05). Exploration behavior in patients was related to depressive symptom expression during emotional face recognition but not during brightness evaluation. Our findings suggest that visual exploration behavior in patients with schizophrenia is less flexible than in controls depending on the specific task requirements, specifically when exploring physical aspects of the environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuropsychologia.2019.107193

Seth W Egger; Evan D Remington; Chia-Jung Chang; Mehrdad Jazayeri

Internal models of sensorimotor integration regulate cortical dynamics Journal Article

Nature Neuroscience, 22 , pp. 1871–1882, 2019.

Abstract | Links | BibTeX

@article{Egger2019,
title = {Internal models of sensorimotor integration regulate cortical dynamics},
author = {Seth W Egger and Evan D Remington and Chia-Jung Chang and Mehrdad Jazayeri},
doi = {10.1038/s41593-019-0500-6},
year = {2019},
date = {2019-10-01},
journal = {Nature Neuroscience},
volume = {22},
pages = {1871--1882},
publisher = {Springer Science and Business Media LLC},
abstract = {Sensorimotor control during overt movements is characterized in terms of three building blocks: a controller, a simulator and a state estimator. We asked whether the same framework could explain the control of internal states in the absence of movements. Recently, it was shown that the brain controls the timing of future movements by adjusting an internal speed command. We trained monkeys in a novel task in which the speed command had to be dynamically controlled based on the timing of a sequence of flashes. Recordings from the frontal cortex provided evidence that the brain updates the internal speed command after each flash based on the error between the timing of the flash and the anticipated timing of the flash derived from a simulated motor plan. These findings suggest that cognitive control of internal states may be understood in terms of the same computational principles as motor control. Control of movements can be understood in terms of the interplay between a controller, a simulator and an estimator. Egger et al. show that cortical neurons establish the same building blocks to control cognitive states in the absence of movement.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41593-019-0500-6

Sang-Ah Yoo; John K Tsotsos; Mazyar Fallah

Feed-forward visual processing suffices for coarse localization but fine-grained localization in an attention-demanding context needs feedback processing Journal Article

PLOS ONE, 14 (9), pp. 1–16, 2019.

Abstract | Links | BibTeX

@article{Yoo2019,
title = {Feed-forward visual processing suffices for coarse localization but fine-grained localization in an attention-demanding context needs feedback processing},
author = {Sang-Ah Yoo and John K Tsotsos and Mazyar Fallah},
editor = {Robin Baur{è}s},
doi = {10.1371/journal.pone.0223166},
year = {2019},
date = {2019-09-01},
journal = {PLOS ONE},
volume = {14},
number = {9},
pages = {1--16},
abstract = {It is well known that simple visual tasks, such as object detection or categorization, can be performed within a short period of time, suggesting the sufficiency of feed-forward visual processing. However, more complex visual tasks, such as fine-grained localization may require high-resolution information available at the early processing levels in the visual hierarchy. To access this information using a top-down approach, feedback processing would need to traverse several stages in the visual hierarchy and each step in this traversal takes processing time. In the present study, we compared the processing time required to complete object categorization and localization by varying presentation duration and complexity of natural scene stimuli. We hypothesized that performance would be asymptotic at shorter presentation durations when feed-forward processing suffices for visual tasks, whereas performance would gradually improve as images are presented longer if the tasks rely on feedback processing. In Experiment 1, where simple images were presented, both object categorization and localization performance sharply improved until 100 ms of presentation then it leveled off. These results are a replication of previously reported rapid categorization effects but they do not support the role of feedback processing in localization tasks, indicating that feed-forward processing enables coarse localization in relatively simple visual scenes. In Experiment 2, the same tasks were performed but more attention-demanding and ecologically valid images were used as stimuli. Unlike in Experiment 1, both object categorization performance and localization precision gradually improved as stimulus presentation duration became longer. This finding suggests that complex visual tasks that require visual scrutiny call for top-down feedback processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0223166

Jonathan van Leeuwen; Artem V Belopolsky

Detection of object displacement during a saccade is prioritized by the oculomotor system Journal Article

Journal of Vision, 19 (11), pp. 1–15, 2019.

Abstract | Links | BibTeX

@article{Leeuwen2019,
title = {Detection of object displacement during a saccade is prioritized by the oculomotor system},
author = {Jonathan van Leeuwen and Artem V Belopolsky},
doi = {10.1167/19.11.11},
year = {2019},
date = {2019-09-01},
journal = {Journal of Vision},
volume = {19},
number = {11},
pages = {1--15},
abstract = {The human eye-movement system is equipped with a sophisticated updating mechanism that can adjust for large retinal displacements produced by saccadic eye movements. The nature of this updating mechanism is still highly debated. Previous studies have demonstrated that updating can occur very rapidly and is initiated before the start of a saccade. In the present study, we used saccade curvature to demonstrate that the oculomotor system is tuned for detecting object displacements during saccades. Participants made a sequence of saccades while ignoring an irrelevant distractor. Curvature of the second saccade relative to the distractor was used to estimate the time course of updating. Saccade curvature away from the presaccadic location of the distractor emerged as early as 80 ms after the first saccade when the distractor was displaced during a saccade. This is about 50 ms earlier than when a distractor was only present before a saccade, only present after a saccade, or remained stationary across a saccade. This shows that the oculomotor system prioritizes detection of object displacements during saccades, which may be useful for guiding corrective saccades. The results also challenge previous views by demonstrating the additional role of postsaccadic information in updating target–distractor competition across saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/19.11.11

Nils S Van Den Berg; Rients B Huitema; Jacoba M Spikman; Peter Jan Van Laar; Edward H F De Haan

A shrunken world – micropsia after a right occipito-parietal ischemic stroke Journal Article

Neurocase, 25 (5), pp. 202–208, 2019.

Abstract | Links | BibTeX

@article{VanDenBerg2019,
title = {A shrunken world – micropsia after a right occipito-parietal ischemic stroke},
author = {Nils S {Van Den Berg} and Rients B Huitema and Jacoba M Spikman and Peter Jan {Van Laar} and Edward H F {De Haan}},
doi = {10.1080/13554794.2019.1656751},
year = {2019},
date = {2019-09-01},
journal = {Neurocase},
volume = {25},
number = {5},
pages = {202--208},
publisher = {Informa UK Limited},
abstract = {Micropsia is a rare condition in which patients perceive the outside world smaller in size than it actually is. We examined a patient who, after a right occipito-parietal stroke, subjectively reported perceiving everything at seventy percent of the actual size. Using experimental tasks, we confirmed the extent of his micropsia at 70%. Visual half-field tests showed an impaired perception of shape, location and motion in the left visual field. As his micropsia concerns the complete visual field, we suggest that it is caused by a higher-order compensation process in order to reconcile the conflicting information from the two hemifields.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/13554794.2019.1656751

E Sabine Twilhaar; Artem V Belopolsky; Jorrit F de Kieviet; Ruurd M van Elburg; Jaap Oosterlaan

Voluntary and involuntary control of attention in adolescents born very preterm: A study of eye movements Journal Article

Child Development, pp. 1–12, 2019.

Abstract | Links | BibTeX

@article{Twilhaar2019,
title = {Voluntary and involuntary control of attention in adolescents born very preterm: A study of eye movements},
author = {E Sabine Twilhaar and Artem V Belopolsky and Jorrit F {de Kieviet} and Ruurd M {van Elburg} and Jaap Oosterlaan},
doi = {10.1111/cdev.13310},
year = {2019},
date = {2019-09-01},
journal = {Child Development},
pages = {1--12},
abstract = {Very preterm birth is associated with attention deficits that interfere with academic performance. A better understanding of attention processes is necessary to support very preterm born children. This study examined voluntary and involuntary attentional control in very preterm born adolescents by measuring saccadic eye movements. Additionally, these control processes were related to symptoms of inattention, intelligence, and academic performance. Participants included 47 very preterm and 61 full-term born 13-year-old adolescents. Oculomotor control was assessed using the antisaccade and oculomotor capture paradigm. Very preterm born adolescents showed deficits in antisaccade but not in oculomotor capture performance, indicating impairments in voluntary but not involuntary attentional control. These impairments mediated the relation between very preterm birth and inattention, intelligence, and academic performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/cdev.13310

Jody Stanley; Jason D Forte; Olivia Carter

Rivalry onset in and around the fovea: The role of visual field location and eye dominance on perceptual dominance bias Journal Article

Vision, 3 (4), pp. 1–13, 2019.

Abstract | Links | BibTeX

@article{Stanley2019,
title = {Rivalry onset in and around the fovea: The role of visual field location and eye dominance on perceptual dominance bias},
author = {Jody Stanley and Jason D Forte and Olivia Carter},
doi = {10.3390/vision3040051},
year = {2019},
date = {2019-09-01},
journal = {Vision},
volume = {3},
number = {4},
pages = {1--13},
abstract = {When dissimilar images are presented to each eye, the images will alternate every few seconds in a phenomenon known as binocular rivalry. Recent research has found evidence of a bias towards one image at the initial ‘onset’ period of rivalry that varies across the peripheral visual field. To determine the role that visual field location plays in and around the fovea at onset, trained observers were presented small orthogonal achromatic grating patches at various locations across the central 3° of visual space for 1-s and 60-s intervals. Results reveal stronger bias at onset than during continuous rivalry, and evidence of temporal hemifield dominance across observers, however, the nature of the hemifield effects differed between individuals and interacted with overall eye dominance. Despite using small grating patches, a high proportion of mixed percept was still reported, with more mixed percept at onset along the vertical midline, in general, and in increasing proportions with eccentricity in the lateral hemifields. Results show that even within the foveal range, onset rivalry bias varies across visual space, and differs in degree and sensitivity to biases in average dominance over continuous viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/vision3040051

Owen Myles; Ben Grafton; Patrick Clarke; Colin MacLeod

GIVE me your attention: Differentiating goal identification and goal execution components of the anti-saccade effect Journal Article

PLOS ONE, 14 (9), pp. 1–12, 2019.

Abstract | Links | BibTeX

@article{Myles2019,
title = {GIVE me your attention: Differentiating goal identification and goal execution components of the anti-saccade effect},
author = {Owen Myles and Ben Grafton and Patrick Clarke and Colin MacLeod},
editor = {Alessandra S Souza},
doi = {10.1371/journal.pone.0222710},
year = {2019},
date = {2019-09-01},
journal = {PLOS ONE},
volume = {14},
number = {9},
pages = {1--12},
abstract = {The anti-saccade task is a commonly used method of assessing individual differences in cognitive control. It has been shown that a number of clinical disorders are characterised by increased anti-saccade cost. However, it remains unknown whether this reflects impaired goal identification or impaired goal execution, because, to date, no procedure has been developed to independently assess these two components of anti-saccade cost. The aim of the present study was to develop such an assessment task, which we term the Goal Identification Vs. Execution (GIVE) task. Fifty-one undergraduate students completed a conventional anti-saccade task, and our novel GIVE task. Our findings revealed that individual differences in anti-saccade goal identification costs and goal execution costs were uncorrelated, when assessed using the GIVE task, but both predicted unique variance in the conventional anti-saccade cost measure. These results confirm that the GIVE task is capable of independently assessing variation in the goal identification and goal execution components of the anti-saccade effect. We discuss how this newly introduced assessment procedure now can be employed to illuminate the specific basis of the increased anti-saccade cost that characterises various forms of clinical dysfunction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0222710

Close

Rebecca K Lawrence; Mark Edwards; Gordon W C Chan; Jolene A Cox; Stephanie C Goodhew

Does cultural background predict the spatial distribution of attention? Journal Article

Culture and Brain, pp. 1–29, 2019.

Abstract | Links | BibTeX

@article{Lawrence2019,
title = {Does cultural background predict the spatial distribution of attention?},
author = {Rebecca K Lawrence and Mark Edwards and Gordon W C Chan and Jolene A Cox and Stephanie C Goodhew},
doi = {10.1007/s40167-019-00086-x},
year = {2019},
date = {2019-09-01},
journal = {Culture and Brain},
pages = {1--29},
publisher = {Springer Science and Business Media LLC},
abstract = {The current study aimed to explore cultural differences in the covert spatial distribution of attention. In particular, we tested whether those born in an East Asian country adopted a different distribution of attention compared to individuals born in a Western country. Previous work suggests that Western individuals tend to distribute attention narrowly and that East Asian individuals distribute attention broadly. However, these studies have used indirect methods to infer spatial attention scale. In particular, they have not measured changes in attention across space, nor have they controlled for differences in eye movement patterns, which can differ across cultures. To address this, in the current study, we used an inhibition of return (IOR) paradigm which directly measured changes in attention across space, while controlling for eye movements. The use of the IOR task was a significant advancement, as it allowed for a highly sensitive measure of attention distribution compared to past research. Critically, using this new measure, we failed to observe a cultural difference in the distribution of covert spatial attention. Instead, individuals from East Asian countries and Western countries adopted a similar attention spread. However, we did observe a cultural difference in response speed, whereby Western participants were relatively faster to detect targets in the IOR task. This relationship persisted, even after controlling for individual variation in attention slope, indicating that factors other than attention distribution might account for cultural differences in response speed. Therefore, this study provides robust, converging evidence that group differences in covert spatial attentional distribution do not necessarily drive cultural variation in response speed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s40167-019-00086-x

Adam Frost; George Tomou; Harsh Parikh; Jagjot Kaur; Marija Zivcevska; Matthias Niemeier

Working memory in action: Inspecting the systematic and unsystematic errors of spatial memory across saccades Journal Article

Experimental Brain Research, 237 (11), pp. 2939–2956, 2019.

Abstract | Links | BibTeX

@article{Frost2019,
title = {Working memory in action: Inspecting the systematic and unsystematic errors of spatial memory across saccades},
author = {Adam Frost and George Tomou and Harsh Parikh and Jagjot Kaur and Marija Zivcevska and Matthias Niemeier},
doi = {10.1007/s00221-019-05623-x},
year = {2019},
date = {2019-09-01},
journal = {Experimental Brain Research},
volume = {237},
number = {11},
pages = {2939--2956},
publisher = {Springer Science and Business Media LLC},
abstract = {Our ability to interact with the world depends on memory buffers that flexibly store and process information for short periods of time. Current working memory research, however, mainly uses tasks that avoid eye movements, whereas in daily life we need to remember information across saccades. Because saccades disrupt perception and attention, the brain might use special transsaccadic memory systems. Therefore, to compare working memory systems between and across saccades, the current study devised transsaccadic memory tasks that evaluated the influence of memory load on several kinds of systematic and unsystematic spatial errors, and tested whether these measures predicted performance in more established working memory paradigms. Experiment 1 used a line intersection task that had people integrate lines shown before and after saccades, and it administered a 2-back task. Experiments 2 and 3 asked people to point at one of several locations within a memory array flashed before an eye movement, and we tested change detection and 2-back performance. We found that unsystematic transsaccadic errors increased with memory load and were correlated with 2-back performance. Systematic errors produced similar results, although effects varied as a function of the geometric layout of the memory arrays. Surprisingly, transsaccadic errors did not predict change detection performance despite the latter being a widely accepted measure of working memory capacity. Our results suggest that working memory systems between and across saccades share, in part, similar neural resources. Nevertheless, our data highlight the importance of investigating working memory across saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-019-05623-x

Antonio Fernández; Hsin-Hung Li; Marisa Carrasco

How exogenous spatial attention affects visual representation Journal Article

Journal of Vision, 19 (11), pp. 1–13, 2019.

Abstract | Links | BibTeX

@article{Fernandez2019a,
title = {How exogenous spatial attention affects visual representation},
author = {Antonio Fernández and Hsin-Hung Li and Marisa Carrasco},
doi = {10.1167/19.11.4},
year = {2019},
date = {2019-09-01},
journal = {Journal of Vision},
volume = {19},
number = {11},
pages = {1--13},
publisher = {Association for Research in Vision and Ophthalmology (ARVO)},
abstract = {Orienting covert spatial attention to a target location enhances visual sensitivity and benefits performance in many visual tasks. How these attention-related improvements in performance affect the underlying visual representation of low-level visual features is not fully understood. Here we focus on characterizing how exogenous spatial attention affects the feature representations of orientation and spatial frequency. We asked observers to detect a vertical grating embedded in noise and performed psychophysical reverse correlation. Doing so allowed us to make comparisons with previous studies that utilized the same task and analysis to assess how endogenous attention and presaccadic modulations affect visual representations. We found that exogenous spatial attention improved performance and enhanced the gain of the target orientation without affecting orientation tuning width. Moreover, we found no change in spatial frequency tuning. We conclude that covert exogenous spatial attention alters performance by strictly boosting gain of orientation-selective filters, much like covert endogenous spatial attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/19.11.4

Madeline S Cappelloni; Sabyasachi Shivkumar; Ralf M Haefner; Ross K Maddox

Task-uninformative visual stimuli improve auditory spatial discrimination in humans but not the ideal observer Journal Article

PLOS ONE, 14 (9), pp. 1–18, 2019.

Abstract | Links | BibTeX

@article{Cappelloni2019,
title = {Task-uninformative visual stimuli improve auditory spatial discrimination in humans but not the ideal observer},
author = {Madeline S Cappelloni and Sabyasachi Shivkumar and Ralf M Haefner and Ross K Maddox},
editor = {Jyrki Ahveninen},
doi = {10.1371/journal.pone.0215417},
year = {2019},
date = {2019-09-01},
journal = {PLOS ONE},
volume = {14},
number = {9},
pages = {1--18},
publisher = {Public Library of Science},
abstract = {In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet, the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well-approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory spatial discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented in the center of the screen, or spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were spatially aligned with the auditory stimuli—even though the shapes provided no information about which side the auditory target was on. We also demonstrate that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, demonstrating that humans deviate systematically from the ideal observer model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0215417

Francesca Capozzi; Lauren J Human; Jelena Ristic

Attention promotes accurate impression formation Journal Article

Journal of Personality, pp. 1–11, 2019.

Abstract | Links | BibTeX

@article{Capozzi2019,
title = {Attention promotes accurate impression formation},
author = {Francesca Capozzi and Lauren J Human and Jelena Ristic},
doi = {10.1111/jopy.12509},
year = {2019},
date = {2019-09-01},
journal = {Journal of Personality},
pages = {1--11},
publisher = {Wiley},
abstract = {Objective: An ability to form accurate impressions of others is vital for adaptive social behavior in humans. Here, we examined if attending to persons more is associated with greater accuracy in personality impressions. Method: We asked 42 observers (36 females; mean age = 21 years, age range = 18–28; expected power = 0.96) to form personality impressions of unacquainted individuals (i.e., targets) from video interviews while their attentional behavior was assessed using eye tracking. We examined whether (a) attending more to targets benefited accuracy, (b) attending to specific body parts (e.g., face vs. body) drove this association, and (c) targets' ease of personality readability modulated these effects. Results: Paying more attention to a target was associated with forming more accurate personality impressions. Attention to the whole person contributed to this effect, with this association occurring independently of targets' ease of readability. Conclusions: These findings show that attending more to a person is associated with increased accuracy and thus suggest that attention promotes social adaptation by supporting accurate social perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/jopy.12509

Quan Wang; Carla A Wall; Erin C Barney; Jessica L Bradshaw; Suzanne L Macari; Katarzyna Chawarska; Frederick Shic

Promoting social attention in 3‐year‐olds with ASD through gaze‐contingent eye tracking Journal Article

Autism Research, 13 (1), pp. 61–73, 2019.

Abstract | Links | BibTeX

@article{Wang2019h,
title = {Promoting social attention in 3‐year‐olds with ASD through gaze‐contingent eye tracking},
author = {Quan Wang and Carla A Wall and Erin C Barney and Jessica L Bradshaw and Suzanne L Macari and Katarzyna Chawarska and Frederick Shic},
doi = {10.1002/aur.2199},
year = {2019},
date = {2019-08-01},
journal = {Autism Research},
volume = {13},
number = {1},
pages = {61--73},
publisher = {Wiley},
abstract = {Young children with autism spectrum disorder (ASD) look less toward faces compared to their non-ASD peers, limiting access to social learning. Currently, no technologies directly target these core social attention difficulties. This study examines the feasibility of automated gaze modification training for improving attention to faces in 3-year-olds with ASD. Using free-viewing data from typically developing (TD) controls (n = 41), we implemented gaze-contingent adaptive cueing to redirect children with ASD toward normative looking patterns during viewing of videos of an actress. Children with ASD were randomly assigned to either (a) an adaptive Cue condition (Cue},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/aur.2199

Emma E M Stewart; Preeti Verghese; Anna Ma-Wyatt

The spatial and temporal properties of attentional selectivity for saccades and reaches Journal Article

Journal of Vision, 19 (9), pp. 1–19, 2019.

Abstract | Links | BibTeX

@article{Stewart2019bb,
title = {The spatial and temporal properties of attentional selectivity for saccades and reaches},
author = {Emma E M Stewart and Preeti Verghese and Anna Ma-Wyatt},
doi = {10.1167/19.9.12},
year = {2019},
date = {2019-08-01},
journal = {Journal of Vision},
volume = {19},
number = {9},
pages = {1--19},
publisher = {Association for Research in Vision and Ophthalmology (ARVO)},
abstract = {The preparation and execution of saccades and goal-directed movements elicit an accompanying shift in attention at the locus of the impending movement. However, some key aspects of the spatiotemporal profile of this attentional shift between eye and hand movements are not resolved. While there is evidence that attention is improved at the target location when making a reach, it is not clear how attention shifts over space and time around the movement target as a saccade and a reach are made to that target. Determining this spread of attention is an important aspect in understanding how attentional resources are used in relation to movement planning and guidance in real world tasks. We compared performance on a perceptual discrimination paradigm during a saccade-alone task, reach-alone task, and a saccade-plus-reach task to map the temporal profile of the premotor attentional shift at the goal of the movement and at three surrounding locations. We measured performance relative to a valid baseline level to determine whether motor planning induces additional attentional facilitation compared to mere covert attention. Sensitivity increased relative to movement onset at the target and at the surrounding locations, for both the saccade-alone and saccade-plus-reach conditions. The results suggest that the temporal profile of the attentional shift is similar for the two tasks involving saccades (saccade-alone and saccade-plus-reach tasks), but is very different when the influence of the saccade is removed. In this case, performance in the saccade-plus-reach task reflects the lower sensitivity observed when a reach-alone task is being conducted. In addition, the spatial profile of this spread of attention is not symmetrical around the target. This suggests that when a saccade and reach are being planned together, the saccade drives the attentional shift, and the reach alone carries little attentional weight.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/19.9.12

Matthew F Peterson; Ian Zaun; Harris Hoke; Guo Jiahui; Brad Duchaine; Nancy Kanwisher

Eye movements and retinotopic tuning in developmental prosopagnosia Journal Article

Journal of Vision, 19 (9), pp. 1–19, 2019.

Abstract | Links | BibTeX

@article{Peterson2019,
title = {Eye movements and retinotopic tuning in developmental prosopagnosia},
author = {Matthew F Peterson and Ian Zaun and Harris Hoke and Guo Jiahui and Brad Duchaine and Nancy Kanwisher},
doi = {10.1167/19.9.7},
year = {2019},
date = {2019-08-01},
journal = {Journal of Vision},
volume = {19},
number = {9},
pages = {1--19},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {Despite extensive investigation, the causes and nature of developmental prosopagnosia (DP)—a severe face identification impairment in the absence of acquired brain injury—remain poorly understood. Drawing on previous work showing that individuals identified as being neurotypical (NT) show robust individual differences in where they fixate on faces, and recognize faces best when the faces are presented at this location, we defined and tested four novel hypotheses for how atypical face-looking behavior and/or retinotopic face encoding could impair face recognition in DP: (a) fixating regions of poor information, (b) inconsistent saccadic targeting, (c) weak retinotopic tuning, and (d) fixating locations not matched to the individual's own face tuning. We found no support for the first three hypotheses, with NTs and DPs consistently fixating similar locations and showing similar retinotopic tuning of their face perception performance. However, in testing the fourth hypothesis, we found preliminary evidence for two distinct phenotypes of DP: (a) Subjects characterized by impaired face memory, typical face perception, and a preference to look high on the face, and (b) Subjects characterized by profound impairments to both face memory and perception and a preference to look very low on the face. Further, while all NTs and upper-looking DPs performed best when faces were presented near their preferred fixation location, this was not true for lower-looking DPs. These results suggest that face recognition deficits in a substantial proportion of people with DP may arise not from aberrant face gaze or compromised retinotopic tuning, but from the suboptimal matching of gaze to tuning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/19.9.7

Mathias Norqvist; Bert Jonsson; Johan Lithner

Eye-tracking data and mathematical tasks with focus on mathematical reasoning Journal Article

Data in Brief, 25 , pp. 1–8, 2019.

Abstract | BibTeX

@article{Norqvist2019,
title = {Eye-tracking data and mathematical tasks with focus on mathematical reasoning},
author = {Mathias Norqvist and Bert Jonsson and Johan Lithner},
year = {2019},
date = {2019-08-01},
journal = {Data in Brief},
volume = {25},
pages = {1--8},
publisher = {Elsevier},
abstract = {This data article contains eye-tracking data (i.e., dwell time and fixations), Z-transformed cognitive data (i.e., Raven's Advanced Progressive Matrices and Operation span), and practice and test scores from a study in mathematics education. This data is provided in a supplementary file. The method section describes the mathematics tasks used in the study. These mathematics tasks are of two kinds, with and without solution templates, to induce different types of mathematical reasoning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yang Liu

Visual search characteristics of precise map reading by orienteers Journal Article

PeerJ, 7 , pp. 1–15, 2019.

Abstract | Links | BibTeX

@article{Liu2019c,
title = {Visual search characteristics of precise map reading by orienteers},
author = {Yang Liu},
doi = {10.7717/peerj.7592},
year = {2019},
date = {2019-08-01},
journal = {PeerJ},
volume = {7},
pages = {1--15},
publisher = {PeerJ Inc.},
abstract = {This article compares the differences in eye movements between orienteers of different skill levels on map information searches and explores the visual search patterns of orienteers during precise map reading so as to explore the cognitive characteristics of orienteers' visual search. We recruited 44 orienteers at different skill levels (experts, advanced beginners, and novices), and recorded their behavioral responses and eye movement data when reading maps of different complexities. We found that the complexity of the map (complex vs. simple) affects the quality of orienteers' route planning during precise map reading. Specifically, when observing complex maps, orienteers of higher competency tend to have a better quality of route planning (i.e., a shorter route planning time, a longer gaze time, and a more concentrated distribution of gazes). Expert orienteers demonstrated obvious cognitive advantages in the ability to find key information. We also found that in the stage of route planning, expert orienteers and advanced beginners first paid attention to the checkpoint description table. The expert group extracted information faster, and their attention was more concentrated, whereas the novice group paid less attention to the checkpoint description table, and their gaze was scattered. We found that experts regarded the information in the checkpoint description table as the key to the problem and gave priority to this area in route decision making. These results advance our understanding of professional knowledge and problem solving in orienteering.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.7717/peerj.7592

Dong-Ho Lee; Sherryse L Corrow; Raika Pancaroglu; Jason J S Barton

The scanpaths of subjects with developmental prosopagnosia during a face memory task Journal Article

Brain Sciences, 9 , pp. 1–19, 2019.

Abstract | Links | BibTeX

@article{Lee2019a,
title = {The scanpaths of subjects with developmental prosopagnosia during a face memory task},
author = {Dong-Ho Lee and Sherryse L Corrow and Raika Pancaroglu and Jason J S Barton},
doi = {10.3390/brainsci9080188},
year = {2019},
date = {2019-08-01},
journal = {Brain Sciences},
volume = {9},
pages = {1--19},
publisher = {MDPI AG},
abstract = {The scanpaths of healthy subjects show biases towards the upper face, the eyes and the center of the face, which suggests that their fixations are guided by a feature hierarchy towards the regions most informative for face identification. However, subjects with developmental prosopagnosia have a lifelong impairment in face processing. Whether this is reflected in the loss of normal face-scanning strategies is not known. The goal of this study was to determine if subjects with developmental prosopagnosia showed anomalous scanning biases as they processed the identity of faces. We recorded the fixations of 10 subjects with developmental prosopagnosia as they performed a face memorization and recognition task, for comparison with 8 subjects with acquired prosopagnosia (four with anterior temporal lesions and four with occipitotemporal lesions) and 20 control subjects. The scanning of healthy subjects confirmed a bias to fixate the upper over the lower face, the eyes over the mouth, and the central over the peripheral face. Subjects with acquired prosopagnosia from occipitotemporal lesions had more dispersed fixations and a trend to fixate less informative facial regions. Subjects with developmental prosopagnosia did not differ from the controls. At a single-subject level, some developmental subjects performed abnormally, but none consistently across all metrics. Scanning distributions were not related to scores on perceptual or memory tests for faces. We conclude that despite lifelong difficulty with faces, subjects with developmental prosopagnosia still have an internal facial schema that guides their scanning behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/brainsci9080188

Jessica Klusek; Carly Moser; Joseph Schmidt; Leonard Abbeduto; Jane E Roberts

A novel eye‐tracking paradigm for indexing social avoidance‐related behavior in fragile X syndrome Journal Article

American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, pp. 1–12, 2019.

Abstract | Links | BibTeX

@article{Klusek2019,
title = {A novel eye‐tracking paradigm for indexing social avoidance‐related behavior in fragile X syndrome},
author = {Jessica Klusek and Carly Moser and Joseph Schmidt and Leonard Abbeduto and Jane E Roberts},
doi = {10.1002/ajmg.b.32757},
year = {2019},
date = {2019-08-01},
journal = {American Journal of Medical Genetics Part B: Neuropsychiatric Genetics},
pages = {1--12},
publisher = {Wiley},
abstract = {Fragile X syndrome (FXS) is characterized by hallmark features of gaze avoidance, reduced social approach, and social anxiety. The development of therapeutics to manage these symptoms has been hindered, in part, by the lack of sensitive outcome measures. This study investigated the utility of a novel eye-tracking paradigm for indexing social avoidance-related phenotypes. Adolescent/young adult-aged males with FXS (n = 24) and typical development (n = 23) participated in the study. Participants viewed faces displaying direct or averted gaze and the first fixation duration on the eyes was recorded as an index of initial stimulus registration. Fixation durations did not differ across the direction of gaze conditions in either group, although the control group showed longer initial fixations on the eyes relative to the FXS group. Shorter initial fixation on averted gaze in males with FXS was a robust predictor of the severity of their social avoidance behavior exhibited during a social greeting context, whereas parent-reported social avoidance symptoms were not related to performance in the semi-naturalistic context. This eye-tracking paradigm may represent a promising outcome measure for FXS clinical trials because it provides a quantitative index that closely maps onto core social avoidance phenotypes of FXS, can be completed in less than 20 min, and is suitable for use with individuals with low IQ.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/ajmg.b.32757

Taylor R Hayes; John M Henderson

Center bias outperforms image salience but not semantics in accounting for attention during scene viewing Journal Article

Attention, Perception, & Psychophysics, pp. 1–10, 2019.

Abstract | Links | BibTeX

@article{Hayes2019,
title = {Center bias outperforms image salience but not semantics in accounting for attention during scene viewing},
author = {Taylor R Hayes and John M Henderson},
doi = {10.3758/s13414-019-01849-7},
year = {2019},
date = {2019-08-01},
journal = {Attention, Perception, & Psychophysics},
pages = {1--10},
publisher = {Springer Science and Business Media LLC},
abstract = {How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is ‘pulled' to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743–747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R²) between scene fixation density and each image saliency model's center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-019-01849-7

Kun Guo; Zhihan Li; Yin Yan; Wu Li

Viewing heterospecific facial expressions: An eye-tracking study of human and monkey viewers Journal Article

Experimental Brain Research, 237 , pp. 2045–2059, 2019.

Abstract | Links | BibTeX

@article{Guo2019,
title = {Viewing heterospecific facial expressions: An eye-tracking study of human and monkey viewers},
author = {Kun Guo and Zhihan Li and Yin Yan and Wu Li},
doi = {10.1007/s00221-019-05574-3},
year = {2019},
date = {2019-08-01},
journal = {Experimental Brain Research},
volume = {237},
pages = {2045--2059},
publisher = {Springer Berlin Heidelberg},
abstract = {Common facial expressions of emotion have distinctive patterns of facial muscle movements that are culturally similar among humans, and perceiving these expressions is associated with stereotypical gaze allocation at local facial regions that are characteristic for each expression, such as eyes in angry faces. It is, however, unclear to what extent this ‘universality' view can be extended to process heterospecific facial expressions, and how the ‘social learning' process contributes to heterospecific expression perception. In this eye-tracking study, we examined face-viewing gaze allocation of human (including dog owners and non-dog owners) and monkey observers while exploring expressive human, chimpanzee, monkey and dog faces (positive, neutral and negative expressions in human and dog faces; neutral and negative expressions in chimpanzee and monkey faces). Human observers showed species- and experience-dependent expression categorization accuracy. Furthermore, both human and monkey observers demonstrated different face-viewing gaze distributions which were also species dependent. Specifically, humans predominantly attended to human eyes but animal mouths when judging facial expressions. Monkeys' gaze distributions in exploring human and monkey faces were qualitatively different from exploring chimpanzee and dog faces. Interestingly, the gaze behaviour of both human and monkey observers was further affected by their prior experience of the viewed species. It seems that facial expression processing is species dependent, and social learning may play a significant role in discriminating even rudimentary types of heterospecific expressions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-019-05574-3

Rebecca M Foerster; Werner X Schneider

Task-irrelevant features in visual working memory influence covert attention: Evidence from a partial report task Journal Article

Vision, 3 , pp. 1–14, 2019.

Abstract | Links | BibTeX

@article{Foerster2019a,
title = {Task-irrelevant features in visual working memory influence covert attention: Evidence from a partial report task},
author = {Rebecca M Foerster and Werner X Schneider},
doi = {10.3390/vision3030042},
year = {2019},
date = {2019-08-01},
journal = {Vision},
volume = {3},
pages = {1--14},
publisher = {MDPI AG},
abstract = {Selecting a target based on a representation in visual working memory (VWM) affords biasing covert attention towards objects with memory-matching features. Recently, we showed that even task-irrelevant features of a VWM template bias attention. Specifically, when participants had to saccade to a cued shape, distractors sharing the cue's search-irrelevant color captured the eyes. While a saccade always aims at one target location, multiple locations can be attended covertly. Here, we investigated whether covert attention is captured similarly to the eyes. In our partial report task, each trial started with a shape-defined search cue, followed by a fixation cross. Next, two colored shapes, each including a letter, appeared left and right from fixation, followed by masks. The letter inside that shape matching the preceding cue had to be reported. In Experiment 1, either target, distractor, both, or no object matched the cue's irrelevant color. Target-letter reports were most frequent in target-match trials and least frequent in distractor-match trials. Irrelevant cue and target color never matched in Experiment 2. Still, participants reported the distractor more often to the target's disadvantage, when cue and distractor color matched. Thus, irrelevant features of a VWM template can influence covert attention in an involuntary, object-based manner when searching for trial-wise varying targets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/vision3030042

Peter Vincent; Thomas Parr; David Benrimoh; Karl J Friston

With an eye on uncertainty: Modelling pupillary responses to environmental volatility Journal Article

PLOS Computational Biology, 15 (7), pp. 1–22, 2019.

Abstract | Links | BibTeX

@article{Vincent2019,
title = {With an eye on uncertainty: Modelling pupillary responses to environmental volatility},
author = {Peter Vincent and Thomas Parr and David Benrimoh and Karl J Friston},
editor = {Adrian M Haith},
doi = {10.1371/journal.pcbi.1007126},
year = {2019},
date = {2019-07-01},
journal = {PLOS Computational Biology},
volume = {15},
number = {7},
pages = {1--22},
publisher = {Public Library of Science},
abstract = {Living creatures must accurately infer the nature of their environments. They do this despite being confronted by stochastic and context sensitive contingencies—and so must constantly update their beliefs regarding their uncertainty about what might come next. In this work, we examine how we deal with uncertainty that evolves over time. This prospective uncertainty (or imprecision) is referred to as volatility and has previously been linked to noradrenergic signals that originate in the locus coeruleus. Using pupillary dilatation as a measure of central noradrenergic signalling, we tested the hypothesis that changes in pupil diameter reflect inferences humans make about environmental volatility. To do so, we collected pupillometry data from participants presented with a stream of numbers. We generated these numbers from a process with varying degrees of volatility. By measuring pupillary dilatation in response to these stimuli—and simulating the inferences made by an ideal Bayesian observer of the same stimuli—we demonstrate that humans update their beliefs about environmental contingencies in a Bayes optimal way. We show this by comparing general linear (convolution) models that formalised competing hypotheses about the causes of pupillary changes. We found greater evidence for models that included Bayes optimal estimates of volatility than those without. We additionally explore the interaction between different causes of pupil dilation and suggest a quantitative approach to characterising a person's prior beliefs about volatility.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pcbi.1007126

Florian Sandhaeger; Constantin von Nicolai; Earl K Miller; Markus Siegel

Monkey EEG links neuronal color and motion information across species and scales Journal Article

eLife, 8 , pp. 1–21, 2019.

Abstract | Links | BibTeX

@article{Sandhaeger2019,
title = {Monkey EEG links neuronal color and motion information across species and scales},
author = {Florian Sandhaeger and Constantin von Nicolai and Earl K Miller and Markus Siegel},
doi = {10.7554/elife.45645},
year = {2019},
date = {2019-07-01},
journal = {eLife},
volume = {8},
pages = {1--21},
publisher = {eLife Sciences Publications, Ltd},
abstract = {It remains challenging to relate EEG and MEG to underlying circuit processes and comparable experiments on both spatial scales are rare. To close this gap between invasive and non-invasive electrophysiology we developed and recorded human-comparable EEG in macaque monkeys during visual stimulation with colored dynamic random dot patterns. Furthermore, we performed simultaneous microelectrode recordings from 6 areas of macaque cortex and human MEG. Motion direction and color information were accessible in all signals. Tuning of the non-invasive signals was similar to V4 and IT, but not to dorsal and frontal areas. Thus, MEG and EEG were dominated by early visual and ventral stream sources. Source level analysis revealed corresponding information and latency gradients across cortex. We show how information-based methods and monkey EEG can identify analogous properties of visual processing in signals spanning spatial scales from single units to MEG – a valuable framework for relating human and animal studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
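
Method sketch: "information-based methods" here means decoding stimulus information (e.g., motion direction or color) from multichannel signals. Below is a minimal cross-validated decoding example with synthetic data standing in for EEG/MEG/spiking features; the paper's actual pipeline is not reproduced, and all names and numbers are illustrative.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 32
labels = rng.integers(0, 2, n_trials)            # e.g., two motion directions
signals = rng.standard_normal((n_trials, n_channels))
signals[labels == 1, :4] += 0.8                  # weak stimulus information in a few channels

# Cross-validated decoding accuracy; time-resolved analyses repeat this per time point
scores = cross_val_score(LinearDiscriminantAnalysis(), signals, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")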


Jessica Robin; Rosanna K Olsen

Scenes facilitate associative memory and integration Journal Article

Learning & Memory, 26 (7), pp. 252–261, 2019.

Abstract | Links | BibTeX

@article{Robin2019,
title = {Scenes facilitate associative memory and integration},
author = {Jessica Robin and Rosanna K Olsen},
doi = {10.1101/lm.049486.119},
year = {2019},
date = {2019-07-01},
journal = {Learning & Memory},
volume = {26},
number = {7},
pages = {252--261},
publisher = {Cold Spring Harbor Laboratory Press},
abstract = {How do we form mental links between related items? Forming associations between representations is a key feature of episodic memory and provides the foundation for learning and guiding behavior. Theories suggest that spatial context plays a supportive role in episodic memory, providing a scaffold on which to form associations, but this has mostly been tested in the context of autobiographical memory. We examined the memory boosting effect of spatial stimuli in memory using an associative inference paradigm combined with eye-tracking. Across two experiments, we found that memory was better for associations that included scenes, even indirectly, compared to objects and faces. Eye-tracking measures indicated that these effects may be partly mediated by greater fixations to scenes compared to objects, but did not explain the differences between scenes and faces. These results suggest that scenes facilitate associative memory and integration across memories, demonstrating evidence in support of theories of scenes as a spatial scaffold for episodic memory. A shared spatial context may promote learning and could potentially be leveraged to improve learning and memory in educational settings or for memory-impaired populations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Charlotte R Pennington; Adam W Qureshi; Rebecca L Monk; Katie Greenwood; Derek Heim

Beer? Over here! Examining attentional bias towards alcoholic and appetitive stimuli in a visual search eye-tracking task Journal Article

Psychopharmacology, 236 , pp. 3465–3476, 2019.

Abstract | Links | BibTeX

@article{Pennington2019b,
title = {Beer? Over here! Examining attentional bias towards alcoholic and appetitive stimuli in a visual search eye-tracking task},
author = {Charlotte R Pennington and Adam W Qureshi and Rebecca L Monk and Katie Greenwood and Derek Heim},
doi = {10.1007/s00213-019-05313-0},
year = {2019},
date = {2019-07-01},
journal = {Psychopharmacology},
volume = {236},
pages = {3465--3476},
abstract = {RATIONALE: Experimental tasks that demonstrate alcohol-related attentional bias typically expose participants to single-stimulus targets (e.g. addiction Stroop, visual probe, anti-saccade task), which may not correspond fully with real-world contexts where alcoholic and non-alcoholic cues simultaneously compete for attention. Moreover, alcoholic stimuli are rarely matched to other appetitive non-alcoholic stimuli. OBJECTIVES: To address these limitations by utilising a conjunction search eye-tracking task and matched stimuli to examine alcohol-related attentional bias. METHODS: Thirty social drinkers (Mage = 19.87},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kristin Koller; Christopher M Hatton; Robert D Rogers; Robert D Rafal

Stria terminalis microstructure in humans predicts variability in orienting towards threat Journal Article

European Journal of Neuroscience, 50 , pp. 3804–3813, 2019.

Abstract | Links | BibTeX

@article{Koller2019,
title = {Stria terminalis microstructure in humans predicts variability in orienting towards threat},
author = {Kristin Koller and Christopher M Hatton and Robert D Rogers and Robert D Rafal},
doi = {10.1111/ejn.14504},
year = {2019},
date = {2019-07-01},
journal = {European Journal of Neuroscience},
volume = {50},
pages = {3804--3813},
publisher = {Wiley},
abstract = {Current concepts of the extended amygdala posit that basolateral to central amygdala projections mediate fear-conditioned autonomic alerting, whereas projections to the bed nucleus of the stria terminalis mediate sustained anxiety. Using diffusion tensor imaging tractography in humans, we show that microstructure of the stria terminalis correlates with an orienting bias towards threat in a saccade decision task, providing the first evidence that this circuit supports decisions guiding evaluation of threatening stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Gernot Horstmann; Daniel Ernst; Stefanie Becker

Dwelling on distractors varying in target-distractor similarity Journal Article

Acta Psychologica, 198 , pp. 1–10, 2019.

Abstract | Links | BibTeX

@article{Horstmann2019,
title = {Dwelling on distractors varying in target-distractor similarity},
author = {Gernot Horstmann and Daniel Ernst and Stefanie Becker},
doi = {10.1016/J.CELREP.2019.05.072},
year = {2019},
date = {2019-07-01},
journal = {Acta Psychologica},
volume = {198},
pages = {1--10},
publisher = {North-Holland},
abstract = {Present day models of visual search focus on explaining search efficiency by visual guidance: The target guides attention to the target's position better in more efficient than in less efficient search. The time spent processing the distractor, however, is set to a constant in these models. In contrast to this assumption, recent studies found that dwelling on distractors is longer in more inefficient search. Previous experiments in support of this contention all presented the same distractors across all conditions, while varying the targets. While this procedure has its virtues, it confounds the manipulation of search efficiency with target type. Here we use the same targets over the entire experiment, while varying search efficiency by presenting different types of distractors. Eye fixation behavior was used to infer the amount of distractor dwelling, skipping, and revisiting. The results replicate previous results, with similarity affecting dwelling, and dwelling in turn affecting search performance. A regression analysis confirmed that variations in dwelling account for a large amount of variance in search speed, and that the similarity effect in dwelling accounts for the similarity effect in overall search performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
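
Method sketch: the regression linking distractor dwelling to overall search speed is a simple per-condition fit. A toy version with synthetic dwell times and search times (illustrative numbers only, not the authors' data):

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
# Synthetic condition means: distractor dwell time (ms) and total search time (ms)
dwell = rng.uniform(80, 300, size=24)
search_rt = 400 + 3.0 * dwell + rng.normal(0, 60, size=24)   # dwelling drives search speed here

fit = linregress(dwell, search_rt)
print(f"slope = {fit.slope:.2f} ms/ms, R^2 = {fit.rvalue ** 2:.2f}")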


Alexander Goettker; Doris I Braun; Karl R Gegenfurtner

Dynamic combination of position and motion information when tracking moving targets Journal Article

Journal of Vision, 19 (7), pp. 1–22, 2019.

Abstract | Links | BibTeX

@article{Goettker2019a,
title = {Dynamic combination of position and motion information when tracking moving targets},
author = {Alexander Goettker and Doris I Braun and Karl R Gegenfurtner},
doi = {10.1167/19.7.2},
year = {2019},
date = {2019-07-01},
journal = {Journal of Vision},
volume = {19},
number = {7},
pages = {1--22},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {To accurately foveate a moving target, the oculomotor system needs to estimate the position of the target at the saccade end, based on information about its position and ongoing movement, while accounting for neuronal delays and execution time of the saccade. We investigated human interceptive saccades and pursuit responses to moving targets defined by high and low luminance contrast or by chromatic contrast only (isoluminance). We used step-ramps with perpendicular directions between vertical target steps of 10 deg and horizontal ramps of 2.5 to 20 deg/s to separate errors with respect to the position step of the target in the vertical dimension, and errors related to target motion in the horizontal dimension. Interceptive saccades to targets of high and low luminance contrast landed close to the actual target positions, suggesting relatively accurate estimates of the amount of target displacement. Interceptive saccades to isoluminant targets were less accurate. They landed at positions the target had on average 100 ms before saccade onset. One account of this finding is that the integration of target motion is compromised for isoluminant targets moving in the periphery. In this case, the oculomotor system can use an accurate, but delayed position component, but cannot account for target movement. This deficit was also present for the postsaccadic pursuit speed. For the two luminance conditions, pursuit direction and speed were adjusted depending on the saccadic landing position. The rapid postsaccadic pursuit adjustments suggest shared position- and motion-related signals of target and eye for saccade and pursuit control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
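
Method sketch: the perpendicular step-ramp design lets position and motion errors be read off separate axes of the saccade landing point. A geometric illustration under assumed latency, saccade-duration, and ramp-speed values (all parameters here are illustrative, not the paper's):

import numpy as np

def landing_errors(landing_xy, step_deg=10.0, ramp_speed=5.0,
                   latency=0.20, saccade_duration=0.05):
    # Time of saccade offset relative to target onset (s)
    t_end = latency + saccade_duration
    # Target position at saccade end: the ramp has carried it horizontally
    # while the step fixed its vertical position
    target_x = ramp_speed * t_end
    target_y = step_deg
    dx = landing_xy[0] - target_x    # motion-related error (horizontal)
    dy = landing_xy[1] - target_y    # position-related error (vertical)
    return dx, dy

# A saccade that tracked position well but lagged the ramp by ~100 ms of motion
dx, dy = landing_errors(landing_xy=(0.75, 9.9))
print(f"motion error: {dx:+.2f} deg, position error: {dy:+.2f} deg")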


Sanuji Gajamange; Annie Shelton; Meaghan Clough; Owen White; Joanne Fielding; Scott Kolbe

Functional correlates of cognitive dysfunction in clinically isolated syndromes Journal Article

PLOS ONE, 14 (7), pp. 1–13, 2019.

Abstract | Links | BibTeX

@article{Gajamange2019,
title = {Functional correlates of cognitive dysfunction in clinically isolated syndromes},
author = {Sanuji Gajamange and Annie Shelton and Meaghan Clough and Owen White and Joanne Fielding and Scott Kolbe},
editor = {Friedemann Paul},
doi = {10.1371/journal.pone.0219590},
year = {2019},
date = {2019-07-01},
journal = {PLOS ONE},
volume = {14},
number = {7},
pages = {1--13},
publisher = {Public Library of Science},
abstract = {Cognitive dysfunction can be identified in patients with clinically isolated syndromes suggestive of multiple sclerosis using ocular motor testing. This study aimed to identify the functional neural correlates of cognitive dysfunction in patients with clinically isolated syndrome using MRI. Eighteen patients with clinically isolated syndrome and 17 healthy controls were recruited. Subjects underwent standard neurological and neuropsychological testing. Subjects also underwent functional MRI (fMRI) during a cognitive ocular motor task, involving pro-saccade (direct gaze towards target) and anti-saccade (direct gaze away from target) trials. Ocular motor performance variables (averaged response time and error rate) were calculated for each subject. Patients showed a trend towards a greater rate of anti-saccade errors (p = 0.09) compared to controls. Compared to controls, patients exhibited increased activation in the right postcentral, right supramarginal gyrus, and the right parietal operculum during the anti-saccade > pro-saccade contrast. This study demonstrated that changes in functional organisation of cognitive brain networks are associated with subtle cognitive changes in patients with clinically isolated syndrome.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Guillaume Doucet; Roberto A Gulli; Benjamin W Corrigan; Lyndon R Duong; Julio C Martinez‐Trujillo

Modulation of local field potentials and neuronal activity in primate hippocampus during saccades Journal Article

Hippocampus, pp. 1–18, 2019.

Abstract | Links | BibTeX

@article{Doucet2019,
title = {Modulation of local field potentials and neuronal activity in primate hippocampus during saccades},
author = {Guillaume Doucet and Roberto A Gulli and Benjamin W Corrigan and Lyndon R Duong and Julio C Martinez‐Trujillo},
doi = {10.1002/hipo.23140},
year = {2019},
date = {2019-07-01},
journal = {Hippocampus},
pages = {1--18},
publisher = {Wiley},
abstract = {Primates use saccades to gather information about objects and their relative spatial arrangement, a process essential for visual perception and memory. It has been proposed that signals linked to saccades reset the phase of local field potential (LFP) oscillations in the hippocampus, providing a temporal window for visual signals to activate neurons in this region and influence memory formation. We investigated this issue by measuring hippocampal LFPs and spikes in two macaques performing different tasks with unconstrained eye movements. We found that LFP phase clustering (PC) in the alpha/beta (8-16 Hz) frequencies followed foveation onsets, while PC in frequencies lower than 8 Hz followed spontaneous saccades, even on a homogeneous background. Saccades to a solid grey background were not followed by increases in local neuronal firing, whereas saccades toward appearing visual stimuli were. Finally, saccade parameters correlated with LFP phase and amplitude: saccade direction correlated with delta (≤4 Hz) phase, and saccade amplitude with theta (4-8 Hz) power. Our results suggest that signals linked to saccades reach the hippocampus, producing synchronization of delta/theta LFPs without a general activation of local neurons. Moreover, some visual inputs co-occurring with saccades produce LFP synchronization in the alpha/beta bands and elevated neuronal firing. Our findings support the hypothesis that saccade-related signals enact sensory input-dependent plasticity and therefore memory formation in the primate hippocampus.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
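
Method sketch: the phase clustering (PC) measure is inter-trial phase clustering: band-pass the LFP, extract the analytic phase at each time point, and average unit phasors across saccade-aligned trials. A minimal version on synthetic trials with a phase reset at the event; the band edges, filter order, and data here are illustrative assumptions, not the authors' pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_clustering(trials, fs, band):
    # Band-pass each trial, take the analytic phase, then average unit phasors
    # across trials; the magnitude (0..1) is the inter-trial phase clustering
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    return np.abs(np.exp(1j * phases).mean(axis=0))

fs = 1000
t = np.arange(-0.5, 0.5, 1 / fs)
rng = np.random.default_rng(3)
# Synthetic saccade-aligned trials: a 12 Hz oscillation whose phase resets at t = 0
trials = np.array([np.sin(2 * np.pi * 12 * np.clip(t, 0, None))
                   + rng.standard_normal(t.size) for _ in range(60)])

pc = phase_clustering(trials, fs, band=(8, 16))
half = t.size // 2
print("PC before saccade:", pc[:half].mean().round(2))
print("PC after saccade: ", pc[half:].mean().round(2))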


Hans A Trukenbrod; Simon Barthelmé; Felix A Wichmann; Ralf Engbert

Spatial statistics for gaze patterns in scene viewing: Effects of repeated viewing Journal Article

Journal of Vision, 19 (6), pp. 1–19, 2019.

Abstract | Links | BibTeX

@article{Trukenbrod2019,
title = {Spatial statistics for gaze patterns in scene viewing: Effects of repeated viewing},
author = {Hans A Trukenbrod and Simon Barthelmé and Felix A Wichmann and Ralf Engbert},
doi = {10.1167/19.6.5},
year = {2019},
date = {2019-06-01},
journal = {Journal of Vision},
volume = {19},
number = {6},
pages = {1--19},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {Scene viewing is used to study attentional selection in complex but still controlled environments. One of the main observations on eye movements during scene viewing is the inhomogeneous distribution of fixation locations: While some parts of an image are fixated by almost all observers and are inspected repeatedly by the same observer, other image parts remain unfixated by observers even after long exploration intervals. Here, we apply spatial point process methods to investigate the relationship between pairs of fixations. More precisely, we use the pair correlation function, a powerful statistical tool, to evaluate dependencies between fixation locations along individual scanpaths. We demonstrate that aggregation of fixation locations within 4° is stronger than expected from chance. Furthermore, the pair correlation function reveals stronger aggregation of fixations when the same image is presented a second time. We use simulations of a dynamical model to show that a narrower spatial attentional span may explain differences in pair correlations between the first and the second inspection of the same image.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
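
Method sketch: the pair correlation function g(r) compares the observed density of inter-fixation distances with that expected from a homogeneous point process; g(r) > 1 indicates aggregation at distance r. The crude estimator below omits the edge corrections a full spatial-statistics treatment (e.g., spatstat-style) would apply, and the clustered "fixations" are synthetic.

import numpy as np

def pair_correlation(points, area, r_edges):
    # Observed count of inter-point distances per annulus, divided by the
    # count expected under complete spatial randomness (no edge correction)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]
    counts, _ = np.histogram(d, bins=r_edges)
    annulus = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    expected = n * (n - 1) / 2 * annulus / area
    return 0.5 * (r_edges[:-1] + r_edges[1:]), counts / expected

rng = np.random.default_rng(4)
# Clustered "fixations": points scattered around a handful of hotspots
centers = rng.uniform(0, 30, size=(5, 2))
fixations = centers[rng.integers(0, 5, size=300)] + rng.normal(0, 1.5, size=(300, 2))

r, g = pair_correlation(fixations, area=30 * 30, r_edges=np.linspace(0.5, 10, 20))
print("g(r) at the shortest distances:", g[:3].round(2), "(> 1 means aggregation)")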


Emma E M Stewart; Alexander C Schütz

Transsaccadic integration is dominated by early, independent noise Journal Article

Journal of Vision, 19 (6), pp. 1–19, 2019.

Abstract | Links | BibTeX

@article{Stewart2019b,
title = {Transsaccadic integration is dominated by early, independent noise},
author = {Emma E M Stewart and Alexander C Schütz},
doi = {10.1167/19.6.17},
year = {2019},
date = {2019-06-01},
journal = {Journal of Vision},
volume = {19},
number = {6},
pages = {1--19},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {Humans are able to integrate pre- and postsaccadic percepts of an object across saccades to maintain perceptual stability. Previous studies have used Maximum Likelihood Estimation (MLE) to determine that integration occurs in a near-optimal manner. Here, we compared three different models to investigate the mechanism of integration in more detail: an early noise model, where noise is added to the pre- and postsaccadic signals before integration occurs; a late-noise model, where noise is added to the integrated signal after integration occurs; and a temporal summation model, where integration benefits arise from the longer transsaccadic presentation duration compared to pre- and postsaccadic presentation only. We also measured spatiotemporal aspects of integration to determine whether integration can occur for very brief stimulus durations, across two hemifields, and in spatiotopic and retinotopic coordinates. Pre-, post-, and transsaccadic performance was measured at different stimulus presentation durations, both at the saccade target and a location where the pre- and postsaccadic stimuli were presented in different hemifields across the saccade. Results showed that for both within- and between-hemifields conditions, integration could occur when pre- and postsaccadic stimuli were presented only briefly, and that the pattern of integration followed an early noise model. Whereas integration occurred when the pre- and post-saccadic stimuli were presented in the same spatiotopic coordinates, there was no integration when they were presented in the same retinotopic coordinates. This contrast suggests that transsaccadic integration is limited by early, independent, sensory noise acting separately on pre- and postsaccadic signals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
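
Method sketch: the MLE benchmark used in such studies is reliability-weighted averaging of the pre- and postsaccadic estimates. A worked example of the standard formulas (the paper's specific early/late noise-model fits go beyond this; the SD values are illustrative):

import numpy as np

def mle_integration(sigma_pre, sigma_post):
    # Reliability-weighted combination of pre- and postsaccadic estimates
    w_pre = sigma_post ** 2 / (sigma_pre ** 2 + sigma_post ** 2)
    sigma_int = np.sqrt(sigma_pre ** 2 * sigma_post ** 2
                        / (sigma_pre ** 2 + sigma_post ** 2))
    return w_pre, sigma_int

w_pre, sigma_int = mle_integration(sigma_pre=2.0, sigma_post=1.5)
print(f"presaccadic weight: {w_pre:.2f}")
print(f"predicted integrated SD: {sigma_int:.2f} (below both 2.0 and 1.5)")
# Early independent noise on each input leaves this prediction intact, whereas
# late noise added after combination would inflate the integrated SD instead.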


Andrea F M Petrella; Glen Belfry; Matthew Heath

Older adults elicit a single-bout post-exercise executive benefit across a continuum of aerobically supported metabolic intensities Journal Article

Brain Research, 1712 , pp. 197–206, 2019.

Abstract | Links | BibTeX

@article{Petrella2019,
title = {Older adults elicit a single-bout post-exercise executive benefit across a continuum of aerobically supported metabolic intensities},
author = {Andrea F M Petrella and Glen Belfry and Matthew Heath},
doi = {10.1016/J.BRAINRES.2019.02.009},
year = {2019},
date = {2019-06-01},
journal = {Brain Research},
volume = {1712},
pages = {197--206},
publisher = {Elsevier},
abstract = {Ten minutes of aerobic or resistance training can 'boost' executive function in older adults. Here, we examined whether the magnitude of the exercise benefit is influenced by exercise intensity. Older adults (N = 17: mean age = 73 years) completed a volitional test to exhaustion (VO2peak) via treadmill to determine participant-specific moderate (80% of lactate threshold (LT)), heavy (15% of the difference between LT and VO2peak) and very-heavy (50% of the difference between LT and VO2peak) exercise intensities. Subsequently, in separate sessions all participants completed 10-min constant load single-bouts of exercise at each intensity. Pre- and post-exercise executive function were examined via the antisaccade task. Antisaccades require a saccade mirror-symmetrical to a target and extensive evidence has shown that antisaccades are supported via frontoparietal networks that demonstrate task-dependent changes following single-bout and chronic exercise. We also included a non-executive task (saccade to veridical target location; i.e., prosaccade) to determine whether a putative post-exercise benefit is specific to executive-related oculomotor control. Results showed that VO2 and psychological ratings of perceived exertion concurrently increased with increasing exercise intensity. As well, antisaccade reaction times showed a 24 ms (i.e., 8%) reduction from pre- to post-exercise assessments (p < .001), whereas prosaccade values did not (p = .19). Most notably, the post-exercise change in antisaccade RTs did not reliably vary with exercise intensity. Further, for each exercise intensity participants' cardiorespiratory fitness level was unrelated to the magnitude of the post-exercise executive benefit (ps > .13). Accordingly, an exercise duration as brief as 10-min provides a selective benefit to executive function in older adults across the continuum of moderate to very-heavy intensities.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Effie J Pereira; Elina Birmingham; Jelena Ristic

Contextually-based social attention diverges across covert and overt measures Journal Article

Vision, 3 , pp. 1–19, 2019.

Abstract | Links | BibTeX

@article{Pereira2019b,
title = {Contextually-based social attention diverges across covert and overt measures},
author = {Effie J Pereira and Elina Birmingham and Jelena Ristic},
doi = {10.3390/vision3020029},
year = {2019},
date = {2019-06-01},
journal = {Vision},
volume = {3},
pages = {1--19},
publisher = {MDPI AG},
abstract = {Humans spontaneously attend to social cues like faces and eyes. However, recent data show that this behavior is significantly weakened when visual content, such as luminance and configuration of internal features, as well as visual context, such as background and facial expression, are controlled. Here, we investigated attentional biasing elicited in response to information presented within appropriate background contexts. Using a dot-probe task, participants were presented with a face–house cue pair, with a person sitting in a room and a house positioned within a picture hanging on a wall. A response target occurred at the previous location of the eyes, mouth, top of the house, or bottom of the house. Experiment 1 measured covert attention by assessing manual responses while participants maintained central fixation. Experiment 2 measured overt attention by assessing eye movements using an eye tracker. The data from both experiments indicated no evidence of spontaneous attentional biasing towards faces or facial features in manual responses; however, an infrequent, though reliable, overt bias towards the eyes of faces emerged. Together, these findings suggest that contextually-based social information does not determine spontaneous social attentional biasing in manual measures, although it may act to facilitate oculomotor behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Megan H Papesh; Juan D Guevara Pinto

Spotting rare items makes the brain “blink” harder: Evidence from pupillometry Journal Article

Attention, Perception, & Psychophysics, 81 (8), pp. 2635–2647, 2019.

Abstract | Links | BibTeX

@article{Papesh2019,
title = {Spotting rare items makes the brain “blink” harder: Evidence from pupillometry},
author = {Megan H Papesh and Juan D {Guevara Pinto}},
doi = {10.3758/s13414-019-01777-6},
year = {2019},
date = {2019-06-01},
journal = {Attention, Perception, & Psychophysics},
volume = {81},
number = {8},
pages = {2635--2647},
publisher = {Springer Science and Business Media LLC},
abstract = {In many visual search tasks (e.g., cancer screening, airport baggage inspections), the most serious search targets occur infrequently. As an ironic side effect, when observers finally encounter important objects (e.g., a weapon in baggage), they often fail to notice them, a phenomenon known as the low-prevalence effect (LPE). Although many studies have investigated LPE search errors, we investigated the attentional consequences of successful rare target detection. Using an attentional blink paradigm, we manipulated how often observers encountered the first serial target (T1), then measured its effects on their ability to detect a following target (T2). Across two experiments, we show that the LPE is more than just an inflated miss rate: When observers successfully detected rare targets, they were less likely to spot subsequent targets. Using pupillometry to index locus-coeruleus (LC) mediated attentional engagement, Experiment 2 confirmed that an LC refractory period mediates the attentional blink (Nieuwenhuis, Gilzenrat, Holmes, & Cohen, 2005, Journal of Experimental Psychology: General, 134[3], 291–307), and that these effects emerge relatively quickly following T1 onset. Moreover, in both behavioral and pupil analyses, we found that detecting low-prevalence targets exacerbates the LC refractory period. Consequences for theories of the LPE are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Denise Baumeler; Sabine Born

Vertical and horizontal meridian modulations suggest areas with quadrantic representations as neural locus of the attentional repulsion effect Journal Article

Journal of Vision, 19 (6), pp. 1–16, 2019.

Abstract | Links | BibTeX

@article{Baumeler2019,
title = {Vertical and horizontal meridian modulations suggest areas with quadrantic representations as neural locus of the attentional repulsion effect},
author = {Denise Baumeler and Sabine Born},
doi = {10.1167/19.6.15},
year = {2019},
date = {2019-06-01},
journal = {Journal of Vision},
volume = {19},
number = {6},
pages = {1--16},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {The attentional repulsion effect (ARE) is a perceptual bias attributed to a covert shift of attention toward a peripheral cue, which, in turn, repulses the perceived position of a subsequently presented probe (Suzuki & Cavanagh, 1997). So far, probes were mainly presented around the vertical meridian. Other studies of perceptual biases reported disruptions when stimuli were presented across the vertical meridian. These disruptions were explained by separate representations of the left and right visual hemifields, projecting to opposite anatomical hemispheres. As the ARE is typically examined through two-alternative, forced-choice tasks in which the estimation of the probe's position is based on the cue's effectiveness to repulse the probe across the vertical meridian, no such asymmetry has been reported. To test for similar meridian disruptions in the ARE, we collected absolute estimations (computer mouse responses) of the perceived probe positions (Experiment 1a). As absolute estimations of memorized positions are associated with overestimated distances in reproduction, results had to be compared to a no-cue baseline condition (Experiment 1b). Through this new methodological approach, we found the ARE to be strongest when the attentional capturing cue and the subsequently presented probe were displayed in the same hemifield (Experiment 2a). In a further experiment (Experiment 2b), we observed that the ARE is not only disrupted at the vertical, but also at the horizontal meridian. These disruptions at both meridians suggest the involvement of visual neural areas with quadrantic representations, such as V2 and/or V3 in the generation of the ARE.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Aarit Ahuja; David L Sheinberg

Behavioral and oculomotor evidence for visual simulation of object movement Journal Article

Journal of Vision, 19 (6), pp. 1–17, 2019.

Abstract | Links | BibTeX

@article{Ahuja2019,
title = {Behavioral and oculomotor evidence for visual simulation of object movement},
author = {Aarit Ahuja and David L Sheinberg},
doi = {10.1167/19.6.13},
year = {2019},
date = {2019-06-01},
journal = {Journal of Vision},
volume = {19},
number = {6},
pages = {1--17},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {We regularly interact with moving objects in our environment. Yet, little is known about how we extrapolate the future movements of visually perceived objects. One possibility is that movements are experienced by a mental visual simulation, allowing one to internally picture an object's upcoming motion trajectory, even as the object itself remains stationary. Here we examined this possibility by asking human participants to make judgments about the future position of a falling ball on an obstacle-filled display. We found that properties of the ball's trajectory were highly predictive of subjects' reaction times and accuracy on the task. We also found that the eye movements subjects made while attempting to ascertain where the ball might fall had significant spatiotemporal overlap with those made while actually perceiving the ball fall. These findings suggest that subjects simulated the ball's trajectory to inform their responses. Finally, we trained a convolutional neural network to see whether this problem could be solved by simple image analysis as opposed to the more intricate simulation strategy we propose. We found that while the network was able to solve our task, the model's output did not effectively or consistently predict human behavior. This implies that subjects employed a different strategy for solving our task, and bolsters the conclusion that they were engaging in visual simulation. The current study thus provides support for visual simulation of motion as a means of understanding complex visual scenes and paves the way for future investigations of this phenomenon at a neural level.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
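
Method sketch: a convolutional network for this kind of task maps a static image of the obstacle display to a predicted landing location. The PyTorch skeleton below is purely illustrative; the paper's architecture, input format, and training procedure are not specified here, and every name and dimension is an assumption.

import torch
import torch.nn as nn

class BallDropNet(nn.Module):
    # Maps a grayscale image of the obstacle display to a distribution over
    # horizontal landing-position bins
    def __init__(self, n_bins=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_bins)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = BallDropNet()
logits = net(torch.randn(2, 1, 96, 96))   # batch of two 96x96 displays
print(logits.shape)                        # torch.Size([2, 8])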


Justin Riddle; Kai Hwang; Dillan Cellier; Sofia Dhanani; Mark D'Esposito

Causal evidence for the role of neuronal oscillations in top–down and bottom–up attention Journal Article

Journal of Cognitive Neuroscience, 31 , pp. 768–779, 2019.

Abstract | Links | BibTeX

@article{Riddle2019,
title = {Causal evidence for the role of neuronal oscillations in top–down and bottom–up attention},
author = {Justin Riddle and Kai Hwang and Dillan Cellier and Sofia Dhanani and Mark D'Esposito},
doi = {10.1162/jocn_a_01376},
year = {2019},
date = {2019-05-01},
journal = {Journal of Cognitive Neuroscience},
volume = {31},
pages = {768--779},
abstract = {Beta and gamma frequency neuronal oscillations have been implicated in top–down and bottom–up attention. In this study, we used rhythmic TMS to modulate ongoing beta and gamma frequency neuronal oscillations in frontal and parietal cortex while human participants performed a visual search task that manipulates bottom–up and top–down attention (single feature and conjunction search). Both task conditions will engage bottom–up attention processes, although the conjunction search condition will require more top–down attention. Gamma frequency TMS to superior precentral sulcus (sPCS) slowed saccadic RTs during both task conditions and induced a response bias to the contralateral visual field. In contrast, beta frequency TMS to sPCS and intraparietal sulcus decreased search accuracy only during the conjunction search condition that engaged more top–down attention. Furthermore, beta frequency TMS increased trial errors specifically when the target was in the ipsilateral visual field for the conjunction search condition. These results indicate that beta frequency TMS to sPCS and intraparietal sulcus disrupted top–down attention, whereas gamma frequency TMS to sPCS disrupted bottom–up, stimulus-driven attention processes. These findings provide causal evidence suggesting that beta and gamma oscillations have distinct functional roles for cognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kristin Koller; Robert D Rafal; Adam Platt; Nicholas D Mitchell

Orienting toward threat: Contributions of a subcortical pathway transmitting retinal afferents to the amygdala via the superior colliculus and pulvinar Journal Article

Neuropsychologia, 128 , pp. 78–86, 2019.

Abstract | Links | BibTeX

@article{Koller2019b,
title = {Orienting toward threat: Contributions of a subcortical pathway transmitting retinal afferents to the amygdala via the superior colliculus and pulvinar},
author = {Kristin Koller and Robert D Rafal and Adam Platt and Nicholas D Mitchell},
doi = {10.1016/j.neuropsychologia.2018.01.027},
year = {2019},
date = {2019-05-01},
journal = {Neuropsychologia},
volume = {128},
pages = {78--86},
publisher = {Elsevier Ltd},
abstract = {Probabilistic diffusion tractography was used to provide the first direct evidence for a subcortical pathway from the retina to the amygdala, via the superior colliculus and pulvinar, that transmits visual stimuli signaling threat. A bias to orient toward threat was measured in a temporal order judgement saccade decision task, under monocular viewing, in a group of 19 healthy participants who also underwent diffusion weighted MR imaging. On each trial of the behavioural task a picture depicting threat was presented in one visual field and a competing non-threatening stimulus in the other. The onset interval between the two pictures was randomly varied and participants made a saccade toward the stimulus that they judged to have appeared first. The bias to orient toward threat was stronger when the threatening stimulus was in the temporal visual hemifield, suggesting that afferents via the retinotectal tract contributed to the bias. Probabilistic tractography was used to virtually dissect connections between the superior colliculus and the amygdala traversing the pulvinar. Individual differences in microstructure (fractional anisotropy) of the streamline predicted the magnitude of the bias to orient toward threat, providing supporting evidence for a functional role of the subcortical SC-amygdala pathway in processing threat in healthy humans.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mariana Babo-Rebelo; Anne Buot; Catherine Tallon-Baudry

Neural responses to heartbeats distinguish self from other during imagination Journal Article

NeuroImage, 191 , pp. 10–20, 2019.

Abstract | Links | BibTeX

@article{Babo-Rebelo2019,
title = {Neural responses to heartbeats distinguish self from other during imagination},
author = {Mariana Babo-Rebelo and Anne Buot and Catherine Tallon-Baudry},
doi = {10.1016/j.neuroimage.2019.02.012},
year = {2019},
date = {2019-05-01},
journal = {NeuroImage},
volume = {191},
pages = {10--20},
publisher = {Academic Press},
abstract = {Imagination is an internally-generated process, where one can make oneself or other people appear as protagonists of a scene. How does the brain tag the protagonist of an imagined scene as being oneself or someone else? Crucially, during imagination, neither external stimuli nor motor feedback are available to disentangle imagining oneself from imagining someone else. Here, we test the hypothesis that an internal mechanism based on the neural monitoring of heartbeats could distinguish between self and other. Twenty-three participants imagined themselves (from a first-person perspective) or a friend (from a third-person perspective) in various scenarios, while their brain activity was recorded with magnetoencephalography and their cardiac activity was simultaneously monitored. We measured heartbeat-evoked responses, i.e., transients of neural activity occurring in response to each heartbeat, during imagination. The amplitude of heartbeat-evoked responses differed between imagining oneself and imagining a friend, in the precuneus and posterior cingulate regions bilaterally. Effect size was modulated by the daydreaming frequency scores of participants but not by their interoceptive abilities. These results could not be accounted for by other characteristics of imagination (e.g., the ability to adopt the perspective, valence or arousal), nor by cardiac parameters (e.g., heart rate) or arousal levels (e.g., arousal ratings, pupil diameter). Heartbeat-evoked responses thus appear as a neural marker distinguishing self from other during imagination.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
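
As a rough illustration of how heartbeat-evoked responses are computed in general, one epochs a neural time series around each ECG R-peak and averages. The sketch below is a minimal numpy/scipy version; the sampling rate, peak-detection settings, and placeholder data are assumptions, not the study's MEG pipeline.

```python
# Hypothetical sketch of a heartbeat-evoked response (HER): epoch a neural
# signal around each R-peak and average. All values here are placeholders.
import numpy as np
from scipy.signal import find_peaks

fs = 1000                       # sampling rate (Hz), assumed
ecg = np.random.randn(60 * fs)  # placeholder ECG trace (1 minute)
meg = np.random.randn(60 * fs)  # placeholder MEG channel, same clock

# Detect R-peaks; a real pipeline would band-pass filter the ECG first.
peaks, _ = find_peaks(ecg, height=2.5, distance=int(0.4 * fs))

# Epoch the neural signal from -100 ms to +500 ms around each heartbeat.
pre, post = int(0.1 * fs), int(0.5 * fs)
epochs = np.array([meg[p - pre:p + post] for p in peaks
                   if p - pre >= 0 and p + post <= meg.size])

her = epochs.mean(axis=0)       # heartbeat-evoked response for this channel
print(her.shape)                # (600,) samples spanning -100 ms .. +500 ms
```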


Yelda Semizer; Melchi M Michel

Natural image clutter degrades overt search performance independently of set size Journal Article

Journal of Vision, 19 (4), pp. 1–16, 2019.

Abstract | Links | BibTeX

@article{Semizer2019,
title = {Natural image clutter degrades overt search performance independently of set size},
author = {Yelda Semizer and Melchi M Michel},
doi = {10.1167/19.4.1},
year = {2019},
date = {2019-04-01},
journal = {Journal of Vision},
volume = {19},
number = {4},
pages = {1--16},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {Although studies of visual search have repeatedly demonstrated that visual clutter impairs search performance in natural scenes, these studies have not attempted to disentangle the effects of search set size from those of clutter per se. Here, we investigate the effect of natural image clutter on performance in an overt search for categorical targets when the search set size is controlled. Observers completed a search task that required detecting and localizing common objects in a set of natural images. The images were sorted into high- and low-clutter conditions based on the clutter metric by Bravo and Farid (2008). The search set size was varied independently by fixing the number and positions of potential targets across set size conditions within a block of trials. Within each fixed set size condition, search times increased as a function of increasing clutter, suggesting that clutter degrades overt search performance independently of set size.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
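
The logic of holding set size fixed while varying clutter can be sketched as a median split of trials on a clutter score within each set-size block. The sketch below uses simulated scores and response times for illustration; it does not implement the Bravo and Farid (2008) metric itself.

```python
# Hypothetical sketch of the analysis logic: within each fixed set-size
# condition, median-split trials by an image clutter score and compare
# search times. Scores and RTs are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
set_size = np.repeat([3, 6, 12], 100)        # fixed within a block of trials
clutter = rng.uniform(0, 1, set_size.size)   # per-image clutter score
rt = 800 + 40 * set_size + 600 * clutter + rng.normal(0, 120, set_size.size)

for n in np.unique(set_size):
    trials = set_size == n
    hi = clutter[trials] > np.median(clutter[trials])
    print(f"set size {n:2d}: low-clutter RT = {rt[trials][~hi].mean():6.0f} ms, "
          f"high-clutter RT = {rt[trials][hi].mean():6.0f} ms")
```

If clutter degrades search independently of set size, the high-clutter mean exceeds the low-clutter mean within every set-size condition, as in this simulation.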


Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco

Ocular tracking of occluded ballistic trajectories: Effects of visual context and of target law of motion Journal Article

Journal of Vision, 19 (4), pp. 1–21, 2019.

Abstract | Links | BibTeX

@article{Monache2019,
title = {Ocular tracking of occluded ballistic trajectories: Effects of visual context and of target law of motion},
author = {Sergio Delle Monache and Francesco Lacquaniti and Gianfranco Bosco},
doi = {10.1167/19.4.13},
year = {2019},
date = {2019-04-01},
journal = {Journal of Vision},
volume = {19},
number = {4},
pages = {1--21},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {In tracking a moving target, the visual context may provide cues for an observer to interpret the causal nature of the target motion and extract features to which the visual system is weakly sensitive, such as target acceleration. This information could be critical when vision of the target is temporarily impeded, requiring visual motion extrapolation processes. Here we investigated how visual context influences ocular tracking of motion either congruent or not with natural gravity. To this end, 28 subjects tracked computer-simulated ballistic trajectories either perturbed in the descending segment with altered gravity effects (0g/2g) or retaining natural-like motion (1g). Shortly after the perturbation (550 ms), targets disappeared for either 450 or 650 ms and became visible again until landing. Target motion occurred with either quasi-realistic pictorial cues or a uniform background, presented in counterbalanced order. We analyzed saccadic and pursuit movements after 0g and 2g target-motion perturbations and for corresponding intervals of unperturbed 1g trajectories, as well as after corresponding occlusions. Moreover, we considered the eye-to-target distance at target reappearance. Tracking parameters differed significantly between scenarios: With a neutral background, eye movements did not depend consistently on target motion, whereas with pictorial background they showed significant dependence, denoting better tracking of accelerated targets. These results suggest that oculomotor control is tuned to realistic properties of the visual scene.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
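
The target kinematics described here follow standard ballistic equations, with gravity rescaled on the descending segment. A minimal sketch, assuming illustrative initial velocities and a perturbation time of 0.9 s (both assumptions, not the study's parameters):

```python
# Hypothetical sketch of the stimulus kinematics: a ballistic target whose
# descending segment continues under 0g, 1g, or 2g. At the perturbation,
# position and velocity carry over; only the acceleration changes.
import numpy as np

G = 9.81  # m/s^2, natural gravity

def trajectory(t, v0x=6.0, v0y=8.0, g_factor=1.0, t_perturb=None):
    """Position (x, y) over times t; after t_perturb, gravity is g_factor * G."""
    x = v0x * t
    y = v0y * t - 0.5 * G * t**2
    if t_perturb is not None:
        late = t > t_perturb
        dt = t[late] - t_perturb
        # State at the moment of perturbation under natural gravity:
        y_p = v0y * t_perturb - 0.5 * G * t_perturb**2
        vy_p = v0y - G * t_perturb
        # Continue from that state with the altered gravity.
        y[late] = y_p + vy_p * dt - 0.5 * g_factor * G * dt**2
    return x, y

t = np.linspace(0, 1.6, 161)
for g_factor in (0.0, 1.0, 2.0):
    _, y = trajectory(t, g_factor=g_factor, t_perturb=0.9)
    print(f"{g_factor:.0f}g: height at t = 1.4 s -> {y[140]:.2f} m")
```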


David J Kelly; Sofia Duarte; David Meary; Markus Bindemann; Olivier Pascalis

Infants rapidly detect human faces in complex naturalistic visual scenes Journal Article

Developmental Science, 22 (6), pp. 1–10, 2019.

Abstract | Links | BibTeX

@article{Kelly2019,
title = {Infants rapidly detect human faces in complex naturalistic visual scenes},
author = {David J Kelly and Sofia Duarte and David Meary and Markus Bindemann and Olivier Pascalis},
doi = {10.1111/desc.12829},
year = {2019},
date = {2019-04-01},
journal = {Developmental Science},
volume = {22},
number = {6},
pages = {1--10},
publisher = {John Wiley & Sons, Ltd},
abstract = {Infants respond preferentially to faces and face-like stimuli from birth, but past research has typically presented faces in isolation or amongst an artificial array of competing objects. In the current study, infants aged 3 to 12 months viewed a series of complex visual scenes; half of the scenes contained a person, the other half did not. Infants rapidly detected and oriented to faces in scenes even when they were not visually salient. Although a clear developmental improvement was observed in face detection and interest, all infants displayed sensitivity to the presence of a person in a scene, by displaying eye movements that differed quantifiably across a range of measures when viewing scenes that either did or did not contain a person. We argue that infants' face detection capabilities are ostensibly “better” with naturalistic stimuli, and that the artificial array presentations used in previous studies have underestimated performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
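
One quantifiable eye-movement measure of the kind mentioned in this abstract is the latency of the first fixation landing on a face, which reduces to a simple area-of-interest (AOI) test. The fixation records, field names, and AOI rectangle below are hypothetical placeholders, not the study's data format.

```python
# Hypothetical sketch of one dependent measure: latency of the first fixation
# inside a face AOI. All records and coordinates are illustrative placeholders.
from typing import NamedTuple

class Fixation(NamedTuple):
    onset_ms: float
    x: float
    y: float

def first_face_fixation_latency(fixations, aoi):
    """Return onset of the first fixation inside aoi = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    for f in fixations:
        if x0 <= f.x <= x1 and y0 <= f.y <= y1:
            return f.onset_ms
    return None  # face never fixated on this trial

trial = [Fixation(120, 512, 400), Fixation(430, 250, 180), Fixation(710, 260, 190)]
face_aoi = (220, 150, 300, 230)  # pixel bounds of the face region, assumed
print(first_face_fixation_latency(trial, face_aoi))  # -> 430
```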

