SR Research
EyeLink Eye Tracking Publications Library

All EyeLink Publications

All 8000+ peer-reviewed EyeLink research publications up to 2019 (with some from early 2020) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye tracking paper, please email us!

All EyeLink publications are also available for download / import into reference management software as a single BibTeX (.bib) file.
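For scripted use of the downloaded .bib file, here is a minimal sketch of loading the entries and searching them by keyword. It assumes the simple `field = {value}` layout seen in the entries below and deliberately ignores nested braces; a dedicated parser such as bibtexparser would be more robust for real work.

```python
import re

def parse_bib(text):
    """Extract the citation key and flat `field = {value}` pairs per @article.

    Values containing nested braces are skipped by this simple regex.
    """
    entries = []
    for match in re.finditer(r"@article\{(?P<key>[^,]+),(?P<body>.*?)\n\}", text, re.S):
        fields = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", match.group("body")))
        fields["key"] = match.group("key").strip()
        entries.append(fields)
    return entries

def search(entries, term):
    """Case-insensitive keyword search over title and abstract fields."""
    term = term.lower()
    return [e["key"] for e in entries
            if term in e.get("title", "").lower()
            or term in e.get("abstract", "").lower()]

# A trimmed entry in the same shape as those listed below.
sample = """@article{Hannus2020,
title = {Preview of partial stimulus information in search prioritizes features and conjunctions, not locations},
year = {2020},
abstract = {Visual search often requires combining information...},
pubstate = {published},
tppubtype = {article}
}
"""

entries = parse_bib(sample)
print(search(entries, "visual search"))  # -> ['Hannus2020']
```

The same `search` call works unchanged on the full 8000+-entry export, since each record follows the `@article{Key, field = {value}, ...}` pattern shown in the listings below.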

 

8114 entries (page 1 of 82)

2020

Aave Hannus; Harold Bekkering; Frans W Cornelissen

Preview of partial stimulus information in search prioritizes features and conjunctions, not locations Journal Article

Attention, Perception, & Psychophysics, 82, pp. 140–152, 2020.


@article{Hannus2020,
title = {Preview of partial stimulus information in search prioritizes features and conjunctions, not locations},
author = {Aave Hannus and Harold Bekkering and Frans W Cornelissen},
doi = {10.3758/s13414-019-01841-1},
year = {2020},
date = {2020-09-01},
journal = {Attention, Perception, & Psychophysics},
volume = {82},
pages = {140--152},
publisher = {Springer Science and Business Media LLC},
abstract = {Visual search often requires combining information on distinct visual features such as color and orientation, but how the visual system does this is not fully understood. To better understand this, we showed observers a brief preview of part of a search stimulus—either its color or orientation—before they performed a conjunction search task. Our experimental questions were (1) whether observers would use such previews to prioritize either potential target locations or features, and (2) which neural mechanisms might underlie the observed effects. In two experiments, participants searched for a prespecified target in a display consisting of bar elements, each combining one of two possible colors and one of two possible orientations. Participants responded by making an eye movement to the selected bar. In our first experiment, we found that a preview consisting of colored bars with identical orientation improved saccadic target selection performance, while a preview of oriented gray bars substantially decreased performance. In a follow-up experiment, we found that previews consisting of discs of the same color as the bars (and thus without orientation information) hardly affected performance. Thus, performance improved only when the preview combined color and (noninformative) orientation information. Previews apparently result in a prioritization of features and conjunctions rather than of spatial locations (in the latter case, all previews should have had similar effects). Our results thus also indicate that search for, and prioritization of, combinations involve conjunctively tuned neural mechanisms. These probably reside at the level of the primary visual cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maxi Becker; Tobias Sommer; Simone Kühn

Verbal insight revisited: fMRI evidence for early processing in bilateral insulae for solutions with AHA! experience shortly after trial onset Journal Article

Human Brain Mapping, 41 (1), pp. 30–45, 2020.


@article{Becker2020,
title = {Verbal insight revisited: fMRI evidence for early processing in bilateral insulae for solutions with AHA! experience shortly after trial onset},
author = {Maxi Becker and Tobias Sommer and Simone Kühn},
doi = {10.1002/hbm.24785},
year = {2020},
date = {2020-09-01},
journal = {Human Brain Mapping},
volume = {41},
number = {1},
pages = {30--45},
abstract = {In insight problem solving, solutions with AHA! experience have been assumed to be the consequence of restructuring of a problem, which usually takes place shortly before the solution. However, evidence from priming studies suggests that solutions with AHA! are not spontaneously generated during the solution process but already relate to prior subliminal processing. We test this hypothesis by conducting an fMRI study using a modified compound remote associates paradigm which incorporates semantic priming. We observe stronger brain activity in bilateral anterior insulae already shortly after trial onset in problems that were later solved with than without AHA!. This early activity was independent of semantic priming but may be related to other lexical properties of attended words helping to reduce the amount of solutions to look for. In contrast, there was more brain activity in bilateral anterior insulae during solutions that were solved without than with AHA!. This timing (after trial start/during solution) x solution experience (with/without AHA!) interaction was significant. The results suggest that (a) solutions accompanied with AHA! relate to early solution-relevant processing and (b) both solution experiences differ in timing when solution-relevant processing takes place. In this context, we discuss the potential role of the anterior insula as part of the salience network involved in problem solving by allocating attentional resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Camilla E J Elphick; Graham E Pike; Graham J Hole

You can believe your eyes: measuring implicit recognition in a lineup with pupillometry Journal Article

Psychology, Crime and Law, 26 (1), pp. 67–92, 2020.


@article{Elphick2020,
title = {You can believe your eyes: measuring implicit recognition in a lineup with pupillometry},
author = {Camilla E J Elphick and Graham E Pike and Graham J Hole},
url = {https://doi.org/10.1080/1068316X.2019.1634196},
doi = {10.1080/1068316X.2019.1634196},
year = {2020},
date = {2020-01-01},
journal = {Psychology, Crime and Law},
volume = {26},
number = {1},
pages = {67--92},
publisher = {Taylor & Francis},
abstract = {As pupil size is affected by cognitive processes, we investigated whether it could serve as an independent indicator of target recognition in lineups. Participants saw a simulated crime video, followed by two viewings of either a target-present or target-absent video lineup while pupil size was measured with an eye-tracker. Participants who made correct identifications showed significantly larger pupil sizes when viewing the target compared with distractors. Some participants were uncertain about their choice of face from the lineup, but nevertheless showed pupillary changes when viewing the target, suggesting covert recognition of the target face had occurred. The results suggest that pupillometry might be a useful aid in assessing the accuracy of an eyewitness' identification.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Joshua Zonca; Giorgio Coricelli; Luca Polonio

Gaze data reveal individual differences in relational representation processes Journal Article

Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (2), pp. 257–279, 2020.


@article{Zonca2020,
title = {Gaze data reveal individual differences in relational representation processes},
author = {Joshua Zonca and Giorgio Coricelli and Luca Polonio},
doi = {10.1037/xlm0000723},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {46},
number = {2},
pages = {257--279},
publisher = {American Psychological Association Inc.},
abstract = {In our everyday life, we often need to anticipate the potential occurrence of events and their consequences. In this context, the way we represent contingencies can determine our ability to adapt to the environment. However, it is not clear how agents encode and organize available knowledge about the future to react to possible states of the world. In the present study, we investigated the process of contingency representation with three eye-tracking experiments. In Experiment 1, we introduced a novel relational-inference task in which participants had to learn and represent conditional rules regulating the occurrence of interdependent future events. A cluster analysis on early gaze data revealed the existence of 2 distinct types of encoders. A group of (sophisticated) participants built exhaustive contingency models that explicitly linked states with each of their potential consequences. Another group of (unsophisticated) participants simply learned binary conditional rules without exploring the underlying relational complexity. Analyses of individual cognitive measures revealed that cognitive reflection is associated with the emergence of either sophisticated or unsophisticated representation behavior. In Experiment 2, we observed that unsophisticated participants switched toward the sophisticated strategy after having received information about its existence, suggesting that representation behavior was modulated by strategy generation mechanisms. In Experiment 3, we showed that the heterogeneity in representation strategy emerges also in conditional reasoning with verbal sequences, indicating the existence of a general disposition in building either sophisticated or unsophisticated models of contingencies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Li Zhang; Guoli Yan; Li Zhou; Zebo Lan; Valerie Benson

The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm Journal Article

Journal of Autism and Developmental Disorders, 50, pp. 500–512, 2020.


@article{Zhang2020,
title = {The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm},
author = {Li Zhang and Guoli Yan and Li Zhou and Zebo Lan and Valerie Benson},
doi = {10.1007/s10803-019-04271-y},
year = {2020},
date = {2020-01-01},
journal = {Journal of Autism and Developmental Disorders},
volume = {50},
pages = {500--512},
publisher = {Springer US},
abstract = {The current study examined eye movement control in autistic (ASD) children. Simple targets were presented in isolation, or with central, parafoveal, or peripheral distractors synchronously. Sixteen children with ASD (47–81 months) and nineteen age and IQ matched typically developing children were instructed to look to the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (time taken to initiate eye movements), followed by parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, indicating evidence for potential atypical voluntary attentional control in ASD children.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jordana S Wynn; Jennifer D Ryan; Morris Moscovitch

Effects of prior knowledge on active vision and memory in younger and older adults Journal Article

Journal of Experimental Psychology: General, 149 (3), pp. 518–529, 2020.


@article{Wynn2020,
title = {Effects of prior knowledge on active vision and memory in younger and older adults},
author = {Jordana S Wynn and Jennifer D Ryan and Morris Moscovitch},
doi = {10.1037/xge0000657},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {149},
number = {3},
pages = {518--529},
abstract = {In our daily lives we rely on prior knowledge to make predictions about the world around us such as where to search for and locate common objects. Yet, equally important in visual search is the ability to inhibit such processes when those predictions fail. Mounting evidence suggests that relative to younger adults, older adults have difficulty retrieving episodic memories and inhibiting prior knowledge, even when that knowledge is detrimental to the task at hand. However, the consequences of these age-related changes for visual search remain unclear. In the present study, we used eye movement monitoring to investigate whether overreliance on prior knowledge alters the gaze patterns and performance of older adults during visual search. Younger and older adults searched for target objects in congruent or incongruent locations in real-world scenes. As predicted, targets in congruent locations were detected faster than targets in incongruent locations, and this effect was enhanced in older adults. Analysis of viewing behavior revealed that prior knowledge effects emerged early in search, as evidenced by initial saccades, and continued throughout search, with greater viewing of congruent regions by older relative to younger adults, suggesting that schema biasing of online processing increases with age. Finally, both younger and older adults showed enhanced memory for the location of congruent targets and the identity of incongruent targets, with schema-guided viewing during search predicting poor memory for schema-incongruent targets in younger adults on both tasks. Our results provide novel evidence that older adults' overreliance on prior knowledge has consequences for both active vision and memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Anke Weidmann; Laura Richert; Maximilian Bernecker; Miriam Knauss; Kathlen Priebe; Benedikt Reuter; Martin Bohus; Meike Müller-Engelmann; Thomas Fydrich

Dwelling on verbal but not pictorial threat cues: An eye-tracking study with adult survivors of childhood interpersonal violence Journal Article

Psychological Trauma: Theory, Research, Practice, and Policy, 12 (1), pp. 46–54, 2020.


@article{Weidmann2020,
title = {Dwelling on verbal but not pictorial threat cues: An eye-tracking study with adult survivors of childhood interpersonal violence},
author = {Anke Weidmann and Laura Richert and Maximilian Bernecker and Miriam Knauss and Kathlen Priebe and Benedikt Reuter and Martin Bohus and Meike Müller-Engelmann and Thomas Fydrich},
doi = {10.1037/tra0000424},
year = {2020},
date = {2020-01-01},
journal = {Psychological Trauma: Theory, Research, Practice, and Policy},
volume = {12},
number = {1},
pages = {46--54},
abstract = {Objective: Previous studies have found evidence of an attentional bias for trauma-related stimuli in posttraumatic stress disorder (PTSD) using eye-tracking (ET) technology. However, it is unclear whether findings for PTSD after traumatic events in adulthood can be transferred to PTSD after interpersonal trauma in childhood. The latter is often accompanied by more complex symptom features, including, for example, affective dysregulation and has not yet been studied using ET. The aim of this study was to explore which components of attention are biased in adult victims of childhood trauma with PTSD compared to those without PTSD. Method: Female participants with (n = 27) or without (n = 27) PTSD who had experienced interpersonal violence in childhood or adolescence watched different trauma-related stimuli (Experiment 1: words, Experiment 2: facial expressions). We analyzed whether trauma-related stimuli were primarily detected (vigilance bias) and/or dwelled on longer (maintenance bias) compared to stimuli of other emotional qualities. Results: For trauma-related words, there was evidence of a maintenance bias but not of a vigilance bias. For trauma-related facial expressions, there was no evidence of any bias. Conclusions: At present, an attentional bias to trauma-related stimuli cannot be considered as robust in PTSD following trauma in childhood compared to that of PTSD following trauma in adulthood. The findings are discussed with respect to difficulties attributing effects specifically to PTSD in this highly comorbid though understudied population.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Margreet Vogelzang; Francesca Foppolo; Maria Teresa Guasti; Hedderik van Rijn; Petra Hendriks

Reasoning about alternative forms is costly: The processing of null and overt pronouns in Italian using pupillary responses Journal Article

Discourse Processes, 57 (2), pp. 158–183, 2020.


@article{Vogelzang2020,
title = {Reasoning about alternative forms is costly: The processing of null and overt pronouns in Italian using pupillary responses},
author = {Margreet Vogelzang and Francesca Foppolo and Maria Teresa Guasti and Hedderik van Rijn and Petra Hendriks},
doi = {10.1080/0163853X.2019.1591127},
year = {2020},
date = {2020-01-01},
journal = {Discourse Processes},
volume = {57},
number = {2},
pages = {158--183},
publisher = {Routledge},
abstract = {Different words generally have different meanings. However, some words seemingly share similar meanings. An example are null and overt pronouns in Italian, which both refer to an individual in the discourse. Is the interpretation and processing of a form affected by the existence of another form with a similar meaning? With a pupillary response study, we show that null and overt pronouns are processed differently. Specifically, null pronouns are found to be less costly to process than overt pronouns. We argue that this difference is caused by an additional reasoning step that is needed to process marked overt pronouns but not unmarked null pronouns. A comparison with data from Dutch, a language with overt but no null pronouns, demonstrates that Italian pronouns are processed differently from Dutch pronouns. These findings suggest that the processing of a marked form is influenced by alternative forms within the same language, making its processing costly.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Anastasia Ulicheva; Hannah Harvey; Mark Aronoff; Kathleen Rastle

Skilled readers' sensitivity to meaningful regularities in English writing Journal Article

Cognition, 195, p. 103810, 2020.


@article{Ulicheva2020,
title = {Skilled readers' sensitivity to meaningful regularities in English writing},
author = {Anastasia Ulicheva and Hannah Harvey and Mark Aronoff and Kathleen Rastle},
doi = {10.1016/j.cognition.2018.09.013},
year = {2020},
date = {2020-01-01},
journal = {Cognition},
volume = {195},
pages = {103810},
publisher = {Elsevier},
abstract = {Substantial research has been undertaken to understand the relationship between spelling and sound, but we know little about the relationship between spelling and meaning in alphabetic writing systems. We present a computational analysis of English writing in which we develop new constructs to describe this relationship. Diagnosticity captures the amount of meaningful information in a given spelling, whereas specificity estimates the degree of dispersion of this meaning across different spellings for a particular sound sequence. Using these two constructs, we demonstrate that particular suffix spellings tend to be reserved for particular meaningful functions. We then show across three paradigms (nonword classification, spelling, and eye tracking during sentence reading) that this form of regularity between spelling and meaning influences the behaviour of skilled readers, and that the degree of this behavioural sensitivity mirrors the strength of spelling-to-meaning regularities in the writing system. We close by arguing that English spelling may have become fractionated such that the high degree of spelling-sound inconsistency maximises the transmission of meaningful information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maximilian Stefani; Marian Sauter; Wolfgang Mack

Delayed disengagement from irrelevant fixation items: Is it generally functional? Journal Article

Attention, Perception, & Psychophysics, pp. 1–18, 2020.


@article{Stefani2020,
title = {Delayed disengagement from irrelevant fixation items: Is it generally functional?},
author = {Maximilian Stefani and Marian Sauter and Wolfgang Mack},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
pages = {1--18},
publisher = {Springer},
abstract = {In a circular visual search paradigm, the disengagement of attention is automatically delayed when a fixated but irrelevant center item shares features of the target item. Additionally, if mismatching letters are presented on these items, response times (RTs) are slowed further, while matching letters evoke faster responses (Wright, Boot, & Brockmole, 2015a). This is interpreted as a functional reason of the delayed disengagement effect in terms of deeper processing of the fixation item. The purpose of the present study was the generalization of these findings to unfamiliar symbols and to linear instead of circular layouts. Experiments 1 and 2 replicated the functional delayed disengagement effect with letters and symbols. In Experiment 3, the search layout was changed from circular to linear and only saccades from left to right had to be performed. We did not find supportive data for the proposed functional nature of the effect. In Experiments 4 and 5, we tested whether the unidirectional saccade decision, a potential blurring by adjacent items, or a lack of statistical power was the cause of the diminished effects in Experiment 3. With increased sample sizes, the delayed disengagement effect as well as its functional underpinning were now observed consistently. Taken together, our results support prior assumptions that delayed disengagement effects are functionally rooted in a deeper processing of the fixation items. They also generalize to unfamiliar symbols and linear display layouts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


William Rosengren; Marcus Nyström; Björn Hammar; Martin Stridh

A robust method for calibration of eye tracking data recorded during nystagmus Journal Article

Behavior Research Methods, 52 , pp. 36–50, 2020.


@article{Rosengren2020,
title = {A robust method for calibration of eye tracking data recorded during nystagmus},
author = {William Rosengren and Marcus Nyström and Björn Hammar and Martin Stridh},
doi = {10.3758/s13428-019-01199-0},
year = {2020},
date = {2020-01-01},
journal = {Behavior Research Methods},
volume = {52},
pages = {36--50},
abstract = {Eye tracking is a useful tool when studying the oscillatory eye movements associated with nystagmus. However, this oscillatory nature of nystagmus is problematic during calibration since it introduces uncertainty about where the person is actually looking. This renders comparisons between separate recordings unreliable. Still, the influence of the calibration protocol on eye movement data from people with nystagmus has not been thoroughly investigated. In this work, we propose a calibration method using Procrustes analysis in combination with an outlier correction algorithm, which is based on a model of the calibration data and on the geometry of the experimental setup. The proposed method is compared to previously used calibration polynomials in terms of accuracy, calibration plane distortion and waveform robustness. Six recordings of calibration data, validation data and optokinetic nystagmus data from people with nystagmus and seven recordings from a control group were included in the study. Fixation errors during the recording of calibration data from the healthy participants were introduced, simulating fixation errors caused by the oscillatory movements found in nystagmus data. The outlier correction algorithm improved the accuracy for all tested calibration methods. The accuracy and calibration plane distortion performance of the Procrustes analysis calibration method were similar to the top performing mapping functions for the simulated fixation errors. The performance in terms of waveform robustness was superior for the Procrustes analysis calibration compared to the other calibration methods. The overall performance of the Procrustes calibration methods was best for the datasets containing errors during the calibration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Deirdre A Robertson; Peter D Lunn

The effect of spatial location of calorie information on choice, consumption and eye movements Journal Article

Appetite, 144 , pp. 1–10, 2020.


@article{Robertson2020,
title = {The effect of spatial location of calorie information on choice, consumption and eye movements},
author = {Deirdre A Robertson and Peter D Lunn},
doi = {10.1016/j.appet.2019.104446},
year = {2020},
date = {2020-01-01},
journal = {Appetite},
volume = {144},
pages = {1--10},
abstract = {We manipulated the presence and spatial location of calorie labels on menus while tracking eye movements. A novel “lab-in-the-field” experimental design allowed eye movements to be recorded while participants chose lunch from a menu, unaware that their choice was part of a study. Participants exposed to calorie information ordered 93 fewer calories (11%) relative to a control group who saw no calorie labels. The difference in number of calories consumed was greater still. The impact was strongest when calorie information was displayed just to the right of the price, in an equivalent font. The effects were mediated by knowledge of the amount of calories in the meal, implying that calorie posting led to more informed decision-making. There was no impact on enjoyment of the meal. The eye-tracking data suggested that the spatial arrangement altered individuals' search strategies while viewing the menu. This research suggests that the spatial location of calories on menus may be an important consideration when designing calorie posting legislation and policy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Johannes Rennig; Kira Wegner-Clemens; Michael S Beauchamp

Face viewing behavior predicts multisensory gain during speech perception Journal Article

Psychonomic Bulletin & Review, 27 , pp. 70–77, 2020.


@article{Rennig2020,
title = {Face viewing behavior predicts multisensory gain during speech perception},
author = {Johannes Rennig and Kira Wegner-Clemens and Michael S Beauchamp},
doi = {10.3758/s13423-019-01665-y},
year = {2020},
date = {2020-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {27},
pages = {70--77},
publisher = {Springer},
abstract = {Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy from 2–58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3–98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth equated to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


A Pressigout; K Dore-Mazars

How does number magnitude influence temporal and spatial parameters of eye movements? Journal Article

Experimental Brain Research, 238 , pp. 101–109, 2020.


@article{Pressigout2020,
title = {How does number magnitude influence temporal and spatial parameters of eye movements?},
author = {A Pressigout and K Dore-Mazars},
doi = {10.1007/s00221-019-05701-0},
year = {2020},
date = {2020-01-01},
journal = {Experimental Brain Research},
volume = {238},
pages = {101--109},
publisher = {Springer Berlin Heidelberg},
abstract = {The influence of numerical processing on individuals' behavior is now well documented. The spatial representation of numbers on a left-to-right mental line (i.e., SNARC effect) has been shown to have sensorimotor consequences, the majority of studies being mainly concerned with its impact on the response times. Its impact on the motor programming stage remains less documented, although swiping movement amplitudes have recently been shown to be modulated by number magnitude. Regarding saccadic eye movements, the few available studies have not provided clear-cut conclusions. They showed that spatial–numerical associations modulated ocular drifts, but not the amplitude of memory-guided saccades. Because these studies held saccadic coordinates constant, which might have masked potential numerical effects, we examined whether spontaneous saccadic eye movements (with no saccadic target) could reflect numerical effects. Participants were asked to look either to the left or to the right side of an empty screen to estimate the magnitude (< or > 5) of a centrally presented digit. Latency data confirmed the presence of the classical SNARC and distance effects. More critically, saccade amplitude reflected a numerical effect: participants' saccades were longer for digits far from the standard (1 and 9) and were shorter for digits close to it (4 and 6). Our results suggest that beyond response times, kinematic parameters also offer valuable information for the understanding of the link between numerical cognition and motor programming.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Vincent Porretta; Lori Buchanan; Juhani Järvikivi

When processing costs impact predictive processing: The case of foreign-accented speech and accent experience Journal Article

Attention, Perception, & Psychophysics, pp. 1–8, 2020.


@article{Porretta2020,
title = {When processing costs impact predictive processing: The case of foreign-accented speech and accent experience},
author = {Vincent Porretta and Lori Buchanan and Juhani Järvikivi},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
pages = {1--8},
publisher = {Springer},
abstract = {Listeners use linguistic information and real-world knowledge to predict upcoming spoken words. However, studies of predictive processing have focused on prediction under optimal listening conditions. We examined the effect of foreign-accented speech on predictive processing. Furthermore, we investigated whether accent-specific experience facilitates predictive processing. Using the visual world paradigm, we demonstrated that although the presence of an accent impedes predictive processing, it does not preclude it. We further showed that as listener experience increases, predictive processing for accented speech increases and begins to approximate the pattern seen for native speech. These results speak to the limitation of the processing resources that must be allocated, leading to a trade-off when listeners are faced with increased uncertainty and more effortful recognition due to a foreign accent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Salome Pedrett; Lea Kaspar; Andrea Frick

Understanding of object rotation between two and three years of age Journal Article

Developmental Psychology, 56 (2), pp. 261–274, 2020.


@article{Pedrett2020,
title = {Understanding of object rotation between two and three years of age},
author = {Salome Pedrett and Lea Kaspar and Andrea Frick},
doi = {10.1037/dev0000871},
year = {2020},
date = {2020-01-01},
journal = {Developmental Psychology},
volume = {56},
number = {2},
pages = {261--274},
abstract = {Toddlers' understanding of object rotation was investigated using a multimethod approach. Participants were 44 toddlers between 22 and 38 months of age. In an eye-tracking task, they observed a shape that rotated and disappeared briefly behind an occluder. In an object-fitting task, they rotated wooden blocks and fit them through apertures. Results of the eye-tracking task showed that with increasing age, the toddlers encoded the visible rotation using a more complex eye-movement pattern, increasingly combining tracking movements with gaze shifts to the pivot point. During occlusion, anticipatory looks to the location where the shape would reappear increased with age, whereas looking back to the location where the shape had just disappeared decreased. This suggests that, with increasing age, the toddlers formed a clearer mental representation about the object and its rotational movement. In the object-fitting task, the toddlers succeeded more with increasing age and also rotated the wooden blocks more often correctly before they tried to insert them. Importantly, these preadjustments correlated with anticipatory eye movements, suggesting that both measures tap the same underlying understanding of object rotation. The findings yield new insights into the relation between tasks using looking times and behavioral measures as dependent variables and thus may help to clarify performance differences that have previously been observed in studies with infants and young children.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chie Nakamura; Manabu Arai; Yuki Hirose; Suzanne Flynn

An extra cue is beneficial for native speakers but can be disruptive for second language learners: Integration of prosody and visual context in syntactic ambiguity resolution Journal Article

Frontiers in Psychology, 10 , pp. 1–14, 2020.


@article{Nakamura2020,
title = {An extra cue is beneficial for native speakers but can be disruptive for second language learners: Integration of prosody and visual context in syntactic ambiguity resolution},
author = {Chie Nakamura and Manabu Arai and Yuki Hirose and Suzanne Flynn},
doi = {10.3389/fpsyg.2019.02835},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {10},
pages = {1--14},
abstract = {It has long been debated whether non-native speakers can process sentences in the same way as native speakers do or they suffer from certain qualitative deficit in their ability of language comprehension. The current study examined the influence of prosodic and visual information in processing sentences with a temporarily ambiguous prepositional phrase (“Put the cake on the plate in the basket”) with native English speakers and Japanese learners of English. Specifically, we investigated (1) whether native speakers assign different pragmatic functions to the same prosodic cues used in different contexts and (2) whether L2 learners can reach the correct analysis by integrating prosodic cues with syntax with reference to the visually presented contextual information. The results from native speakers showed that contrastive accents helped to resolve the referential ambiguity when a contrastive pair was present in visual scenes. However, without a contrastive pair in the visual scene, native speakers were slower to reach the correct analysis with the contrastive accent, which supports the view that the pragmatic function of intonation categories are highly context dependent. The results from L2 learners showed that visually presented context alone helped L2 learners to reach the correct analysis. However, L2 learners were unable to assign contrastive meaning to the prosodic cues when there were two potential referents in the visual scene. The results suggest that L2 learners are not capable of integrating multiple sources of information in an interactive manner during real-time language comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Leanne Nagels; Roelien Bastiaanse; Deniz Başkent; Anita Wagner

Individual differences in lexical access among cochlear implant users Journal Article

Journal of Speech, Language, and Hearing Research, 63 , pp. 286–304, 2020.


@article{Nagels2020,
title = {Individual differences in lexical access among cochlear implant users},
author = {Leanne Nagels and Roelien Bastiaanse and Deniz Başkent and Anita Wagner},
year = {2020},
date = {2020-01-01},
journal = {Journal of Speech, Language, and Hearing Research},
volume = {63},
pages = {286--304},
abstract = {Purpose: The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word–nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech. Method: Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access. Results: In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word–nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words. Conclusions: The general analysis of CI users' lexical competition patterns showed merely quantitative differences with NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies to process speech. Individuals' word–nonword sensitivity explained different parts of individual variability than clinical speech perception scores. These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kinan Muhammed; Edwin Dalmaijer; Sanjay Manohar; Masud Husain

Voluntary modulation of saccadic peak velocity associated with individual differences in motivation Journal Article

Cortex, 122 , pp. 198–212, 2020.


@article{Muhammed2020,
title = {Voluntary modulation of saccadic peak velocity associated with individual differences in motivation},
author = {Kinan Muhammed and Edwin Dalmaijer and Sanjay Manohar and Masud Husain},
doi = {10.1016/j.cortex.2018.12.001},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {198--212},
publisher = {Elsevier Ltd},
abstract = {Saccadic peak velocity increases in a stereotyped manner with the amplitude of eye movements. This relationship, known as the main sequence, has classically been considered to be fixed, although several recent studies have demonstrated that velocity can be modulated to some extent by external incentives. However, the ability to voluntarily control saccadic velocity and its association with motivation has yet to be investigated. Here, in three separate experimental paradigms, we measured the effects of incentivisation on saccadic velocity, reaction time and preparatory pupillary changes in 53 young healthy participants. In addition, the ability to voluntarily modulate saccadic velocity with and without incentivisation was assessed. Participants varied in their ability to increase and decrease the velocity of their saccades when instructed to do so. This effect correlated with motivation level across participants, and was further modulated by addition of monetary reward and avoidance of loss. The findings show that a degree of voluntary control of saccadic velocity is possible in some individuals, and that the ability to modulate peak velocity is associated with intrinsic levels of motivation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Anna Kosovicheva; Peter J Bex

What color was it? A psychophysical paradigm for tracking subjective progress in continuous tasks Journal Article

Perception, 49 (1), pp. 21–38, 2020.


@article{Kosovicheva2020,
title = {What color was it? A psychophysical paradigm for tracking subjective progress in continuous tasks},
author = {Anna Kosovicheva and Peter J Bex},
doi = {10.1177/0301006619886247},
year = {2020},
date = {2020-01-01},
journal = {Perception},
volume = {49},
number = {1},
pages = {21--38},
abstract = {When making a sequence of fixations, how does the timing of visual experience compare with the timing of fixation onsets? Previous studies have tracked shifts of attention or perceived gaze direction using self-report methods. We used a similar method, a dynamic color technique, to measure subjective timing in continuous tasks involving fixation sequences. Does the time that observers report reading a word coincide with their fixation on it, or is there an asynchrony, and does this relationship depend on the observer's task? Observers read sentences that continuously changed in hue and identified the color of a word at the time that they read it using a color palette. We compared responses with a nonreading condition, where observers reproduced their fixations, but viewed nonword stimuli. Results showed a delay between the color of stimuli at fixation onset and the reported color during perception. For nonword tasks, the delay was constant. However, in the reading task, the delay was larger for earlier compared with later words in the sentence. Our results offer a new method for measuring awareness or subjective progress within fixation sequences, which can be extended to other continuous tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Raymond M Klein; Maryam Kavyani; Alireza Farsi; Michael A Lawrence

Using the locus of slack logic to determine whether the output form of inhibition of return affects an early or late stage of processing Journal Article

Cortex, 122 , pp. 123–130, 2020.


@article{Klein2020,
title = {Using the locus of slack logic to determine whether the output form of inhibition of return affects an early or late stage of processing},
author = {Raymond M Klein and Maryam Kavyani and Alireza Farsi and Michael A Lawrence},
doi = {10.1016/j.cortex.2018.10.023},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {123--130},
publisher = {Elsevier Ltd},
abstract = {Slower reaction times to targets presented at a previously cued or attended location are often attributed to inhibition of return (IOR). It has been suggested that IOR affects a process at the output end of the processing continuum when it is generated while the oculomotor system is activated. Following the path set by Kavyani, Farsi, Abdoli, and Klein (2017), we used the locus of slack logic embedded in the psychological refractory period (PRP) paradigm to test this idea. We generated what we expected would be the output form of IOR by beginning each trial with participants making a target-directed saccade, which was followed by two tasks. Task 1 was a 2-choice auditory discrimination task and Task 2 was a 2-choice visual localization task. We varied the interval between the onsets of the two targets associated with these two tasks (using TTOAs of 200, 400, or 800 msec). As expected, the visual task suffered from a robust PRP effect (substantially delayed RTs at the shorter TTOAs). There was also a robust IOR effect, with RTs to localize visual targets being slower when the targets were presented at a previously fixated location. Importantly, and in striking contrast to our previous results wherein we generated the input form of IOR, in the present study there was an additive effect between IOR and TTOA on RT2. As implied by the locus of slack logic, we therefore conclude that the form of IOR generated when the oculomotor system is activated affects a late stage of processing. Converging evidence for this conclusion, from a variety of neuroscientific methods, is presented and the dearth of such evidence about the input form of IOR is noted.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Johan Hulleman; Kristofer Lund; Paul A Skarratt

Medium versus difficult visual search: How a quantitative change in the functional visual field leads to a qualitative difference in performance Journal Article

Attention, Perception, & Psychophysics, 82 , pp. 118–139, 2020.


@article{Hulleman2020,
title = {Medium versus difficult visual search: How a quantitative change in the functional visual field leads to a qualitative difference in performance},
author = {Johan Hulleman and Kristofer Lund and Paul A Skarratt},
doi = {10.3758/s13414-019-01787-4},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
volume = {82},
pages = {118--139},
abstract = {The dominant theories of visual search assume that search is a process involving comparisons of individual items against a target description that is based on the properties of the target in isolation. Here, we present four experiments that demonstrate that this holds true only in difficult search. In medium search it seems that the relation between the target and neighbouring items is also part of the target description. We used two sets of oriented lines to construct the search items. The cardinal set contained horizontal and vertical lines, the diagonal set contained left diagonal and right diagonal lines. In all experiments, participants knew the identity of the target and the line set used to construct it. In difficult search this knowledge allowed performance to improve in displays where only half of the search items came from the same line set as the target (50% eligibility), relative to displays where all items did (100% eligibility). However, in medium search, performance was actually poorer for 50% eligibility, especially on target-absent trials. This opposite effect of ineligible items in medium search and difficult search is hard to reconcile with theories based on individual items. It is more in line with theories that conceive search as a sequence of fixations where the number of items processed during a fixation depends on the difficulty of the search task: When search is medium, multiple items are processed per fixation. But when search is difficult, only a single item is processed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


James E Hoffman; Minwoo Kim; Matt Taylor; Kelsey Holiday

Emotional capture during emotion-induced blindness is not automatic Journal Article

Cortex, 122 , pp. 140–158, 2020.


@article{Hoffman2020,
title = {Emotional capture during emotion-induced blindness is not automatic},
author = {James E Hoffman and Minwoo Kim and Matt Taylor and Kelsey Holiday},
doi = {10.1016/j.cortex.2019.03.013},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {140--158},
publisher = {Elsevier Ltd},
abstract = {The present research used behavioral and event-related brain potentials (ERP) measures to determine whether emotional capture is automatic in the emotion-induced blindness (EIB) paradigm. The first experiment varied the priority of performing two concurrent tasks: identifying a negative or neutral picture appearing in a rapid serial visual presentation (RSVP) stream of pictures and multiple object tracking (MOT). Results showed that increased attention to the MOT task resulted in decreased accuracy for identifying both negative and neutral target pictures accompanied by decreases in the amplitude of the P3b component. In contrast, the early posterior negativity (EPN) component elicited by negative pictures was unaffected by variations in attention. Similarly, there was a decrement in MOT performance for dual-task versus single task conditions but no effect of picture type (negative vs neutral) on MOT accuracy, which isn't consistent with automatic emotional capture of attention. However, the MOT task might simply be insensitive to brief interruptions of attention. The second experiment used a more sensitive reaction time (RT) measure to examine this possibility. Results showed that RT to discriminate a gap appearing in a tracked object was delayed by the simultaneous appearance of to-be-ignored distractor pictures even though MOT performance was once again unaffected by the distractor. Importantly, the RT delay was the same for both negative and neutral distractors, suggesting that capture was driven by physical salience rather than emotional salience of the distractors. Despite this lack of emotional capture, the EPN component, which is thought to reflect emotional capture, was still present. We suggest that the EPN doesn't reflect capture but rather downstream effects of attention, including object recognition. These results show that capture by emotional pictures in EIB can be suppressed when attention is engaged in another difficult task. The results have important implications for understanding capture effects in EIB.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ziad M Hafed; Laurent Goffart

Gaze direction as equilibrium: More evidence from spatial and temporal aspects of small-saccade triggering in the rhesus macaque monkey Journal Article

Journal of Neurophysiology, 123 (1), pp. 308–322, 2020.


@article{Hafed2020,
title = {Gaze direction as equilibrium: More evidence from spatial and temporal aspects of small-saccade triggering in the rhesus macaque monkey},
author = {Ziad M Hafed and Laurent Goffart},
doi = {10.1152/jn.00588.2019},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neurophysiology},
volume = {123},
number = {1},
pages = {308--322},
abstract = {Rigorous behavioral studies in human subjects have shown that small-eccentricity target displacements are associated with increased saccadic reaction times, but the reasons for this remain unclear. Before characterizing the neurophysiological foundations underlying this relationship between the spatial and temporal aspects of saccades, we tested the triggering of small saccades in the male rhesus macaque monkey. We also compared our results to those obtained in human subjects, both from the existing literature and through our own additional measurements. Using a variety of behavioral tasks exercising visual and nonvisual guidance of small saccades, we found that small saccades consistently require more time than larger saccades to be triggered in the nonhuman primate, even in the absence of any visual guidance and when valid advance information about the saccade landing position is available. We also found a strong asymmetry in the reaction times of small upper versus lower visual field visually guided saccades, a phenomenon that has not been described before for small saccades, even in humans. Following the suggestion that an eye movement is not initiated as long as the visuo-oculomotor system is within a state of balance, in which opposing commands counterbalance each other, we propose that the longer reaction times are a signature of enhanced times needed to create the symmetry-breaking condition that puts downstream premotor neurons into a push-pull regime necessary for rotating the eyeballs. Our results provide an important catalog of nonhuman primate oculomotor capabilities on the miniature scale, allowing concrete predictions on underlying neurophysiological mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Rachael Gwinn; Ian Krajbich

Attitudes and attention Journal Article

Journal of Experimental Social Psychology, 86 , pp. 1–8, 2020.


@article{Gwinn2020,
title = {Attitudes and attention},
author = {Rachael Gwinn and Ian Krajbich},
doi = {10.1016/j.jesp.2019.103892},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Social Psychology},
volume = {86},
pages = {1--8},
abstract = {Attitudes play a vital role in our everyday decisions. However, it is unclear how various dimensions of attitudes affect the choice process, for instance the way that people allocate attention between alternatives. In this study we investigated these questions using eye-tracking and a two alternative forced food-choice task after measuring subjective values (attitude extremity) and their accompanying accessibility, certainty, and stability. Understanding this basic decision-making process is key if we are to gain insight on how to combat societal problems like obesity and other issues related to diet. We found that participants allocated more attention to items with lower attitude accessibility, but tended to choose items with higher attitude accessibility. Higher attitude certainty and stability had no effects on attention, but led to more attitude-consistent choices. These results imply that people are not simply choosing in line with their subjective values but are affected by other aspects of their attitudes. In addition, our attitude accessibility results indicate that more attention is not always beneficial.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ian Donovan; Ying Joey Zhou; Marisa Carrasco

In search of exogenous feature-based attention Journal Article

Attention, Perception, & Psychophysics, 82 (1), pp. 312–329, 2020.


@article{Donovan2020,
title = {In search of exogenous feature-based attention},
author = {Ian Donovan and Ying Joey Zhou and Marisa Carrasco},
doi = {10.3758/s13414-019-01815-3},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
volume = {82},
number = {1},
pages = {312--329},
abstract = {Visual attention prioritizes the processing of sensory information at specific spatial locations (spatial attention; SA) or with specific feature values (feature-based attention; FBA). SA is well characterized in terms of behavior, brain activity, and temporal dynamics-for both top-down (endogenous) and bottom-up (exogenous) spatial orienting. FBA has been thoroughly studied in terms of top-down endogenous orienting, but much less is known about the potential of bottom-up exogenous influences of FBA. Here, in four experiments, we adapted a procedure used in two previous studies that reported exogenous FBA effects, with the goal of replicating and expanding on these findings, especially regarding its temporal dynamics. Unlike the two previous studies, we did not find significant effects of exogenous FBA. This was true (1) whether accuracy or RT was prioritized as the main measure, (2) with precues presented peripherally or centrally, (3) with cue-to-stimulus ISIs of varying durations, (4) with four or eight possible target locations, (5) at different meridians, (6) with either brief or long stimulus presentations, (7) and with either fixation contingent or noncontingent stimulus displays. In the last experiment, a postexperiment participant questionnaire indicated that only a small subset of participants, who mistakenly believed the irrelevant color of the precue indicated which stimulus was the target, exhibited benefits for valid exogenous FBA precues. Overall, we conclude that with the protocol used in the studies reporting exogenous FBA, the exogenous stimulus-driven influence of FBA is elusive at best, and that FBA is primarily a top-down, goal-driven process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sabrina Michelle Di Lonardo; Matthew G Huebner; Katherine Newman; Jo-Anne LeFevre

Fixated in unfamiliar territory: Mapping estimates across typical and atypical number lines Journal Article

Quarterly Journal of Experimental Psychology, 73 (2), pp. 279–294, 2020.


@article{DiLonardo2020,
title = {Fixated in unfamiliar territory: Mapping estimates across typical and atypical number lines},
author = {Sabrina Michelle {Di Lonardo} and Matthew G Huebner and Katherine Newman and Jo-Anne LeFevre},
doi = {10.1177/1747021819881631},
year = {2020},
date = {2020-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {73},
number = {2},
pages = {279--294},
abstract = {Adults (N = 72) estimated the location of target numbers on number lines that varied in numerical range (i.e., typical range 0–10,000 or atypical range 0–7,000) and spatial orientation (i.e., the 0 endpoint on the left [traditional] or on the right [reversed]). Eye-tracking data were used to assess strategy use. Participants made meaningful first fixations on the line, with fixations occurring around the origin for low target numbers and around the midpoint and endpoint for high target numbers. On traditional direction number lines, participants used left-to-right scanning and showed a leftward bias; these effects were reduced for the reverse direction number lines. Participants made fixations around the midpoint for both ranges but were less accurate when estimating target numbers around the midpoint on the 7,000-range number line. Thus, participants are using the internal benchmark (i.e., midpoint) to guide estimates on atypical range number lines, but they have difficulty calculating the midpoint, leading to less accurate estimates. In summary, both range and direction influenced strategy use and accuracy, suggesting that both numerical and spatial processes influence number line estimation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xianglan Chen; Hulin Ren; Yamin Liu; Bendegul Okumus; Anil Bilgihan

Attention to Chinese menus with metaphorical or metonymic names: An eye movement lab experiment Journal Article

International Journal of Hospitality Management, 84 , pp. 1–10, 2020.

@article{Chen2020,
title = {Attention to Chinese menus with metaphorical or metonymic names: An eye movement lab experiment},
author = {Xianglan Chen and Hulin Ren and Yamin Liu and Bendegul Okumus and Anil Bilgihan},
doi = {10.1016/J.IJHM.2019.05.001},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Hospitality Management},
volume = {84},
pages = {1--10},
publisher = {Pergamon},
abstract = {Food is as cultural as it is practical, and names of dishes accordingly have cultural nuances. Menus serve as communication tools between restaurants and their guests, representing the culinary philosophy of the chefs and proprietors involved. The purpose of this experimental lab study is to compare differences of attention paid to textual and pictorial elements of menus with metaphorical and/or metonymic names. Eye movement technology was applied in a 2 × 3 between-subject experiment (n = 40), comparing the strength of visual metaphors (e.g., images of menu items on the menu) and direct textual names in Chinese and English with regard to guests' willingness to purchase the dishes in question. Post-test questionnaires were also employed to assess participants' attitudes toward menu designs. Study results suggest that visual metaphors are more efficient when reflecting a product's strength. Images are shown to positively influence consumers' expectations of taste and enjoyment, garnering the most attention under all six conditions studied here, and constitute the most effective format when Chinese alone names are present. The textual claim increases perception of the strength of menu items along with purchase intention. Metaphorical dish names with bilingual (i.e., Chinese and English) names hold the greatest appeal. This result can be interpreted from the perspective of grounded cognition theory, which suggests that situated simulations and re-enactment of perceptual, motor, and affective processes can support abstract thought. The lab results and survey provide specific theoretical and managerial implications with regard to translating names of Chinese dishes to attract customers' attention to specific menu items.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Soazig Casteau; Daniel T Smith

Covert attention beyond the range of eye-movements: Evidence for a dissociation between exogenous and endogenous orienting Journal Article

Cortex, 122 , pp. 170–186, 2020.

@article{Casteau2020a,
title = {Covert attention beyond the range of eye-movements: Evidence for a dissociation between exogenous and endogenous orienting},
author = {Soazig Casteau and Daniel T Smith},
doi = {10.1016/j.cortex.2018.11.007},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {170--186},
publisher = {Elsevier Ltd},
abstract = {The relationship between covert shifts of attention and the oculomotor system has been the subject of numerous studies. A widely held view, known as Premotor Theory, is that covert attention depends upon activation of the oculomotor system. However, recent work has argued that Premotor Theory is only true for covert, exogenous orienting of attention and that covert endogenous orienting is largely independent of the oculomotor system. To address this issue we examined how endogenous and exogenous covert orienting of attention was affected when stimuli were presented at a location outside the range of saccadic eye movements. Results from Experiment 1 showed that exogenous covert orienting was abolished when stimuli were presented beyond the range of saccadic eye movements, but preserved when stimuli were presented within this range. In contrast, in Experiment 2 endogenous covert orienting was preserved when stimuli appeared beyond the saccadic range. Finally, Experiment 3 confirmed the observations of Experiments 1 and 2. Our results demonstrate that exogenous, covert orienting is limited to the range of overt saccadic eye movements, whereas covert endogenous orienting is not. These results are consistent with a weak, exogenous-only version of Premotor Theory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Soazig Casteau; Daniel T Smith

On the link between attentional search and the oculomotor system: Is preattentive search restricted to the range of eye movements? Journal Article

Attention, Perception, & Psychophysics, pp. 1–15, 2020.

@article{Casteau2020,
title = {On the link between attentional search and the oculomotor system: Is preattentive search restricted to the range of eye movements?},
author = {Soazig Casteau and Daniel T Smith},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, & Psychophysics},
pages = {1--15},
publisher = {Springer},
abstract = {It has been proposed that covert visual search can be fast, efficient, and stimulus driven, particularly when the target is defined by a salient single feature, or slow, inefficient, and effortful when the target is defined by a nonsalient conjunction of features. This distinction between fast, stimulus-driven orienting and slow, effortful orienting can be related to the distinction between exogenous spatial attention and endogenous spatial attention. Several studies have shown that exogenous, covert orienting is limited to the range of saccadic eye movements, whereas covert endogenous orienting is independent of the range of saccadic eye movements. The current study examined whether covert visual search is affected in a similar way. Experiment 1 showed that covert visual search for feature singletons was impaired when stimuli were presented beyond the range of saccadic eye movements, whereas conjunction search was unaffected by array position. Experiment 2 replicated and extended this effect by measuring search times at 6 eccentricities. The impairment in covert feature search emerged only when stimuli crossed the effective oculomotor range and remained stable for locations further into the periphery, ruling out the possibility that the results of Experiment 1 were due to a failure to fully compensate for the effects of cortical magnification. The findings are interpreted in terms of biased competition and oculomotor theories of spatial attention. It is concluded that, as with covert exogenous orienting, biological constraints on overt orienting in the oculomotor system constrain covert, preattentive search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christophe Carlei; Dirk Kerzel

Looking up improves performance in verbal tasks Journal Article

Laterality: Asymmetries of Body, Brain and Cognition, 25 (2), pp. 198–214, 2020.

@article{Carlei2020,
title = {Looking up improves performance in verbal tasks},
author = {Christophe Carlei and Dirk Kerzel},
doi = {10.1080/1357650X.2019.1646755},
year = {2020},
date = {2020-01-01},
journal = {Laterality: Asymmetries of Body, Brain and Cognition},
volume = {25},
number = {2},
pages = {198--214},
abstract = {Earlier research suggested that gaze direction has an impact on cognitive processing. It is likely that horizontal gaze direction increases activation in specific areas of the contralateral cerebral hemisphere. Consistent with the lateralization of memory functions, we previously showed that shifting gaze to the left improves visuo-spatial short-term memory. In the current study, we investigated the effect of unilateral gaze on verbal processing. We expected better performance with gaze directed to the right because language is lateralized in the left hemisphere. Also, an advantage of gaze directed upward was expected because local processing and object recognition are facilitated in the upper visual field. Observers directed their gaze at one of the corners of the computer screen while they performed lexical decision, grammatical gender and semantic discrimination tasks. Contrary to expectations, we did not observe performance differences between gaze directed to the left or right, which is consistent with the inconsistent literature on horizontal asymmetries with verbal tasks. However, RTs were shorter when observers looked at words in the upper compared to the lower part of the screen, suggesting that looking upwards enhances verbal processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christopher D D Cabrall; Riender Happee; Joost C F De Winter

Prediction of effort and eye movement measures from driving scene components Journal Article

Transportation Research Part F: Traffic Psychology and Behaviour, 68 , pp. 187–197, 2020.

@article{Cabrall2020,
title = {Prediction of effort and eye movement measures from driving scene components},
author = {Christopher D D Cabrall and Riender Happee and Joost C F {De Winter}},
doi = {10.1016/j.trf.2019.11.001},
year = {2020},
date = {2020-01-01},
journal = {Transportation Research Part F: Traffic Psychology and Behaviour},
volume = {68},
pages = {187--197},
publisher = {Elsevier Ltd},
abstract = {For transitions of control in automated vehicles, driver monitoring systems (DMS) may need to discern task difficulty and driver preparedness. Such DMS require models that relate driving scene components, driver effort, and eye measurements. Across two sessions, 15 participants enacted receiving control within 60 randomly ordered dashcam videos (3-second duration) with variations in visible scene components: road curve angle, road surface area, road users, symbols, infrastructure, and vegetation/trees while their eyes were measured for pupil diameter, fixation duration, and saccade amplitude. The subjective measure of effort and the objective measure of saccade amplitude evidenced the highest correlations (r = 0.34 and r = 0.42, respectively) with the scene component of road curve angle. In person-specific regression analyses combining all visual scene components as predictors, average predictive correlations ranged between 0.49 and 0.58 for subjective effort and between 0.36 and 0.49 for saccade amplitude, depending on cross-validation techniques of generalization and repetition. In conclusion, the present regression equations establish quantifiable relations between visible driving scene components with both subjective effort and objective eye movement measures. In future DMS, such knowledge can help inform road-facing and driver-facing cameras to jointly establish the readiness of would-be drivers ahead of receiving control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
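The person-specific regression analysis described in this abstract can be illustrated with a minimal sketch: fit an ordinary-least-squares model predicting rated effort from one scene component (road curve angle) and score it with leave-one-out cross-validation. This is not the authors' code; the data values, and the use of a single predictor rather than all six scene components, are invented for illustration.

```python
def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def pearson_r(us, vs):
    """Pearson correlation between two equal-length sequences."""
    n = len(us)
    mu, mv = sum(us) / n, sum(vs) / n
    cov = sum((u - mu) * (v - mv) for u, v in zip(us, vs))
    su = sum((u - mu) ** 2 for u in us) ** 0.5
    sv = sum((v - mv) ** 2 for v in vs) ** 0.5
    return cov / (su * sv)

# Invented per-clip data: road curve angle (deg) and rated effort (0-10).
curve_angle = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45]
effort      = [1, 2,  2,  3,  4,  4,  6,  5,  7,  8]

# Leave-one-out cross-validation: fit on all clips but one,
# predict the held-out clip, then correlate predictions with ratings.
predictions = []
for i in range(len(curve_angle)):
    a, b = fit_ols(curve_angle[:i] + curve_angle[i + 1:],
                   effort[:i] + effort[i + 1:])
    predictions.append(a + b * curve_angle[i])

r = pearson_r(predictions, effort)
print(f"predictive correlation r = {r:.2f}")
```

With all six scene components the same held-out-prediction loop applies, only with a multiple-regression fit; the study reports predictive correlations between 0.49 and 0.58 for subjective effort under such schemes.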

John Brand; Travis D Masterson; Jennifer A Emond; Reina Lansigan; Diane Gilbert-Diamond

Measuring attentional bias to food cues in young children using a visual search task: An eye-tracking study Journal Article

Appetite, 148 , pp. 1–7, 2020.

@article{Brand2020,
title = {Measuring attentional bias to food cues in young children using a visual search task: An eye-tracking study},
author = {John Brand and Travis D Masterson and Jennifer A Emond and Reina Lansigan and Diane Gilbert-Diamond},
doi = {10.1016/j.appet.2020.104610},
year = {2020},
date = {2020-01-01},
journal = {Appetite},
volume = {148},
pages = {1--7},
publisher = {Elsevier},
abstract = {Objective: Attentional bias to food cues may be a risk factor for childhood obesity, yet there are few paradigms to measure such biases in young children. Therefore, the present work introduces an eye-tracking visual search task to measure attentional bias in young children. Methods: Fifty-one 3-6-year-olds played a game to find a target cartoon character among food (experimental condition) or toy (control condition) distractors. Children completed the experimental and toy conditions on two separate visits in randomized order. Behavioral (response latencies) and eye-tracking measures (time to first fixation, initial gaze duration, cumulative gaze duration) of attention to food and toy cues were computed. Regressions were used to test for attentional bias to food versus toy cues, and whether attentional bias to food cues was related to current BMI z-score. Results: Children spent more cumulative time looking at food versus toy distractors and took longer to locate the target when searching through food versus toy distractors. The faster children fixated on their first food versus toy distractor was associated with higher BMI z-scores. Conclusions: Using a game-based paradigm employing eye-tracking, we found a behavioral attentional bias to food vs. toy distractors in young children. Further, attentional bias to food cues was associated with current BMI z-score.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
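The three eye-tracking measures named in this abstract (time to first fixation, initial gaze duration, cumulative gaze duration) can be computed from an ordered fixation sequence roughly as follows. This is an illustrative sketch, not the study's analysis code; the fixation records and area-of-interest (AOI) labels are invented.

```python
def gaze_measures(fixations, aoi):
    """Return (time_to_first_fixation, initial_gaze_duration,
    cumulative_gaze_duration) in ms for one AOI.

    `fixations` is a time-ordered list of (onset_ms, offset_ms, aoi_label)
    tuples; trial onset is time 0.
    """
    hits = [f for f in fixations if f[2] == aoi]
    if not hits:
        return None, 0, 0
    time_to_first = hits[0][0]
    # Initial gaze duration: summed duration of the consecutive run of
    # fixations on the AOI starting at the first hit, until gaze leaves it.
    initial = 0
    entered = False
    for onset, offset, label in fixations:
        if label == aoi:
            initial += offset - onset
            entered = True
        elif entered:
            break
    # Cumulative gaze duration: total dwell time on the AOI over the trial.
    cumulative = sum(offset - onset for onset, offset, _ in hits)
    return time_to_first, initial, cumulative

# Invented trial: the child looks at a food distractor, then the target,
# then back to a food distractor.
trial = [
    (120, 340, "food"),
    (360, 520, "food"),
    (560, 900, "target"),
    (940, 1100, "food"),
]

print(gaze_measures(trial, "food"))  # → (120, 380, 540)
```

In practice the fixation list would come from a parsed eye-tracker sample report with AOI assignment, but the measure definitions reduce to the arithmetic above.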

Elisabeth Beyersmann; Signy Wegener; Kate Nation; Ayako Prokupzcuk; Hua-chen Wang; Anne Castles

Learning morphologically complex spoken words: Orthographic expectations of embedded stems are formed prior to print exposure Journal Article

Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–13, 2020.

@article{Beyersmann2020,
title = {Learning morphologically complex spoken words: Orthographic expectations of embedded stems are formed prior to print exposure},
author = {Elisabeth Beyersmann and Signy Wegener and Kate Nation and Ayako Prokupzcuk and Hua-chen Wang and Anne Castles},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
pages = {1--13},
abstract = {It is well known that information from spoken language is integrated into reading processes, but the nature of these links and how they are acquired is less well understood. Recent evidence has suggested that predictions about the written form of newly learned spoken words are already generated prior to print exposure. We extend this work to morphologically complex words and ask whether the information that is available in spoken words goes beyond the mappings between phonology and orthography. Adults were taught the oral form of a set of novel morphologically complex words (e.g., “neshing”, “neshed”, “neshes”), with a 2nd set serving as untrained items. Following oral training, participants saw the printed form of the novel word stems for the first time (e.g., nesh), embedded in sentences, and their eye movements were monitored. Half of the stems were allocated a predictable and half an unpredictable spelling. Reading times were shorter for orally trained than untrained stems and for stems with predictable rather than unpredictable spellings. Crucially, there was an interaction between spelling predictability and training. This suggests that orthographic expectations of embedded stems are formed during spoken word learning. Reading aloud and spelling tests complemented the eye movement data, and findings are discussed in the context of theories of reading acquisition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Judith Bek; Ellen Poliakoff; Karen Lander

Measuring emotion recognition by people with Parkinson's disease using eye-tracking with dynamic facial expressions Journal Article

Journal of Neuroscience Methods, 331 , pp. 1–7, 2020.

@article{Bek2020,
title = {Measuring emotion recognition by people with Parkinson's disease using eye-tracking with dynamic facial expressions},
author = {Judith Bek and Ellen Poliakoff and Karen Lander},
doi = {10.1016/j.jneumeth.2019.108524},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience Methods},
volume = {331},
pages = {1--7},
abstract = {Background: Motion is an important cue to emotion recognition, and it has been suggested that we recognize emotions via internal simulation of others' expressions. There is a reduction of facial expression in Parkinson's disease (PD), which may influence the ability to use motion to recognise emotions in others. However, the majority of previous work in PD has used only static expressions. Moreover, few studies have used eye-tracking to explore emotion processing in PD. New method: We measured accuracy and eye movements in people with PD and healthy controls when identifying emotions from both static and dynamic facial expressions. Results: The groups did not differ overall in emotion recognition accuracy, but motion significantly increased recognition only in the control group. Participants made fewer and longer fixations when viewing dynamic expressions, and interest area analysis revealed increased gaze to the mouth region and decreased gaze to the eyes for dynamic stimuli, although the latter was specific to the control group. Comparison with existing methods: Ours is the first study to directly compare recognition of static and dynamic emotional expressions in PD using eye-tracking, revealing subtle differences between groups that may otherwise be undetected. Conclusions: It is feasible and informative to use eye-tracking with dynamic expressions to investigate emotion recognition in PD. Our findings suggest that people with PD may differ from healthy older adults in how they utilise motion during facial emotion recognition. Nonetheless, gaze patterns indicate some effects of motion on emotional processing, highlighting the need for further investigation in this area.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Valerie M Beck; Timothy J Vickery

Oculomotor capture reveals trial-by-trial neural correlates of attentional guidance by contents of visual working memory Journal Article

Cortex, 122 , pp. 159–169, 2020.

@article{Beck2020,
title = {Oculomotor capture reveals trial-by-trial neural correlates of attentional guidance by contents of visual working memory},
author = {Valerie M Beck and Timothy J Vickery},
doi = {10.1016/j.cortex.2018.09.017},
year = {2020},
date = {2020-01-01},
journal = {Cortex},
volume = {122},
pages = {159--169},
publisher = {Elsevier Ltd},
abstract = {Evidence from attentional and oculomotor capture, contingent capture, and other paradigms suggests that mechanisms supporting human visual working memory (VWM) and visual attention are intertwined. Features held in VWM bias guidance toward matching items even when those features are task irrelevant. However, the neural basis of this interaction is underspecified. Prior examinations using fMRI have primarily relied on coarse comparisons across experimental conditions that produce varying amounts of capture. To examine the neural dynamics of attentional capture on a trial-by-trial basis, we applied an oculomotor paradigm that produced discrete measures of capture. On each trial, subjects were shown a memory item, followed by a blank retention interval, then a saccade target that appeared to the left or right. On some trials, an irrelevant distractor appeared above or below fixation. Once the saccade target was fixated, subjects completed a forced-choice memory test. Critically, either the target or distractor could match the feature held in VWM. Although task irrelevant, this manipulation produced differences in behavior: participants were more likely to saccade first to an irrelevant VWM-matching distractor compared with a non-matching distractor – providing a discrete measure of capture. We replicated this finding while recording eye movements and scanning participants' brains using fMRI. To examine the neural basis of oculomotor capture, we separately modeled the retention interval for capture and non-capture trials within the distractor-match condition. We found that frontal activity, including anterior cingulate cortex and superior frontal gyrus regions, differentially predicted subsequent oculomotor capture by a memory-matching distractor. Other regions previously implicated as involved in attentional capture by VWM-matching items showed no differential activity across capture and non-capture trials, even at a liberal threshold. Our findings demonstrate the power of trial-by-trial analyses of oculomotor capture as a means to examine the underlying relationship between VWM and attentional guidance systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yasaman Bagherzadeh; Daniel Baldauf; Dimitrios Pantazis; Robert Desimone

Alpha synchrony and the neurofeedback control of spatial attention Journal Article

Neuron, 105 , pp. 1–11, 2020.

@article{Bagherzadeh2020,
title = {Alpha synchrony and the neurofeedback control of spatial attention},
author = {Yasaman Bagherzadeh and Daniel Baldauf and Dimitrios Pantazis and Robert Desimone},
doi = {10.1016/j.neuron.2019.11.001},
year = {2020},
date = {2020-01-01},
journal = {Neuron},
volume = {105},
pages = {1--11},
publisher = {Elsevier Inc.},
abstract = {Decreases in alpha synchronization are correlated with enhanced attention, whereas alpha increases are correlated with inattention. However, correlation is not causality, and synchronization may be a byproduct of attention rather than a cause. To test for a causal role of alpha synchrony in attention, we used MEG neurofeedback to train subjects to manipulate the ratio of alpha power over the left versus right parietal cortex. We found that a comparable alpha asymmetry developed over the visual cortex. The alpha training led to corresponding asymmetrical changes in visually evoked responses to probes presented in the two hemifields during training. Thus, reduced alpha was associated with enhanced sensory processing. Testing after training showed a persistent bias in attention in the expected directions. The results support the proposal that alpha synchrony plays a causal role in modulating attention and visual processing, and alpha training could be used for testing hypotheses about synchrony.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

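The neurofeedback signal in this study was the ratio of alpha power over left versus right parietal sensors. A minimal sketch of one common alpha asymmetry index, computed from a simple periodogram (the function names, the 8–14 Hz band edges, and the (R − L)/(R + L) form are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def alpha_band_power(signal, fs, band=(8.0, 14.0)):
    """Power in the alpha band from a simple periodogram of one channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def alpha_asymmetry(left, right, fs):
    """(R - L) / (R + L): positive when right-hemisphere alpha dominates."""
    l = alpha_band_power(left, fs)
    r = alpha_band_power(right, fs)
    return (r - l) / (r + l)
```

With a strong 10 Hz oscillation on the left channel and a weaker one on the right, the index is negative, consistent with left-dominant alpha.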

Nicolai D Ayasse; Arthur Wingfield

Anticipatory baseline pupil diameter is sensitive to differences in hearing thresholds Journal Article

Frontiers in Psychology, 10 , pp. 1–7, 2020.

@article{Ayasse2020,
title = {Anticipatory baseline pupil diameter is sensitive to differences in hearing thresholds},
author = {Nicolai D Ayasse and Arthur Wingfield},
doi = {10.3389/fpsyg.2019.02947},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {10},
pages = {1--7},
abstract = {Task-evoked changes in pupil dilation have long been used as a physiological index of cognitive effort. Unlike this response, which is measured during or after an experimental trial, the baseline pupil dilation (BPD) is a measure taken prior to an experimental trial. As such, it is considered to reflect an individual's arousal level in anticipation of an experimental trial. We report data for 68 participants, ages 18 to 89, whose hearing acuity ranged from normal hearing to a moderate hearing loss, tested over a series of 160 trials on an auditory sentence comprehension task. Results showed that BPDs progressively declined over the course of the experimental trials, with participants with poorer pure tone detection thresholds showing a steeper rate of decline than those with better thresholds. Data showed this slope difference to be due to participants with poorer hearing having larger BPDs than those with better hearing at the start of the experiment, but with their BPDs approaching that of the better hearing participants by the end of the 160 trials. A finding of increasing response accuracy over trials was seen as inconsistent with a fatigue or reduced task engagement account of the diminishing BPDs. Rather, the present results imply that BPD reflects a heightened arousal level in poorer-hearing participants in anticipation of a task that demands accurate speech perception, a concern that dissipates over trials with task success. These data taken with others suggest that the baseline pupillary response may not reflect a single construct.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

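The per-participant rate of BPD decline described in this abstract is a slope across trials. A minimal sketch of estimating it by least squares (synthetic input and the function name are illustrative, not the authors' analysis code):

```python
import numpy as np

def bpd_slope(bpd_per_trial):
    """Least-squares slope of baseline pupil diameter across trials.

    A negative slope indicates BPD declining over the session, as reported
    for the 160-trial comprehension task; comparing slopes across groups
    would index the hearing-threshold effect.
    """
    trials = np.arange(len(bpd_per_trial))
    slope, _intercept = np.polyfit(trials, bpd_per_trial, 1)
    return slope
```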

Ramina Adam; Kevin Johnston; Ravi S Menon; Stefan Everling

Functional reorganization during the recovery of contralesional target selection deficits after prefrontal cortex lesions in macaque monkeys Journal Article

NeuroImage, 207 , pp. 1–17, 2020.

@article{Adam2020,
title = {Functional reorganization during the recovery of contralesional target selection deficits after prefrontal cortex lesions in macaque monkeys},
author = {Ramina Adam and Kevin Johnston and Ravi S Menon and Stefan Everling},
doi = {10.1016/j.neuroimage.2019.116339},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage},
volume = {207},
pages = {1--17},
publisher = {Elsevier Ltd},
abstract = {Visual extinction has been characterized by the failure to respond to a visual stimulus in the contralesional hemifield when presented simultaneously with an ipsilesional stimulus (Corbetta and Shulman, 2011). Unilateral damage to the macaque frontoparietal cortex commonly leads to deficits in contralesional target selection that resemble visual extinction. Recently, we showed that macaque monkeys with unilateral lesions in the caudal prefrontal cortex (PFC) exhibited contralesional target selection deficits that recovered over 2–4 months (Adam et al., 2019). Here, we investigated the longitudinal changes in functional connectivity (FC) of the frontoparietal network after a small or large right caudal PFC lesion in four macaque monkeys. We collected ultra-high field resting-state fMRI at 7-T before the lesion and at weeks 1–16 post-lesion and compared the functional data with behavioural performance on a free-choice saccade task. We found that the pattern of frontoparietal network FC changes depended on lesion size, such that the recovery of contralesional extinction was associated with an initial increase in network FC that returned to baseline in the two small lesion monkeys, whereas FC continued to increase throughout recovery in the two monkeys with a larger lesion. We also found that the FC between contralesional dorsolateral PFC and ipsilesional parietal cortex correlated with behavioural recovery and that the contralesional dorsolateral PFC showed increasing degree centrality with the frontoparietal network. These findings suggest that both the contralesional and ipsilesional hemispheres play an important role in the recovery of function. Importantly, optimal compensation after large PFC lesions may require greater recruitment of distant and intact areas of the frontoparietal network, whereas recovery from smaller lesions was supported by a normalization of the functional network.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


2019

Ying Joey Zhou; Alexis Pérez-Bellido; Saskia Haegens; Floris P de Lange

Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.

@article{Zhou2019c,
title = {Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study},
author = {Ying Joey Zhou and Alexis Pérez-Bellido and Saskia Haegens and Floris P de Lange},
doi = {10.1162/jocn_a_01511},
year = {2019},
date = {2019-12-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Perceptual expectations can change how a visual stimulus is perceived. Recent studies have shown mixed results in terms of whether expectations modulate sensory representations. Here, we used a statistical learning paradigm to study the temporal characteristics of perceptual expectations. We presented participants with pairs of object images organized in a predictive manner and then recorded their brain activity with magnetoencephalography while they viewed expected and unexpected image pairs on the subsequent day. We observed stronger alpha-band (7–14 Hz) activity in response to unexpected compared with expected object images. Specifically, the alpha-band modulation occurred as early as the onset of the stimuli and was most pronounced in left occipito-temporal cortex. Given that the differential response to expected versus unexpected stimuli occurred in sensory regions early in time, our results suggest that expectations modulate perceptual decision-making by changing the sensory response elicited by the stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sijia Zhao; Maria Chait; Fred Dick; Peter Dayan; Shigeto Furukawa; Hsin-I Liao

Pupil-linked phasic arousal evoked by violation but not emergence of regularity within rapid sound sequences Journal Article

Nature Communications, 10 , pp. 1–16, 2019.

@article{Zhao2019b,
title = {Pupil-linked phasic arousal evoked by violation but not emergence of regularity within rapid sound sequences},
author = {Sijia Zhao and Maria Chait and Fred Dick and Peter Dayan and Shigeto Furukawa and Hsin-I Liao},
doi = {10.1038/s41467-019-12048-1},
year = {2019},
date = {2019-12-01},
journal = {Nature Communications},
volume = {10},
pages = {1--16},
publisher = {Springer Science and Business Media LLC},
abstract = {The ability to track the statistics of our surroundings is a key computational challenge. A prominent theory proposes that the brain monitors for unexpected uncertainty: events which deviate substantially from model predictions, indicating model failure. Norepinephrine is thought to play a key role in this process by serving as an interrupt signal, initiating model-resetting. However, this evidence comes from paradigms where participants actively monitored stimulus statistics. To determine whether Norepinephrine routinely reports the statistical structure of our surroundings, even when not behaviourally relevant, we used rapid tone-pip sequences that contained salient pattern-changes associated with abrupt structural violations vs. emergence of regular structure. Phasic pupil dilations (PDR) were monitored to assess Norepinephrine. We reveal a remarkable specificity: When not behaviourally relevant, only abrupt structural violations evoke a PDR. The results demonstrate that Norepinephrine tracks unexpected uncertainty on rapid time scales relevant to sensory signals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Felicia Zhang; Sagi Jaffe-Dax; Robert C Wilson; Lauren L Emberson

Prediction in infants and adults: A pupillometry study Journal Article

Developmental Science, 22 , pp. 1–9, 2019.

@article{Zhang2019g,
title = {Prediction in infants and adults: A pupillometry study},
author = {Felicia Zhang and Sagi Jaffe-Dax and Robert C Wilson and Lauren L Emberson},
doi = {10.1111/desc.12780},
year = {2019},
date = {2019-12-01},
journal = {Developmental Science},
volume = {22},
pages = {1--9},
publisher = {John Wiley & Sons, Ltd},
abstract = {Adults use both bottom-up sensory inputs and top-down signals to generate predictions about future sensory inputs. Infants have also been shown to make predictions with simple stimuli and recent work has suggested top-down processing is available early in infancy. However, it is unknown whether this indicates that top-down prediction is an ability that is continuous across the lifespan or whether an infant's ability to predict is different from an adult's, qualitatively or quantitatively. We employed pupillometry to provide a direct comparison of prediction abilities across these disparate age groups. Pupil dilation response (PDR) was measured in 6-month-olds and adults as they completed an identical implicit learning task designed to help learn associations between sounds and pictures. We found significantly larger PDR for visual omission trials (i.e. trials that violated participants' predictions without the presentation of new stimuli to control for bottom-up signals) compared to visual present trials (i.e. trials that confirmed participants' predictions) in both age groups. Furthermore, a computational learning model that is closely linked to prediction error (Rescorla-Wagner model) demonstrated similar learning trajectories suggesting a continuity of predictive capacity and learning across the two age groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

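The Rescorla-Wagner model referenced in this abstract updates associative strength on each trial by a learning rate times the prediction error. A minimal sketch of that delta rule (the learning rate, initial value, and outcome coding are illustrative assumptions, not the authors' fitted parameters):

```python
def rescorla_wagner(outcomes, alpha=0.3, v0=0.0):
    """Return associative strength after each trial under the delta rule.

    outcomes: 1.0 when the predicted picture appears, 0.0 when it is omitted.
    alpha: learning rate scaling the prediction error (outcome - v).
    """
    v, history = v0, []
    for outcome in outcomes:
        v += alpha * (outcome - v)  # update by scaled prediction error
        history.append(v)
    return history

# Associative strength grows toward 1.0 over repeated sound-picture pairings
strengths = rescorla_wagner([1.0] * 5)
```

Larger prediction errors early in learning produce larger updates, which is the link to the omission-evoked pupil responses the paper reports.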

Dexiang Zhang; Jukka Hyönä; Lei Cui; Zhaoxia Zhu; Shouxin Li

Effects of task instructions and topic signaling on text processing among adult readers with different reading styles: An eye-tracking study Journal Article

Learning and Instruction, 64 , pp. 1–15, 2019.

@article{Zhang2019b,
title = {Effects of task instructions and topic signaling on text processing among adult readers with different reading styles: An eye-tracking study},
author = {Dexiang Zhang and Jukka Hyönä and Lei Cui and Zhaoxia Zhu and Shouxin Li},
doi = {10.1016/j.learninstruc.2019.101246},
year = {2019},
date = {2019-12-01},
journal = {Learning and Instruction},
volume = {64},
pages = {1--15},
publisher = {Elsevier BV},
abstract = {Effects of task instructions and topic signaling on text processing among adult readers with different reading styles were studied by eye-tracking. In Experiment 1, readers read two multiple-topic expository texts guided either by a summary or a verification task. In Experiment 2, readers read a text with or without the topic sentences underlined. Four types of readers emerged: topic structure processors (TSPs), fast linear readers (FLRs), slow linear readers (SLRs), and nonselective reviewers (NSRs). TSPs paid ample fixation time on topic sentences regardless of their signaling. FLRs were characterized by fast first-pass reading, little rereading of previous text, and some signs of structure processing. The common feature of SLRs and NSRs was their slow first-pass reading. They differed from each other in that NSRs were characterized by spending ample time also during second-pass reading. They only showed some signs of topic structure processing when cued by task instructions or topic signaling.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Isabel M Vanegas; Annabelle Blangero; James E Galvin; Alessandro Di Rocco; Angelo Quartarone; Felice M Ghilardi; Simon P Kelly

Altered dynamics of visual contextual interactions in Parkinson's disease Journal Article

npj Parkinson's Disease, 5 , pp. 1–8, 2019.

@article{Vanegas2019,
title = {Altered dynamics of visual contextual interactions in Parkinson's disease},
author = {Isabel M Vanegas and Annabelle Blangero and James E Galvin and Alessandro {Di Rocco} and Angelo Quartarone and Felice M Ghilardi and Simon P Kelly},
doi = {10.1038/s41531-019-0085-5},
year = {2019},
date = {2019-12-01},
journal = {npj Parkinson's Disease},
volume = {5},
pages = {1--8},
publisher = {Nature Publishing Group},
abstract = {Over the last decades, psychophysical and electrophysiological studies in patients and animal models of Parkinson's disease (PD) have consistently revealed a number of visual abnormalities. In particular, specific alterations of contrast sensitivity curves, electroretinogram (ERG), and visual-evoked potentials (VEP) have been attributed to dopaminergic retinal depletion. However, fundamental mechanisms of cortical visual processing, such as normalization or “gain control” computations, have not yet been examined in PD patients. Here, we measured electrophysiological indices of gain control in both space (surround suppression) and time (sensory adaptation) in PD patients based on steady-state VEP (ssVEP). Compared with controls, patients exhibited a significantly higher initial ssVEP amplitude that quickly decayed over time, and greater relative suppression of ssVEP amplitude as a function of surrounding stimulus contrast. Meanwhile, EEG frequency spectra were broadly elevated in patients relative to controls. Thus, contrary to what might be expected given the reduced contrast sensitivity often reported in PD, visual neural responses are not weaker; rather, they are initially larger but undergo an exaggerated degree of spatial and temporal gain control and are embedded within a greater background noise level. These differences may reflect cortical mechanisms that compensate for dysfunctional center-surround interactions at the retinal level.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Hiroshi Ueda; Naotoshi Abekawa; Sho Ito; Hiroaki Gomi

Distinct temporal developments of visual motion and position representations for multi-stream visuomotor coordination Journal Article

Scientific Reports, 9 , pp. 1–6, 2019.

@article{Ueda2019,
title = {Distinct temporal developments of visual motion and position representations for multi-stream visuomotor coordination},
author = {Hiroshi Ueda and Naotoshi Abekawa and Sho Ito and Hiroaki Gomi},
doi = {10.1038/s41598-019-48535-0},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--6},
publisher = {Nature Publishing Group},
abstract = {A fundamental but controversial question in the information coding of a moving visual target is which of the ‘motion' or ‘position' signals is employed in the brain for producing quick motor reactions. The prevailing theory assumed that visually guided reaching is always driven via a target position representation influenced by various motion signals (e.g., target texture and surroundings). To rigorously examine this theory, we manipulated the nature of the influence of internal texture motion on the position representation of the target in reaching correction tasks. By focusing on the difference in illusory position shift of targets with soft and hard edges, we succeeded in extracting the temporal development of an indirect effect ascribed only to changes in position representation. Our data revealed that the onset of the indirect effect is significantly slower than the adjustment onset itself. This evidence indicates multi-stream processing in visuomotor control: fast and direct contribution of visual motion for quick action initiation, and relatively slow contribution of position representation updated by relevant motion signals for continuous action regulation. The distinctive visuomotor mechanism would be crucial in successfully interacting with time-varying environments in the real world.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1038/s41598-019-48535-0


Martin Szinte; Michael Puntiroli; Heiner Deubel

The spread of presaccadic attention depends on the spatial configuration of the visual scene Journal Article

Scientific Reports, 9 , pp. 1–11, 2019.


@article{Szinte2019,
title = {The spread of presaccadic attention depends on the spatial configuration of the visual scene},
author = {Martin Szinte and Michael Puntiroli and Heiner Deubel},
doi = {10.1038/s41598-019-50541-1},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--11},
publisher = {Nature Publishing Group},
abstract = {When preparing a saccade, attentional resources are focused at the saccade target and its immediate vicinity. Here we show that this does not hold true when saccades are prepared toward a recently extinguished target. We obtained detailed maps of orientation sensitivity when participants prepared a saccade toward a target that either remained on the screen or disappeared before the eyes moved. We found that attention was mainly focused on the immediate surround of the visible target and spread to more peripheral locations as a function of the distance from the cue and the delay between the target's disappearance and the saccade. Interestingly, this spread was not accompanied with a spread of the saccade endpoint. These results suggest that presaccadic attention and saccade programming are two distinct processes that can be dissociated as a function of their interaction with the spatial configuration of the visual scene.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Katarzyna Stachowiak-Szymczak; Paweł Korpal

Interpreting accuracy and visual processing of numbers in professional and student interpreters: An eye-tracking study Journal Article

Across Languages and Cultures, 20 (2), pp. 235–251, 2019.


@article{Stachowiak-Szymczak2019,
title = {Interpreting accuracy and visual processing of numbers in professional and student interpreters: An eye-tracking study},
author = {Katarzyna Stachowiak-Szymczak and Pawe{ł} Korpal},
doi = {10.1556/084.2019.20.2.5},
year = {2019},
date = {2019-12-01},
journal = {Across Languages and Cultures},
volume = {20},
number = {2},
pages = {235--251},
abstract = {Simultaneous interpreting is a cognitively demanding task, based on performing several activities concurrently (Gile 1995; Seeber 2011). While multitasking itself is challenging, there are numerous tasks which make interpreting even more difficult, such as the rendering of numbers and proper names, or dealing with a speaker's strong accent (Gile 2009). Among these, number interpreting is cognitively taxing since numerical data cannot be derived from the context and need to be rendered in a word-for-word manner (Mazza 2001). In our study, we aimed to examine the cognitive load involved in number interpreting and to verify whether access to visual materials in the form of slides increases number interpreting accuracy in simultaneous interpreting performed by professional interpreters (N = 26) and interpreting trainees (N = 22). We used a remote EyeLink 1000+ eye-tracker to measure fixation count, mean fixation duration, and gaze time. The participants interpreted two short speeches from English into Polish, both containing 10 numerals. Slides were provided for one of the presentations. Our results show that novices are characterised by longer fixations and provide a less accurate interpretation than professional interpreters. In addition, access to slides increases number interpreting accuracy. The results obtained might be a valuable contribution to studies on visual processing in simultaneous interpreting, number interpreting as a competence, as well as interpreter training.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Michele Scaltritti; Aliaksei Miniukovich; Paola Venuti; Remo Job; Antonella De Angeli; Simone Sulpizio

Investigating effects of typographic variables on webpage reading through eye movements Journal Article

Scientific Reports, 9 , pp. 1–12, 2019.


@article{Scaltritti2019,
title = {Investigating effects of typographic variables on webpage reading through eye movements},
author = {Michele Scaltritti and Aliaksei Miniukovich and Paola Venuti and Remo Job and Antonella {De Angeli} and Simone Sulpizio},
doi = {10.1038/s41598-019-49051-x},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--12},
publisher = {Nature Publishing Group},
abstract = {Webpage reading is ubiquitous in daily life. As Web technologies allow for a large variety of layouts and visual styles, the many formatting options may lead to poor design choices, including low readability. This research capitalizes on the existing readability guidelines for webpage design to outline several visuo-typographic variables and explore their effect on eye movements during webpage reading. Participants included children and adults, and for both groups typical readers and readers with dyslexia were considered. Actual webpages, rather than artificial ones, served as stimuli. This allowed us to test multiple typographic variables in combination and in their typical ranges rather than in possibly unrealistic configurations. Several typographic variables displayed a significant effect on eye movements and reading performance. The effect was mostly homogeneous across the four groups, with a few exceptions. Besides supporting the notion that a few empirically-driven adjustments to the texts' visual appearance can facilitate reading across different populations, the results also highlight the challenge of making digital texts accessible to readers with dyslexia. Theoretically, the results highlight the importance of low-level visual factors, corroborating the emphasis of recent psychological models on visual attention and crowding in reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maria C Romero; Marco Davare; Marcelo Armendariz; Peter Janssen

Neural effects of transcranial magnetic stimulation at the single-cell level Journal Article

Nature Communications, 10 (1), pp. 1–11, 2019.


@article{Romero2019,
title = {Neural effects of transcranial magnetic stimulation at the single-cell level},
author = {Maria C Romero and Marco Davare and Marcelo Armendariz and Peter Janssen},
doi = {10.1038/s41467-019-10638-7},
year = {2019},
date = {2019-12-01},
journal = {Nature Communications},
volume = {10},
number = {1},
pages = {1--11},
publisher = {Nature Publishing Group},
abstract = {Transcranial magnetic stimulation (TMS) can non-invasively modulate neural activity in humans. Despite three decades of research, the spatial extent of the cortical area activated by TMS is still controversial. Moreover, how TMS interacts with task-related activity during motor behavior is unknown. Here, we applied single-pulse TMS over macaque parietal cortex while recording single-unit activity at various distances from the center of stimulation during grasping. The spatial extent of TMS-induced activation is remarkably restricted, affecting the spiking activity of single neurons in an area of cortex measuring less than 2 mm in diameter. In task-related neurons, TMS evokes a transient excitation followed by reduced activity, paralleled by a significantly longer grasping time. Furthermore, TMS-induced activity and task-related activity do not summate in single neurons. These results furnish crucial experimental evidence for the neural effects of TMS at the single-cell level and uncover the neural underpinnings of behavioral effects of TMS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mohsen Rakhshan; Vivian Lee; Emily Chu; Lauren Harris; Lillian Laiks; Peyman Khorsand; Alireza Soltani

Influence of expected reward on temporal order judgment Journal Article

Journal of Cognitive Neuroscience, pp. 1–17, 2019.


@article{Rakhshan2019,
title = {Influence of expected reward on temporal order judgment},
author = {Mohsen Rakhshan and Vivian Lee and Emily Chu and Lauren Harris and Lillian Laiks and Peyman Khorsand and Alireza Soltani},
doi = {10.1162/jocn_a_01516},
year = {2019},
date = {2019-12-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--17},
publisher = {MIT Press - Journals},
abstract = {Perceptual decision-making has been shown to be influenced by reward expected from alternative options or actions, but the underlying neural mechanisms are currently unknown. More specifically, it is debated whether reward effects are mediated through changes in sensory processing, later stages of decision-making, or both. To address this question, we conducted two experiments in which human participants made saccades to what they perceived to be either the first or second of two visually identical but asynchronously presented targets while we manipulated expected reward from correct and incorrect responses on each trial. By comparing reward-induced bias in target selection (i.e., reward bias) during the two experiments, we determined whether reward caused changes in sensory or decision-making processes. We found similar reward biases in the two experiments, indicating that reward information mainly influenced later stages of decision-making. Moreover, the observed reward biases were independent of the individual's sensitivity to sensory signals. This suggests that reward effects were determined heuristically via modulation of decision-making processes instead of sensory processing. To further explain our findings and uncover plausible neural mechanisms, we simulated our experiments with a cortical network model and tested alternative mechanisms for how reward could exert its influence. We found that our experimental observations are more compatible with reward-dependent input to the output layer of the decision circuit. Together, our results suggest that, during a temporal judgment task, reward exerts its influence via changing later stages of decision-making (i.e., response bias) rather than early sensory processing (i.e., perceptual bias).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Luke O'Gorman; Chelsea S Norman; Luke Michaels; Tutte Newall; Andrew H Crosby; Christopher Mattocks; Angela J Cree; Andrew J Lotery; Emma L Baple; Arjuna J Ratnayaka; Diana Baralle; Helena Lee; Daniel Osborne; Fatima Shawkat; Jane Gibson; Sarah Ennis; Jay E Self

A small gene sequencing panel realises a high diagnostic rate in patients with congenital nystagmus following basic phenotyping Journal Article

Scientific Reports, 9 , pp. 1–8, 2019.


@article{OGorman2019,
title = {A small gene sequencing panel realises a high diagnostic rate in patients with congenital nystagmus following basic phenotyping},
author = {Luke O'Gorman and Chelsea S Norman and Luke Michaels and Tutte Newall and Andrew H Crosby and Christopher Mattocks and Angela J Cree and Andrew J Lotery and Emma L Baple and Arjuna J Ratnayaka and Diana Baralle and Helena Lee and Daniel Osborne and Fatima Shawkat and Jane Gibson and Sarah Ennis and Jay E Self},
doi = {10.1038/s41598-019-49368-7},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--8},
publisher = {Nature Publishing Group},
abstract = {Nystagmus is a disorder of uncontrolled eye movement and can occur as an isolated trait (idiopathic INS, IINS) or as part of multisystem disorders such as albinism, significant visual disorders or neurological disease. Eighty-one unrelated patients with nystagmus underwent routine ocular phenotyping using commonly available phenotyping methods and were grouped into four sub-cohorts according to the level of phenotyping information gained and their findings. DNA was extracted and sequenced using a broad utility next generation sequencing (NGS) gene panel. A clinical subpanel of genes for nystagmus/albinism was utilised and likely causal variants were prioritised according to methods currently employed by clinical diagnostic laboratories. We determine the likely underlying genetic cause for 43.2% of participants, with similar yields regardless of prior phenotyping. This study demonstrates that a diagnostic workflow combining basic ocular phenotyping and a clinically available targeted NGS panel can provide a high diagnostic yield for patients with infantile nystagmus, enabling access to disease-specific management at a young age and reducing the need for multiple costly, often invasive tests. By describing diagnostic yield for groups of patients with incomplete phenotyping data, it also permits the subsequent design of ‘real-world’ diagnostic workflows and illustrates the changing role of genetic testing in modern diagnostic workflows for heterogeneous ophthalmic disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Victoria I Nicholls; Geraldine Jean-Charles; Junpeng Lao; Peter de Lissa; Roberto Caldara; Sebastien Miellet

Developing attentional control in naturalistic dynamic road crossing situations Journal Article

Scientific Reports, 9 , pp. 1–10, 2019.


@article{Nicholls2019,
title = {Developing attentional control in naturalistic dynamic road crossing situations},
author = {Victoria I Nicholls and Geraldine Jean-Charles and Junpeng Lao and Peter de Lissa and Roberto Caldara and Sebastien Miellet},
doi = {10.1038/s41598-019-39737-7},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--10},
publisher = {Nature Publishing Group},
abstract = {In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we propose to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities with or without pedestrians. 5–10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children look mainly at the vehicles' appearing point, which is an optimal location to sample diagnostic information for the task. In contrast, 5–10 y/os look more at socially relevant stimuli and attend to moving vehicles further down the trajectory when the traffic density is high. Critically, 5–10 y/o children also make an increased number of crossing decisions compared to 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road crossing task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Hsin-Hung Li; Jasmine Pan; Marisa Carrasco

Presaccadic attention improves or impairs performance by enhancing sensitivity to higher spatial frequencies Journal Article

Scientific Reports, 9 , pp. 1–9, 2019.


@article{Li2019a,
title = {Presaccadic attention improves or impairs performance by enhancing sensitivity to higher spatial frequencies},
author = {Hsin-Hung Li and Jasmine Pan and Marisa Carrasco},
doi = {10.1038/s41598-018-38262-3},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--9},
publisher = {Nature Publishing Group},
abstract = {Right before we move our eyes, visual performance and neural responses for the saccade target are enhanced. This effect, presaccadic attention, is considered to prioritize the saccade target and to enhance behavioral performance for the saccade target. Recent evidence has shown that presaccadic attention modulates the processing of feature information. Hitherto, it remains unknown whether presaccadic modulations on feature information are flexible, to improve performance for the task at hand, or automatic, so that they alter the featural representation similarly regardless of the task. Using a masking procedure, here we report that presaccadic attention can either improve or impair performance depending on the spatial frequency content of the visual input. These counterintuitive modulations were significant at a time window right before saccade onset. Furthermore, merely deploying covert attention within the same temporal interval without preparing a saccade did not affect performance. This study reveals that presaccadic attention not only prioritizes the saccade target, but also automatically modifies its featural representation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Louise Kauffmann; Carole Peyrin; Alan Chauvin; Léa Entzmann; Camille Breuil; Nathalie Guyader

Face perception influences the programming of eye movements Journal Article

Scientific Reports, 9 , pp. 1–14, 2019.


@article{Kauffmann2019,
title = {Face perception influences the programming of eye movements},
author = {Louise Kauffmann and Carole Peyrin and Alan Chauvin and Léa Entzmann and Camille Breuil and Nathalie Guyader},
doi = {10.1038/s41598-018-36510-0},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--14},
publisher = {Nature Publishing Group},
abstract = {Previous studies have shown that face stimuli elicit extremely fast and involuntary saccadic responses toward them, relative to other categories of visual stimuli. In the present study, we further investigated to what extent face stimuli influence the programming and execution of saccades by examining their amplitude. We performed two experiments using a saccadic choice task: two images (one with a face, one with a vehicle) were simultaneously displayed in the left and right visual fields of participants who had to initiate a saccade toward the image (Experiment 1) or toward a cross in the image (Experiment 2) containing a target stimulus (a face or a vehicle). Results revealed shorter saccades toward vehicle than face targets, even if participants were explicitly asked to perform their saccades toward a specific location (Experiment 2). Furthermore, error saccades had smaller amplitude than correct saccades. Further analyses showed that error saccades were interrupted in mid-flight to initiate a concurrently-programmed corrective saccade. Overall, these data suggest that the content of visual stimuli can influence the programming of saccade amplitude, and that efficient online correction of saccades can be performed during the saccadic choice task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chun-Ting Hsu; Roy Clariana; Benjamin Schloss; Ping Li

Neurocognitive signatures of naturalistic reading of scientific texts: A fixation-related fMRI study Journal Article

Scientific Reports, 9 , pp. 1–16, 2019.


@article{Hsu2019,
title = {Neurocognitive signatures of naturalistic reading of scientific texts: A fixation-related fMRI study},
author = {Chun-Ting Hsu and Roy Clariana and Benjamin Schloss and Ping Li},
doi = {10.1038/s41598-019-47176-7},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--16},
publisher = {Nature Publishing Group},
abstract = {How do students gain scientific knowledge while reading expository text? This study examines the underlying neurocognitive basis of textual knowledge structure and individual readers' cognitive differences and reading habits, including the influence of text and reader characteristics, on outcomes of scientific text comprehension. By combining fixation-related fMRI and multiband data acquisition, the study is among the first to consider self-paced naturalistic reading inside the MRI scanner. Our results revealed the underlying neurocognitive patterns associated with information integration of different time scales during text reading, and significant individual differences due to the interaction between text characteristics (e.g., optimality of the textual knowledge structure) and reader characteristics (e.g., electronic device use habits). Individual differences impacted the amount of neural resources deployed for multitasking and information integration for constructing the underlying scientific mental models based on the text being read. Our findings have significant implications for understanding science reading in a population that is increasingly dependent on electronic devices.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Praghajieeth Raajhen Santhana Gopalan; Otto Loberg; Jarmo Arvid Hämäläinen; Paavo H T Leppänen

Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test Journal Article

Scientific Reports, 9 , pp. 1–13, 2019.

@article{Gopalan2019,
title = {Attentional processes in typically developing children as revealed using brain event-related potentials and their source localization in Attention Network Test},
author = {Praghajieeth Raajhen Santhana Gopalan and Otto Loberg and Jarmo Arvid Hämäläinen and Paavo H T Leppänen},
doi = {10.1038/s41598-018-36947-3},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--13},
publisher = {Nature Publishing Group},
abstract = {Attention-related processes include three functional sub-components: alerting, orienting, and inhibition. We investigated these components using EEG-based, brain event-related potentials and their neuronal source activations during the Attention Network Test in typically developing school-aged children. Participants were asked to detect the swimming direction of the centre fish in a group of five fish. The target stimulus was either preceded by a cue (centre, double, or spatial) or no cue. An EEG using 128 electrodes was recorded for 83 children aged 12–13 years. RTs showed significant effects across all three sub-components of attention. Alerting and orienting (responses to double vs non-cued target stimulus and spatially vs centre-cued target stimulus, respectively) resulted in larger N1 amplitude, whereas inhibition (responses to incongruent vs congruent target stimulus) resulted in larger P3 amplitude. Neuronal source activation for the alerting effect was localized in the right anterior temporal and bilateral occipital lobes, for the orienting effect bilaterally in the occipital lobe, and for the inhibition effect in the medial prefrontal cortex and left anterior temporal lobe. Neuronal sources of ERPs revealed that sub-processes related to the attention network are different in children as compared to earlier adult fMRI studies, which was not evident from scalp ERPs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Daniel S Ferreira; Geraldo L B Ramalho; Débora Torres; Alessandra H G Tobias; Mariana T Rezende; Fátima N S Medeiros; Andrea G C Bianchi; Cláudia M Carneiro; Daniela M Ushizima

Saliency-driven system models for cell analysis with deep learning Journal Article

Computer Methods and Programs in Biomedicine, 182 , pp. 1–13, 2019.

@article{Ferreira2019,
title = {Saliency-driven system models for cell analysis with deep learning},
author = {Daniel S Ferreira and Geraldo L B Ramalho and Débora Torres and Alessandra H G Tobias and Mariana T Rezende and Fátima N S Medeiros and Andrea G C Bianchi and Cláudia M Carneiro and Daniela M Ushizima},
doi = {10.1016/j.cmpb.2019.105053},
year = {2019},
date = {2019-12-01},
journal = {Computer Methods and Programs in Biomedicine},
volume = {182},
pages = {1--13},
publisher = {Elsevier BV},
abstract = {Background and objectives: Saliency refers to the visual perception quality that makes objects in a scene stand out from others and attract attention. While computational saliency models can simulate the expert's visual attention, there is little evidence about how these models perform when used to predict the cytopathologist's eye fixations. Saliency models may be the key to instrumenting fast object detection on large Pap smear slides under real noisy conditions, artifacts, and cell occlusions. This paper describes how our computational schemes retrieve regions of interest (ROI) of clinical relevance using visual attention models. We also compare the performance of different computed saliency models as part of cell screening tasks, aiming to design a computer-aided diagnosis system that supports cytopathologists. Method: We record eye fixation maps from cytopathologists at work and compare them with 13 different saliency prediction algorithms, including deep learning. We develop cell-specific convolutional neural networks (CNN) to investigate the impact of bottom-up and top-down factors on saliency prediction from real routine exams. By combining the eye tracking data from pathologists with computed saliency models, we assess the algorithms' reliability in identifying clinically relevant cells. Results: The proposed cell-specific CNN model outperforms all other saliency prediction methods, particularly regarding the number of false positives. Our algorithm also detects the most clinically relevant cells, which are among the three top salient regions, with accuracy above 98% for all diseases, except carcinoma (87%). Bottom-up methods performed satisfactorily, with saliency maps that enabled ROI detection above 75% for carcinoma and 86% for other pathologies. Conclusions: ROI extraction using our saliency prediction methods enabled ranking the most relevant clinical areas within the image, a viable data reduction strategy to guide automatic analyses of Pap smear slides. Top-down factors for saliency prediction on cell images increase the accuracy of the estimated maps, while bottom-up algorithms proved useful for predicting the cytopathologist's eye fixations depending on parameters such as the number of false positives and negatives. Our contributions are a comparison of 13 state-of-the-art saliency models against cytopathologists' visual attention and a method that associates the most conspicuous regions with clinically relevant cells.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Felicity Deamer; Ellen Palmer; Quoc C Vuong; Nicol Ferrier; Andreas Finkelmeyer; Wolfram Hinzen; Stuart Watson

Non-literal understanding and psychosis: Metaphor comprehension in individuals with a diagnosis of schizophrenia Journal Article

Schizophrenia Research: Cognition, 18 , pp. 1–8, 2019.

@article{Deamer2019,
title = {Non-literal understanding and psychosis: Metaphor comprehension in individuals with a diagnosis of schizophrenia},
author = {Felicity Deamer and Ellen Palmer and Quoc C Vuong and Nicol Ferrier and Andreas Finkelmeyer and Wolfram Hinzen and Stuart Watson},
doi = {10.1016/J.SCOG.2019.100159},
year = {2019},
date = {2019-12-01},
journal = {Schizophrenia Research: Cognition},
volume = {18},
pages = {1--8},
publisher = {Elsevier},
abstract = {Previous studies suggest that understanding of non-literal expressions, and in particular metaphors, can be impaired in people with schizophrenia; although it is not clear why. We explored metaphor comprehension capacity using a novel picture selection paradigm; we compared task performance between people with schizophrenia and healthy comparator subjects and we further examined the relationships between the ability to interpret figurative expressions non-literally and performance on a number of other cognitive tasks. Eye-tracking was used to examine task strategy. We showed that even when IQ, years of education, and capacities for theory of mind and associative learning are factored in as covariates, patients are significantly more likely to interpret metaphorical expressions literally, despite eye-tracking findings suggesting that patients are following the same interpretation strategy as healthy controls. Inhibitory control deficits are likely to be one of multiple factors contributing to the poorer performance of our schizophrenia group on the metaphor trials of the picture selection task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Garvin Brod; Jasmin Breitwieser

Lighting the wick in the candle of learning: Generating a prediction stimulates curiosity Journal Article

Science of Learning, 4 , pp. 1–7, 2019.

@article{Brod2019,
title = {Lighting the wick in the candle of learning: Generating a prediction stimulates curiosity},
author = {Garvin Brod and Jasmin Breitwieser},
doi = {10.1038/s41539-019-0056-y},
year = {2019},
date = {2019-12-01},
journal = {Science of Learning},
volume = {4},
pages = {1--7},
abstract = {Curiosity stimulates learning. We tested whether curiosity itself can be stimulated—not by extrinsic rewards but by an intrinsic desire to know whether a prediction holds true. Participants performed a numerical-facts learning task in which they had to generate either a prediction or an example before rating their curiosity and seeing the correct answer. More facts received high-curiosity ratings in the prediction condition, which indicates that generating predictions stimulated curiosity. In turn, high curiosity, compared with low curiosity, was associated with better memory for the correct answer. Concurrent pupillary data revealed that higher curiosity was associated with larger pupil dilation during anticipation of the correct answer. Pupil dilation was further enhanced when participants generated a prediction rather than an example, both during anticipation of the correct answer and in response to seeing it. These results suggest that generating a prediction stimulates curiosity by increasing the relevance of the knowledge gap.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hanna Brinkmann; Louis Williams; Eugene McSorley; Raphael Rosenberg

Does ‘action viewing' really exist? Perceived dynamism and viewing behaviour Journal Article

Art and Perception, pp. 1–22, 2019.

@article{Brinkmann2019,
title = {Does ‘action viewing' really exist? Perceived dynamism and viewing behaviour},
author = {Hanna Brinkmann and Louis Williams and Eugene McSorley and Raphael Rosenberg},
doi = {10.1163/22134913-20191128},
year = {2019},
date = {2019-12-01},
journal = {Art and Perception},
pages = {1--22},
publisher = {Brill},
abstract = {Throughout the 20th century, there have been many different forms of abstract painting. While works by some artists, e.g., Piet Mondrian, are usually described as static, others are described as dynamic, such as Jackson Pollock's ‘action paintings'. Art historians have assumed that beholders not only conceptualise such differences in depicted dynamics but also mirror these in their viewing behaviour. In an interdisciplinary eye-tracking study, we tested this concept through investigating both the localisation of fixations (polyfocal viewing) and the average duration of fixations as well as saccade velocity, duration and path curvature. We showed 30 different abstract paintings to 40 participants — 20 laypeople and 20 experts (art students) — and used self-reporting to investigate the perceived dynamism of each painting and its relationship with (a) the average number and duration of fixations, (b) the average number, duration and velocity of saccades as well as the amplitude and curvature area of saccade paths, and (c) pleasantness and familiarity ratings. We found that the average number of fixations and saccades, saccade velocity, and pleasantness ratings increase with an increase in perceived dynamism ratings. Meanwhile the saccade duration decreased with an increase in perceived dynamism. Additionally, the analysis showed that experts gave higher dynamic ratings compared to laypeople and were more familiar with the artworks. These results indicate that there is a correlation between perceived dynamism in abstract painting and viewing behaviour — something that has long been assumed by art historians but had never been empirically supported.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Rotem Botvinik-Nezer; Roni Iwanir; Felix Holzmeister; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Anna Dreber; Colin F Camerer; Russell A Poldrack; Tom Schonberg

fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study Journal Article

Scientific Data, 6 , pp. 1–9, 2019.

@article{Botvinik-Nezer2019,
title = {fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study},
author = {Rotem Botvinik-Nezer and Roni Iwanir and Felix Holzmeister and Jürgen Huber and Magnus Johannesson and Michael Kirchler and Anna Dreber and Colin F Camerer and Russell A Poldrack and Tom Schonberg},
doi = {10.1038/s41597-019-0113-7},
year = {2019},
date = {2019-12-01},
journal = {Scientific Data},
volume = {6},
pages = {1--9},
publisher = {Nature Publishing Group},
abstract = {There is an ongoing debate about the replicability of neuroimaging research. It was suggested that one of the main reasons for the high rate of false positive results is the many degrees of freedom researchers have during data analysis. In the Neuroimaging Analysis Replication and Prediction Study (NARPS), we aim to provide the first scientific evidence on the variability of results across analysis teams in neuroscience. We collected fMRI data from 108 participants during two versions of the mixed gambles task, which is often used to study decision-making under risk. For each participant, the dataset includes an anatomical (T1 weighted) scan and fMRI as well as behavioral data from four runs of the task. The dataset is shared through OpenNeuro and is formatted according to the Brain Imaging Data Structure (BIDS) standard. Data pre-processed with fMRIprep and quality control reports are also publicly shared. This dataset can be used to study decision-making under risk and to test replicability and interpretability of previous results in the field.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Angela Bartolo; Caroline Claisse; Fabrizia Gallo; Laurent Ott; Adriana Sampaio; Jean-Louis Nandrino

Gestures convey different physiological responses when performed toward and away from the body Journal Article

Scientific Reports, 9 , pp. 1–10, 2019.

@article{Bartolo2019,
title = {Gestures convey different physiological responses when performed toward and away from the body},
author = {Angela Bartolo and Caroline Claisse and Fabrizia Gallo and Laurent Ott and Adriana Sampaio and Jean-Louis Nandrino},
doi = {10.1038/s41598-019-49318-3},
year = {2019},
date = {2019-12-01},
journal = {Scientific Reports},
volume = {9},
pages = {1--10},
publisher = {Nature Publishing Group},
abstract = {We assessed the sympathetic and parasympathetic activation associated to the observation of Pantomime (i.e. the mime of the use of a tool) and Intransitive gestures (i.e. expressive) performed toward (e.g. a comb and “thinking”) and away from the body (e.g. key and “come here”) in a group of healthy participants while both pupil dilation (N = 31) and heart rate variability (N = 33; HF-HRV) were recorded. Large pupil dilation was observed in both Pantomime and Intransitive gestures toward the body; whereas an increase of the vagal suppression was observed in Intransitive gestures away from the body but not in those toward the body. Our results suggest that the space where people act when performing a gesture has an impact on the physiological responses of the observer in relation to the type of social communicative information that the gesture direction conveys, from a more intimate (toward the body) to a more interactive one (away from the body).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ariana R Andrei; Sorin Pojoga; Roger Janz; Valentin Dragoi

Integration of cortical population signals for visual perception Journal Article

Nature Communications, 10 (1), pp. 1–13, 2019.

@article{Andrei2019,
title = {Integration of cortical population signals for visual perception},
author = {Ariana R Andrei and Sorin Pojoga and Roger Janz and Valentin Dragoi},
doi = {10.1038/s41467-019-11736-2},
year = {2019},
date = {2019-12-01},
journal = {Nature Communications},
volume = {10},
number = {1},
pages = {1--13},
publisher = {Nature Publishing Group},
abstract = {Visual stimuli evoke heterogeneous responses across nearby neural populations. These signals must be locally integrated to contribute to perception, but the principles underlying this process are unknown. Here, we exploit the systematic organization of orientation preference in macaque primary visual cortex (V1) and perform causal manipulations to examine the limits of signal integration. Optogenetic stimulation and visual stimuli are used to simultaneously drive two neural populations with overlapping receptive fields. We report that optogenetic stimulation raises firing rates uniformly across conditions, but improves the detection of visual stimuli only when activating cells that are preferentially-tuned to the visual stimulus. Further, we show that changes in correlated variability are exclusively present when the optogenetically and visually-activated populations are functionally-proximal, suggesting that correlation changes represent a hallmark of signal integration. Our results demonstrate that information from functionally-proximal neurons is pooled for perception, but functionally-distal signals remain independent. Primary visual cortical neurons exhibit diverse responses to visual stimuli yet how these signals are integrated during visual perception is not well understood. Here, the authors show that optogenetic stimulation of neurons situated near the visually‐driven population leads to improved orientation detection in monkeys through changes in correlated variability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jacob A Westerberg; Alexander Maier; Geoffrey F Woodman; Jeffrey D Schall

Performance monitoring during visual priming Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.

@article{Westerberg2019a,
title = {Performance monitoring during visual priming},
author = {Jacob A Westerberg and Alexander Maier and Geoffrey F Woodman and Jeffrey D Schall},
doi = {10.1162/jocn_a_01499},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Repetitive performance of single-feature (efficient or pop-out) visual search improves RTs and accuracy. This phenomenon, known as priming of pop-out, has been demonstrated in both humans and macaque monkeys. We investigated the relationship between performance monitoring and priming of pop-out. Neuronal activity in the supplementary eye field (SEF) contributes to performance monitoring and to the generation of performance monitoring signals in the EEG. To determine whether priming depends on performance monitoring, we investigated spiking activity in SEF as well as the concurrent EEG of two monkeys performing a priming of pop-out task. We found that SEF spiking did not modulate with priming. Surprisingly, concurrent EEG did covary with priming. Together, these results suggest that performance monitoring contributes to priming of pop-out. However, this performance monitoring seems not mediated by SEF. This dissociation suggests that EEG indices of performance monitoring arise from multiple, functionally distinct neural generators.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Antonia F Ten Brink; Jasper H Fabius; Nick A Weaver; Tanja C W Nijboer; Stefan Van der Stigchel

Trans-saccadic memory after right parietal brain damage Journal Article

Cortex, 120 , pp. 284–297, 2019.

@article{TenBrink2019a,
title = {Trans-saccadic memory after right parietal brain damage},
author = {Antonia F {Ten Brink} and Jasper H Fabius and Nick A Weaver and Tanja C W Nijboer and Stefan {Van der Stigchel}},
doi = {10.1016/j.cortex.2019.06.006},
year = {2019},
date = {2019-11-01},
journal = {Cortex},
volume = {120},
pages = {284--297},
publisher = {Elsevier BV},
abstract = {INTRODUCTION: Spatial remapping, the process of updating information across eye movements, is an important mechanism for trans-saccadic perception. The right posterior parietal cortex (PPC) is a region that has been associated most strongly with spatial remapping. The aim of the project was to investigate the effect of damage to the right PPC on direction specific trans-saccadic memory. We compared trans-saccadic memory performance for central items that had to be remembered while making a left- versus rightward eye movement, or for items that were remapped within the left versus right visual field. METHODS: We included 9 stroke patients with unilateral right PPC lesions and 31 healthy control subjects. Participants memorized the location of a briefly presented item, had to make one saccade (either towards the left or right, or upward or downward), and subsequently had to decide in what direction the probe had shifted. We used a staircase to adjust task difficulty (i.e., the distance between the memory item and probe). Bayesian repeated measures ANOVAs were used to compare left versus right eye movements and items in the left versus right visual field. RESULTS: In both conditions, patients with right PPC damage showed worse trans-saccadic memory performance compared to healthy control subjects (for the condition with left- and rightward gaze shifts ...)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mariya E Manahova; Eelke Spaak; Floris P de Lange

Familiarity increases processing speed in the visual system Journal Article

Journal of Cognitive Neuroscience, pp. 1–12, 2019.

@article{Manahova2019,
title = {Familiarity increases processing speed in the visual system},
author = {Mariya E Manahova and Eelke Spaak and Floris P de Lange},
doi = {10.1162/jocn_a_01507},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--12},
publisher = {MIT Press - Journals},
abstract = {Familiarity with a stimulus leads to an attenuated neural response to the stimulus. Alongside this attenuation, recent studies have also observed a truncation of stimulus-evoked activity for familiar visual input. One proposed function of this truncation is to rapidly put neurons in a state of readiness to respond to new input. Here, we examined this hypothesis by presenting human participants with target stimuli that were embedded in rapid streams of familiar or novel distractor stimuli at different speeds of presentation, while recording brain activity using magnetoencephalography and measuring behavioral performance. We investigated the temporal and spatial dynamics of signal truncation and whether this phenomenon bears relationship to participants' ability to categorize target items within a visual stream. Behaviorally, target categorization performance was markedly better when the target was embedded within familiar distractors, and this benefit became more pronounced with increasing speed of presentation. Familiar distractors showed a truncation of neural activity in the visual system. This truncation was strongest for the fastest presentation speeds and peaked in progressively more anterior cortical regions as presentation speeds became slower. Moreover, the neural response evoked by the target was stronger when this target was preceded by familiar distractors. Taken together, these findings demonstrate that item familiarity results in a truncated neural response, is associated with stronger processing of relevant target information, and leads to superior perceptual performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Moreno I Coco; Antje Nuthmann; Olaf Dimigen

Fixation-related brain potentials during semantic integration of object–scene information Journal Article

Journal of Cognitive Neuroscience, pp. 1–19, 2019.

@article{Coco2019,
title = {Fixation-related brain potentials during semantic integration of object–scene information},
author = {Moreno I Coco and Antje Nuthmann and Olaf Dimigen},
doi = {10.1162/jocn_a_01504},
year = {2019},
date = {2019-11-01},
journal = {Journal of Cognitive Neuroscience},
pages = {1--19},
publisher = {MIT Press - Journals},
abstract = {In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object ( t) and by the preceding fixation ( t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nahid Zokaei; Alexander G Board; Sanjay G Manohar; Anna C Nobre

Modulation of the pupillary response by the content of visual working memory Journal Article

Proceedings of the National Academy of Sciences, 116 (45), pp. 22802–22810, 2019.

@article{Zokaei2019,
title = {Modulation of the pupillary response by the content of visual working memory},
author = {Nahid Zokaei and Alexander G Board and Sanjay G Manohar and Anna C Nobre},
doi = {10.1073/pnas.1909959116},
year = {2019},
date = {2019-10-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {116},
number = {45},
pages = {22802--22810},
abstract = {Studies of selective attention during perception have revealed modulation of the pupillary response according to the brightness of task-relevant (attended) vs. -irrelevant (unattended) stimuli within a visual display. As a strong test of top-down modulation of the pupil response by selective attention, we asked whether changes in pupil diameter follow internal shifts of attention to memoranda of visual stimuli of different brightness maintained in working memory, in the absence of any visual stimulation. Across 3 studies, we reveal dilation of the pupil when participants orient attention to the memorandum of a dark grating relative to that of a bright grating. The effect occurs even when the attention-orienting cue is independent of stimulus brightness, and even when stimulus brightness is merely incidental and not required for the working-memory task of judging stimulus orientation. Furthermore, relative dilation and constriction of the pupil occurred dynamically and followed the changing temporal expectation that 1 or the other stimulus would be probed across the retention delay. The results provide surprising and consistent evidence that pupil responses are under top-down control by cognitive factors, even when there is no direct adaptive gain for such modulation, since no visual stimuli were presented or anticipated. The results also strengthen the view of sensory recruitment during working memory, suggesting even activation of sensory receptors. The thought-provoking corollary to our findings is that the pupils provide a reliable measure of what is in the focus of mind, thus giving a different meaning to old proverbs about the eyes being a window to the mind.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Toby Wise; Jochen Michely; Peter Dayan; Raymond J Dolan

A computational account of threat-related attentional bias Journal Article

PLOS Computational Biology, 15 (10), pp. 1–21, 2019.

@article{Wise2019,
title = {A computational account of threat-related attentional bias},
author = {Toby Wise and Jochen Michely and Peter Dayan and Raymond J Dolan},
editor = {Michael Browning},
doi = {10.1371/journal.pcbi.1007341},
year = {2019},
date = {2019-10-01},
journal = {PLOS Computational Biology},
volume = {15},
number = {10},
pages = {1--21},
abstract = {Visual selective attention acts as a filter on perceptual information, facilitating learning and inference about important events in an agent's environment. A role for visual attention in reward-based decisions has previously been demonstrated, but it remains unclear how visual attention is recruited during aversive learning, particularly when learning about multiple stimuli concurrently. This question is of particular importance in psychopathology, where enhanced attention to threat is a putative feature of pathological anxiety. Using an aversive reversal learning task that required subjects to learn, and exploit, predictions about multiple stimuli, we show that the allocation of visual attention is influenced significantly by aversive value but not by uncertainty. Moreover, this relationship is bidirectional in that attention biases value updates for attended stimuli, resulting in heightened value estimates. Our findings have implications for understanding biased attention in psychopathology and support a role for learning in the expression of threat-related attentional biases in anxiety.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Michelle Perdomo; Edith Kaan

Prosodic cues in second-language speech processing: A visual world eye-tracking study Journal Article

Second Language Research, pp. 1–27, 2019.

@article{Perdomo2019,
title = {Prosodic cues in second-language speech processing: A visual world eye-tracking study},
author = {Michelle Perdomo and Edith Kaan},
doi = {10.1177/0267658319879196},
year = {2019},
date = {2019-10-01},
journal = {Second Language Research},
pages = {1--27},
abstract = {Listeners interpret cues in speech processing immediately rather than waiting until the end of a sentence. In particular, prosodic cues in auditory speech processing can aid listeners in building information structure and contrast sets. Native speakers even use this information in combination with syntactic and semantic information to build mental representations predictively. Research on second-language (L2) learners suggests that learners have difficulty integrating linguistic information across various domains, likely subject to L2 proficiency levels. The current study investigated eye-movement behavior of native speakers of English and Chinese learners of English in their use of contrastive intonational cues to restrict the set of upcoming referents in a visual world paradigm. Both native speakers and learners used contrastive pitch accent to restrict the set of referents. Whereas native speakers anticipated the upcoming set of referents, this was less clear in the L2 learners. This suggests that learners are able to integrate information across multiple domains to build information structure in the L2 but may not do so predictively. Prosodic processing was not affected by proficiency or working memory in the L2 speakers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Bob McMurray; Jamie Klein-Packard; Bruce J Tomblin

A real-time mechanism underlying lexical deficits in developmental language disorder: Between-word inhibition Journal Article

Cognition, 191 , pp. 1–13, 2019.

@article{McMurray2019a,
title = {A real-time mechanism underlying lexical deficits in developmental language disorder: Between-word inhibition},
author = {Bob McMurray and Jamie Klein-Packard and Bruce J Tomblin},
doi = {10.1016/J.COGNITION.2019.06.012},
year = {2019},
date = {2019-10-01},
journal = {Cognition},
volume = {191},
pages = {1--13},
publisher = {Elsevier},
abstract = {Eight to 11% of children have a clinical disorder in oral language (Developmental Language Disorder, DLD). Language deficits in DLD can affect all levels of language and persist through adulthood. Word-level processing may be critical as words link phonology, orthography, syntax and semantics. Thus, a lexical deficit could cascade throughout language. Cognitively, word recognition is a competition process: as the input (e.g., lizard) unfolds, multiple candidates (liver, wizard) compete for recognition. Children with DLD do not fully resolve this competition, but it is unclear what cognitive mechanisms underlie this. We examined lexical inhibition—the ability of more active words to suppress competitors—in 79 adolescents with and without DLD. Participants heard words (e.g. net) in which the onset was manipulated to briefly favor a competitor (neck). This was predicted to inhibit the target, slowing recognition. Word recognition was measured using a task in which participants heard the stimulus, and clicked on a picture of the item from an array of competitors, while eye-movements were monitored as a measure of how strongly the participant was committed to that interpretation over time. TD listeners showed evidence of inhibition with greater interference for stimuli that briefly activated a competitor word. DLD listeners did not. This suggests deficits in DLD may stem from a failure to engage lexical inhibition. This in turn could have ripple effects throughout the language system. This supports theoretical approaches to DLD that emphasize lexical-level deficits, and deficits in real-time processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Oryah C Lancry-Dayan; Ganit Kupershmidt; Yoni Pertzov

Been there, seen that, done that: Modification of visual exploration across repeated exposures Journal Article

Journal of Vision, 19 (12), pp. 1–16, 2019.

@article{Lancry-Dayan2019,
title = {Been there, seen that, done that: Modification of visual exploration across repeated exposures},
author = {Oryah C Lancry-Dayan and Ganit Kupershmidt and Yoni Pertzov},
doi = {10.1167/19.12.2},
year = {2019},
date = {2019-10-01},
journal = {Journal of Vision},
volume = {19},
number = {12},
pages = {1--16},
abstract = {The underlying factors that determine gaze position are a central topic in visual cognitive research. Traditionally, studies emphasized the interaction between the low-level properties of an image and gaze position. Later studies examined the influence of the semantic properties of an image. These studies explored gaze behavior during a single presentation, thus ignoring the impact of familiarity. Sparse evidence suggested that across repetitive exposures, gaze exploration attenuates but the correlation between gaze position and the low-level features of the image remains stable. However, these studies neglected two fundamental issues: (a) repeated scenes are displayed later in the testing session, such that exploration attenuation could be a result of lethargy, and (b) even if these effects are related to familiarity, are they based on a verbatim familiarity with the image, or on high-level familiarity with the gist of the scene? We investigated these issues by exposing participants to a sequence of images, some of them repeated across blocks. We found fewer, longer fixations as familiarity increased, along with shorter saccades and decreased gaze allocation towards semantically meaningful regions. These effects could not be ascribed to tonic fatigue, since they did not manifest for images that changed across blocks. Moreover, there was no attenuation of gaze behavior when participants observed a flipped version of the familiar images, suggesting that gist familiarity is not sufficient for eliciting these effects. These findings contribute to the literature on memory-guided gaze behavior and provide novel insights into the mechanism underlying the visual exploration of familiar environments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christoph Huber-Huber; Antimo Buonocore; Olaf Dimigen; Clayton Hickey; David Melcher

The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing Journal Article

NeuroImage, 200 , pp. 344–362, 2019.

@article{Huber-Huber2019,
title = {The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing},
author = {Christoph Huber-Huber and Antimo Buonocore and Olaf Dimigen and Clayton Hickey and David Melcher},
doi = {10.1016/j.neuroimage.2019.06.059},
year = {2019},
date = {2019-10-01},
journal = {NeuroImage},
volume = {200},
pages = {344--362},
publisher = {Academic Press Inc.},
abstract = {The world appears stable despite saccadic eye-movements. One possible explanation for this phenomenon is that the visual system predicts upcoming input across saccadic eye-movements based on peripheral preview of the saccadic target. We tested this idea using concurrent electroencephalography (EEG) and eye-tracking. Participants made cued saccades to peripheral upright or inverted face stimuli that changed orientation (invalid preview) or maintained orientation (valid preview) while the saccade was completed. Experiment 1 demonstrated better discrimination performance and a reduced fixation-locked N170 component (fN170) with valid than with invalid preview, demonstrating integration of pre- and post-saccadic information. Moreover, the early fixation-related potentials (FRP) showed a preview face inversion effect suggesting that some pre-saccadic input was represented in the brain until around 170 ms post fixation-onset. Experiment 2 replicated Experiment 1 and manipulated the proportion of valid and invalid trials to test whether the preview effect reflects context-based prediction across trials. A whole-scalp Bayes factor analysis showed that this manipulation did not alter the fN170 preview effect but did influence the face inversion effect before the saccade. The pre-saccadic inversion effect declined earlier in the mostly invalid block than in the mostly valid block, which is consistent with the notion of pre-saccadic expectations. In addition, in both studies, we found strong evidence for an interaction between the pre-saccadic preview stimulus and the post-saccadic target as early as 50 ms (Experiment 2) or 90 ms (Experiment 1) into the new fixation. These findings suggest that visual stability may involve three temporal stages: prediction about the saccadic target, integration of pre-saccadic and post-saccadic information at around 50-90 ms post fixation onset, and post-saccadic facilitation of rapid categorization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuroimage.2019.06.059


Jarkko Hautala; Otto Loberg; Najla Azaiez; Sara Taskinen; Simon P Tiffin-Richards; Paavo H T Leppänen

What information should I look for again? Attentional difficulties distracts reading of task assignments Journal Article

Learning and Individual Differences, 75 , pp. 1–12, 2019.


@article{Hautala2019,
title = {What information should I look for again? Attentional difficulties distracts reading of task assignments},
author = {Jarkko Hautala and Otto Loberg and Najla Azaiez and Sara Taskinen and Simon P Tiffin-Richards and Paavo H T Leppänen},
doi = {10.1016/j.lindif.2019.101775},
year = {2019},
date = {2019-10-01},
journal = {Learning and Individual Differences},
volume = {75},
pages = {1--12},
publisher = {Elsevier BV},
abstract = {This large-scale eye-movement study (N=164) investigated how students read short task assignments to complete information search problems and how their cognitive resources are associated with this reading behavior. These cognitive resources include information searching subskills, prior knowledge, verbal memory, reading fluency, and attentional difficulties. In this study, the task assignments consisted of four sentences. The first and last sentences provided context, while the second or third sentence was the relevant or irrelevant sentence under investigation. The results of a linear mixed-model and latent change score analyses showed the ubiquitous influence of reading fluency on first-pass eye movement measures, and the effects of sentence relevancy on making more and longer reinspections and look-backs to the relevant than irrelevant sentence. In addition, the look-backs to the relevant sentence were associated with better information search subskills. Students with attentional difficulties made substantially fewer look-backs specifically to the relevant sentence. These results provide evidence that selective look-backs are used as an important index of comprehension monitoring independent of reading fluency. In this framework, slow reading fluency was found to be associated with laborious decoding but with intact comprehension monitoring, whereas attention difficulty was associated with intact decoding but with deficiency in comprehension monitoring.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Teresa Fasshauer; Andreas Sprenger; Karen Silling; Johanna Elisa Silberg; Anne Vosseler; Seiko Minoshita; Shinji Satoh; Michael Dorr; Katja Koelkebeck; Rebekka Lencer

Visual exploration of emotional faces in schizophrenia using masks from the Japanese Noh theatre Journal Article

Neuropsychologia, 133 , pp. 1–10, 2019.


@article{Fasshauer2019,
title = {Visual exploration of emotional faces in schizophrenia using masks from the Japanese Noh theatre},
author = {Teresa Fasshauer and Andreas Sprenger and Karen Silling and Johanna Elisa Silberg and Anne Vosseler and Seiko Minoshita and Shinji Satoh and Michael Dorr and Katja Koelkebeck and Rebekka Lencer},
doi = {10.1016/j.neuropsychologia.2019.107193},
year = {2019},
date = {2019-10-01},
journal = {Neuropsychologia},
volume = {133},
pages = {1--10},
abstract = {Studying eye movements during visual exploration is widely used to investigate visual information processing in schizophrenia. Here, we used masks from the Japanese Noh theatre to study visual exploration behavior during an emotional face recognition task and a brightness evaluation control task using the same stimuli. Eye movements were recorded in 25 patients with schizophrenia and 25 age-matched healthy controls while participants explored seven photos of Japanese Noh masks tilted to seven different angles. Additionally, participants were assessed on seven upright binary black and white pictures of these Noh masks (Mooney-like pictures), seven Upside-down pictures (180° upside-down turned Mooneys), and seven Neutral pictures. Participants either had to indicate whether they had recognized a face and its emotional expression, or they had to evaluate the brightness of the picture (total N=56 trials). We observed a clear effect of inclination angle of Noh masks on emotional ratings (p < 0.001) and visual exploration behavior in both groups. Controls made larger saccades than patients when not being able to recognize a face in upside-down Mooney pictures (p < 0.01). Patients also made smaller saccades when exploring pictures for brightness (p < 0.05). Exploration behavior in patients was related to depressive symptom expression during emotional face recognition but not during brightness evaluation. Our findings suggest that visual exploration behavior in patients with schizophrenia is less flexible than in controls depending on the specific task requirements, specifically when exploring physical aspects of the environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Seth W Egger; Evan D Remington; Chia-Jung Chang; Mehrdad Jazayeri

Internal models of sensorimotor integration regulate cortical dynamics Journal Article

Nature Neuroscience, 22 , pp. 1871–1882, 2019.


@article{Egger2019,
title = {Internal models of sensorimotor integration regulate cortical dynamics},
author = {Seth W Egger and Evan D Remington and Chia-Jung Chang and Mehrdad Jazayeri},
doi = {10.1038/s41593-019-0500-6},
year = {2019},
date = {2019-10-01},
journal = {Nature Neuroscience},
volume = {22},
pages = {1871--1882},
publisher = {Springer Science and Business Media LLC},
abstract = {Sensorimotor control during overt movements is characterized in terms of three building blocks: a controller, a simulator and a state estimator. We asked whether the same framework could explain the control of internal states in the absence of movements. Recently, it was shown that the brain controls the timing of future movements by adjusting an internal speed command. We trained monkeys in a novel task in which the speed command had to be dynamically controlled based on the timing of a sequence of flashes. Recordings from the frontal cortex provided evidence that the brain updates the internal speed command after each flash based on the error between the timing of the flash and the anticipated timing of the flash derived from a simulated motor plan. These findings suggest that cognitive control of internal states may be understood in terms of the same computational principles as motor control. Control of movements can be understood in terms of the interplay between a controller, a simulator and an estimator. Egger et al. show that cortical neurons establish the same building blocks to control cognitive states in the absence of movement.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Georgia Zellou; Delphine Dahan

Listeners maintain phonological uncertainty over time and across words: The case of vowel nasality in English Journal Article

Journal of Phonetics, 76 , pp. 1–20, 2019.


@article{Zellou2019,
title = {Listeners maintain phonological uncertainty over time and across words: The case of vowel nasality in English},
author = {Georgia Zellou and Delphine Dahan},
doi = {10.1016/j.wocn.2019.06.001},
year = {2019},
date = {2019-09-01},
journal = {Journal of Phonetics},
volume = {76},
pages = {1--20},
publisher = {Elsevier BV},
abstract = {While the fact that phonetic information is evaluated in a non-discrete, probabilistic fashion is well established, there is less consensus regarding how long such encoding is maintained. Here, we examined whether people maintain in memory the amount of vowel nasality present in a word when processing a subsequent word that holds a semantic dependency with the first one. Vowel nasality in English is an acoustic correlate of the oral vs. nasal status of an adjacent consonant, and sometimes it is the only distinguishing phonetic feature (e.g., bet vs. bent). In Experiment 1, we show that people can perceive differences in nasality between two vowels above and beyond differences in the categorization of those vowels. In Experiment 2, we tracked listeners' eye-movements as they heard a sentence that mentioned one of four displayed images (e.g., ‘money') following a prime word (e.g., ‘bet') that held a semantic relationship with the target word. Recognition of the target was found to be modulated by the degree of nasality in the first word's vowel: Slightly greater uncertainty regarding the oral status of the post-vocalic consonant in the first word translated into a weaker semantic cue for the identification of the second word. Thus, listeners appear to maintain in memory the degree of vowel nasality they perceived on the first word and bring this information to bear onto the interpretation of a subsequent, semantically-dependent word. Probabilistic cue integration across words that hold semantic coherence, we argue, contributes to achieving robust language comprehension despite the inherent ambiguity of the speech signal.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sang-Ah Yoo; John K Tsotsos; Mazyar Fallah

Feed-forward visual processing suffices for coarse localization but fine-grained localization in an attention-demanding context needs feedback processing Journal Article

PLOS ONE, 14 (9), pp. 1–16, 2019.


@article{Yoo2019,
title = {Feed-forward visual processing suffices for coarse localization but fine-grained localization in an attention-demanding context needs feedback processing},
author = {Sang-Ah Yoo and John K Tsotsos and Mazyar Fallah},
editor = {Robin Baur{è}s},
doi = {10.1371/journal.pone.0223166},
year = {2019},
date = {2019-09-01},
journal = {PLOS ONE},
volume = {14},
number = {9},
pages = {1--16},
abstract = {It is well known that simple visual tasks, such as object detection or categorization, can be performed within a short period of time, suggesting the sufficiency of feed-forward visual processing. However, more complex visual tasks, such as fine-grained localization may require high-resolution information available at the early processing levels in the visual hierarchy. To access this information using a top-down approach, feedback processing would need to traverse several stages in the visual hierarchy and each step in this traversal takes processing time. In the present study, we compared the processing time required to complete object categorization and localization by varying presentation duration and complexity of natural scene stimuli. We hypothesized that performance would be asymptotic at shorter presentation durations when feed-forward processing suffices for visual tasks, whereas performance would gradually improve as images are presented longer if the tasks rely on feedback processing. In Experiment 1, where simple images were presented, both object categorization and localization performance sharply improved until 100 ms of presentation then it leveled off. These results are a replication of previously reported rapid categorization effects but they do not support the role of feedback processing in localization tasks, indicating that feed-forward processing enables coarse localization in relatively simple visual scenes. In Experiment 2, the same tasks were performed but more attention-demanding and ecologically valid images were used as stimuli. Unlike in Experiment 1, both object categorization performance and localization precision gradually improved as stimulus presentation duration became longer. This finding suggests that complex visual tasks that require visual scrutiny call for top-down feedback processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jonathan van Leeuwen; Artem V Belopolsky

Detection of object displacement during a saccade is prioritized by the oculomotor system Journal Article

Journal of Vision, 19 (11), pp. 1–15, 2019.


@article{Leeuwen2019,
title = {Detection of object displacement during a saccade is prioritized by the oculomotor system},
author = {Jonathan van Leeuwen and Artem V Belopolsky},
doi = {10.1167/19.11.11},
year = {2019},
date = {2019-09-01},
journal = {Journal of Vision},
volume = {19},
number = {11},
pages = {1--15},
abstract = {The human eye-movement system is equipped with a sophisticated updating mechanism that can adjust for large retinal displacements produced by saccadic eye movements. The nature of this updating mechanism is still highly debated. Previous studies have demonstrated that updating can occur very rapidly and is initiated before the start of a saccade. In the present study, we used saccade curvature to demonstrate that the oculomotor system is tuned for detecting object displacements during saccades. Participants made a sequence of saccades while ignoring an irrelevant distractor. Curvature of the second saccade relative to the distractor was used to estimate the time course of updating. Saccade curvature away from the presaccadic location of the distractor emerged as early as 80 ms after the first saccade when the distractor was displaced during a saccade. This is about 50 ms earlier than when a distractor was only present before a saccade, only present after a saccade, or remained stationary across a saccade. This shows that the oculomotor system prioritizes detection of object displacements during saccades, which may be useful for guiding corrective saccades. The results also challenge previous views by demonstrating the additional role of postsaccadic information in updating target–distractor competition across saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Nils S Van Den Berg; Rients B Huitema; Jacoba M Spikman; Peter Jan Van Laar; Edward H F De Haan

A shrunken world – micropsia after a right occipito-parietal ischemic stroke Journal Article

Neurocase, 25 (5), pp. 202–208, 2019.


@article{VanDenBerg2019,
title = {A shrunken world – micropsia after a right occipito-parietal ischemic stroke},
author = {Nils S {Van Den Berg} and Rients B Huitema and Jacoba M Spikman and Peter Jan {Van Laar} and Edward H F {De Haan}},
doi = {10.1080/13554794.2019.1656751},
year = {2019},
date = {2019-09-01},
journal = {Neurocase},
volume = {25},
number = {5},
pages = {202--208},
publisher = {Informa UK Limited},
abstract = {Micropsia is a rare condition in which patients perceive the outside world smaller in size than it actually is. We examined a patient who, after a right occipito-parietal stroke, subjectively reported perceiving everything at seventy percent of the actual size. Using experimental tasks, we confirmed the extent of his micropsia at 70%. Visual half-field tests showed an impaired perception of shape, location and motion in the left visual field. As his micropsia concerns the complete visual field, we suggest that it is caused by a higher-order compensation process in order to reconcile the conflicting information from the two hemifields.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sabrina E Twilhaar; Artem V Belopolsky; Jorrit F Kieviet; Ruurd M Elburg; Jaap Oosterlaan

Voluntary and involuntary control of attention in adolescents born very preterm: A study of eye movements Journal Article

Child Development, pp. 1–12, 2019.


@article{Twilhaar2019,
title = {Voluntary and involuntary control of attention in adolescents born very preterm: A study of eye movements},
author = {Sabrina E Twilhaar and Artem V Belopolsky and Jorrit F Kieviet and Ruurd M Elburg and Jaap Oosterlaan},
doi = {10.1111/cdev.13310},
year = {2019},
date = {2019-09-01},
journal = {Child Development},
pages = {1--12},
abstract = {Very preterm birth is associated with attention deficits that interfere with academic performance. A better understanding of attention processes is necessary to support very preterm born children. This study examined voluntary and involuntary attentional control in very preterm born adolescents by measuring saccadic eye movements. Additionally, these control processes were related to symptoms of inattention, intelligence, and academic performance. Participants included 47 very preterm and 61 full-term born 13-year-old adolescents. Oculomotor control was assessed using the antisaccade and oculomotor capture paradigm. Very preterm born adolescents showed deficits in antisaccade but not in oculomotor capture performance, indicating impairments in voluntary but not involuntary attentional control. These impairments mediated the relation between very preterm birth and inattention, intelligence, and academic performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jody Stanley; Jason D Forte; Olivia Carter

Rivalry onset in and around the fovea: The role of visual field location and eye dominance on perceptual dominance bias Journal Article

Vision, 3 , pp. 1–13, 2019.


@article{Stanley2019,
title = {Rivalry onset in and around the fovea: The role of visual field location and eye dominance on perceptual dominance bias},
author = {Jody Stanley and Jason D Forte and Olivia Carter},
doi = {10.3390/vision3040051},
year = {2019},
date = {2019-09-01},
journal = {Vision},
volume = {3},
pages = {1--13},
abstract = {When dissimilar images are presented to each eye, the images will alternate every few seconds in a phenomenon known as binocular rivalry. Recent research has found evidence of a bias towards one image at the initial ‘onset' period of rivalry that varies across the peripheral visual field. To determine the role that visual field location plays in and around the fovea at onset, trained observers were presented small orthogonal achromatic grating patches at various locations across the central 3° of visual space for 1-s and 60-s intervals. Results reveal stronger bias at onset than during continuous rivalry, and evidence of temporal hemifield dominance across observers, however, the nature of the hemifield effects differed between individuals and interacted with overall eye dominance. Despite using small grating patches, a high proportion of mixed percept was still reported, with more mixed percept at onset along the vertical midline, in general, and in increasing proportions with eccentricity in the lateral hemifields. Results show that even within the foveal range, onset rivalry bias varies across visual space, and differs in degree and sensitivity to biases in average dominance over continuous viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Elizabeth R Schotter; Anna Marie Fennell

Readers can identify the meanings of words without looking at them: Evidence from regressive eye movements Journal Article

Psychonomic Bulletin & Review, 26 (5), pp. 1697–1704, 2019.


@article{Schotter2019,
title = {Readers can identify the meanings of words without looking at them: Evidence from regressive eye movements},
author = {Elizabeth R Schotter and Anna Marie Fennell},
doi = {10.3758/s13423-019-01662-1},
year = {2019},
date = {2019-09-01},
journal = {Psychonomic Bulletin & Review},
volume = {26},
number = {5},
pages = {1697--1704},
publisher = {Springer Science and Business Media LLC},
abstract = {Previewing words prior to fixating them leads to faster reading, but does it lead to word identification (i.e., semantic encoding)? We tested this with a gaze-contingent display change study and a subsequent plausibility manipulation. Both the preview and the target words were plausible when encountered, and we manipulated the end of the sentence so that the different preview was rendered implausible (in critical sentences) or remained plausible (in neutral sentences). Regressive saccades from the end of the sentence increased when the preview was rendered implausible compared to when it was plausible, especially when the preview was high frequency. These data add to a growing body of research suggesting that linguistic information can be obtained during preview, to the point where word meaning is accessed. In addition, these findings suggest that the meaning of the fixated target does not always override the semantic information obtained during preview.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
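Each record in this library is a flat BibTeX entry like the one above, and the whole collection is downloadable as a single .bib file. As a minimal illustrative sketch (stdlib only; the field names are taken from the entries shown on this page, and `parse_bib_fields` is a hypothetical helper, not part of any official tooling), simple `key = {value}` fields can be pulled out with a regular expression:

```python
import re

def parse_bib_fields(entry: str) -> dict:
    """Extract flat `key = {value}` fields from one BibTeX entry.

    Assumes values contain no nested braces -- true for the
    entries shown on this page, but not for all BibTeX in the wild.
    """
    fields = {}
    # Non-greedy match up to the closing brace at the end of each field line.
    for key, value in re.findall(r"(\w+)\s*=\s*\{(.*?)\},?\s*\n", entry, re.S):
        fields[key.lower()] = value.strip()
    return fields

sample = """@article{Schotter2019,
title = {Readers can identify the meanings of words without looking at them},
doi = {10.3758/s13423-019-01662-1},
year = {2019},
}"""

info = parse_bib_fields(sample)
print(info["year"], info["doi"])
```

A dedicated parser (e.g. the `bibtexparser` package) would be more robust for the full 8000+-entry file; the regex is only meant to show the shape of the data.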


Owen Myles; Ben Grafton; Patrick Clarke; Colin MacLeod

GIVE me your attention: Differentiating goal identification and goal execution components of the anti-saccade effect Journal Article

PLOS ONE, 14 (9), pp. 1–12, 2019.


@article{Myles2019,
title = {GIVE me your attention: Differentiating goal identification and goal execution components of the anti-saccade effect},
author = {Owen Myles and Ben Grafton and Patrick Clarke and Colin MacLeod},
editor = {Alessandra S Souza},
doi = {10.1371/journal.pone.0222710},
year = {2019},
date = {2019-09-01},
journal = {PLOS ONE},
volume = {14},
number = {9},
pages = {1--12},
abstract = {The anti-saccade task is a commonly used method of assessing individual differences in cognitive control. It has been shown that a number of clinical disorders are characterised by increased anti-saccade cost. However, it remains unknown whether this reflects impaired goal identification or impaired goal execution, because, to date, no procedure has been developed to independently assess these two components of anti-saccade cost. The aim of the present study was to develop such an assessment task, which we term the Goal Identification Vs. Execution (GIVE) task. Fifty-one undergraduate students completed a conventional anti-saccade task, and our novel GIVE task. Our findings revealed that individual differences in anti-saccade goal identification costs and goal execution costs were uncorrelated, when assessed using the GIVE task, but both predicted unique variance in the conventional anti-saccade cost measure. These results confirm that the GIVE task is capable of independently assessing variation in the goal identification and goal execution components of the anti-saccade effect. We discuss how this newly introduced assessment procedure now can be employed to illuminate the specific basis of the increased anti-saccade cost that characterises various forms of clinical dysfunction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Scott T Murdison; Gunnar Blohm; Frank Bremmer

Saccade-induced changes in ocular torsion reveal predictive orientation perception Journal Article

Journal of Vision, 19 (11), pp. 1–13, 2019.


@article{Murdison2019,
title = {Saccade-induced changes in ocular torsion reveal predictive orientation perception},
author = {Scott T Murdison and Gunnar Blohm and Frank Bremmer},
doi = {10.1167/19.11.10},
year = {2019},
date = {2019-09-01},
journal = {Journal of Vision},
volume = {19},
number = {11},
pages = {1--13},
abstract = {Natural orienting of gaze often results in a retinal image that is rotated relative to space due to ocular torsion. However, we perceive neither this rotation nor a moving world despite visual rotational motion on the retina. This perceptual stability is often attributed to the phenomenon known as predictive remapping, but the current remapping literature ignores this torsional component. In addition, studies often simply measure remapping across either space or features (e.g., orientation) but in natural circumstances, both components are bound together for stable perception. One natural circumstance in which the perceptual system must account for the current and future eye orientation to correctly interpret the orientation of external stimuli occurs during movements to or from oblique eye orientations (i.e., eye orientations with both a horizontal and vertical angular component relative to the primary position). Here we took advantage of oblique eye orientation-induced ocular torsion to examine perisaccadic orientation perception. First, we found that orientation perception was largely predicted by the rotated retinal image. Second, we observed a presaccadic remapping of orientation perception consistent with maintaining a stable (but spatially inaccurate) retinocentric perception throughout the saccade. These findings strongly suggest that our seamless perceptual stability relies on retinocentric signals that are predictively remapped in all three ocular dimensions with each saccade.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Eugene McSorley; Iain D Gilchrist; Rachel McCloy

The role of fixation disengagement in the parallel programming of sequences of saccades Journal Article

Experimental Brain Research, 237 (11), pp. 3033–3045, 2019.


@article{McSorley2019,
title = {The role of fixation disengagement in the parallel programming of sequences of saccades},
author = {Eugene McSorley and Iain D Gilchrist and Rachel McCloy},
doi = {10.1007/s00221-019-05641-9},
year = {2019},
date = {2019-09-01},
journal = {Experimental Brain Research},
volume = {237},
number = {11},
pages = {3033--3045},
abstract = {One of the core mechanisms involved in the control of saccade responses to selected target stimuli is the disengagement from the current fixation location, so that the next saccade can be executed. To carry out everyday visual tasks, we make multiple eye movements that can be programmed in parallel. However, the role of disengagement in the parallel programming of saccades has not been examined. It is well established that the need for disengagement slows down saccadic response time. This may be important in allowing the system to program accurate eye movements and have a role to play in the control of multiple eye movements but as yet this remains untested. Here, we report two experiments that seek to examine whether fixation disengagement reduces saccade latencies when the task completion demands multiple saccade responses. A saccade contingent paradigm was employed and participants were asked to execute saccadic eye movements to a series of seven targets while manipulating when these targets were shown. This both promotes fixation disengagement and controls the extent that parallel programming can occur. We found that trial duration decreased as more targets were made available prior to fixation: this was a result both of a reduction in the number of saccades being executed and in their saccade latencies. This supports the view that even when fixation disengagement is not required, parallel programming of multiple sequential saccadic eye movements is still present. By comparison with previous published data, we demonstrate a substantial speeding of response times in these conditions (a “gap effect”) and that parallel programming is attenuated in these conditions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Lee Mcilreavy; Tom C A Freeman; Jonathan T Erichsen

Two-dimensional analysis of smooth pursuit eye movements reveals quantitative deficits in precision and accuracy Journal Article

Translational Vision Science & Technology, 8 (5), pp. 1–17, 2019.


@article{Mcilreavy2019,
title = {Two-dimensional analysis of smooth pursuit eye movements reveals quantitative deficits in precision and accuracy},
author = {Lee Mcilreavy and Tom C A Freeman and Jonathan T Erichsen},
doi = {10.1167/tvst.8.5.7},
year = {2019},
date = {2019-09-01},
journal = {Translational Vision Science & Technology},
volume = {8},
number = {5},
pages = {1--17},
publisher = {Association for Research in Vision and Ophthalmology (ARVO)},
abstract = {Purpose: Small moving targets are followed by pursuit eye movements, with success ubiquitously defined by gain. Gain quantifies accuracy, rather than precision, and only for eye movements along the target trajectory. Analogous to previous studies of fixation, we analyzed pursuit performance in two dimensions as a function of target direction, velocity, and amplitude. As a subsidiary experiment, we compared pursuit performance against that of fixation. Methods: Eye position was recorded from 15 observers during pursuit. The target was a 0.4° dot that moved across a large screen at 8°/s or 16°/s, either horizontally or vertically, through peak-to-peak amplitudes of 8°, 16°, or 32°. Two-dimensional eye velocity was expressed relative to the target, and a bivariate probability density function computed to obtain accuracy and precision. As a comparison, identical metrics were derived from fixation data. Results: For all target directions, eye velocity was less precise along the target trajectory. Eye velocities orthogonal to the target trajectory were more accurate during vertical pursuit than horizontal. Pursuit accuracy and precision along and orthogonal to the target trajectory decreased at the higher target velocity. Accuracy along the target trajectory decreased with smaller target amplitudes. Conclusions: Orthogonal to the target trajectory, pursuit was inaccurate and imprecise. Compared to fixation, pursuit was less precise and less accurate even when following the stimulus that gave the best performance. Translational Relevance: This analytical approach may help the detection of subtle deficits in slow phase eye movements that could be used as biomarkers for disease progression and/or treatment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
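The two-dimensional accuracy/precision analysis described in the Mcilreavy et al. abstract can be sketched on synthetic data (all numbers and variable names below are invented for illustration; this is not the authors' pipeline): express eye velocity relative to the target, take the mean offset from zero as the accuracy error, and take the dispersion of the bivariate distribution as the (im)precision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic relative eye velocity samples (deg/s): eye minus target,
# columns = along and orthogonal to the target trajectory.
rel_vel = rng.normal(loc=[-0.5, 0.1], scale=[1.2, 0.4], size=(5000, 2))

# Accuracy: distance of the mean relative velocity from zero
# (perfectly accurate pursuit would average exactly zero).
accuracy_error = np.linalg.norm(rel_vel.mean(axis=0))

# Precision: dispersion of the bivariate distribution, summarized here
# as the area of a one-SD error ellipse (smaller area = more precise).
cov = np.cov(rel_vel, rowvar=False)
ellipse_area = np.pi * np.sqrt(np.linalg.det(cov))

print(f"accuracy error: {accuracy_error:.2f} deg/s")
print(f"precision (ellipse area): {ellipse_area:.2f}")
```

Keeping the two trajectory-relative axes separate, rather than collapsing them into a single gain value, is what lets this kind of analysis expose the orthogonal-axis deficits the paper reports.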


Rebecca K Lawrence; Mark Edwards; Gordon W C Chan; Jolene A Cox; Stephanie C Goodhew

Does cultural background predict the spatial distribution of attention? Journal Article

Culture and Brain, pp. 1–29, 2019.


@article{Lawrence2019,
title = {Does cultural background predict the spatial distribution of attention?},
author = {Rebecca K Lawrence and Mark Edwards and Gordon W C Chan and Jolene A Cox and Stephanie C Goodhew},
doi = {10.1007/s40167-019-00086-x},
year = {2019},
date = {2019-09-01},
journal = {Culture and Brain},
pages = {1--29},
publisher = {Springer Science and Business Media LLC},
abstract = {The current study aimed to explore cultural differences in the covert spatial distribution of attention. In particular, we tested whether those born in an East Asian country adopted a different distribution of attention compared to individuals born in a Western country. Previous work suggests that Western individuals tend to distribute attention narrowly and that East Asian individuals distribute attention broadly. However, these studies have used indirect methods to infer spatial attention scale. In particular, they have not measured changes in attention across space, nor have they controlled for differences in eye movement patterns, which can differ across cultures. To address this, in the current study, we used an inhibition of return (IOR) paradigm which directly measured changes in attention across space, while controlling for eye movements. The use of the IOR task was a significant advancement, as it allowed for a highly sensitive measure of attention distribution compared to past research. Critically, using this new measure, we failed to observe a cultural difference in the distribution of covert spatial attention. Instead, individuals from East Asian countries and Western countries adopted a similar attention spread. However, we did observe a cultural difference in response speed, whereby Western participants were relatively faster to detect targets in the IOR task. This relationship persisted, even after controlling for individual variation in attention slope, indicating that factors other than attention distribution might account for cultural differences in response speed. Therefore, this study provides robust, converging evidence that group differences in covert spatial attentional distribution do not necessarily drive cultural variation in response speed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jessica E Goold; Wonil Choi; John M Henderson

Cortical control of eye movements in natural reading: Evidence from MVPA Journal Article

Experimental Brain Research, 237 , pp. 3099–3107, 2019.


@article{Goold2019,
title = {Cortical control of eye movements in natural reading: Evidence from MVPA},
author = {Jessica E Goold and Wonil Choi and John M Henderson},
doi = {10.1007/s00221-019-05655-3},
year = {2019},
date = {2019-09-01},
journal = {Experimental Brain Research},
volume = {237},
pages = {3099--3107},
abstract = {Language comprehension during reading requires fine-grained management of saccadic eye movements. A critical question, therefore, is how the brain controls eye movements in reading. Neural correlates of simple eye movements have been found in multiple cortical regions, but little is known about how this network operates in reading. To investigate this question in the present study, participants were presented with normal text, pseudo-word text, and consonant string text in a magnetic resonance imaging (MRI) scanner with eyetracking. Participants read naturally in the normal text condition and moved their eyes “as if they were reading” in the other conditions. Multi-voxel pattern analysis was used to analyze the fMRI signal in the oculomotor network. We found that activation patterns in a subset of network regions differentiated between stimulus types. These results suggest that the oculomotor network reflects more than simple saccade generation and are consistent with the hypothesis that specific network areas interface with cognitive systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Adam Frost; George Tomou; Harsh Parikh; Jagjot Kaur; Marija Zivcevska; Matthias Niemeier

Working memory in action: Inspecting the systematic and unsystematic errors of spatial memory across saccades Journal Article

Experimental Brain Research, 237 (11), pp. 2939–2956, 2019.


@article{Frost2019,
title = {Working memory in action: Inspecting the systematic and unsystematic errors of spatial memory across saccades},
author = {Adam Frost and George Tomou and Harsh Parikh and Jagjot Kaur and Marija Zivcevska and Matthias Niemeier},
doi = {10.1007/s00221-019-05623-x},
year = {2019},
date = {2019-09-01},
journal = {Experimental Brain Research},
volume = {237},
number = {11},
pages = {2939--2956},
publisher = {Springer Science and Business Media LLC},
abstract = {Our ability to interact with the world depends on memory buffers that flexibly store and process information for short periods of time. Current working memory research, however, mainly uses tasks that avoid eye movements, whereas in daily life we need to remember information across saccades. Because saccades disrupt perception and attention, the brain might use special transsaccadic memory systems. Therefore, to compare working memory systems between and across saccades, the current study devised transsaccadic memory tasks that evaluated the influence of memory load on several kinds of systematic and unsystematic spatial errors, and tested whether these measures predicted performance in more established working memory paradigms. Experiment 1 used a line intersection task that had people integrate lines shown before and after saccades, and it administered a 2-back task. Experiments 2 and 3 asked people to point at one of several locations within a memory array flashed before an eye movement, and we tested change detection and 2-back performance. We found that unsystematic transsaccadic errors increased with memory load and were correlated with 2-back performance. Systematic errors produced similar results, although effects varied as a function of the geometric layout of the memory arrays. Surprisingly, transsaccadic errors did not predict change detection performance despite the latter being a widely accepted measure of working memory capacity. Our results suggest that working memory systems between and across saccades share, in part, similar neural resources. Nevertheless, our data highlight the importance of investigating working memory across saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Antonio Fernández; Hsin-Hung Li; Marisa Carrasco

How exogenous spatial attention affects visual representation Journal Article

Journal of Vision, 19 (11), pp. 1–13, 2019.


@article{Fernandez2019a,
title = {How exogenous spatial attention affects visual representation},
author = {Antonio Fernández and Hsin-Hung Li and Marisa Carrasco},
doi = {10.1167/19.11.4},
year = {2019},
date = {2019-09-01},
journal = {Journal of Vision},
volume = {19},
number = {11},
pages = {1--13},
publisher = {Association for Research in Vision and Ophthalmology (ARVO)},
abstract = {Orienting covert spatial attention to a target location enhances visual sensitivity and benefits performance in many visual tasks. How these attention-related improvements in performance affect the underlying visual representation of low-level visual features is not fully understood. Here we focus on characterizing how exogenous spatial attention affects the feature representations of orientation and spatial frequency. We asked observers to detect a vertical grating embedded in noise and performed psychophysical reverse correlation. Doing so allowed us to make comparisons with previous studies that utilized the same task and analysis to assess how endogenous attention and presaccadic modulations affect visual representations. We found that exogenous spatial attention improved performance and enhanced the gain of the target orientation without affecting orientation tuning width. Moreover, we found no change in spatial frequency tuning. We conclude that covert exogenous spatial attention alters performance by strictly boosting gain of orientation-selective filters, much like covert endogenous spatial attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
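Psychophysical reverse correlation, the analysis named in the Fernández et al. abstract, has a simple core that can be sketched on simulated data (the simulated observer and every number here are assumptions for illustration, not the authors' method): the classification image is the mean noise field on "yes" trials minus the mean on "no" trials, which recovers the observer's internal template.

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_pix = 2000, 64
noise = rng.normal(size=(n_trials, n_pix))           # per-trial noise fields
template = np.sin(np.linspace(0, 2 * np.pi, n_pix))  # assumed internal filter

# Simulated observer: says "yes" when the noise correlates with the
# internal template, plus some decision noise.
drive = noise @ template + rng.normal(scale=2.0, size=n_trials)
said_yes = drive > 0

# Classification image: mean noise on "yes" trials minus "no" trials.
class_image = noise[said_yes].mean(axis=0) - noise[~said_yes].mean(axis=0)

# The recovered image should resemble the template that generated behavior.
r = np.corrcoef(class_image, template)[0, 1]
print(f"correlation with template: {r:.2f}")
```

The same trial-averaging logic, applied to orientation- and spatial-frequency-filtered noise rather than raw pixels, is what lets studies like this one estimate tuning curves from behavior alone.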


Madeline S Cappelloni; Sabyasachi Shivkumar; Ralf M Haefner; Ross K Maddox

Task-uninformative visual stimuli improve auditory spatial discrimination in humans but not the ideal observer Journal Article

PLOS ONE, 14 (9), pp. 1–18, 2019.


@article{Cappelloni2019,
title = {Task-uninformative visual stimuli improve auditory spatial discrimination in humans but not the ideal observer},
author = {Madeline S Cappelloni and Sabyasachi Shivkumar and Ralf M Haefner and Ross K Maddox},
editor = {Jyrki Ahveninen},
doi = {10.1371/journal.pone.0215417},
year = {2019},
date = {2019-09-01},
journal = {PLOS ONE},
volume = {14},
number = {9},
pages = {1--18},
publisher = {Public Library of Science},
abstract = {In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet, the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well-approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory spatial discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented in the center of the screen, or spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were spatially aligned with the auditory stimuli—even though the shapes provided no information about which side the auditory target was on. We also demonstrate that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, demonstrating that humans deviate systematically from the ideal observer model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Francesca Capozzi; Lauren J Human; Jelena Ristic

Attention promotes accurate impression formation Journal Article

Journal of Personality, pp. 1–11, 2019.


@article{Capozzi2019,
title = {Attention promotes accurate impression formation},
author = {Francesca Capozzi and Lauren J Human and Jelena Ristic},
doi = {10.1111/jopy.12509},
year = {2019},
date = {2019-09-01},
journal = {Journal of Personality},
pages = {1--11},
publisher = {Wiley},
abstract = {Objective: An ability to form accurate impressions of others is vital for adaptive social behavior in humans. Here, we examined whether attending to persons more is associated with greater accuracy in personality impressions. Method: We asked 42 observers (36 females; mean age = 21 years, age range = 18–28; expected power = 0.96) to form personality impressions of unacquainted individuals (i.e., targets) from video interviews while their attentional behavior was assessed using eye tracking. We examined whether (a) attending more to targets benefited accuracy, (b) attending to specific body parts (e.g., face vs. body) drove this association, and (c) targets' ease of personality readability modulated these effects. Results: Paying more attention to a target was associated with forming more accurate personality impressions. Attention to the whole person contributed to this effect, with this association occurring independently of targets' ease of readability. Conclusions: These findings show that attending more to a person is associated with increased accuracy and thus suggest that attention promotes social adaptation by supporting accurate social perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sabine Born

Saccadic suppression of displacement does not reflect a saccade-specific bias to assume stability Journal Article

Vision, 3 , pp. 1–22, 2019.


@article{Born2019a,
title = {Saccadic suppression of displacement does not reflect a saccade-specific bias to assume stability},
author = {Sabine Born},
doi = {10.3390/vision3040049},
year = {2019},
date = {2019-09-01},
journal = {Vision},
volume = {3},
pages = {1--22},
abstract = {Across saccades, small displacements of a visual target are harder to detect and their directions more difficult to discriminate than during steady fixation. Prominent theories of this effect, known as saccadic suppression of displacement, propose that it is due to a bias to assume object stability across saccades. Recent studies comparing the saccadic effect to masking effects suggest that suppression of displacement is not saccade-specific. Further evidence for this account is presented from two experiments where participants judged the size of displacements on a continuous scale in saccade and mask conditions, with and without blanking. Saccades and masks both reduced the proportion of correctly perceived displacements and increased the proportion of missed displacements. Blanking improved performance in both conditions by reducing the proportion of missed displacements. Thus, if suppression of displacement reflects a bias for stability, it is not a saccade-specific bias, but a more general stability assumption revealed under conditions of impoverished vision. Specifically, I discuss the potentially decisive role of motion or other transient signals for displacement perception. Without transients or motion, the quality of relative position signals is poor, and saccadic and mask-induced suppression of displacement reflects performance when the decision has to be made on these signals alone. Blanking may improve those position signals by providing a transient onset or a longer time to encode the pre-saccadic target position.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Peng Zhou; Likan Zhan; Huimin Ma

Understanding others' minds: Social inference in preschool children with autism spectrum disorder Journal Article

Journal of Autism and Developmental Disorders, pp. 1–12, 2019.


@article{Zhou2019a,
title = {Understanding others' minds: Social inference in preschool children with autism spectrum disorder},
author = {Peng Zhou and Likan Zhan and Huimin Ma},
doi = {10.1007/s10803-019-04167-x},
year = {2019},
date = {2019-08-01},
journal = {Journal of Autism and Developmental Disorders},
pages = {1--12},
publisher = {Springer Science and Business Media LLC},
abstract = {The study used an eye-tracking task to investigate whether preschool children with autism spectrum disorder (ASD) are able to make inferences about others' behavior in terms of their mental states in a social setting. Fifty typically developing (TD) 4- and 5-year-olds and 22 5-year-olds with ASD participated in the study, where their eye-movements were recorded as automatic responses to given situations. The results show that unlike their TD peers, children with ASD failed to exhibit eye gaze patterns that reflect their ability to infer about others' behavior by spontaneously encoding socially relevant information and attributing mental states to others. Implications of the findings were discussed in relation to the proposal that implicit/spontaneous Theory of Mind is persistently impaired in ASD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Quan Wang; Carla A Wall; Erin C Barney; Jessica L Bradshaw; Suzanne L Macari; Katarzyna Chawarska; Frederick Shic

Promoting social attention in 3‐year‐olds with ASD through gaze‐contingent eye tracking Journal Article

Autism Research, 13 (1), pp. 61–73, 2019.


@article{Wang2019h,
title = {Promoting social attention in 3‐year‐olds with ASD through gaze‐contingent eye tracking},
author = {Quan Wang and Carla A Wall and Erin C Barney and Jessica L Bradshaw and Suzanne L Macari and Katarzyna Chawarska and Frederick Shic},
doi = {10.1002/aur.2199},
year = {2019},
date = {2019-08-01},
journal = {Autism Research},
volume = {13},
number = {1},
pages = {61--73},
publisher = {Wiley},
abstract = {Young children with autism spectrum disorder (ASD) look less toward faces compared to their non-ASD peers, limiting access to social learning. Currently, no technologies directly target these core social attention difficulties. This study examines the feasibility of automated gaze modification training for improving attention to faces in 3-year-olds with ASD. Using free-viewing data from typically developing (TD) controls (n = 41), we implemented gaze-contingent adaptive cueing to redirect children with ASD toward normative looking patterns during viewing of videos of an actress. Children with ASD were randomly assigned to either (a) an adaptive Cue condition (Cue},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Emma E M Stewart; Preeti Verghese; Anna Ma-Wyatt

The spatial and temporal properties of attentional selectivity for saccades and reaches Journal Article

Journal of Vision, 19 (9), pp. 1–19, 2019.


@article{Stewart2019bb,
title = {The spatial and temporal properties of attentional selectivity for saccades and reaches},
author = {Emma E M Stewart and Preeti Verghese and Anna Ma-Wyatt},
doi = {10.1167/19.9.12},
year = {2019},
date = {2019-08-01},
journal = {Journal of Vision},
volume = {19},
number = {9},
pages = {1--19},
publisher = {Association for Research in Vision and Ophthalmology (ARVO)},
abstract = {The preparation and execution of saccades and goal-directed movements elicits an accompanying shift in attention at the locus of the impending movement. However, some key aspects of the spatiotemporal profile of this attentional shift between eye and hand movements are not resolved. While there is evidence that attention is improved at the target location when making a reach, it is not clear how attention shifts over space and time around the movement target as a saccade and a reach are made to that target. Determining this spread of attention is an important aspect in understanding how attentional resources are used in relation to movement planning and guidance in real world tasks. We compared performance on a perceptual discrimination paradigm during a saccade-alone task, reach-alone task, and a saccade-plus-reach task to map the temporal profile of the premotor attentional shift at the goal of the movement and at three surrounding locations. We measured performance relative to a valid baseline level to determine whether motor planning induces additional attentional facilitation compared to mere covert attention. Sensitivity increased relative to movement onset at the target and at the surrounding locations, for both the saccade-alone and saccade-plus-reach conditions. The results suggest that the temporal profile of the attentional shift is similar for the two tasks involving saccades (saccade-alone and saccade-plus-reach tasks), but is very different when the influence of the saccade is removed. In this case, performance in the saccade-plus-reach task reflects the lower sensitivity observed when a reach-alone task is being conducted. In addition, the spatial profile of this spread of attention is not symmetrical around the target. This suggests that when a saccade and reach are being planned together, the saccade drives the attentional shift, and the reach-alone carries little attentional weight.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Andrea Phillipou; Melissa Kirkovski; David J Castle; Caroline Gurvich; Larry A Abel; Stephanie Miles; Susan L Rossell

High‐definition transcranial direct current stimulation in anorexia nervosa: A pilot study Journal Article

International Journal of Eating Disorders, 52 (11), pp. 1274–1280, 2019.


@article{Phillipou2019,
title = {High‐definition transcranial direct current stimulation in anorexia nervosa: A pilot study},
author = {Andrea Phillipou and Melissa Kirkovski and David J Castle and Caroline Gurvich and Larry A Abel and Stephanie Miles and Susan L Rossell},
doi = {10.1002/eat.23146},
year = {2019},
date = {2019-08-01},
journal = {International Journal of Eating Disorders},
volume = {52},
number = {11},
pages = {1274--1280},
publisher = {Wiley},
abstract = {OBJECTIVE: Anorexia nervosa (AN) is a serious psychiatric condition often associated with poor outcomes. Biologically informed treatments for AN, such as brain stimulation, are lacking, in part due to the unclear nature of the neurobiological contributions to the illness. However, recent research has suggested a specific neurobiological target for the treatment of AN, namely stimulation of the inferior parietal lobe (IPL). The aim of this study was to stimulate, noninvasively, the left IPL in individuals with AN using high-definition transcranial direct current stimulation (HD-tDCS). METHOD: Twenty participants will be randomized to receive 10 daily sessions of HD-tDCS or sham HD-tDCS (placebo). Assessments will be carried out at baseline and end point, as well as 4- and 12-week follow-ups. DISCUSSION: This pilot investigation will primarily determine the feasibility and acceptability of this intervention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Matthew F Peterson; Ian Zaun; Harris Hoke; Guo Jiahui; Brad Duchaine; Nancy Kanwisher

Eye movements and retinotopic tuning in developmental prosopagnosia Journal Article

Journal of Vision, 19 (9), pp. 1–19, 2019.


@article{Peterson2019,
title = {Eye movements and retinotopic tuning in developmental prosopagnosia},
author = {Matthew F Peterson and Ian Zaun and Harris Hoke and Guo Jiahui and Brad Duchaine and Nancy Kanwisher},
doi = {10.1167/19.9.7},
year = {2019},
date = {2019-08-01},
journal = {Journal of Vision},
volume = {19},
number = {9},
pages = {1--19},
publisher = {The Association for Research in Vision and Ophthalmology},
abstract = {Despite extensive investigation, the causes and nature of developmental prosopagnosia (DP)—a severe face identification impairment in the absence of acquired brain injury—remain poorly understood. Drawing on previous work showing that individuals identified as being neurotypical (NT) show robust individual differences in where they fixate on faces, and recognize faces best when the faces are presented at this location, we defined and tested four novel hypotheses for how atypical face-looking behavior and/or retinotopic face encoding could impair face recognition in DP: (a) fixating regions of poor information, (b) inconsistent saccadic targeting, (c) weak retinotopic tuning, and (d) fixating locations not matched to the individual's own face tuning. We found no support for the first three hypotheses, with NTs and DPs consistently fixating similar locations and showing similar retinotopic tuning of their face perception performance. However, in testing the fourth hypothesis, we found preliminary evidence for two distinct phenotypes of DP: (a) Subjects characterized by impaired face memory, typical face perception, and a preference to look high on the face, and (b) Subjects characterized by profound impairments to both face memory and perception and a preference to look very low on the face. Further, while all NTs and upper-looking DPs performed best when faces were presented near their preferred fixation location, this was not true for lower-looking DPs. These results suggest that face recognition deficits in a substantial proportion of people with DP may arise not from aberrant face gaze or compromised retinotopic tuning, but from the suboptimal matching of gaze to tuning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mathias Norqvist; Bert Jonsson; Johan Lithner

Eye-tracking data and mathematical tasks with focus on mathematical reasoning Journal Article

Data in Brief, 25 , pp. 1–8, 2019.


@article{Norqvist2019,
title = {Eye-tracking data and mathematical tasks with focus on mathematical reasoning},
author = {Mathias Norqvist and Bert Jonsson and Johan Lithner},
year = {2019},
date = {2019-08-01},
journal = {Data in Brief},
volume = {25},
pages = {1--8},
publisher = {Elsevier},
abstract = {This data article contains eye-tracking data (i.e., dwell time and fixations), Z-transformed cognitive data (i.e., Raven's Advanced Progressive Matrices and Operation span), and practice and test scores from a study in mathematics education. This data is provided in a supplementary file. The method section describes the mathematics tasks used in the study. These mathematics tasks are of two kinds, with and without solution templates, to induce different types of mathematical reasoning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
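The BibTeX blocks above all share a flat `field = {value},` layout. As a minimal illustration of that structure, the fields of one entry can be pulled out with Python's standard library. This is a regex sketch only: it deliberately ignores nested braces, `@string` macros, and quote-delimited values, which dedicated tools (e.g., the parsers built into Zotero or JabRef) handle properly.

```python
import re

# One entry from the listing above (Norqvist, Jonsson & Lithner, 2019),
# trimmed to a few fields for brevity.
entry = """@article{Norqvist2019,
title = {Eye-tracking data and mathematical tasks with focus on mathematical reasoning},
author = {Mathias Norqvist and Bert Jonsson and Johan Lithner},
year = {2019},
journal = {Data in Brief},
volume = {25},
pages = {1--8},
}"""

def parse_fields(bibtex: str) -> dict:
    """Collect flat `key = {value}` pairs from a single entry.

    Only covers entries shaped like the ones on this page: no nested
    braces inside values, one entry per string.
    """
    return dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", bibtex))

fields = parse_fields(entry)
print(fields["journal"])                # Data in Brief
print(fields["author"].split(" and "))  # individual author names
```

The same approach applies to the downloadable `.bib` file after splitting it on `@article` boundaries, though for real imports a reference manager is the safer route.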



Contact

info@sr-research.com
Phone: +1-613-271-8686
Toll Free: 1-866-821-0731
Fax: 613-482-4866


EyeLink® eye trackers are intended for research purposes only and should not be used in the treatment or diagnosis of any medical condition.


Copyright © 2020 SR Research Ltd. All Rights Reserved. EyeLink is a registered trademark of SR Research Ltd.
