EyeLink Eye Tracking Publications Library

All EyeLink Publications

All 9000+ peer-reviewed EyeLink research publications up until 2020 (with some early 2021 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye tracking paper, please email us!

All EyeLink publications are also available for download and import into reference management software as a single BibTeX (.bib) file.
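
For readers who prefer to work with the library offline, the records in the downloaded .bib file follow the same structure as the entries shown below (@article blocks with title, author, doi, year, and abstract fields). The lines below are a minimal, illustrative sketch of scanning such a file with standard-library Python; the file name eyelink_publications.bib, the regex-based entry splitting, and the example keyword are assumptions rather than part of the actual download, and a dedicated BibTeX parser would handle edge cases (nested braces, escaped accents) more robustly.

import re
from collections import Counter

# Read the downloaded library (file name is an assumed placeholder).
with open("eyelink_publications.bib", encoding="utf-8") as f:
    bibtex = f.read()

# Naive split on entry headers such as "@article{Gheorghe2021," so that each
# piece holds one record body.
entries = re.split(r"(?m)^@\w+\{", bibtex)[1:]

# Tally records per publication year using the "year = {2021}" field.
years = Counter()
for entry in entries:
    match = re.search(r"year\s*=\s*\{(\d{4})\}", entry)
    if match:
        years[match.group(1)] += 1
print(years.most_common(5))

# Case-insensitive keyword search across titles and abstracts,
# e.g. "visual search" or "smooth pursuit".
keyword = "visual search"
hits = [e for e in entries if keyword.lower() in e.lower()]
print(f"{len(hits)} entries mention '{keyword}'")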

 

9138 entries (page 1 of 92)

2021

Delia A Gheorghe; Muriel T N Panouillères; Nicholas D Walsh

Investigating the effects of cerebellar transcranial direct current stimulation on saccadic adaptation and cortisol response Journal Article

Cerebellum and Ataxias, 8 (1), pp. 1–11, 2021.


@article{Gheorghe2021,
title = {Investigating the effects of cerebellar transcranial direct current stimulation on saccadic adaptation and cortisol response},
author = {Delia A Gheorghe and Muriel T N Panouill{è}res and Nicholas D Walsh},
doi = {10.1186/s40673-020-00124-y},
year = {2021},
date = {2021-12-01},
journal = {Cerebellum and Ataxias},
volume = {8},
number = {1},
pages = {1--11},
publisher = {BioMed Central Ltd},
abstract = {Background: Transcranial Direct Current Stimulation (tDCS) over the prefrontal cortex has been shown to modulate subjective, neuronal and neuroendocrine responses, particularly in the context of stress processing. However, it is currently unknown whether tDCS stimulation over other brain regions, such as the cerebellum, can similarly affect the stress response. Despite increasing evidence linking the cerebellum to stress-related processing, no studies have investigated the hormonal and behavioural effects of cerebellar tDCS. Methods: This study tested the hypothesis of a cerebellar tDCS effect on mood, behaviour and cortisol. To do this we employed a single-blind, sham-controlled design to measure performance on a cerebellar-dependent saccadic adaptation task, together with changes in cortisol output and mood, during online anodal and cathodal stimulation. Forty-five participants were included in the analysis. Stimulation groups were matched on demographic variables, potential confounding factors known to affect cortisol levels, mood and a number of personality characteristics. Results: Results showed that tDCS polarity did not affect cortisol levels or subjective mood, but did affect behaviour. Participants receiving anodal stimulation showed an 8.4% increase in saccadic adaptation, which was significantly larger compared to the cathodal group (1.6%). Conclusion: The stimulation effect on saccadic adaptation contributes to the current body of literature examining the mechanisms of cerebellar stimulation on associated function. We conclude that further studies are needed to understand whether and how cerebellar tDCS may module stress reactivity under challenge conditions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sarah Chabal; Sayuri Hayakawa; Viorica Marian

How a picture becomes a word: Individual differences in the development of language-mediated visual search Journal Article

Cognitive Research: Principles and Implications, 6 (2), pp. 1–10, 2021.


@article{Chabal2021,
title = {How a picture becomes a word: Individual differences in the development of language-mediated visual search},
author = {Sarah Chabal and Sayuri Hayakawa and Viorica Marian},
doi = {10.1186/s41235-020-00268-9},
year = {2021},
date = {2021-12-01},
journal = {Cognitive Research: Principles and Implications},
volume = {6},
number = {2},
pages = {1--10},
publisher = {Springer Science and Business Media LLC},
abstract = {Over the course of our lifetimes, we accumulate extensive experience associating the things that we see with the words we have learned to describe them. As a result, adults engaged in a visual search task will often look at items with labels that share phonological features with the target object, demonstrating that language can become activated even in non-linguistic contexts. This highly interactive cognitive system is the culmination of our linguistic and visual experiences—and yet, our understanding of how the relationship between language and vision develops remains limited. The present study explores the developmental trajectory of language-mediated visual search by examining whether children can be distracted by linguistic competitors during a non-linguistic visual search task. Though less robust compared to what has been previously observed with adults, we find evidence of phonological competition in children as young as 8 years old. Furthermore, the extent of language activation is predicted by individual differences in linguistic, visual, and domain-general cognitive abilities, with the greatest phonological competition observed among children with strong language abilities combined with weaker visual memory and inhibitory control. We propose that linguistic expertise is fundamental to the development of language-mediated visual search, but that the rate and degree of automatic language activation depends on interactions among a broader network of cognitive abilities.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jasmine R Aziz; Samantha R Good; Raymond M Klein; Gail A Eskes

Role of aging and working memory in performance on a naturalistic visual search task Journal Article

Cortex, 136, pp. 28–40, 2021.


@article{Aziz2021,
title = {Role of aging and working memory in performance on a naturalistic visual search task},
author = {Jasmine R Aziz and Samantha R Good and Raymond M Klein and Gail A Eskes},
doi = {10.1016/j.cortex.2020.12.003},
year = {2021},
date = {2021-12-01},
journal = {Cortex},
volume = {136},
pages = {28--40},
publisher = {Elsevier BV},
abstract = {Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18-35 yrs) and older (n = 48; aged 55-78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sainan Zhao; Lin Li; Min Chang; Jingxin Wang; Kevin B Paterson

A further look at ageing and word predictability effects in Chinese reading: Evidence from one-character words Journal Article

Quarterly Journal of Experimental Psychology, 74 (1), pp. 68–78, 2021.


@article{Zhao2021,
title = {A further look at ageing and word predictability effects in Chinese reading: Evidence from one-character words},
author = {Sainan Zhao and Lin Li and Min Chang and Jingxin Wang and Kevin B Paterson},
doi = {10.1177/1747021820951131},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {74},
number = {1},
pages = {68--78},
abstract = {Older adults are thought to compensate for slower lexical processing by making greater use of contextual knowledge, relative to young adults, to predict words in sentences. Accordingly, compared to young adults, older adults should produce larger contextual predictability effects in reading times and skipping rates for words. Empirical support for this account is nevertheless scarce. Perhaps the clearest evidence to date comes from a recent Chinese study showing larger word predictability effects for older adults in reading times but not skipping rates for two-character words. However, one possibility is that the absence of a word-skipping effect in this experiment was due to the older readers skipping words infrequently because of difficulty processing two-character words parafoveally. We therefore took a further look at this issue, using one-character target words to boost word-skipping. Young (18–30 years) and older (65+ years) adults read sentences containing a target word that was either highly predictable or less predictable from the prior sentence context. Our results replicate the finding that older adults produce larger word predictability effects in reading times but not word-skipping, despite high skipping rates. We discuss these findings in relation to ageing effects on reading in different writing systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Guangyao Zhang; Binke Yuan; Huimin Hua; Ya Lou; Nan Lin; Xingshan Li

Individual differences in first-pass fixation duration in reading are related to resting-state functional connectivity Journal Article

Brain and Language, 213, pp. 1–10, 2021.


@article{Zhang2021,
title = {Individual differences in first-pass fixation duration in reading are related to resting-state functional connectivity},
author = {Guangyao Zhang and Binke Yuan and Huimin Hua and Ya Lou and Nan Lin and Xingshan Li},
doi = {10.1016/j.bandl.2020.104893},
year = {2021},
date = {2021-01-01},
journal = {Brain and Language},
volume = {213},
pages = {1--10},
publisher = {Elsevier Inc.},
abstract = {Although there are considerable individual differences in eye movements during text reading, their neural correlates remain unclear. In this study, we investigated the relationship between the first-pass fixation duration (FPFD) in natural reading and resting-state functional connectivity (RSFC) in the brain. We defined the brain regions associated with early visual processing, word identification, attention shifts, and oculomotor control as seed regions. The results showed that individual FPFDs were positively correlated with individual RSFCs between the early visual network, visual word form area, and eye movement control/dorsal attention network. Our findings provide new evidence on the neural correlates of eye movements in text reading and indicate that individual differences in fixation time may shape the RSFC differences in the brain through the time-on-task effect and the mechanism of Hebbian learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jia Qiong Xie; Detlef H Rost; Fu Xing Wang; Jin Liang Wang; Rebecca L Monk

The association between excessive social media use and distraction: An eye movement tracking study Journal Article

Information & Management, 58 (2), pp. 1–12, 2021.


@article{Xie2021a,
title = {The association between excessive social media use and distraction: An eye movement tracking study},
author = {Jia Qiong Xie and Detlef H Rost and Fu Xing Wang and Jin Liang Wang and Rebecca L Monk},
doi = {10.1016/j.im.2020.103415},
year = {2021},
date = {2021-01-01},
journal = {Information & Management},
volume = {58},
number = {2},
pages = {1--12},
publisher = {Elsevier B.V.},
abstract = {Drawing on scan-and-shift hypothesis and scattered attention hypothesis, this article attempted to explore the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. In Study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users when performing the modified Stroop task. The results showed that excessive microblog users had difficulties suppressing interference information than non-microblog users, resulting in poor performance. Theoretical and practical implications are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Guangming Xie; Wenbo Du; Hongping Yuan; Yushi Jiang

Promoting reviewer-related attribution: Moderately complex presentation of mixed opinions activates the analytic process Journal Article

Sustainability, 13 (2), pp. 1–28, 2021.


@article{Xie2021,
title = {Promoting reviewer-related attribution: Moderately complex presentation of mixed opinions activates the analytic process},
author = {Guangming Xie and Wenbo Du and Hongping Yuan and Yushi Jiang},
doi = {10.3390/su13020441},
year = {2021},
date = {2021-01-01},
journal = {Sustainability},
volume = {13},
number = {2},
pages = {1--28},
abstract = {Using metacognition and dual process theories, this paper studied the role of types of presentation of mixed opinions in mitigating negative impacts of online word of mouth (WOM) dispersion on consumer's purchasing decisions. Two studies were implemented, respectively. By employing an eye-tracking approach, study 1 recorded consumer's attention to WOM dispersion. The results show that the activation of the analytic system can improve reviewer-related attribution options. In study 2, three kinds of presentation of mixed opinions originating from China's leading online platform were compared. The results demonstrated that mixed opinions expressed in moderately complex form, integrating average ratings and reviewers' impressions of products, was effective in promoting reviewer-related attribution choices. However, too-complicated presentation types of WOM dispersion can impose excessively on consumers' cognitive load and eventually fail to activate the analytic system for promoting reviewer-related attribution choices. The main contribution of this paper lies in that consumer attribution-related choices are supplemented, which provides new insights into information consistency in consumer research. The managerial and theoretical significance of this paper are discussed in order to better understand the purchasing decisions of consumers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ching-Lin Wu; Shu-Ling Peng; Hsueh-Chih Chen

Why can people effectively access remote associations? Eye movements during Chinese remote associates problem solving Journal Article

Creativity Research Journal, pp. 1–10, 2021.


@article{Wu2021,
title = {Why can people effectively access remote associations? Eye movements during Chinese remote associates problem solving},
author = {Ching-Lin Wu and Shu-Ling Peng and Hsueh-Chih Chen},
doi = {10.1080/10400419.2020.1856579},
year = {2021},
date = {2021-01-01},
journal = {Creativity Research Journal},
pages = {1--10},
abstract = {An increasing number of studies have explored the process of how subjects solve problems through remote association. Most research has investigated the relationship between an individual's response to semantic search during the think-aloud operation and the individual's reply performance. Few studies, however, have examined the process of obtaining objective physiological indices. Eye-tracking technology is a powerful tool with which to dissect the process of problem solving, with tracked fixation indices that reflect an individual's internal cognitive mechanisms. This study, based on participants' fixation order for various stimulus words, was the first to introduce the concept of association search span, a concept that can be further divided into distributed association and centralized association. This study recorded 62 participants' eye movement indices in an eye-tracking experiment. The results showed that participants with higher remote association ability used more distributed associations and fewer centralized associations. The results indicated that the stronger remote association ability a participant has, the more likely that participant is to form associations with different stimulus words. It was also found that flexible thinking plays a vital role in the generation of remote associations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Anne Wienholz; Derya Nuhbalaoglu; Markus Steinbach; Annika Herrmann; Nivedita Mani

Phonological priming in German sign language: An eye tracking study using the visual world paradigm Journal Article

Sign Language & Linguistics, 24 (1), pp. 1–32, 2021.


@article{Wienholz2021,
title = {Phonological priming in German sign language: An eye tracking study using the visual world paradigm},
author = {Anne Wienholz and Derya Nuhbalaoglu and Markus Steinbach and Annika Herrmann and Nivedita Mani},
doi = {10.1075/sll.19011.wie},
year = {2021},
date = {2021-01-01},
journal = {Sign Language & Linguistics},
volume = {24},
number = {1},
pages = {1--32},
abstract = {A number of studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters and that the specific phonological parameters modulated in the priming effect can influence the robustness of this effect. This eye tracking study on German Sign Language examined phonological priming effects at the sentence level, while varying the phonological relationship between prime-target sign pairs. We recorded participants' eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, and pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in location parameter. Taken together, the data suggest a difference in the contribution of sign parameters to sign recognition and that sub-lexical features influence sign language processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jonathan van Leeuwen; Artem V Belopolsky

Rapid spatial oculomotor updating across saccades is malleable Journal Article

Vision Research, 178, pp. 60–69, 2021.


@article{Leeuwen2021,
title = {Rapid spatial oculomotor updating across saccades is malleable},
author = {Jonathan van Leeuwen and Artem V Belopolsky},
doi = {10.1016/j.visres.2020.09.006},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {178},
pages = {60--69},
publisher = {Elsevier Ltd},
abstract = {The oculomotor system uses a sophisticated updating mechanism to adjust for large retinal displacements which occur with every saccade. Previous studies have shown that updating operates rapidly and starts before saccade is initiated. Here we used saccade adaptation to alter life-long expectations about how a saccade changes the location of an object on the retina. Participants made a sequence of one horizontal and one vertical saccade and ignored an irrelevant distractor. The time-course of oculomotor updating was estimated using saccade curvature of the vertical saccade, relative to the distractor. During the first saccade both saccade targets were shifted on 80% of trials, which induced saccade adaptation (Experiment 1). Critically, since the distractor was left stationary, successful saccade adaptation (e.g., saccade becoming shorter) meant that after the first saccade the distractor appeared in a different hemifield than without adaptation. After adaptation, second saccades curved away only from the newly learned distractor location starting at 80 ms after the first saccade. When on the minority of trials (20%) the targets were not shifted, saccades again first curved away from the newly learned (now empty) location, but then quickly switched to curving away from the life-long learned, visible location. When on some trials the distractor was removed during the first saccade, saccades curved away only from the newly learned (but empty) location (Experiment 2). The results show that updating of locations across saccades is not only fast, but is highly malleable, relying on recently learned sensorimotor contingencies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mats W J van Es; Tom R Marshall; Eelke Spaak; Ole Jensen; Jan-Mathijs Schoffelen

Phasic modulation of visual representations during sustained attention Journal Article

European Journal of Neuroscience, pp. 1–18, 2021.


@article{Es2021,
title = {Phasic modulation of visual representations during sustained attention},
author = {Mats W J van Es and Tom R Marshall and Eelke Spaak and Ole Jensen and Jan-Mathijs Schoffelen},
doi = {10.1111/ejn.15084},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Neuroscience},
pages = {1--18},
abstract = {Sustained attention has long been thought to benefit perception in a continuous fashion, but recent evidence suggests that it affects perception in a discrete, rhythmic way. Periodic fluctuations in behavioral performance over time, and modulations of behavioral performance by the phase of spontaneous oscillatory brain activity point to an attentional sampling rate in the theta or alpha frequency range. We investigated whether such discrete sampling by attention is reflected in periodic fluctuations in the decodability of visual stimulus orientation from magnetoencephalographic (MEG) brain signals. In this exploratory study, human subjects attended one of two grating stimuli while MEG was being recorded. We assessed the strength of the visual representation of the attended stimulus using a support vector machine (SVM) to decode the orientation of the grating (clockwise vs. counterclockwise) from the MEG signal. We tested whether decoder performance depended on the theta/alpha phase of local brain activity. While the phase of ongoing activity in visual cortex did not modulate decoding performance, theta/alpha phase of activity in the FEF and parietal cortex, contralateral to the attended stimulus did modulate decoding performance. These findings suggest that phasic modulations of visual stimulus representations in the brain are caused by frequency-specific top-down activity in the fronto-parietal attention network.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


David Torrents-Rodas; Stephan Koenig; Metin Uengoer; Harald Lachnit

A rise in prediction error increases attention to irrelevant cues Journal Article

Biological Psychology, 159, pp. 1–11, 2021.


@article{TorrentsRodas2021,
title = {A rise in prediction error increases attention to irrelevant cues},
author = {David Torrents-Rodas and Stephan Koenig and Metin Uengoer and Harald Lachnit},
doi = {10.1016/j.biopsycho.2020.108007},
year = {2021},
date = {2021-01-01},
journal = {Biological Psychology},
volume = {159},
pages = {1--11},
publisher = {Elsevier B.V.},
abstract = {We investigated whether a sudden rise in prediction error widens an individual's focus of attention by increasing ocular fixations on cues that otherwise tend to be ignored. To this end, we used a discrimination learning task including cues that were either relevant or irrelevant for predicting the outcomes. Half of participants experienced contingency reversal once they had learned to predict the outcomes (reversal group},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Shin ichi Tokushige; Shunichi Matsuda; Satomi Inomata-Terada; Masashi Hamada; Yoshikazu Ugawa; Shoji Tsuji; Yasuo Terao

Premature saccades: A detailed physiological analysis Journal Article

Clinical Neurophysiology, 132 (1), pp. 63–76, 2021.


@article{Tokushige2021,
title = {Premature saccades: A detailed physiological analysis},
author = {Shin ichi Tokushige and Shunichi Matsuda and Satomi Inomata-Terada and Masashi Hamada and Yoshikazu Ugawa and Shoji Tsuji and Yasuo Terao},
doi = {10.1016/j.clinph.2020.09.026},
year = {2021},
date = {2021-01-01},
journal = {Clinical Neurophysiology},
volume = {132},
number = {1},
pages = {63--76},
publisher = {International Federation of Clinical Neurophysiology},
abstract = {Objective: Premature saccades (PSs) are those made with latencies too short for the direction and amplitude to be specifically programmed. We sought to determine the minimum latency needed to establish accurate direction and amplitude, and observed what occurs when saccades are launched before this minimum latency. Methods: In Experiment 1, 249 normal subjects performed the gap saccade task with horizontal targets. In Experiment 2, 28 normal subjects performed the gap saccade task with the targets placed in eight directions. In Experiment 3, 38 normal subjects, 49 patients with Parkinson's disease (PD), and 10 patients with spinocerebellar degeneration (SCD) performed the gap saccade task with horizontal targets. Results: In Experiment 1, it took 100 ms to accurately establish saccade amplitudes and directions. In Experiment 2, however, the latencies needed for accurate amplitude and direction establishment were both approximately 150 ms. In Experiment 3, the frequencies of PSs in patients with PD and SCD were lower than those of normal subjects. Conclusions: The saccade amplitudes and directions are determined simultaneously, 100–150 ms after target presentation. PSs may result from prediction of the oncoming target direction or latent saccade activities in the superior colliculus. Significance: Saccade direction and amplitude are determined simultaneously.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sarah Schuster; Nicole Alexandra; Florian Hutzler; Fabio Richlan; Martin Kronbichler; Stefan Hawelka

Cloze enough? Hemodynamic effects of predictive processing during natural reading Journal Article

NeuroImage, 228, pp. 1–12, 2021.


@article{Schuster2021,
title = {Cloze enough? Hemodynamic effects of predictive processing during natural reading},
author = {Sarah Schuster and Nicole Alexandra and Florian Hutzler and Fabio Richlan and Martin Kronbichler and Stefan Hawelka},
doi = {10.1016/j.neuroimage.2020.117687},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {228},
pages = {1--12},
publisher = {Elsevier Inc.},
abstract = {Evidence accrues that readers form multiple hypotheses about upcoming words. The present study investigated the hemodynamic effects of predictive processing during natural reading by means of combining fMRI and eye movement recordings. In particular, we investigated the neural and behavioral correlates of precision-weighted prediction errors, which are thought to be indicative of subsequent belief updating. Participants silently read sentences in which we manipulated the cloze probability and the semantic congruency of the final word that served as an index for precision and prediction error respectively. With respect to the neural correlates, our findings indicate an enhanced activation within the left inferior frontal and middle temporal gyrus suggesting an effect of precision on prediction update in higher (lexico-)semantic levels. Despite being evident at the neural level, we did not observe any evidence that this mechanism resulted in disproportionate reading times on participants' eye movements. The results speak against discrete predictions, but favor the notion that multiple words are activated in parallel during reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jörg Schorer; Nico Heibült; Stuart G Wilson; Florian Loffing

Sleep facilitates anticipation training of a handball goalkeeping task in novices Journal Article

Psychology of Sport & Exercise, 53, pp. 1–7, 2021.


@article{Schorer2021,
title = {Sleep facilitates anticipation training of a handball goalkeeping task in novices},
author = {Jörg Schorer and Nico Heibült and Stuart G Wilson and Florian Loffing},
doi = {10.1016/j.psychsport.2020.101841},
year = {2021},
date = {2021-01-01},
journal = {Psychology of Sport & Exercise},
volume = {53},
pages = {1--7},
abstract = {Sleep facilitates perceptual, cognitive and motor learning; however, the role of sleep for perceptual learning in sports is yet unclear. Here, we tested the impact of sleep on novices' visual anticipation training using a handball goalkeeping task. To this end, 30 novices were divided randomly in two groups and asked to predict the directional outcome of handball penalties presented as videos. One group did the pre-test and a single session of training in the morning, post-test in the evening on the same day, and the retention test in the next morning again. Conversely, the second group started and finished in the evening. Analyses of prediction accuracy revealed that the group starting in the evening improved largest between pre- and post-test (sleep in-between), while the greatest improvement for the group starting in the morning was found between post- and retention-test (sleep in-between). Overall, our results provide first insight into the potential relevance of sleep for effective anticipation training in sports.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Gaston Saux; Nicolas Vibert; Julien Dampuré; Debora I Burin; Anne M Britt; Jean François Rouet

From simple agents to information sources: Readers' differential processing of story characters as a function of story consistency Journal Article

Acta Psychologica, 212, pp. 1–16, 2021.


@article{Saux2021,
title = {From simple agents to information sources: Readers' differential processing of story characters as a function of story consistency},
author = {Gaston Saux and Nicolas Vibert and Julien Dampuré and Debora I Burin and Anne M Britt and Jean Fran{ç}ois Rouet},
doi = {10.1016/j.actpsy.2020.103191},
year = {2021},
date = {2021-01-01},
journal = {Acta Psychologica},
volume = {212},
pages = {1--16},
abstract = {The study examined how readers integrate information from and about multiple information sources into a memory representation. In two experiments, college students read brief news reports containing two critical statements, each attributed to a source character. In half of the texts, the statements were consistent with each other, in the other half they were discrepant. Each story also featured a non-source character (who made no statement). The hypothesis was that discrepant statements, as compared to consistent statements, would promote distinct attention and memory only for the source characters. Experiment 1 used short interviews to assess participants' ability to recognize the source of one of the statements after reading. Experiment 2 used eye-tracking to collect data during reading and during a source-content recognition task after reading. As predicted, discrepancies only enhanced memory of, and attention to source-related segments of the texts. Discrepancies also enhanced the link between the two source characters in memory as opposed to the non-source character, as indicated by the participants' justifications (Experiment 1) and their visual inspection of the recognition items (Experiment 2). The results are interpreted within current theories of text comprehension and document literacy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Marian Sauter; Nina M Hanning; Heinrich R Liesefeld; Hermann J Müller

Post-capture processes contribute to statistical learning of distractor locations in visual search Journal Article

Cortex, 135, pp. 108–126, 2021.


@article{Sauter2021,
title = {Post-capture processes contribute to statistical learning of distractor locations in visual search},
author = {Marian Sauter and Nina M Hanning and Heinrich R Liesefeld and Hermann J Müller},
doi = {10.1016/j.cortex.2020.11.016},
year = {2021},
date = {2021-01-01},
journal = {Cortex},
volume = {135},
pages = {108--126},
abstract = {People can learn to ignore salient distractors that occur frequently at particular locations, making them interfere less with task performance. This effect has been attributed to learnt suppression of the likely distractor locations at a pre-selective stage of attentional-priority computation. However, rather than distractors at frequent (vs rare) locations being just less likely to capture attention, attention may possibly also be disengaged faster from such distractors – a post-selective contribution to their reduced interference. Eye-movement studies confirm that learnt suppression, evidenced by a reduced rate of oculomotor capture by distractors at frequent locations, is a major factor, whereas the evidence is mixed with regard to a role of rapid disengagement. However, methodological choices in these studies limited conclusions as to the contribution of a post-capture effect. Using an adjusted design, here we positively establish the rapid-disengagement effect, while corroborating the oculomotor-capture effect. Moreover, we examine distractor-location learning effects not only for distractors defined in a different visual dimension to the search target, but also for distractors defined within the same dimension, which are known to cause particularly strong interference and probability-cueing effects. Here, we show that both oculomotor-capture and disengagement dynamics contribute to this pattern. Additionally, on distractor-absent trials, the slowed responses to targets at frequent distractor locations—that we observe only in same-, but not different-, dimension conditions—arise pre-selectively, in prolonged latencies of the very first saccade. This supports the idea that learnt suppression is implemented at a different level of priority computation with same- versus different-dimension distractors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Danila Rusich; Lisa S Arduino; Marika Mauti; Marialuisa Martelli; Silvia Primativo

Evidence of semantic processing in parafoveal reading: A rapid parallel visual presentation (RPVP) study Journal Article

Brain Sciences, 11 (28), pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Rusich2021,
title = {Evidence of semantic processing in parafoveal reading: A rapid parallel visual presentation (RPVP) study},
author = {Danila Rusich and Lisa S Arduino and Marika Mauti and Marialuisa Martelli and Silvia Primativo},
doi = {10.3390/brainsci11010028},
year = {2021},
date = {2021-01-01},
journal = {Brain Sciences},
volume = {11},
number = {28},
pages = {1--10},
abstract = {This study explores whether semantic processing in parafoveal reading in the Italian language is modulated by the perceptual and lexical features of stimuli by analyzing the results of the rapid parallel visual presentation (RPVP) paradigm experiment, which simultaneously presented two words, with one in the fovea and one in the parafovea. The words were randomly sampled from a set of semantically related and semantically unrelated pairs. The accuracy and reaction times in reading the words were measured as a function of the stimulus length and written word frequency. Fewer errors were observed in reading parafoveal words when they were semantically related to the foveal ones, and a larger semantic facilitatory effect was observed when the foveal word was highly frequent and the parafoveal word was short. Analysis of the reaction times suggests that the semantic relation between the two words sped up the naming of the foveal word when both words were short and highly frequent. Altogether, these results add further evidence in favor of the semantic processing of words in the parafovea during reading, modulated by the orthographic and lexical features of the stimuli. The results are discussed within the context of the most prominent models of word processing and eye movement controls in reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

William Rosengren; Marcus Nyström; Björn Hammar; Martin Stridh

Waveform characterisation and comparison of nystagmus eye-tracking signals Journal Article

Physiological Measurement, 2021.

Abstract | BibTeX

@article{Rosengren2021,
title = {Waveform characterisation and comparison of nystagmus eye-tracking signals},
author = {William Rosengren and Marcus Nyström and Björn Hammar and Martin Stridh},
year = {2021},
date = {2021-01-01},
journal = {Physiological Measurement},
abstract = {Objective: Pathological nystagmus is a symptom of oculomotor disease where the eyes oscillate involuntarily. The underlying cause of the nystagmus and the characteristics of the oscillatory eye movements are patient specific. An important part of clinical assessment in nystagmus patients is therefore to characterise different recorded eye-tracking signals, i.e., waveforms. Approach: A method for characterisation of the nystagmus waveform morphology is proposed. The method extracts local morphologic characteristics based on a sinusoidal model, and clusters these into a description of the complete signal. The clusters are used to characterise and compare recordings within and between patients and tasks. New metrics are proposed that can measure waveform similarity at different scales; from short signal segments up to entire signals, both within and between patients. Main results: The results show that the proposed method robustly can find the most prominent nystagmus waveforms in a recording. The method accurately identifies different eye movement patterns within and between patients and across different tasks. Significance: In conclusion, by allowing characterisation and comparison of nystagmus waveform patterns, the proposed method opens up for investigation and identification of the underlying condition in the individual patient, and for quantifying eye movements during tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
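
The waveform-characterisation idea described above (local sinusoidal features, then clustering) can be illustrated with a minimal sketch on synthetic data. This is only a rough analogue of the general approach, not the authors' algorithm; the window length, sinusoid model and k-means are illustrative choices.

```python
# Rough analogue of the general idea: fit a sinusoid to short windows of an
# eye-position trace, keep the fitted parameters as local waveform features,
# and cluster them. Synthetic data; not the authors' method.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.cluster import KMeans

fs = 500.0                                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
amp = np.where(t < 5, 1.0, 2.5)                 # amplitude changes halfway through
pos = amp * np.sin(2 * np.pi * 3.0 * t) + 0.05 * np.random.randn(t.size)

def sinusoid(tt, a, f, phi, c):
    return a * np.sin(2 * np.pi * f * tt + phi) + c

features = []
win = int(0.5 * fs)                             # 0.5-s analysis windows
for start in range(0, pos.size - win, win):
    tw = t[start:start + win] - t[start]
    yw = pos[start:start + win]
    p0 = [np.std(yw) * np.sqrt(2), 3.0, 0.0, np.mean(yw)]    # rough initial guess
    try:
        (a, f, phi, c), _ = curve_fit(sinusoid, tw, yw, p0=p0, maxfev=5000)
        features.append([abs(a), abs(f)])       # local amplitude and frequency
    except RuntimeError:
        continue                                # skip windows where the fit fails

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(features))
print("windows per waveform cluster:", np.bincount(labels))
```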

Milena Raffi; Andrea Meoni; Alessandro Piras

Analysis of microsaccades during extended practice of a visual discrimination task in the macaque monkey Journal Article

Neuroscience Letters, 743 , pp. 1–7, 2021.

Abstract | Links | BibTeX

@article{Raffi2021,
title = {Analysis of microsaccades during extended practice of a visual discrimination task in the macaque monkey},
author = {Milena Raffi and Andrea Meoni and Alessandro Piras},
doi = {10.1016/j.neulet.2020.135581},
year = {2021},
date = {2021-01-01},
journal = {Neuroscience Letters},
volume = {743},
pages = {1--7},
publisher = {Elsevier B.V.},
abstract = {The spatial location indicated by a visual cue can bias microsaccades directions towards or away from the cue. Aim of this work was to evaluate the microsaccades characteristics during the monkey's training, investigating the relationship between a shift of attention and practice. The monkey was trained to press a lever at a target onset, then an expanding optic flow stimulus appeared to the right of the target. After a variable time delay, a visual cue appeared within the optic flow stimulus and the monkey had to release the lever in a maximum reaction time (RT) of 700 ms. In the control task no visual cue appeared and the monkey had to attend a change in the target color. Data were recorded in 9 months. Results revealed that the RTs at the control task changed significantly across time. The microsaccades directions were significantly clustered toward the visual cue, suggesting that the animal developed an attentional bias toward the visual space where the cue appeared. The microsaccades amplitude differed significantly across time. The microsaccades peak velocity differed significantly both across time and within the time delays, indicating that the monkey made faster microsaccades when it expected the cue to appear. The microsaccades number was significantly higher in the control task with respect to discrimination. The lack of change in microsaccades rate, duration, number and direction across time indicates that the experience acquired during practicing the task did not influence microsaccades generation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
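
Microsaccade detection in studies like this one is commonly done with a velocity-threshold algorithm in the spirit of Engbert and Kliegl (2003). The sketch below shows that generic approach on synthetic gaze data; it is not the authors' pipeline, and the thresholds are illustrative.

```python
# Generic velocity-threshold microsaccade detection (in the spirit of
# Engbert & Kliegl, 2003) on synthetic gaze data; thresholds are illustrative.
import numpy as np

fs = 1000.0                                     # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = 0.001 * np.random.randn(t.size).cumsum()    # slow fixational drift (deg)
y = 0.001 * np.random.randn(t.size).cumsum()
x[500:520] += np.linspace(0, 0.4, 20)           # inject a 0.4-deg, 20-ms microsaccade
x[520:] += 0.4

def velocity(p, fs):
    """Central-difference velocity in deg/s."""
    v = np.zeros_like(p)
    v[1:-1] = (p[2:] - p[:-2]) * fs / 2.0
    return v

vx, vy = velocity(x, fs), velocity(y, fs)
sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)    # robust noise estimate
sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
above = (vx / (6 * sx)) ** 2 + (vy / (6 * sy)) ** 2 > 1  # elliptic threshold, lambda = 6

# Group supra-threshold samples into events lasting at least 6 ms.
events, start = [], None
for i, flag in enumerate(above):
    if flag and start is None:
        start = i
    elif not flag and start is not None:
        if i - start >= 6:
            events.append((start / fs, i / fs))
        start = None
print("detected microsaccades (onset, offset in s):", events)
```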

Brendan L Portengen; Carlien Roelofzen; Giorgio L Porro; Saskia M Imhof; Alessio Fracasso; Marnix Naber

Blind spot and visual field anisotropy detection with flicker pupil perimetry across brightness and task variations Journal Article

Vision Research, 178 , pp. 79–85, 2021.

Abstract | Links | BibTeX

@article{Portengen2021,
title = {Blind spot and visual field anisotropy detection with flicker pupil perimetry across brightness and task variations},
author = {Brendan L Portengen and Carlien Roelofzen and Giorgio L Porro and Saskia M Imhof and Alessio Fracasso and Marnix Naber},
doi = {10.1016/j.visres.2020.10.005},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {178},
pages = {79--85},
publisher = {Elsevier Ltd},
abstract = {The pupil can be used as an objective measure for testing sensitivities across the visual field (pupil perimetry; PP). The recently developed gaze-contingent flicker PP (gcFPP) is a promising novel form of PP, with improved sensitivity due to retinotopically stable and repeated flickering stimulations, in a short time span. As a diagnostic tool gcFPP has not yet been benchmarked in healthy individuals. The main aims of the current study were to investigate whether gcFPP has the sensitivity to detect the blind spot, and upper versus lower visual field differences that were found before in previous studies. An additional aim was to test for the effects of attentional requirements and background luminance. A total of thirty individuals were tested with gcFPP across two separate experiments. The results showed that pupil oscillation amplitudes were smaller for stimuli presented inside as compared to outside the blind spot. Amplitudes also decreased as a function of eccentricity (i.e., distance to fixation) and were larger for upper as compared to lower visual fields. We measured the strongest and most sensitive pupil responses to stimuli presented on dark- and mid-gray backgrounds, and when observers covertly focused their attention to the flickering stimulus. GcFPP thus evokes pupil responses that are sensitive enough to detect local, and global differences in pupil sensitivity. The findings further encourage (1) the use of a gray background to prevent straylight without affecting gcFPPs sensitivity and (2) the use of an attention task to enhance pupil sensitivity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
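
A pupil oscillation amplitude of the kind gcFPP relies on can be read off the Fourier spectrum of the pupil trace at the flicker frequency. The sketch below assumes a 2 Hz flicker and a synthetic trace; it illustrates the measure, not the authors' analysis.

```python
# Estimate the pupil oscillation amplitude at an assumed 2-Hz flicker frequency
# from a pupil trace via the Fourier spectrum; trace and units are synthetic.
import numpy as np

fs = 60.0                        # pupil sampling rate (Hz)
flicker_hz = 2.0                 # assumed stimulus flicker rate
t = np.arange(0, 10, 1 / fs)     # one 10-s trial
pupil = 4.0 + 0.12 * np.sin(2 * np.pi * flicker_hz * t) + 0.05 * np.random.randn(t.size)

detrended = pupil - pupil.mean()
spectrum = np.fft.rfft(detrended)
freqs = np.fft.rfftfreq(detrended.size, d=1 / fs)

amplitude = 2 * np.abs(spectrum) / detrended.size   # single-sided amplitude spectrum
idx = np.argmin(np.abs(freqs - flicker_hz))         # bin closest to the flicker rate
print(f"pupil oscillation amplitude at {freqs[idx]:.2f} Hz: {amplitude[idx]:.3f} (a.u.)")
```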

B Platt; A Sfärlea; C Buhl; J Loechner; J Neumüller; L Asperud Thomsen; K Starman-Wöhrle; E Salemink; G Schulte-Körne

An eye-tracking study of attention biases in children at high familial risk for depression and their parents with depression Journal Article

Child Psychiatry & Human Development, pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Platt2021,
title = {An eye-tracking study of attention biases in children at high familial risk for depression and their parents with depression},
author = {B Platt and A Sfärlea and C Buhl and J Loechner and J Neumüller and L {Asperud Thomsen} and K Starman-Wöhrle and E Salemink and G Schulte-Körne},
doi = {10.1007/s10578-020-01105-2},
year = {2021},
date = {2021-01-01},
journal = {Child Psychiatry & Human Development},
pages = {1--20},
publisher = {Springer Science and Business Media LLC},
abstract = {Attention biases (AB) are a core component of cognitive models of depression yet it is unclear what role they play in the transgenerational transmission of depression. 44 children (9–14 years) with a high familial risk of depression (HR) were compared on multiple measures of AB with 36 children with a low familial risk of depression (LR). Their parents: 44 adults with a history of depression (HD) and 36 adults with no history of psychiatric disorder (ND) were also compared. There was no evidence of group differences in AB; neither between the HR and LR children, nor between HD and ND parents. There was no evidence of a correlation between parent and child AB. The internal consistency of the tasks varied greatly. The Dot-Probe Task showed unacceptable reliability whereas the behavioral index of the Visual-Search Task and an eye-tracking index of the Passive-Viewing Task showed better reliability. There was little correlation between the AB tasks and the tasks showed minimal convergence with symptoms of depression or anxiety. The null-findings of the current study contradict our expectations and much of the previous literature. They may be due to the poor psychometric properties associated with some of the AB indices, the unreliability of AB in general, or the relatively modest sample size. The poor reliability of the tasks in our sample suggest caution should be taken when interpreting the positive findings of previous studies which have used similar methods and populations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
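
For readers unfamiliar with the bias indices involved, the sketch below computes a conventional dot-probe attention-bias score and its split-half reliability with the Spearman-Brown correction, on simulated data; it is not the authors' analysis code.

```python
# Conventional dot-probe bias score (incongruent minus congruent RT) plus
# split-half reliability with Spearman-Brown correction, on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_trials = 40, 80

bias_all, bias_even, bias_odd = [], [], []
for _ in range(n_subjects):
    congruent = rng.normal(500, 60, n_trials)     # probe replaces the emotional face
    incongruent = rng.normal(510, 60, n_trials)   # probe replaces the neutral face
    bias_all.append(incongruent.mean() - congruent.mean())
    bias_even.append(incongruent[::2].mean() - congruent[::2].mean())
    bias_odd.append(incongruent[1::2].mean() - congruent[1::2].mean())

r_half = np.corrcoef(bias_even, bias_odd)[0, 1]
r_sb = 2 * r_half / (1 + r_half)                  # Spearman-Brown corrected reliability
print(f"mean bias: {np.mean(bias_all):.1f} ms, split-half reliability: {r_sb:.2f}")
```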

Adam J Parker; Timothy J Slattery

Spelling ability influences early letter encoding during reading: Evidence from return-sweep eye movements Journal Article

Quarterly Journal of Experimental Psychology, 74 (1), pp. 135–149, 2021.

Abstract | Links | BibTeX

@article{Parker2021,
title = {Spelling ability influences early letter encoding during reading: Evidence from return-sweep eye movements},
author = {Adam J Parker and Timothy J Slattery},
doi = {10.1177/1747021820949150},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {74},
number = {1},
pages = {135--149},
abstract = {In recent years, there has been an increase in research concerning individual differences in readers' eye movements. However, this body of work is almost exclusively concerned with the reading of single-line texts. While spelling and reading ability have been reported to influence saccade targeting and fixation times during intra-line reading, where upcoming words are available for parafoveal processing, it is unclear how these variables affect fixations adjacent to return-sweeps. We, therefore, examined the influence of spelling and reading ability on return-sweep and corrective saccade parameters for 120 participants engaged in multiline text reading. Less-skilled readers and spellers tended to launch their return-sweeps closer to the end of the line, prefer a viewing location closer to the start of the next, and made more return-sweep undershoot errors. We additionally report several skill-related differences in readers' fixation durations across multiline texts. Reading ability influenced all fixations except those resulting from return-sweep error. In contrast, spelling ability influenced only those fixations following accurate return-sweeps—where parafoveal processing was not possible prior to fixation. This stands in contrasts to an established body of work where fixation durations are related to reading but not spelling ability. These results indicate that lexical quality shapes the rate at which readers access meaning from the text by enhancing early letter encoding, and influences saccade targeting even in the absence of parafoveal target information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
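
Return-sweeps and their undershoot errors can be flagged from fixation coordinates with simple heuristics, as in the hypothetical sketch below; the thresholds and fixation data are illustrative and not those used in the study.

```python
# Flag return-sweeps (large right-to-left saccades onto a new line) and classify
# undershoots (a leftward corrective saccade follows the line-initial fixation).
# Fixations are (x, y) pixel coordinates; data and thresholds are illustrative.
fixations = [
    (120, 100), (300, 100), (620, 100),    # line 1
    (260, 140), (140, 140), (330, 140),    # return-sweep undershoot, then correction
    (600, 140),
]

def classify_return_sweeps(fixations, min_leftward=200, line_change=20, corrective=40):
    events = []
    for i in range(1, len(fixations)):
        dx = fixations[i][0] - fixations[i - 1][0]
        dy = fixations[i][1] - fixations[i - 1][1]
        if dx < -min_leftward and dy > line_change:          # return-sweep candidate
            undershoot = (i + 1 < len(fixations)
                          and fixations[i + 1][0] < fixations[i][0] - corrective
                          and abs(fixations[i + 1][1] - fixations[i][1]) < line_change)
            events.append({"fixation_index": i, "undershoot": undershoot})
    return events

print(classify_return_sweeps(fixations))
```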

Katya Olmos-Solis; Anouk Mariette van Loon; Christian N L Olivers

Content or status: Frontal and posterior cortical representations of object category and upcoming task goals in working memory Journal Article

Cortex, 135 , pp. 61–77, 2021.

Abstract | Links | BibTeX

@article{OlmosSolis2021,
title = {Content or status: Frontal and posterior cortical representations of object category and upcoming task goals in working memory},
author = {Katya Olmos-Solis and Anouk Mariette van Loon and Christian N L Olivers},
doi = {10.1016/j.cortex.2020.11.011},
year = {2021},
date = {2021-01-01},
journal = {Cortex},
volume = {135},
pages = {61--77},
publisher = {Elsevier Ltd},
abstract = {To optimize task sequences, the brain must differentiate between current and prospective goals. We previously showed that currently and prospectively relevant object representations in working memory can be dissociated within object-selective cortex. Based on other recent studies indicating that a range of brain areas may be involved in distinguishing between currently relevant and prospectively relevant information in working memory, here we conducted multivoxel pattern analyses of fMRI activity in additional posterior areas (specifically early visual cortex and the intraparietal sulcus) as well as frontal areas (specifically the frontal eye fields and lateral prefrontal cortex). We assessed whether these areas represent the memory content, the current versus prospective status of the memory, or both. On each trial, participants memorized an object drawn from three different categories. The object was the target for either a first task (currently relevant), a second task (prospectively relevant), or for neither task (irrelevant). The results revealed a division of labor across brain regions: While posterior areas preferentially coded for content (i.e., the category), frontal areas carried information about the current versus prospective relevance status of the memory, irrespective of the category. Intraparietal sulcus revealed both strong category- and status-sensitivity, consistent with its hub function of combining stimulus and priority signals. Furthermore, cross-decoding analyses revealed that while current and prospective representations were similar prior to search, they became dissimilar during search, in posterior as well as frontal areas. The findings provide further evidence for a dissociation between content and control networks in working memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
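
The cross-decoding logic referred to above (train a classifier on patterns from one relevance condition, test it on the other) can be sketched as follows with simulated voxel patterns; this illustrates the analysis idea, not the authors' MVPA pipeline.

```python
# Train a linear classifier on simulated "voxel" patterns from one condition
# (currently relevant) and test it on the other (prospectively relevant).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 60, 100
categories = rng.integers(0, 3, n_trials)             # three object categories

signal = rng.normal(0, 1, (3, n_voxels))               # one template pattern per category
current = signal[categories] + rng.normal(0, 2, (n_trials, n_voxels))
prospective = signal[categories] + rng.normal(0, 2, (n_trials, n_voxels))

clf = SVC(kernel="linear").fit(current, categories)
print("training accuracy (current):       ", clf.score(current, categories))
print("cross-decoding accuracy (prospective):", clf.score(prospective, categories))
```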

Mira L Nencheva; Elise A Piazza; Casey Lew‐Williams

The moment‐to‐moment pitch dynamics of child‐directed speech shape toddlers' attention and learning Journal Article

Developmental Science, 24 , pp. 1–15, 2021.

Abstract | Links | BibTeX

@article{Nencheva2021,
title = {The moment‐to‐moment pitch dynamics of child‐directed speech shape toddlers' attention and learning},
author = {Mira L Nencheva and Elise A Piazza and Casey Lew‐Williams},
doi = {10.1111/desc.12997},
year = {2021},
date = {2021-01-01},
journal = {Developmental Science},
volume = {24},
pages = {1--15},
abstract = {Young children have an overall preference for child-directed speech (CDS) over adult-directed speech (ADS), and its structural features are thought to facilitate language learning. Many studies have supported these findings, but less is known about processing of CDS at short, sub-second timescales. How do the moment-to-moment dynamics of CDS influence young children's attention and learning? In Study 1, we used hierarchical clustering to characterize patterns of pitch variability in a natural CDS corpus, which uncovered four main word-level contour shapes: ‘fall', ‘rise', ‘hill', and ‘valley'. In Study 2, we adapted a measure from adult attention research—pupil size synchrony—to quantify real-time attention to speech across participants, and found that toddlers showed higher synchrony to the dynamics of CDS than to ADS. Importantly, there were consistent differences in toddlers' attention when listening to the four word-level contour types. In Study 3, we found that pupil size synchrony during exposure to novel words predicted toddlers' learning at test. This suggests that the dynamics of pitch in CDS not only shape toddlers' attention but guide their learning of new words. By revealing a physiological response to the real-time dynamics of CDS, this investigation yields a new sub-second framework for understanding young children's engagement with one of the most important signals in their environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
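
Pupil-size synchrony of the kind described here can be approximated with a leave-one-out correlation: each listener's pupil trace is correlated with the average trace of all other listeners. The sketch below uses simulated traces and assumed parameters.

```python
# Leave-one-out pupil synchrony: correlate each listener's pupil trace with the
# average trace of all other listeners. Simulated traces, assumed parameters.
import numpy as np

rng = np.random.default_rng(2)
n_listeners, n_samples = 20, 600
shared = np.sin(np.linspace(0, 8 * np.pi, n_samples))          # stimulus-driven component
pupil = shared + rng.normal(0, 0.8, (n_listeners, n_samples))  # per-listener noisy traces

def loo_synchrony(traces):
    """Mean Pearson correlation of each trace with the mean of the others."""
    rs = []
    for i in range(traces.shape[0]):
        others = np.delete(traces, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(traces[i], others)[0, 1])
    return float(np.mean(rs))

print("pupil-size synchrony:", round(loo_synchrony(pupil), 3))
```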

Carly Moser; Lyndsay Schmitt; Joseph Schmidt; Amanda Fairchild; Jessica Klusek

Response inhibition deficits in women with the FMR1 premutation are associated with age and fall risk Journal Article

Brain and Cognition, 148 , pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Moser2021,
title = {Response inhibition deficits in women with the FMR1 premutation are associated with age and fall risk},
author = {Carly Moser and Lyndsay Schmitt and Joseph Schmidt and Amanda Fairchild and Jessica Klusek},
doi = {10.1016/j.bandc.2020.105675},
year = {2021},
date = {2021-01-01},
journal = {Brain and Cognition},
volume = {148},
pages = {1--10},
publisher = {Elsevier Inc.},
abstract = {One in 113-178 females worldwide carry a premutation allele on the FMR1 gene. The FMR1 premutation is linked to neurocognitive and neuromotor impairments, although the phenotype is not fully understood, particularly with respect to age effects. This study sought to define oculomotor response inhibition skills in women with the FMR1 premutation and their association with age and fall risk. We employed an antisaccade eye-tracking paradigm to index oculomotor inhibition skills in 35 women with the FMR1 premutation and 28 control women. The FMR1 premutation group exhibited longer antisaccade latency and reduced accuracy relative to controls, indicating deficient response inhibition skills. Longer response latency was associated with older age in the FMR1 premutation and was also predictive of fall risk. Findings highlight the utility of the antisaccade paradigm for detecting early signs of age-related executive decline in the FMR1 premutation, which is related to fall risk. Findings support the need for clinical prevention efforts to decrease and delay the trajectory of age-related executive decline in women with the FMR1 premutation during midlife.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
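
The antisaccade measures reported here (directional error rate and latency of correct antisaccades) reduce to simple per-trial bookkeeping, as in the hypothetical sketch below; field names are assumptions, not the authors' data format.

```python
# Antisaccade error rate (first saccade toward the cue) and mean latency of
# correct antisaccades from hypothetical per-trial saccade reports.
from statistics import mean

trials = [
    {"cue_side": "left",  "first_saccade_side": "right", "latency_ms": 310},  # correct
    {"cue_side": "left",  "first_saccade_side": "left",  "latency_ms": 215},  # direction error
    {"cue_side": "right", "first_saccade_side": "left",  "latency_ms": 295},  # correct
    {"cue_side": "right", "first_saccade_side": "right", "latency_ms": 198},  # direction error
]

errors = [t["first_saccade_side"] == t["cue_side"] for t in trials]
correct_latencies = [t["latency_ms"] for t, err in zip(trials, errors) if not err]

print("antisaccade error rate:", sum(errors) / len(trials))
print("mean correct antisaccade latency (ms):", mean(correct_latencies))
```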

Krithika Mohan; Oliver Zhu; David Freedman

Interaction between neuronal encoding and population dynamics during categorization task switching in parietal cortex Journal Article

Neuron, 109 , pp. 1–17, 2021.

Abstract | Links | BibTeX

@article{Mohan2021,
title = {Interaction between neuronal encoding and population dynamics during categorization task switching in parietal cortex},
author = {Krithika Mohan and Oliver Zhu and David Freedman},
doi = {10.1016/j.neuron.2020.11.022},
year = {2021},
date = {2021-01-01},
journal = {Neuron},
volume = {109},
pages = {1--17},
publisher = {Elsevier Inc.},
abstract = {Primates excel at categorization, a cognitive process for assigning stimuli into behaviorally relevant groups. Categories are encoded in multiple brain areas and tasks, yet it remains unclear how neural encoding and dynamics support cognitive tasks with different demands. We recorded from parietal cortex during flexible switching between categorization tasks with distinct cognitive and motor demands, and also studied recurrent neural networks (RNNs) trained on the same tasks. In the one-interval categorization task (OIC), monkeys rapidly reported their decisions with a saccade. In the delayed match-to-category (DMC) task, monkeys decided whether sequentially presented stimuli were categorical matches. Neuronal category encoding generalized across tasks, but categorical encoding was more binary-like in the DMC task and more graded in the OIC task. Furthermore, analysis of the trained RNNs supports the hypothesis that binary-like encoding in the DMC task arises through compression of graded feature encoding by population attractor dynamics underlying short-term working memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
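
A minimal version of the modelling approach mentioned above is a small recurrent network trained on a delayed match-to-category style task. The sketch below uses PyTorch with illustrative task statistics and hyperparameters; it is not the network or training regime reported in the paper.

```python
# Small recurrent network trained on a delayed match-to-category style task:
# a sample direction, a delay, then a test direction; respond "match" when both
# fall in the same category. Illustrative architecture and hyperparameters.
import torch
import torch.nn as nn

def make_batch(n=128, delay=5):
    ang = torch.rand(n, 2) * 2 * torch.pi                # sample and test directions
    cat = (torch.sin(ang) > 0).long()                    # two categories split by a boundary
    seq = torch.zeros(n, 2 + delay, 2)
    seq[:, 0] = torch.stack([torch.cos(ang[:, 0]), torch.sin(ang[:, 0])], dim=1)
    seq[:, -1] = torch.stack([torch.cos(ang[:, 1]), torch.sin(ang[:, 1])], dim=1)
    return seq, (cat[:, 0] == cat[:, 1]).long()          # label: match vs. non-match

class DMCNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.readout(out[:, -1])                  # decision at the last time step

net = DMCNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(500):
    x, y = make_batch()
    loss = loss_fn(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    x, y = make_batch(1000)
    print("held-out accuracy:", (net(x).argmax(dim=1) == y).float().mean().item())
```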

Leanna McConnell; Wendy Troop-Gordon

Attentional biases to bullies and bystanders and youth's coping with peer victimization Journal Article

Journal of Early Adolescence, 41 (1), pp. 97–127, 2021.

Abstract | Links | BibTeX

@article{McConnell2021,
title = {Attentional biases to bullies and bystanders and youth's coping with peer victimization},
author = {Leanna McConnell and Wendy Troop-Gordon},
doi = {10.1177/0272431620931206},
year = {2021},
date = {2021-01-01},
journal = {Journal of Early Adolescence},
volume = {41},
number = {1},
pages = {97--127},
abstract = {Effectively coping with peer victimization may be facilitated by deploying attention away from threat (i.e., bullies, reinforcers) and toward available support (e.g., defenders). To test this premise, 72 early adolescents (38 girls; Mage = 11.67},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Feifei Liang; Jie Ma; Xuejun Bai; Simon P Liversedge

Initial landing position effects on Chinese word learning in children and adults Journal Article

Journal of Memory and Language, 116 , pp. 104183, 2021.

Abstract | Links | BibTeX

@article{Liang2021,
title = {Initial landing position effects on Chinese word learning in children and adults},
author = {Feifei Liang and Jie Ma and Xuejun Bai and Simon P Liversedge},
doi = {10.1016/j.jml.2020.104183},
year = {2021},
date = {2021-01-01},
journal = {Journal of Memory and Language},
volume = {116},
pages = {104183},
publisher = {Elsevier Inc.},
abstract = {© 2020 We adopted a word learning paradigm to examine whether children and adults differ in their saccade targeting strategies when learning novel words in Chinese reading. Adopting a developmental perspective, we extrapolated hypotheses pertaining to saccadic targeting and its development from the Chinese Reading Model (Li & Pollatsek, 2020). In our experiment, we embedded novel words into eight sentences, each of which provided a context for readers to form a new lexical representation. A group of children and a group of adults were required to read these sentences as their eye movements were recorded. At a basic level, we showed that decisions of initial saccadic targeting, and mechanisms responsible for computation of initial landing sites relative to launch sites are in place early in children, however, such targeting was less optimal in children than adults. Furthermore, for adults as lexical familiarity increased saccadic targeting behavior became more optimized, however, no such effects occurred in children. Mechanisms controlling initial saccadic targeting in relation to launch sites and in respect of lexical familiarity appear to operate with functional efficacy that is developmentally delayed. At a broad theoretical level, we consider our results in relation to issues associated with visually and linguistically mediated saccadic control. More specifically, our novel findings fit neatly with our theoretical extrapolations from the CRM and suggest that its framework may be valuable for future investigations of the development of eye movement control in Chinese reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Karin Ludwig; Thomas Schenk

Long-lasting effects of a gaze-contingent intervention on change detection in healthy participants – Implications for neglect rehabilitation Journal Article

Cortex, 134 , pp. 333–350, 2021.

Abstract | Links | BibTeX

@article{Ludwig2021,
title = {Long-lasting effects of a gaze-contingent intervention on change detection in healthy participants – Implications for neglect rehabilitation},
author = {Karin Ludwig and Thomas Schenk},
doi = {10.1016/j.cortex.2020.10.013},
year = {2021},
date = {2021-01-01},
journal = {Cortex},
volume = {134},
pages = {333--350},
publisher = {Elsevier Ltd},
abstract = {Patients with spatial neglect show an ipsilesional exploration bias. We developed a gaze-contingent intervention that aims at reducing this bias and tested its effects on visual exploration in healthy participants: During a visual search, stimuli in one half of the search display are removed when the gaze moves into this half. This leads to a relative increase in the exploration of the other half of the search display – the one that can be explored without impediments. In the first experiment, we tested whether this effect transferred to visual exploration during a change detection task (under change blindness conditions), which was the case. In a second experiment, we modified the intervention (to an intermittent application) but the original version yielded more promising results. Thus, in the third experiment, the original version was used to test the longevity of its effects and whether its repeated application produced even stronger results. To this aim, we compared two groups: the first group received the intervention once, the second group repeatedly on three consecutive days. The change detection task was administered before the intervention and at four points in time after the last intervention (directly afterwards, +1 hour, +1 day, and +4 days). The results showed long-lasting effects of the intervention, most pronounced in the second group. Here the intervention changed the bias in the visual exploration pattern significantly until the last follow-up. We conclude that the intervention shows promise for the successful application in neglect patients.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
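
The gaze-contingent manipulation described above (items in a designated display half disappear whenever gaze enters that half) boils down to a simple per-sample rule. The sketch below abstracts away the display toolkit and uses toy item and gaze coordinates; names and values are assumptions.

```python
# Per-sample rule for the masking manipulation: if gaze is in the designated
# half, hide every item in that half; otherwise show everything. Display
# handling is abstracted away; item and gaze coordinates are toy values.
SCREEN_MIDLINE_X = 960           # pixel midline of a 1920-px-wide display
MASKED_SIDE = "left"             # the half whose items disappear when gazed at

items = [{"id": i, "x": x} for i, x in enumerate([200, 450, 700, 1100, 1400, 1700])]

def visible_items(items, gaze_x):
    """Return the items that stay on screen for the current gaze sample."""
    gaze_in_masked_half = (gaze_x < SCREEN_MIDLINE_X) == (MASKED_SIDE == "left")
    if not gaze_in_masked_half:
        return items
    in_masked_half = lambda it: (it["x"] < SCREEN_MIDLINE_X) == (MASKED_SIDE == "left")
    return [it for it in items if not in_masked_half(it)]

for gaze_x in (1300, 400):                               # simulated gaze samples
    shown = [it["id"] for it in visible_items(items, gaze_x)]
    print(f"gaze at x={gaze_x}: items shown -> {shown}")
```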

Kunyu Lian; Jie Ma; Feifei Liang; Ling Wei; Shuwei Zhang; Yingying Wu; Xuejun Bai; Rong Lian

The role of character positional frequency in oral reading: A developmental study Journal Article

Social Behavior and Personality, 49 (1), pp. 1–13, 2021.

Abstract | Links | BibTeX

@article{Lian2021,
title = {The role of character positional frequency in oral reading: A developmental study},
author = {Kunyu Lian and Jie Ma and Feifei Liang and Ling Wei and Shuwei Zhang and Yingying Wu and Xuejun Bai and Rong Lian},
doi = {10.2224/sbp.9733},
year = {2021},
date = {2021-01-01},
journal = {Social Behavior and Personality},
volume = {49},
number = {1},
pages = {1--13},
abstract = {How frequently a character appears in a word (positional character frequency) is used as a cue in word segmentation when reading aloud in the Chinese language. In this study we created 176 sentences with a target word in the center of each. Participants were 76 college students (mature readers) and 76 third-grade students (beginner readers). Results show an interaction effect of age and positional frequency of the initial character in the word on gaze duration. Further analysis shows that the third-grade students' gaze duration was significantly longer in high, relative to low, positional character frequency of the target words. This trend was consistent with refixation duration, and there was a marginally significant interaction between age and total fixation time. Overall, positional character frequency was an important cue for word segmentation in oral reading in the Chinese language, and third-grade students relied more heavily on this cue than did college students.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
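
Positional character frequency, the segmentation cue at issue here, can be estimated from any segmented word list by counting how often each character occupies the word-initial position. The toy sketch below stands in for a real frequency-normed corpus.

```python
# Count how often each character occurs word-initially in a segmented word
# list; the toy list stands in for a frequency-normed corpus.
from collections import Counter

segmented_words = ["学生", "学习", "生活", "活动", "动物", "学校"]   # toy two-character words

initial_counts, total_counts = Counter(), Counter()
for word in segmented_words:
    initial_counts[word[0]] += 1
    total_counts.update(word)

for ch in sorted(total_counts):
    p_initial = initial_counts[ch] / total_counts[ch]
    print(f"{ch}: {total_counts[ch]} occurrences, word-initial proportion = {p_initial:.2f}")
```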

Onkar Krishna; Kiyoharu Aizawa; Go Irie

Computational attention system for children, adults and elderly Journal Article

Multimedia Tools and Applications, 80 , pp. 1055–1074, 2021.

Abstract | BibTeX

@article{Krishna2021,
title = {Computational attention system for children, adults and elderly},
author = {Onkar Krishna and Kiyoharu Aizawa and Go Irie},
year = {2021},
date = {2021-01-01},
journal = {Multimedia Tools and Applications},
volume = {80},
pages = {1055--1074},
publisher = {Multimedia Tools and Applications},
abstract = {The existing computational visual attention systems have focused on the objective to basically simulate and understand the concept of visual attention system in adults. Consequently, the impact of observer's age in scene viewing behavior has rarely been considered. This study quantitatively analyzed the age-related differences in gaze landings during scene viewing for three different class of images: naturals, man-made, and fractals. Observer's of different age-group have shown different scene viewing tendencies independent to the class of the image viewed. Several interesting observations are drawn from the results. First, gaze landings for man-made dataset showed that whereas child observers focus more on the scene foreground, i.e., locations that are near, elderly observers tend to explore the scene background, i.e., locations farther in the scene. Considering this result a framework is proposed in this paper to quantitatively measure the depth bias tendency across age groups. Second, the quantitative analysis results showed that children exhibit the lowest exploratory behavior level but the highest central bias tendency among the age groups and across the different scene categories. Third, inter-individual similarity metrics reveal that an adult had significantly lower gaze consistency with children and elderly compared to other adults for all the scene categories. Finally, these analysis results were consequently leveraged to develop a more accurate age-adapted saliency model independent to the image type. The prediction accuracy suggests that our model fits better to the collected eye-gaze data of the observers belonging to different age groups than the existing models do.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
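
Inter-individual gaze similarity of the kind analysed here is often quantified by correlating smoothed fixation-density maps. The sketch below is one such metric on random fixations; the map resolution, smoothing and data are illustrative, and this is not necessarily the metric the authors used.

```python
# Correlate Gaussian-smoothed fixation-density maps of two observers as a
# simple gaze-similarity metric. Map size, smoothing and data are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
WIDTH, HEIGHT = 800, 600

def fixation_map(fix_xy, sigma_px=25):
    """2-D fixation histogram (10-px bins), Gaussian-smoothed, normalised to sum to 1."""
    hist, _, _ = np.histogram2d(fix_xy[:, 1], fix_xy[:, 0],
                                bins=(HEIGHT // 10, WIDTH // 10),
                                range=[[0, HEIGHT], [0, WIDTH]])
    smooth = gaussian_filter(hist, sigma=sigma_px / 10)
    return smooth / smooth.sum()

child = rng.uniform([0, 0], [WIDTH, HEIGHT], size=(50, 2))    # fixations as (x, y)
adult = rng.uniform([0, 0], [WIDTH, HEIGHT], size=(50, 2))

m1, m2 = fixation_map(child), fixation_map(adult)
similarity = np.corrcoef(m1.ravel(), m2.ravel())[0, 1]        # Pearson map correlation
print("gaze-map correlation:", round(similarity, 3))
```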

Tamás Káldi; Anna Babarczy

Linguistic focus guides attention during the encoding and refreshing of working memory content Journal Article

Journal of Memory and Language, 116 , pp. 104187, 2021.

Abstract | Links | BibTeX

@article{Kaldi2021,
title = {Linguistic focus guides attention during the encoding and refreshing of working memory content},
author = {Tamás Káldi and Anna Babarczy},
doi = {10.1016/j.jml.2020.104187},
year = {2021},
date = {2021-01-01},
journal = {Journal of Memory and Language},
volume = {116},
pages = {104187},
abstract = {Focus is a linguistic device that marks a piece of information within an utterance as most relevant, as when emphasis is placed by the speaker on a word using phonological stress, special intonation, or prosodic prominence. The question addressed in the present study is whether the use of linguistic focus is best seen as a means of directing the listener's attention. We investigated attention allocation on the part of the listener to linguistically focused elements in working memory in a series of eye-tracking experiments. We concentrated on two processes: the encoding of the focused element and its retention. Attentional load during encoding was measured by pupil dilation, and attention allocation during retention was estimated from fixations to locations of previously present visual stimuli on a blank screen. It was found that i) more attention was allocated during the processing of sentences with linguistic focus and ii) linguistically focused elements received more attention during memory retention. However, when the task demanded the sharing of attention, the advantage of the focused element during retention disappeared. Further experiments showed that when verbal stimuli whose prominence was not linguistically marked were presented, the patterns of attention allocation associated with linguistic focus during retention replicated. These results lend further support to the claim that linguistic focus is a grammaticalized means of expressing prominence, and as such, functions as an attention capturing device.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mega B Herlambang; Fokie Cnossen; Niels A Taatgen

The effects of intrinsic motivation on mental fatigue Journal Article

PLoS ONE, 16 (1), pp. 1–22, 2021.

Abstract | Links | BibTeX

@article{Herlambang2021,
title = {The effects of intrinsic motivation on mental fatigue},
author = {Mega B Herlambang and Fokie Cnossen and Niels A Taatgen},
doi = {10.1371/journal.pone.0243754},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {1},
pages = {1--22},
abstract = {There have been many studies attempting to disentangle the relation between motivation and mental fatigue. Mental fatigue occurs after performing a demanding task for a prolonged time, and many studies have suggested that motivation can counteract the negative effects of mental fatigue on task performance. To complicate matters, most mental fatigue studies looked exclusively at the effects of extrinsic motivation but not intrinsic motivation. Individuals are said to be extrinsically motivated when they perform a task to attain rewards and avoid punishments, while they are said to be intrinsically motivated when they do for the pleasure of doing the activity. To assess whether intrinsic motivation has similar effects as extrinsic motivation, we conducted an experiment using subjective, performance, and physiological measures (heart rate variability and pupillometry). In this experiment, 28 participants solved Sudoku puzzles on a computer for three hours, with a cat video playing in the corner of the screen. The experiment consisted of 14 blocks with two alternating conditions: low intrinsic motivation and high intrinsic motivation. The main results showed that irrespective of condition, participants reported becoming fatigued over time. They performed better, invested more mental effort physiologically, and were less distracted in high-level than in low-level motivation blocks. The results suggest that similarly to extrinsic motivation, time-on-task effects are modulated by the level of intrinsic motivation: With high intrinsic motivation, people can maintain their performance over time as they seem willing to invest more effort as time progresses than in low intrinsic motivation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

There have been many studies attempting to disentangle the relation between motivation and mental fatigue. Mental fatigue occurs after performing a demanding task for a prolonged time, and many studies have suggested that motivation can counteract the negative effects of mental fatigue on task performance. To complicate matters, most mental fatigue studies looked exclusively at the effects of extrinsic motivation but not intrinsic motivation. Individuals are said to be extrinsically motivated when they perform a task to attain rewards and avoid punishments, while they are said to be intrinsically motivated when they do it for the pleasure of doing the activity. To assess whether intrinsic motivation has similar effects as extrinsic motivation, we conducted an experiment using subjective, performance, and physiological measures (heart rate variability and pupillometry). In this experiment, 28 participants solved Sudoku puzzles on a computer for three hours, with a cat video playing in the corner of the screen. The experiment consisted of 14 blocks with two alternating conditions: low intrinsic motivation and high intrinsic motivation. The main results showed that irrespective of condition, participants reported becoming fatigued over time. They performed better, invested more mental effort physiologically, and were less distracted in high-level than in low-level motivation blocks. The results suggest that similarly to extrinsic motivation, time-on-task effects are modulated by the level of intrinsic motivation: With high intrinsic motivation, people can maintain their performance over time as they seem willing to invest more effort as time progresses than in low intrinsic motivation.
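
As an aside on the physiological measures mentioned above, the sketch below computes RMSSD, one widely used time-domain heart-rate-variability index, from a handful of inter-beat (RR) intervals. The interval values are invented, and RMSSD is only one of several possible HRV measures; this is not a reconstruction of the study's analysis.

# Minimal sketch: RMSSD = root mean square of successive differences of RR intervals.
import numpy as np

rr_intervals = np.array([812, 798, 805, 790, 820, 815, 801, 796])  # RR intervals in ms (invented)
rmssd = np.sqrt(np.mean(np.diff(rr_intervals) ** 2))
print("RMSSD: %.1f ms" % rmssd)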

  • doi:10.1371/journal.pone.0243754

Jukka Hyönä; Timo T Heikkilä; Seppo Vainio; Reinhold Kliegl

Parafoveal access to word stem during reading: An eye movement study Journal Article

Cognition, 208 , pp. 1–13, 2021.

Abstract | Links | BibTeX

@article{Hyoenae2021,
title = {Parafoveal access to word stem during reading: An eye movement study},
author = {Jukka Hyönä and Timo T Heikkilä and Seppo Vainio and Reinhold Kliegl},
doi = {10.1016/j.cognition.2020.104547},
year = {2021},
date = {2021-01-01},
journal = {Cognition},
volume = {208},
pages = {1--13},
abstract = {Previous studies (Hyönä, Yan, & Vainio, 2018; Yan et al., 2014) have demonstrated that in morphologically rich languages a word's morphological status is processed parafoveally to be used in modulating saccadic programming in reading. In the present parafoveal preview study conducted in Finnish, we examined the exact nature of this effect by comparing reading of morphologically complex words (a stem + two suffixes) to that of monomorphemic words. In the preview-change condition, the final 3–4 letters were replaced with other letters making the target word a pseudoword; for suffixed words, the word stem remained intact but the suffix information was unavailable; for monomorphemic words, only part of the stem was parafoveally available. Three alternative predictions were put forth. According to the first alternative, the morphological effect in initial fixation location is due to parafoveally perceiving the suffix as a highly frequent letter cluster and then adjusting the saccade program to land closer to the word beginning for suffixed than monomorphemic words. The second alternative, the processing difficulty hypothesis, assumes a morphological complexity effect: suffixed words are more complex than monomorphemic words. Therefore, the attentional window is narrower and the saccade is shorter. The third alternative posits that the effect reflects parafoveal access to the word's stem. The results for the initial fixation location and fixation durations were consistent with the parafoveal stem-access view.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Previous studies (Hyönä, Yan, & Vainio, 2018; Yan et al., 2014) have demonstrated that in morphologically rich languages a word's morphological status is processed parafoveally to be used in modulating saccadic programming in reading. In the present parafoveal preview study conducted in Finnish, we examined the exact nature of this effect by comparing reading of morphologically complex words (a stem + two suffixes) to that of monomorphemic words. In the preview-change condition, the final 3–4 letters were replaced with other letters making the target word a pseudoword; for suffixed words, the word stem remained intact but the suffix information was unavailable; for monomorphemic words, only part of the stem was parafoveally available. Three alternative predictions were put forth. According to the first alternative, the morphological effect in initial fixation location is due to parafoveally perceiving the suffix as a highly frequent letter cluster and then adjusting the saccade program to land closer to the word beginning for suffixed than monomorphemic words. The second alternative, the processing difficulty hypothesis, assumes a morphological complexity effect: suffixed words are more complex than monomorphemic words. Therefore, the attentional window is narrower and the saccade is shorter. The third alternative posits that the effect reflects parafoveal access to the word's stem. The results for the initial fixation location and fixation durations were consistent with the parafoveal stem-access view.

  • doi:10.1016/j.cognition.2020.104547

J Hartwig; A Kretschmer-Trendowicz; J R Helmert; M L Jung; S Pannasch

Revealing the dynamics of prospective memory processes in children with eye movements Journal Article

International Journal of Psychophysiology, 160 , pp. 38–55, 2021.

Abstract | Links | BibTeX

@article{Hartwig2021,
title = {Revealing the dynamics of prospective memory processes in children with eye movements},
author = {J Hartwig and A Kretschmer-Trendowicz and J R Helmert and M L Jung and S Pannasch},
doi = {10.1016/j.ijpsycho.2020.12.005},
year = {2021},
date = {2021-01-01},
journal = {International Journal of Psychophysiology},
volume = {160},
pages = {38--55},
publisher = {Elsevier B.V.},
abstract = {Prospective memory (PM), the memory for delayed intentions, develops during childhood. The current study examined PM processes, such as monitoring, PM cue identification and intention retrieval with particular focus on their temporal dynamics and interrelations during successful and unsuccessful PM performance. We analysed eye movements of 6–7 and 9–10 year olds during the inspection of movie stills while they completed one of three different tasks: scene viewing followed by a snippet allocation task, a PM task and a visual search task. We also tested children's executive functions of inhibition, flexibility and working memory. We found that older children outperformed younger children in all tasks but neither age group showed variations in monitoring behaviour during the course of the PM task. In fact, neither age group monitored. According to our data, initial processes necessary for PM success take place during the first fixation on the PM cue. In PM hit trials we found prolonged fixations after the first fixation on the PM cue, and older children showed a greater efficiency in PM processes following this first PM cue fixation. Regarding executive functions, only working memory had a significant effect on children's PM performance. Across both age groups children with better working memory scores needed less time to react to the PM cue. Our data support the notion that children rely on spontaneous processes to notice the PM cue, followed by a resource intensive search for the intended action.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Prospective memory (PM), the memory for delayed intentions, develops during childhood. The current study examined PM processes, such as monitoring, PM cue identification and intention retrieval with particular focus on their temporal dynamics and interrelations during successful and unsuccessful PM performance. We analysed eye movements of 6–7 and 9–10 year olds during the inspection of movie stills while they completed one of three different tasks: scene viewing followed by a snippet allocation task, a PM task and a visual search task. We also tested children's executive functions of inhibition, flexibility and working memory. We found that older children outperformed younger children in all tasks but neither age group showed variations in monitoring behaviour during the course of the PM task. In fact, neither age group monitored. According to our data, initial processes necessary for PM success take place during the first fixation on the PM cue. In PM hit trials we found prolonged fixations after the first fixation on the PM cue, and older children showed a greater efficiency in PM processes following this first PM cue fixation. Regarding executive functions, only working memory had a significant effect on children's PM performance. Across both age groups children with better working memory scores needed less time to react to the PM cue. Our data support the notion that children rely on spontaneous processes to notice the PM cue, followed by a resource intensive search for the intended action.
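
As a toy illustration of the kind of eye-movement measure discussed above, the sketch below finds the first fixation that lands inside a (hypothetical) prospective-memory cue region and reports the durations of the fixations that follow it. The fixation list, coordinates, and region bounds are invented placeholders, not the study's interest areas.

# Minimal sketch: first fixation inside a rectangular cue region and what follows it.
fixations = [  # (x, y, duration_ms), in order of occurrence; invented values
    (512, 300, 220), (640, 410, 250), (805, 455, 310), (790, 460, 380), (300, 200, 210),
]
cue_region = (760, 420, 860, 500)                     # x_min, y_min, x_max, y_max (assumed)

def in_region(x, y, region):
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

first_idx = next((i for i, (x, y, _) in enumerate(fixations) if in_region(x, y, cue_region)), None)
if first_idx is not None:
    first_dur = fixations[first_idx][2]
    following = [d for _, _, d in fixations[first_idx + 1:]]
    print("first fixation on cue: %d ms; subsequent fixations: %s ms" % (first_dur, following))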

  • doi:10.1016/j.ijpsycho.2020.12.005

Carolina L Haass-Koffler; Rachel D Souza; James P Wilmott; Elizabeth R Aston; Joo-Hyun Song

A combined alcohol and smoking cue-reactivity paradigm in people who drink heavily and smoke Cigarettes: Preliminary findings Journal Article

Alcohol and Alcoholism, 56 (1), pp. 47–56, 2021.

Abstract | Links | BibTeX

@article{HaassKoffler2021,
title = {A combined alcohol and smoking cue-reactivity paradigm in people who drink heavily and smoke Cigarettes: Preliminary findings},
author = {Carolina L Haass-Koffler and Rachel D Souza and James P Wilmott and Elizabeth R Aston and Joo-Hyun Song},
doi = {10.1093/alcalc/agaa089},
year = {2021},
date = {2021-01-01},
journal = {Alcohol and Alcoholism},
volume = {56},
number = {1},
pages = {47--56},
abstract = {Aims: Previous studies have shown that there may be an underlying mechanism that is common for co-use of alcohol and tobacco and it has been shown that treatment for alcohol use disorder can increase rates of smoking cessation. The primary aim of this study was to assess a novel methodological approach to test a simultaneous behavioral alcohol-smoking cue reactivity (CR) paradigm in people who drink alcohol and smoke cigarettes. Methods: This was a human laboratory study that utilized a novel laboratory procedure with individuals who drink heavily (≥15 drinks/week for men; ≥8 drinks/week for women) and smoke (>5 cigarettes/day). Participants completed a CR in a bar laboratory and an eye-tracking (ET) session using their preferred alcoholic beverage, cigarette brand and water. Results: In both the CR and ET session, there was a difference in time spent interacting with alcohol and cigarettes as compared to water (P's < 0.001), but no difference in time spent interacting between alcohol and cigarettes (P > 0.05). In the CR sessions, craving for cigarettes was significantly greater than craving for alcohol (P < 0.001); however, only time spent with alcohol, but not with cigarettes, was correlated with craving for both alcohol and cigarettes (P < 0.05). Conclusion: This study showed that it is feasible to use simultaneous cues during a CR procedure in a bar laboratory paradigm. The attention bias measured in the integrated alcohol-cigarettes ET procedure predicted participants' decision making in the CR. This novel methodological approach revealed that in people who drink heavily and smoke, alcohol cues may affect craving for both alcohol and cigarettes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Aims: Previous studies have shown that there may be an underlying mechanism that is common for co-use of alcohol and tobacco and it has been shown that treatment for alcohol use disorder can increase rates of smoking cessation. The primary aim of this study was to assess a novel methodological approach to test a simultaneous behavioral alcohol-smoking cue reactivity (CR) paradigm in people who drink alcohol and smoke cigarettes. Methods: This was a human laboratory study that utilized a novel laboratory procedure with individuals who drink heavily (≥15 drinks/week for men; ≥8 drinks/week for women) and smoke (>5 cigarettes/day). Participants completed a CR in a bar laboratory and an eye-tracking (ET) session using their preferred alcoholic beverage, cigarette brand and water. Results: In both the CR and ET session, there was a difference in time spent interacting with alcohol and cigarettes as compared to water (P's < 0.001), but no difference in time spent interacting between alcohol and cigarettes (P > 0.05). In the CR sessions, craving for cigarettes was significantly greater than craving for alcohol (P < 0.001); however, only time spent with alcohol, but not with cigarettes, was correlated with craving for both alcohol and cigarettes (P < 0.05). Conclusion: This study showed that it is feasible to use simultaneous cues during a CR procedure in a bar laboratory paradigm. The attention bias measured in the integrated alcohol-cigarettes ET procedure predicted participants' decision making in the CR. This novel methodological approach revealed that in people who drink heavily and smoke, alcohol cues may affect craving for both alcohol and cigarettes.

  • doi:10.1093/alcalc/agaa089

Josephine M Groot; Nya M Boayue; Gábor Csifcsák; Wouter Boekel; René Huster; Birte U Forstmann; Matthias Mittner

Probing the neural signature of mind wandering with simultaneous fMRI-EEG and pupillometry Journal Article

NeuroImage, 224 , pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Groot2021,
title = {Probing the neural signature of mind wandering with simultaneous fMRI-EEG and pupillometry},
author = {Josephine M Groot and Nya M Boayue and Gábor Csifcsák and Wouter Boekel and René Huster and Birte U Forstmann and Matthias Mittner},
doi = {10.1016/j.neuroimage.2020.117412},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {224},
pages = {1--10},
publisher = {Elsevier Inc.},
abstract = {Mind wandering reflects the shift in attentional focus from task-related cognition driven by external stimuli toward self-generated and internally-oriented thought processes. Although such task-unrelated thoughts (TUTs) are pervasive and detrimental to task performance, their underlying neural mechanisms are only modestly understood. To investigate TUTs with high spatial and temporal precision, we simultaneously measured fMRI, EEG, and pupillometry in healthy adults while they performed a sustained attention task with experience sampling probes. Features of interest were extracted from each modality at the single-trial level and fed to a support vector machine that was trained on the probe responses. Compared to task-focused attention, the neural signature of TUTs was characterized by weaker activity in the default mode network but elevated activity in its anticorrelated network, stronger functional coupling between these networks, widespread increase in alpha, theta, delta, but not beta, frequency power, predominantly reduced amplitudes of late, but not early, event-related potentials, and larger baseline pupil size. Particularly, information contained in dynamic interactions between large-scale cortical networks was predictive of transient changes in attentional focus above other modalities. Together, our results provide insight into the spatiotemporal dynamics of TUTs and the neural markers that may facilitate their detection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mind wandering reflects the shift in attentional focus from task-related cognition driven by external stimuli toward self-generated and internally-oriented thought processes. Although such task-unrelated thoughts (TUTs) are pervasive and detrimental to task performance, their underlying neural mechanisms are only modestly understood. To investigate TUTs with high spatial and temporal precision, we simultaneously measured fMRI, EEG, and pupillometry in healthy adults while they performed a sustained attention task with experience sampling probes. Features of interest were extracted from each modality at the single-trial level and fed to a support vector machine that was trained on the probe responses. Compared to task-focused attention, the neural signature of TUTs was characterized by weaker activity in the default mode network but elevated activity in its anticorrelated network, stronger functional coupling between these networks, widespread increase in alpha, theta, delta, but not beta, frequency power, predominantly reduced amplitudes of late, but not early, event-related potentials, and larger baseline pupil size. Particularly, information contained in dynamic interactions between large-scale cortical networks was predictive of transient changes in attentional focus above other modalities. Together, our results provide insight into the spatiotemporal dynamics of TUTs and the neural markers that may facilitate their detection.
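
For readers unfamiliar with this kind of single-trial classification, the sketch below shows the general pattern of feeding trial-wise features to a support vector machine and cross-validating it. The feature matrix, labels, and scikit-learn pipeline are generic placeholders, not the authors' actual feature set or classifier settings.

# Minimal sketch: classify probe responses (e.g., on-task vs. off-task) from
# single-trial features; the data here are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # 200 trials x 12 features (pupil, band power, ...)
y = rng.integers(0, 2, size=200)          # probe label: 0 = on-task, 1 = off-task

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", class_weight="balanced"))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))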

  • doi:10.1016/j.neuroimage.2020.117412

Miguel Garcia Garcia; Katharina Rifai; Siegfried Wahl; Tamara Watson

Adaptation to geometrically skewed moving images: An asymmetrical effect on the double-drift illusion Journal Article

Vision Research, 179 , pp. 75–84, 2021.

Abstract | Links | BibTeX

@article{GarciaGarcia2021,
title = {Adaptation to geometrically skewed moving images: An asymmetrical effect on the double-drift illusion},
author = {Miguel {Garcia Garcia} and Katharina Rifai and Siegfried Wahl and Tamara Watson},
doi = {10.1016/j.visres.2020.11.008},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {179},
pages = {75--84},
publisher = {Elsevier Ltd},
abstract = {Progressive addition lenses introduce distortions in the peripheral visual field that alter both form and motion perception. Here we seek to understand how our peripheral visual field adapts to complex distortions. The adaptation was induced across the visual field by geometrically skewed image sequences, and aftereffects were measured via changes in perception of the double-drift illusion. The double-drift or curveball stimulus contains both local and object motion. Therefore, the aftereffects induced by geometrical distortions might be indicative of how this adaptation interacts with the local and object motion signals. In the absence of the local motion components, the adaptation to skewness modified the perceived trajectory of object motion in the opposite direction of the adaptation stimulus skew. This effect demonstrates that the environment can also tune perceived object trajectories. Testing with the full double-drift stimulus, adaptation to a skew in the opposite direction to the local motion component induced a change in perception, reducing the illusion magnitude when the stimulus was presented on the right side of the screen (the shift was not statistically significant when stimuli were on the left side). However, adaptation to the other orientation resulted in no change in the strength of the double-drift illusion (for both stimulus locations). Thus, it seems that the adaptor's orientation and the motion statistics of the stimulus jointly define the perception of the measured aftereffect. In conclusion, the double-drift illusion is affected not only by size, contrast or drifting speed, but also by adaptation to image distortions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Progressive addition lenses introduce distortions in the peripheral visual field that alter both form and motion perception. Here we seek to understand how our peripheral visual field adapts to complex distortions. The adaptation was induced across the visual field by geometrically skewed image sequences, and aftereffects were measured via changes in perception of the double-drift illusion. The double-drift or curveball stimulus contains both local and object motion. Therefore, the aftereffects induced by geometrical distortions might be indicative of how this adaptation interacts with the local and object motion signals. In the absence of the local motion components, the adaptation to skewness modified the perceived trajectory of object motion in the opposite direction of the adaptation stimulus skew. This effect demonstrates that the environment can also tune perceived object trajectories. Testing with the full double-drift stimulus, adaptation to a skew in the opposite direction to the local motion component induced a change in perception, reducing the illusion magnitude when the stimulus was presented on the right side of the screen (the shift was not statistically significant when stimuli were on the left side). However, adaptation to the other orientation resulted in no change in the strength of the double-drift illusion (for both stimulus locations). Thus, it seems that the adaptor's orientation and the motion statistics of the stimulus jointly define the perception of the measured aftereffect. In conclusion, the double-drift illusion is affected not only by size, contrast or drifting speed, but also by adaptation to image distortions.

  • doi:10.1016/j.visres.2020.11.008

Francesco Fabbrini; Rufin Vogels

Within- and between-hemifields generalization of repetition suppression in inferior temporal cortex Journal Article

Journal of Neurophysiology, 125 (1), pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Fabbrini2021,
title = {Within- and between-hemifields generalization of repetition suppression in inferior temporal cortex},
author = {Francesco Fabbrini and Rufin Vogels},
doi = {10.1152/jn.00361.2020},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neurophysiology},
volume = {125},
number = {1},
pages = {1--20},
abstract = {The decrease in response with stimulus repetition is a common property observed in many sensory brain areas. This repetition suppression (RS) is ubiquitous in neurons of macaque inferior temporal (IT) cortex, the end-stage of the ventral visual pathway. The neural mechanisms of RS in IT are still unclear, and one possibility is that it is inherited from areas upstream to IT that also show RS. Since neurons in IT have larger receptive fields compared to earlier visual areas, we examined the inheritance hypothesis by presenting adapter and test stimuli at widely different spatial locations along both vertical and horizontal meridians, and across hemifields. RS was present for distances between adapter and test stimuli up to 22°, and when the two stimuli were presented in different hemifields. Also, we examined the position tolerance of the stimulus selectivity of adaptation by comparing the responses to a test stimulus following the same (repetition trial) or a different adapter (alternation trial) at a different position than the test stimulus. Stimulus-selective adaptation was still present and consistently stronger in the later phase of the response for distances up to 18°. Finally, we observed stimulus-selective adaptation in repetition trials even without a measurable excitatory response to the adapter stimulus. To accommodate these and previous data, we propose that at least part of the stimulus-selective adaptation in IT is based on short-term plasticity mechanisms within IT and/or reflects top-down activity from areas downstream to IT.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The decrease in response with stimulus repetition is a common property observed in many sensory brain areas. This repetition suppression (RS) is ubiquitous in neurons of macaque inferior temporal (IT) cortex, the end-stage of the ventral visual pathway. The neural mechanisms of RS in IT are still unclear, and one possibility is that it is inherited from areas upstream to IT that also show RS. Since neurons in IT have larger receptive fields compared to earlier visual areas, we examined the inheritance hypothesis by presenting adapter and test stimuli at widely different spatial locations along both vertical and horizontal meridians, and across hemifields. RS was present for distances between adapter and test stimuli up to 22°, and when the two stimuli were presented in different hemifields. Also, we examined the position tolerance of the stimulus selectivity of adaptation by comparing the responses to a test stimulus following the same (repetition trial) or a different adapter (alternation trial) at a different position than the test stimulus. Stimulus-selective adaptation was still present and consistently stronger in the later phase of the response for distances up to 18°. Finally, we observed stimulus-selective adaptation in repetition trials even without a measurable excitatory response to the adapter stimulus. To accommodate these and previous data, we propose that at least part of the stimulus-selective adaptation in IT is based on short-term plasticity mechanisms within IT and/or reflects top-down activity from areas downstream to IT.
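
As a generic illustration of how repetition suppression is often quantified, the sketch below computes a normalised suppression index from simulated responses to the test stimulus in alternation versus repetition trials. The spike counts are invented and the index is a common convention, not necessarily the exact measure reported in the paper.

# Minimal sketch: repetition-suppression index per neuron from mean test responses.
import numpy as np

rng = np.random.default_rng(6)
resp_alternation = rng.poisson(20, size=100)          # test response, different adapter (invented)
resp_repetition = rng.poisson(14, size=100)           # test response, same adapter (invented)

alt, rep = resp_alternation.mean(), resp_repetition.mean()
rs_index = (alt - rep) / (alt + rep)                  # > 0 indicates suppression
print("RS index: %.2f" % rs_index)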

  • doi:10.1152/jn.00361.2020

Lynn Eekhof; Moniek M Kuijpers; Myrthe Faber; Xin Gao; Marloes Mak; Emiel Van den Hoven; Roel M Willems

Lost in a story, detached from the words. Journal Article

Discourse Processes, pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Eekhof2021,
title = {Lost in a story, detached from the words.},
author = {Lynn Eekhof and Moniek M Kuijpers and Myrthe Faber and Xin Gao and Marloes Mak and Emiel {Van den Hoven} and Roel M Willems},
doi = {10.1080/0163853X.2020.1857619},
year = {2021},
date = {2021-01-01},
journal = {Discourse Processes},
pages = {1--20},
publisher = {Routledge},
abstract = {This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics, measured as the effect of these characteristics on gaze duration, were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between, on the one hand, the technical and, on the other hand, the subjective processes of reading by studying reading behavior in more natural settings.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics, measured as the effect of these characteristics on gaze duration, were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between, on the one hand, the technical and, on the other hand, the subjective processes of reading by studying reading behavior in more natural settings.
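
As a simplified illustration of deriving an individual sensitivity measure, the sketch below fits a per-subject regression of gaze duration on one word characteristic and correlates the resulting slopes with a questionnaire score. The column names, simulated data, and simple two-step procedure are assumptions; the study's own analysis may well have used more elaborate (e.g., mixed-effects) models.

# Minimal sketch: per-subject slope of gaze duration on word length, related to a score.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
rows = []
for subj in range(40):
    length = rng.integers(2, 12, size=200)
    gaze = 180 + (3 + rng.normal(0, 1)) * length + rng.normal(0, 30, 200)
    rows.append(pd.DataFrame({"subject": subj, "word_length": length, "gaze_dur": gaze}))
data = pd.concat(rows, ignore_index=True)

slopes = data.groupby("subject").apply(
    lambda d: np.polyfit(d["word_length"], d["gaze_dur"], 1)[0])   # sensitivity per subject
absorption = rng.normal(4, 1, size=40)                # placeholder questionnaire scores
print(pearsonr(slopes.values, absorption))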

  • doi:10.1080/0163853X.2020.1857619

Marcos Domic-Siede; Martín Irani; Joaquín Valdés; Marcela Perrone-Bertolotti; Tomás Ossandón

Theta activity from frontopolar cortex, mid-cingulate cortex and anterior cingulate cortex shows different roles in cognitive planning performance Journal Article

NeuroImage, 226 , pp. 1–19, 2021.

Abstract | Links | BibTeX

@article{DomicSiede2021,
title = {Theta activity from frontopolar cortex, mid-cingulate cortex and anterior cingulate cortex shows different roles in cognitive planning performance},
author = {Marcos Domic-Siede and Martín Irani and Joaquín Valdés and Marcela Perrone-Bertolotti and Tomás Ossandón},
doi = {10.1016/j.neuroimage.2020.117557},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {226},
pages = {1--19},
publisher = {Elsevier Inc.},
abstract = {Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FMθ) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-cingulate cortex (MCC) and the left anterior cingulate cortex (ACC) was correlated with an increase in the time needed for elaborating plans. On the other hand, left frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FMθ) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-cingulate cortex (MCC) and the left anterior cingulate cortex (ACC) was correlated with an increase in the time needed for elaborating plans. On the other hand, left frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans.
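
For orientation, the sketch below shows one standard way to extract a theta-band (4-8 Hz) amplitude envelope from a single channel with a band-pass filter and the Hilbert transform. The synthetic signal, sampling rate, and filter settings are assumptions, not the authors' EEG pipeline or source analysis.

# Minimal sketch: theta-band envelope of one channel via band-pass + Hilbert transform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                            # assumed sampling rate
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(7)
eeg = np.sin(2 * np.pi * 6 * t) * (1 + 0.5 * t / t.max()) + rng.normal(0, 1, t.size)  # synthetic

b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, eeg)
envelope = np.abs(hilbert(theta))                     # instantaneous theta amplitude
print("mean theta amplitude, first vs. second half: %.2f vs. %.2f"
      % (envelope[: t.size // 2].mean(), envelope[t.size // 2:].mean()))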

  • doi:10.1016/j.neuroimage.2020.117557

Avital Deutsch; Hadas Velan; Yiska Merzbach; Tamar Michaly

The dependence of root extraction in a non-concatenated morphology on the word-specific orthographic context Journal Article

Journal of Memory and Language, 116 , pp. 104182, 2021.

Abstract | Links | BibTeX

@article{Deutsch2021,
title = {The dependence of root extraction in a non-concatenated morphology on the word-specific orthographic context},
author = {Avital Deutsch and Hadas Velan and Yiska Merzbach and Tamar Michaly},
doi = {10.1016/j.jml.2020.104182},
year = {2021},
date = {2021-01-01},
journal = {Journal of Memory and Language},
volume = {116},
pages = {104182},
publisher = {Elsevier Inc.},
abstract = {In Hebrew, as in other Semitic languages, most words are formed in a non-concatenated way, with a root morpheme embedded in a word-pattern morpheme consisting of only vowels or vowels plus consonants. Previous research on visual word recognition in Hebrew has revealed a robust morphological root-priming effect, with word recognition facilitated by the prior sub-perceptual presentation of the root morpheme, along with a less stable and more fragile word-pattern priming effect, particularly in the nominal system. These findings support the theory that morphological principles govern lexical access, with the root morpheme as a main organizational unit of the mental lexicon. However, less research has been done to delineate the algorithm underlying decomposition. The current study explores the importance of the natural lexical orthographic context of a complex root + pattern word structure for root extraction, using on-line measures based on tracking eye-movements in sentence reading. A series of 4 experiments using a fast-priming paradigm demonstrated that detaching the root morpheme from its lexical orthographic structure hinders the root-priming effect. Presenting the root in a non-word or a pseudo-word, that is, a non-existent combination of a real root + a real pattern did not make any difference. These results suggest that mapping the orthographic root onto its morphological mental representation depends on the orthographic context in which its letters appear. This finding constrains the role of the root in visual word-recognition, highlighting the crucial conditions for extracting it in the natural setting of reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In Hebrew, as in other Semitic languages, most words are formed in a non-concatenated way, with a root morpheme embedded in a word-pattern morpheme consisting of only vowels or vowels plus consonants. Previous research on visual word recognition in Hebrew has revealed a robust morphological root-priming effect, with word recognition facilitated by the prior sub-perceptual presentation of the root morpheme, along with a less stable and more fragile word-pattern priming effect, particularly in the nominal system. These findings support the theory that morphological principles govern lexical access, with the root morpheme as a main organizational unit of the mental lexicon. However, less research has been done to delineate the algorithm underlying decomposition. The current study explores the importance of the natural lexical orthographic context of a complex root + pattern word structure for root extraction, using on-line measures based on tracking eye-movements in sentence reading. A series of 4 experiments using a fast-priming paradigm demonstrated that detaching the root morpheme from its lexical orthographic structure hinders the root-priming effect. Presenting the root in a non-word or a pseudo-word, that is, a non-existent combination of a real root + a real pattern did not make any difference. These results suggest that mapping the orthographic root onto its morphological mental representation depends on the orthographic context in which its letters appear. This finding constrains the role of the root in visual word-recognition, highlighting the crucial conditions for extracting it in the natural setting of reading.

  • doi:10.1016/j.jml.2020.104182

Gayle DeDe; Denis Kelleher

Effects of animacy and sentence type on silent reading comprehension in aphasia: An eye-tracking study Journal Article

Journal of Neurolinguistics, 57 , pp. 1–19, 2021.

Abstract | Links | BibTeX

@article{DeDe2021,
title = {Effects of animacy and sentence type on silent reading comprehension in aphasia: An eye-tracking study},
author = {Gayle DeDe and Denis Kelleher},
doi = {10.1016/j.jneuroling.2020.100950},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neurolinguistics},
volume = {57},
pages = {1--19},
publisher = {Elsevier Ltd},
abstract = {The present study examined how healthy aging and aphasia influence the capacity for readers to generate structural predictions during online reading, and how animacy cues influence this process. Non-brain-damaged younger (n = 24) and older (n = 12) adults (Experiment 1) and individuals with aphasia (IWA; n = 11; Experiment 2) read subject relative and object relative sentences in an eye-tracking experiment. Half of the sentences included animate sentential subjects, and the other half included inanimate sentential subjects. All three groups used animacy information to mitigate effects of syntactic complexity. These effects were greater in older than younger adults. IWA were sensitive to structural frequency, with longer reading times for object relative than subject relative sentences. As in previous work, effects of structural complexity did not emerge on IWA's first pass through the sentence, but were observed when IWA reread critical segments of the sentences. Thus, IWA may adopt atypical reading strategies when they encounter low frequency or complex sentence structures, but they are able to use animacy information to reduce the processing disruptions associated with these structures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The present study examined how healthy aging and aphasia influence the capacity for readers to generate structural predictions during online reading, and how animacy cues influence this process. Non-brain-damaged younger (n = 24) and older (n = 12) adults (Experiment 1) and individuals with aphasia (IWA; n = 11; Experiment 2) read subject relative and object relative sentences in an eye-tracking experiment. Half of the sentences included animate sentential subjects, and the other half included inanimate sentential subjects. All three groups used animacy information to mitigate effects of syntactic complexity. These effects were greater in older than younger adults. IWA were sensitive to structural frequency, with longer reading times for object relative than subject relative sentences. As in previous work, effects of structural complexity did not emerge on IWA's first pass through the sentence, but were observed when IWA reread critical segments of the sentences. Thus, IWA may adopt atypical reading strategies when they encounter low frequency or complex sentence structures, but they are able to use animacy information to reduce the processing disruptions associated with these structures.

  • doi:10.1016/j.jneuroling.2020.100950

Alex de Carvalho; Isabelle Dautriche; Anne Caroline Fiévet; Anne Christophe

Toddlers exploit referential and syntactic cues to flexibly adapt their interpretation of novel verb meanings Journal Article

Journal of Experimental Child Psychology, 203 , pp. 1–25, 2021.

Abstract | Links | BibTeX

@article{Carvalho2021,
title = {Toddlers exploit referential and syntactic cues to flexibly adapt their interpretation of novel verb meanings},
author = {Alex de Carvalho and Isabelle Dautriche and Anne Caroline Fiévet and Anne Christophe},
doi = {10.1016/j.jecp.2020.105017},
year = {2021},
date = {2021-01-01},
journal = {Journal of Experimental Child Psychology},
volume = {203},
pages = {1--25},
publisher = {Elsevier Inc.},
abstract = {Because linguistic communication is often noisy and uncertain, adults flexibly rely on different information sources during sentence processing. We tested whether toddlers engage in a similar process and how that process interacts with verb learning. Across two experiments, we presented French 28-month-olds with right-dislocated sentences featuring a novel verb (“Hei is VERBing, the boyi”), where a clear prosodic boundary after the verb indicates that the sentence is intransitive (such that the NP “the boy” is coreferential with the pronoun “he” and the sentence means “The boy is VERBing”). By default, toddlers incorrectly interpreted the sentence based on the number of NPs (assuming, e.g., that someone is VERBing the boy). Yet, when children were provided with additional information about the syntactic contexts (Experiment 1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Because linguistic communication is often noisy and uncertain, adults flexibly rely on different information sources during sentence processing. We tested whether toddlers engage in a similar process and how that process interacts with verb learning. Across two experiments, we presented French 28-month-olds with right-dislocated sentences featuring a novel verb (“Hei is VERBing, the boyi”), where a clear prosodic boundary after the verb indicates that the sentence is intransitive (such that the NP “the boy” is coreferential with the pronoun “he” and the sentence means “The boy is VERBing”). By default, toddlers incorrectly interpreted the sentence based on the number of NPs (assuming, e.g., that someone is VERBing the boy). Yet, when children were provided with additional information about the syntactic contexts (Experiment 1

  • doi:10.1016/j.jecp.2020.105017

Minke J de Boer; Deniz Başkent; Frans W Cornelissen

Eyes on emotion: Dynamic gaze allocation during emotion perception from speech-like stimuli Journal Article

Multisensory Research, 34 , pp. 17–47, 2021.

Abstract | Links | BibTeX

@article{Boer2021a,
title = {Eyes on emotion: Dynamic gaze allocation during emotion perception from speech-like stimuli},
author = {Minke J de Boer and Deniz Başkent and Frans W Cornelissen},
doi = {10.1163/22134808-bja10029},
year = {2021},
date = {2021-01-01},
journal = {Multisensory Research},
volume = {34},
pages = {17--47},
abstract = {The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
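
As a small illustration of the kind of gaze-allocation summary such studies rely on, the sketch below turns a fixation report into dwell-time proportions per area of interest and condition. The column names and the tiny example table are assumptions about how such data might be organised, not this study's data.

# Minimal sketch: proportion of dwell time on each face region, per condition.
import pandas as pd

fixations = pd.DataFrame({                            # invented example fixation report
    "condition": ["audio-only", "audiovisual", "audiovisual", "video-only"],
    "aoi": ["eyes", "eyes", "mouth", "nose"],
    "duration_ms": [310, 420, 180, 250],
})

dwell = fixations.groupby(["condition", "aoi"])["duration_ms"].sum()
proportions = dwell / dwell.groupby(level="condition").transform("sum")
print(proportions)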

  • doi:10.1163/22134808-bja10029

Jonathan Daume; Peng Wang; Alexander Maye; Dan Zhang; Andreas K Engel

Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations Journal Article

NeuroImage, 224 , pp. 1–17, 2021.

Abstract | Links | BibTeX

@article{Daume2021,
title = {Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations},
author = {Jonathan Daume and Peng Wang and Alexander Maye and Dan Zhang and Andreas K Engel},
doi = {10.1016/j.neuroimage.2020.117376},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {224},
pages = {1--17},
publisher = {Elsevier Inc.},
abstract = {The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context.
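
For readers unfamiliar with inter-trial phase consistency, the sketch below shows how ITPC is conventionally computed as the length of the mean resultant vector of single-trial phases. The random phases and array shapes are placeholders; real phase estimates would come from a time-frequency decomposition of the delta-band signal.

# Minimal sketch: ITPC = |mean over trials of exp(i * phase)| at each time point.
import numpy as np

rng = np.random.default_rng(1)
phases = rng.uniform(-np.pi, np.pi, size=(120, 500))  # trials x time points (placeholder)

itpc = np.abs(np.exp(1j * phases).mean(axis=0))       # one value per time point, between 0 and 1
print(itpc.shape, itpc.max())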

  • doi:10.1016/j.neuroimage.2020.117376

Frederic R Danion; James Mathew; Niels Gouirand; Eli Brenner

More precise tracking of horizontal than vertical target motion with both the eyes and hand Journal Article

Cortex, 134 , pp. 30–42, 2021.

Abstract | Links | BibTeX

@article{Danion2021,
title = {More precise tracking of horizontal than vertical target motion with both the eyes and hand},
author = {Frederic R Danion and James Mathew and Niels Gouirand and Eli Brenner},
doi = {10.1016/j.cortex.2020.10.001},
year = {2021},
date = {2021-01-01},
journal = {Cortex},
volume = {134},
pages = {30--42},
publisher = {Elsevier Ltd},
abstract = {When tracking targets moving in various directions with one's eyes, horizontal components of pursuit are more precise than vertical ones. Is this because horizontal target motion is predicted better or because horizontal movements of the eyes are controlled more precisely? When tracking a visual target with the hand, the eyes also track the target. We investigated whether the directional asymmetries that have been found during isolated eye movements are also present during such manual tracking, and if so, whether individual participants' asymmetry in eye movements is accompanied by a similar asymmetry in hand movements. We examined the data of 62 participants who used a joystick to track a visual target with a cursor. The target followed a smooth but unpredictable trajectory in two dimensions. Both the mean gaze-target distance and the mean cursor-target distance were about 20% larger in the vertical direction than in the horizontal direction. Gaze and cursor both followed the target with a slightly longer delay in the vertical than in the horizontal direction, irrespective of the target's trajectory. The delays of gaze and cursor were correlated, as were their errors in tracking the target. Gaze clearly followed the target rather than the cursor, so the asymmetry in both eye and hand movements presumably results from better predictions of the target's horizontal than of its vertical motion. Altogether this study speaks for the presence of anisotropic predictive processes that are shared across effectors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

When tracking targets moving in various directions with one's eyes, horizontal components of pursuit are more precise than vertical ones. Is this because horizontal target motion is predicted better or because horizontal movements of the eyes are controlled more precisely? When tracking a visual target with the hand, the eyes also track the target. We investigated whether the directional asymmetries that have been found during isolated eye movements are also present during such manual tracking, and if so, whether individual participants' asymmetry in eye movements is accompanied by a similar asymmetry in hand movements. We examined the data of 62 participants who used a joystick to track a visual target with a cursor. The target followed a smooth but unpredictable trajectory in two dimensions. Both the mean gaze-target distance and the mean cursor-target distance were about 20% larger in the vertical direction than in the horizontal direction. Gaze and cursor both followed the target with a slightly longer delay in the vertical than in the horizontal direction, irrespective of the target's trajectory. The delays of gaze and cursor were correlated, as were their errors in tracking the target. Gaze clearly followed the target rather than the cursor, so the asymmetry in both eye and hand movements presumably results from better predictions of the target's horizontal than of its vertical motion. Altogether this study speaks for the presence of anisotropic predictive processes that are shared across effectors.
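
As an illustration of how a tracking delay between target and gaze (or cursor) can be estimated, the sketch below uses the peak of their cross-correlation; the synthetic signals, sampling rate, and injected lag are assumptions, not the study's recordings.

# Minimal sketch: estimate by how many milliseconds the response lags the target.
import numpy as np

def tracking_delay(target, response, fs):
    t = target - target.mean()
    r = response - response.mean()
    xcorr = np.correlate(r, t, mode="full")
    lag = np.argmax(xcorr) - (len(t) - 1)             # positive: response lags target
    return lag / fs

rng = np.random.default_rng(4)
fs = 250.0                                            # assumed sampling rate
time = np.arange(0, 8, 1 / fs)
target = np.sin(2 * np.pi * 0.4 * time)               # smooth synthetic target motion
gaze = np.roll(target, 30) + 0.05 * rng.normal(size=time.size)   # about 120 ms lag
print("estimated delay: %.0f ms" % (1000 * tracking_delay(target, gaze, fs)))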

  • doi:10.1016/j.cortex.2020.10.001

Vassilis Cutsuridis; Shouyong Jiang; Matt J Dunn; Anne Rosser; James Brawn; Jonathan T Erichsen

Neural modelling of antisaccade performance of healthy controls and early Huntington's disease patients Journal Article

Chaos, 31 , pp. 1–13, 2021.

Abstract | BibTeX

@article{Cutsuridis2021,
title = {Neural modelling of antisaccade performance of healthy controls and early Huntington's disease patients},
author = {Vassilis Cutsuridis and Shouyong Jiang and Matt J Dunn and Anne Rosser and James Brawn and Jonathan T Erichsen},
year = {2021},
date = {2021-01-01},
journal = {Chaos},
volume = {31},
pages = {1--13},
abstract = {Huntington's disease (HD), a genetically determined neurodegenerative disease, is positively correlated with eye movement abnormalities in decision making. The antisaccade conflict paradigm has been widely used to study response inhibition in eye movements and reliable performance deficits in HD subjects have been observed including greater number and timing of direction errors. We recorded the error rates and response latencies of early HD patients and healthy age-matched controls performing the mirror antisaccade task. HD participants displayed slower and more variable antisaccade latencies and increased error rates relative to healthy controls. A competitive accumulator-to-threshold neural model was then employed to quantitatively simulate the controls' and patients' reaction latencies and error rates and uncover the mechanisms giving rise to the observed HD antisaccade deficits. Our simulations showed: 1) a more gradual and noisy rate of accumulation of evidence by HD patients is responsible for the observed prolonged and more variable antisaccade latencies in early HD; 2) the confidence level of early HD patients making a decision is unaffected by the disease; and 3) the antisaccade performance of healthy controls and early HD patients is the end product of a neural lateral competition (inhibition) between a correct and an erroneous decision process, and not the end product of a third top-down stop signal suppressing the erroneous decision process as many have speculated.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Huntington's disease (HD), a genetically determined neurodegenerative disease, is positively correlated with eye movement abnormalities in decision making. The antisaccade conflict paradigm has been widely used to study response inhibition in eye movements and reliable performance deficits in HD subjects have been observed including greater number and timing of direction errors. We recorded the error rates and response latencies of early HD patients and healthy age-matched controls performing the mirror antisaccade task. HD participants displayed slower and more variable antisaccade latencies and increased error rates relative to healthy controls. A competitive accumulator-to-threshold neural model was then employed to quantitatively simulate the controls' and patients' reaction latencies and error rates and uncover the mechanisms giving rise to the observed HD antisaccade deficits. Our simulations showed: 1) a more gradual and noisy rate of accumulation of evidence by HD patients is responsible for the observed prolonged and more variable antisaccade latencies in early HD; 2) the confidence level of early HD patients making a decision is unaffected by the disease; and 3) the antisaccade performance of healthy controls and early HD patients is the end product of a neural lateral competition (inhibition) between a correct and an erroneous decision process, and not the end product of a third top-down stop signal suppressing the erroneous decision process as many have speculated.
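
To make the modelling idea concrete, the sketch below simulates a bare-bones version of a competitive accumulator: two units accumulate noisy evidence, inhibit each other, and the first to reach threshold determines the response and its latency. All parameter values are illustrative and are not the fitted model of the paper.

# Minimal sketch: two mutually inhibiting accumulators racing to a threshold.
import numpy as np

def race_trial(rate_correct=1.0, rate_error=0.8, noise=0.35, inhibition=0.2,
               threshold=20.0, dt=1.0, max_t=1000, rng=None):
    rng = rng or np.random.default_rng()
    a_c = a_e = 0.0                                   # correct (antisaccade) and error units
    for t in range(max_t):
        a_c += dt * (rate_correct - inhibition * a_e) + noise * rng.normal()
        a_e += dt * (rate_error - inhibition * a_c) + noise * rng.normal()
        a_c, a_e = max(a_c, 0.0), max(a_e, 0.0)       # activities stay non-negative
        if a_c >= threshold or a_e >= threshold:
            return t * dt, a_c >= threshold           # latency, correct?
    return max_t * dt, False

rng = np.random.default_rng(2)
trials = [race_trial(rng=rng) for _ in range(500)]
latencies = np.array([t for t, _ in trials])
correct = np.array([c for _, c in trials])
print("mean latency %.0f steps, error rate %.2f" % (latencies.mean(), 1 - correct.mean()))

In such a sketch, a lower accumulation rate or higher noise lengthens and spreads the simulated latencies and raises the error rate, which is the qualitative pattern the abstract attributes to the patient group.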

Close
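To give a feel for the modelling approach described in the abstract above, the sketch below simulates a generic two-accumulator race to threshold with lateral inhibition and additive noise. It is not the authors' published model: the function names, parameter values, and the mapping of an "HD-like" setting onto a shallower, noisier accumulation rate are illustrative assumptions only.

import numpy as np

def antisaccade_trial(rate_correct, rate_error, noise, inhibition=0.2,
                      threshold=1.0, dt=0.001, t_max=2.0, t0=0.2, rng=None):
    """Race two mutually inhibiting noisy accumulators to a common threshold.

    Returns (latency in seconds, True if the erroneous accumulator won).
    All parameter values are illustrative, not fitted to any data.
    """
    rng = rng or np.random.default_rng()
    c = e = 0.0
    for step in range(int(t_max / dt)):
        # Both increments use the accumulator values from the previous step.
        dc = dt * (rate_correct - inhibition * e) + np.sqrt(dt) * noise * rng.standard_normal()
        de = dt * (rate_error - inhibition * c) + np.sqrt(dt) * noise * rng.standard_normal()
        c, e = max(c + dc, 0.0), max(e + de, 0.0)
        if max(c, e) >= threshold:
            return t0 + (step + 1) * dt, e > c
    return np.nan, False  # no threshold crossing within t_max

def summarise(label, rate_correct, rate_error, noise, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    trials = [antisaccade_trial(rate_correct, rate_error, noise, rng=rng) for _ in range(n)]
    latencies = np.array([t for t, _ in trials])
    errors = np.array([err for _, err in trials])
    print(f"{label}: mean latency {np.nanmean(latencies):.3f} s, "
          f"SD {np.nanstd(latencies):.3f} s, error rate {errors.mean():.1%}")

# A shallower, noisier accumulation rate (the hypothetical "HD-like" setting)
# yields slower, more variable latencies and more direction errors.
summarise("control", rate_correct=1.0, rate_error=0.7, noise=0.30)
summarise("HD-like", rate_correct=0.7, rate_error=0.6, noise=0.45)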

Annabell Coors; Natascha Merten; David D Ward; M Schmid; Monique M B Breteler; Ulrich Ettinger

Strong age but weak sex effects in eye movement performance in the general adult population: Evidence from the Rhineland Study Journal Article

Vision Research, 178, pp. 124–133, 2021.

Abstract | Links | BibTeX

@article{Coors2021,
title = {Strong age but weak sex effects in eye movement performance in the general adult population: Evidence from the Rhineland Study},
author = {Annabell Coors and Natascha Merten and David D Ward and M Schmid and Monique M B Breteler and Ulrich Ettinger},
doi = {10.1016/j.visres.2020.10.004},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {178},
pages = {124--133},
publisher = {Elsevier Ltd},
abstract = {Assessing physiological changes that occur with healthy ageing is prerequisite for understanding pathophysiological age-related changes. Eye movements are studied as biomarkers for pathological changes because they are altered in patients with neurodegenerative disorders. However, there is a lack of data from large samples assessing age-related physiological changes and sex differences in oculomotor performance. Thus, we assessed and quantified cross-sectional relations of age and sex with oculomotor performance in the general population. We report results from the first 4,000 participants (aged 30–95 years) of the Rhineland Study, a community-based prospective cohort study in Bonn, Germany. Participants completed fixation, smooth pursuit, pro-saccade and antisaccade tasks. We quantified associations of age and sex with oculomotor outcomes using multivariable linear regression models. Performance in 12 out of 18 oculomotor measures declined with increasing age. No differences between age groups were observed in five antisaccade outcomes (amplitude-adjusted and unadjusted peak velocity, amplitude gain, spatial error and percentage of corrected errors) and for blink rate during fixation. Small sex differences occurred in smooth pursuit velocity gain (men have higher gain) and blink rate during fixation (men blink less). We conclude that performance declines with age in two thirds of oculomotor outcomes but that there was no evidence of sex differences in eye movement performance except for two outcomes. Since the percentage of corrected antisaccade errors was not associated with age but is known to be affected by pathological cognitive decline, it represents a promising candidate preclinical biomarker of neurodegeneration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2020.10.004

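As a purely illustrative companion to the analysis described above, the snippet below fits one multivariable linear regression of an oculomotor outcome on age and sex. The file name and column names (oculomotor_outcomes.csv, prosaccade_latency, age, sex) are hypothetical placeholders, not the Rhineland Study's actual variables.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant table: one row per participant, columns for
# the oculomotor outcome of interest and the predictors.
df = pd.read_csv("oculomotor_outcomes.csv")

# Age and sex entered jointly; further covariates could be added with "+ term".
model = smf.ols("prosaccade_latency ~ age + C(sex)", data=df).fit()
print(model.summary())  # coefficients, confidence intervals, R-squared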

Francesca Ciardo; Jacopo De Angelis; Barbara F M Marino; Rossana Actis-Grosso; Paola Ricciardelli

Social categorization and joint attention: Interacting effects of age, sex, and social status Journal Article

Acta Psychologica, 212, pp. 1–14, 2021.

Abstract | Links | BibTeX

@article{Ciardo2021,
title = {Social categorization and joint attention: Interacting effects of age, sex, and social status},
author = {Francesca Ciardo and Jacopo {De Angelis} and Barbara F M Marino and Rossana Actis-Grosso and Paola Ricciardelli},
doi = {10.1016/j.actpsy.2020.103223},
year = {2021},
date = {2021-01-01},
journal = {Acta Psychologica},
volume = {212},
pages = {1--14},
publisher = {Elsevier B.V.},
abstract = {In the present study, we examine how person categorization conveyed by the combination of multiple cues modulates joint attention. In three experiments, we tested the combinatory effect of age, sex, and social status on gaze-following behaviour and pro-social attitudes. In Experiments 1 and 2, young adults were required to perform an instructed saccade towards left or right targets while viewing a to-be-ignored distracting face (female or male) gazing left or right, that could belong to a young, middle-aged, or elderly adult of high or low social status. Social status was manipulated by semantic knowledge (Experiment 1) or through visual appearance (Experiment 2). Results showed a clear combinatory effect of person perception cues on joint attention (JA). Specifically, our results showed that age and sex cues interacted with social status information depending on the modality through which it was conveyed. In Experiment 3, we further investigated our results by testing whether the identities used in Experiments 1 and 2 triggered different pro-social behaviour. The results of Experiment 3 showed that the identities resulting as more distracting in Experiments 1 and 2 were also perceived as more in need and prompt helping behaviour. Taken together, our evidence shows a combinatorial effect of age, sex, and social status in modulating the gaze following behaviour, highlighting a complex and dynamic interplay between person categorization and joint attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.actpsy.2020.103223

Rotem Broday-Dvir; Rafael Malach

Resting-state fluctuations underlie free and creative verbal behaviors in the human brain Journal Article

Cerebral Cortex, 31 (1), pp. 213–232, 2021.

Abstract | Links | BibTeX

@article{BrodayDvir2021,
title = {Resting-state fluctuations underlie free and creative verbal behaviors in the human brain},
author = {Rotem Broday-Dvir and Rafael Malach},
doi = {10.1093/cercor/bhaa221},
year = {2021},
date = {2021-01-01},
journal = {Cerebral Cortex},
volume = {31},
number = {1},
pages = {213--232},
abstract = {Resting-state fluctuations are ubiquitous and widely studied phenomena of the human brain, yet we are largely in the dark regarding their function in human cognition. Here we examined the hypothesis that resting-state fluctuations underlie the generation of free and creative human behaviors. In our experiment, participants were asked to perform three voluntary verbal tasks: a verbal fluency task, a verbal creativity task, and a divergent thinking task, during functional magnetic resonance imaging scanning. Blood oxygenation level dependent (BOLD)-activity during these tasks was contrasted with a control-deterministic verbal task, in which the behavior was fully determined by external stimuli. Our results reveal that all voluntary verbal-generation responses displayed a gradual anticipatory buildup that preceded the deterministic control-related responses. Critically, the time-frequency dynamics of these anticipatory buildups were significantly correlated with resting-state fluctuations' dynamics. These correlations were not a general BOLD-related or verbal-response related result, as they were not found during the externally determined verbal control condition. Furthermore, they were located in brain regions known to be involved in language production, specifically the left inferior frontal gyrus. These results suggest a common function of resting-state fluctuations as the neural mechanism underlying the generation of free and creative behaviors in the human cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1093/cercor/bhaa221

Amarender R Bogadhi; Leor N Katz; Anil Bollimunta; David A Leopold; Richard J Krauzlis

Midbrain activity supports high-level visual properties in primate temporal cortex Miscellaneous

2021.

Abstract | Links | BibTeX

@misc{Bogadhi2021,
title = {Midbrain activity supports high-level visual properties in primate temporal cortex},
author = {Amarender R Bogadhi and Leor N Katz and Anil Bollimunta and David A Leopold and Richard J Krauzlis},
doi = {10.1101/841155},
year = {2021},
date = {2021-01-01},
booktitle = {Neuron},
volume = {109},
pages = {1--10},
publisher = {Elsevier Inc.},
abstract = {The evolution of the primate brain is marked by a dramatic increase in the number of neocortical areas that process visual information 1. This cortical expansion supports two hallmarks of high-level primate vision - the ability to selectively attend to particular visual features 2 and the ability to recognize a seemingly limitless number of complex visual objects 3. Given their prominent roles in high-level vision for primates, it is commonly assumed that these cortical processes supersede the earlier versions of these functions accomplished by the evolutionarily older brain structures that lie beneath the cortex. Contrary to this view, here we show that the superior colliculus (SC), a midbrain structure conserved across all vertebrates 4, is necessary for the normal expression of attention-related modulation and object selectivity in a newly identified region of macaque temporal cortex. Using a combination of psychophysics, causal perturbations and fMRI, we identified a localized region in the temporal cortex that is functionally dependent on the SC. Targeted electrophysiological recordings in this cortical region revealed neurons with strong attention-related modulation that was markedly reduced during attention deficits caused by SC inactivation. Many of these neurons also exhibited selectivity for particular visual objects, and this selectivity was also reduced during SC inactivation. Thus, the SC exerts a causal influence on high-level visual processing in cortex at a surprisingly late stage where attention and object selectivity converge, perhaps determined by the elemental forms of perceptual processing the SC has supported since before there was a neocortex.},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}

  • doi:10.1101/841155

Minke De J Boer; Tim Jürgens; Frans W Cornelissen; Deniz Bas

Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation Journal Article

Vision Research, 180, pp. 51–62, 2021.

Abstract | Links | BibTeX

@article{Boer2021,
title = {Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation},
author = {Minke De J Boer and Tim Jürgens and Frans W Cornelissen and Deniz Bas},
doi = {10.1016/j.visres.2020.12.002},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {180},
pages = {51--62},
abstract = {Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss only, we degraded the visual and auditory information in audiovisual video-recordings, and presented these to a group of healthy young volunteers. These degradations intended to approximate some aspects of vision and hearing impairment in simulation. Other aspects, related to advanced age, potential health issues, but also long-term adaptation and cognitive compensation strategies, were not included in the simulations. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual and auditory information are presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts recognition performance and on viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little (additional) effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitates performance, even though adding audio does not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior to degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2020.12.002

Judith Bek; Emma Gowen; Stefan Vogt; Trevor J Crawford; Ellen Poliakoff

Action observation and imitation in Parkinson's disease: The influence of biological and non-biological stimuli Journal Article

Neuropsychologia, 150, pp. 1–11, 2021.

Abstract | Links | BibTeX

@article{Bek2021,
title = {Action observation and imitation in Parkinson's disease: The influence of biological and non-biological stimuli},
author = {Judith Bek and Emma Gowen and Stefan Vogt and Trevor J Crawford and Ellen Poliakoff},
doi = {10.1016/j.neuropsychologia.2020.107690},
year = {2021},
date = {2021-01-01},
journal = {Neuropsychologia},
volume = {150},
pages = {1--11},
publisher = {Elsevier Ltd},
abstract = {Action observation and imitation have been found to influence movement in people with Parkinson's disease (PD), but simple visual stimuli can also guide their movement. To investigate whether action observation may provide a more effective stimulus than other visual cues, the present study examined the effects of observing human pointing movements and simple visual stimuli on hand kinematics and eye movements in people with mild to moderate PD and age-matched controls. In Experiment 1, participants observed videos of movement sequences between horizontal positions, depicted by a simple cue with or without a moving human hand, then imitated the sequence either without further visual input (consecutive task) or while watching the video again (concurrent task). Modulation of movement duration, in accordance with changes in the observed stimulus, increased when the simple cue was accompanied by the hand and in the concurrent task, whereas modulation of horizontal amplitude was greater with the simple cue alone and in the consecutive task. Experiment 2 compared imitation of kinematically-matched dynamic biological (human hand) and non- biological (shape) stimuli, which moved with a high or low vertical trajectory. Both groups exhibited greater modulation for the hand than the shape, and differences in eye movements suggested closer tracking of the hand. Despite producing slower and smaller movements overall, the PD group showed a similar pattern of imitation to controls across tasks and conditions. The findings demonstrate that observing human action influences aspects of movement such as duration or trajectory more strongly than non-biological stimuli, particularly during concurrent imitation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuropsychologia.2020.107690

Michael J Armson; Nicholas B Diamond; Laryssa Levesque; Jennifer D Ryan; Brian Levine

Vividness of recollection is supported by eye movements in individuals with high, but not low trait autobiographical memory Journal Article

Cognition, 206, pp. 1–8, 2021.

Abstract | Links | BibTeX

@article{Armson2021,
title = {Vividness of recollection is supported by eye movements in individuals with high, but not low trait autobiographical memory},
author = {Michael J Armson and Nicholas B Diamond and Laryssa Levesque and Jennifer D Ryan and Brian Levine},
doi = {10.1016/j.cognition.2020.104487},
year = {2021},
date = {2021-01-01},
journal = {Cognition},
volume = {206},
pages = {1--8},
publisher = {Elsevier},
abstract = {There are marked individual differences in the recollection of personal past events or autobiographical memory (AM). Theory concerning the relationship between mnemonic and visual systems suggests that eye movements promote retrieval of spatiotemporal details from memory, yet assessment of this prediction within naturalistic AM has been limited. We examined the relationship of eye movements to free recall of naturalistic AM and how this relationship is modulated by individual differences in AM capacity. Participants freely recalled past episodes while viewing a blank screen under free and fixed viewing conditions. Memory performance was quantified with the Autobiographical Interview, which separates internal (episodic) and external (non-episodic) details. In Study 1, as a proof of concept, fixation rate was predictive of the number of internal (but not external) details recalled across both free and fixed viewing. In Study 2, using an experimenter-controlled staged event (a museum-style tour) the effect of fixations on free recall of internal (but not external) details was again observed. In this second study, however, the fixation-recall relationship was modulated by individual differences in autobiographical memory, such that the coupling between fixations and internal details was greater for those endorsing higher than lower episodic AM. These results suggest that those with congenitally strong AM rely on the visual system to produce episodic details, whereas those with lower AM retrieve such details via other mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cognition.2020.104487

2020

Hanna Brinkmann; Louis Williams; Raphael Rosenberg; Eugene McSorley

Does 'action viewing' really exist? Perceived dynamism and viewing behaviour Journal Article

Art and Perception, 8 (1), pp. 27–48, 2020.

Abstract | Links | BibTeX

@article{Brinkmann2020,
title = {Does 'action viewing' really exist? Perceived dynamism and viewing behaviour},
author = {Hanna Brinkmann and Louis Williams and Raphael Rosenberg and Eugene McSorley},
doi = {10.1163/22134913-20191128},
year = {2020},
date = {2020-12-01},
journal = {Art and Perception},
volume = {8},
number = {1},
pages = {27--48},
publisher = {Brill},
abstract = {Throughout the 20th century, there have been many different forms of abstract painting. While works by some artists, e.g., Piet Mondrian, are usually described as static, others are described as dynamic, such as Jackson Pollock's 'action paintings'. Art historians have assumed that beholders not only conceptualise such differences in depicted dynamics but also mirror these in their viewing behaviour. In an interdisciplinary eye-tracking study, we tested this concept through investigating both the localisation of fixations (polyfocal viewing) and the average duration of fixations as well as saccade velocity, duration and path curvature. We showed 30 different abstract paintings to 40 participants - 20 laypeople and 20 experts (art students) - and used self-reporting to investigate the perceived dynamism of each painting and its relationship with (a) the average number and duration of fixations, (b) the average number, duration and velocity of saccades as well as the amplitude and curvature area of saccade paths, and (c) pleasantness and familiarity ratings. We found that the average number of fixations and saccades, saccade velocity, and pleasantness ratings increase with an increase in perceived dynamism ratings. Meanwhile the saccade duration decreased with an increase in perceived dynamism. Additionally, the analysis showed that experts gave higher dynamic ratings compared to laypeople and were more familiar with the artworks. These results indicate that there is a correlation between perceived dynamism in abstract painting and viewing behaviour - something that has long been assumed by art historians but had never been empirically supported.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1163/22134913-20191128

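A minimal sketch of how the association reported above could be tested on a painting-level summary table. The file name and column names are assumptions for illustration, and Spearman correlation is used here simply as one reasonable choice for rating data; it is not necessarily the authors' analysis.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical table: one row per painting, with mean eye movement metrics
# and the mean perceived-dynamism rating across observers.
paintings = pd.read_csv("painting_level_metrics.csv")

for metric in ["n_fixations", "n_saccades", "saccade_velocity", "saccade_duration"]:
    rho, p = spearmanr(paintings["dynamism_rating"], paintings[metric])
    print(f"dynamism vs {metric}: rho = {rho:.2f}, p = {p:.3f}")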

Aaron Veldre; Roslyn Wong; Sally Andrews

Reading proficiency predicts the extent of the right, but not left, perceptual span in older readers Journal Article

Attention, Perception, and Psychophysics, pp. 1–9, 2020.

Abstract | Links | BibTeX

@article{Veldre2020a,
title = {Reading proficiency predicts the extent of the right, but not left, perceptual span in older readers},
author = {Aaron Veldre and Roslyn Wong and Sally Andrews},
doi = {10.3758/s13414-020-02185-x},
year = {2020},
date = {2020-11-01},
journal = {Attention, Perception, and Psychophysics},
pages = {1--9},
publisher = {Attention, Perception, & Psychophysics},
abstract = {The gaze-contingent moving-window paradigm was used to assess the size and symmetry of the perceptual span in older readers. The eye movements of 49 cognitively intact older adults (60–88 years of age) were recorded as they read sentences varying in difficulty, and the availability of letter information to the right and left of fixation was manipulated. To reconcile discrepancies in previous estimates of the perceptual span in older readers, individual differences in written language proficiency were assessed with tests of vocabulary, reading comprehension, reading speed, spelling ability, and print exposure. The results revealed that higher proficiency older adults extracted information up to 15 letter spaces to the right of fixation, while lower proficiency readers showed no additional benefit beyond 9 letters to the right. However, all readers showed improvements to reading with the availability of up to 9 letters to the left—confirming previous evidence of reduced perceptual span asymmetry in older readers. The findings raise questions about whether the source of age-related changes in parafoveal processing lies in the adoption of a risky reading strategy involving an increased propensity to both guess upcoming words and make corrective regressions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-020-02185-x

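The gaze-contingent moving-window paradigm referred to above masks letters outside a window around the currently fixated character. The static function below only illustrates the masking rule offline; a real experiment re-renders the display on every eye tracker sample. The window sizes and masking character are examples, not the study's exact parameters.

def moving_window(sentence: str, fixated_idx: int,
                  left: int = 9, right: int = 15, mask: str = "x") -> str:
    """Return the sentence with letters outside the window replaced by a mask.

    Spaces are preserved so word boundaries stay visible, as in typical
    moving-window displays; all values here are illustrative.
    """
    out = []
    for i, ch in enumerate(sentence):
        inside = (fixated_idx - left) <= i <= (fixated_idx + right)
        out.append(ch if inside or ch == " " else mask)
    return "".join(out)

# Example: fixation on the first letter of "fox" with a 9-left / 15-right window.
print(moving_window("The quick brown fox jumps over the lazy dog.", fixated_idx=16))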

Jacob A Westerberg; Alexander Maier; Geoffrey F Woodman; Jeffrey D Schall

Performance monitoring during visual priming Journal Article

Journal of Cognitive Neuroscience, 32 (3), pp. 515–526, 2020.

Abstract | Links | BibTeX

@article{Westerberg2020,
title = {Performance monitoring during visual priming},
author = {Jacob A Westerberg and Alexander Maier and Geoffrey F Woodman and Jeffrey D Schall},
doi = {10.1162/jocn_a_01499},
year = {2020},
date = {2020-11-01},
journal = {Journal of Cognitive Neuroscience},
volume = {32},
number = {3},
pages = {515--526},
publisher = {MIT Press - Journals},
abstract = {Repetitive performance of single-feature (efficient or popout) visual search improves RTs and accuracy. This phenomenon, known as priming of pop-out, has been demonstrated in both humans and macaque monkeys. We investigated the relationship between performance monitoring and priming of pop-out. Neuronal activity in the supplementary eye field (SEF) contributes to performance monitoring and to the generation of performance monitoring signals in the EEG. To determine whether priming depends on performance monitoring, we investigated spiking activity in SEF as well as the concurrent EEG of two monkeys performing a priming of pop-out task. We found that SEF spiking did not modulate with priming. Surprisingly, concurrent EEG did covary with priming. Together, these results suggest that performance monitoring contributes to priming of pop-out. However, this performance monitoring seems not mediated by SEF. This dissociation suggests that EEG indices of performance monitoring arise from multiple, functionally distinct neural generators.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01499

Seema Gorur Prasad; Ramesh Kumar Mishra

Reward influences masked free-choice priming Journal Article

Frontiers in Psychology, 11, pp. 1–15, 2020.

Abstract | Links | BibTeX

@article{Prasad2020,
title = {Reward influences masked free-choice priming},
author = {Seema Gorur Prasad and Ramesh Kumar Mishra},
doi = {10.3389/fpsyg.2020.576430},
year = {2020},
date = {2020-11-01},
journal = {Frontiers in Psychology},
volume = {11},
pages = {1--15},
publisher = {Frontiers Media S.A.},
abstract = {While it is known that reward induces attentional prioritization, it is not clear what effect reward-learning has when associated with stimuli that are not fully perceived. The masked priming paradigm has been extensively used to investigate the indirect impact of brief stimuli on response behavior. Interestingly, the effect of masked primes is observed even when participants choose their responses freely. While classical theories assume this process to be automatic, recent studies have provided evidence for attentional modulations of masked priming effects. Most such studies have manipulated bottom-up or top-down modes of attentional selection, but the role of “newer” forms of attentional control such as reward-learning and selection history remains unclear. In two experiments, with number and arrow primes, we examined whether reward-mediated attentional selection modulates masked priming when responses are chosen freely. In both experiments, we observed that primes associated with high-reward lead to enhanced free-choice priming compared to primes associated with no-reward. The effect was seen on both proportion of choices and response times, and was more evident in the faster responses. In the slower responses, the effect was diminished. Our study adds to the growing literature showing the susceptibility of masked priming to factors related to attention and executive control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3389/fpsyg.2020.576430

Lijin Huang; Weijie Wei; Zhi Liu; Tianhong Zhang; Jijun Wang; Lihua Xu; Weiyu Chen; Olivier Le Meur

Effective schizophrenia recognition using discriminative eye movement features and model-metric based features Journal Article

Pattern Recognition Letters, 138, pp. 608–616, 2020.

Abstract | Links | BibTeX

@article{Huang2020a,
title = {Effective schizophrenia recognition using discriminative eye movement features and model-metric based features},
author = {Lijin Huang and Weijie Wei and Zhi Liu and Tianhong Zhang and Jijun Wang and Lihua Xu and Weiyu Chen and Olivier {Le Meur}},
doi = {10.1016/j.patrec.2020.09.017},
year = {2020},
date = {2020-10-01},
journal = {Pattern Recognition Letters},
volume = {138},
pages = {608--616},
publisher = {Elsevier B.V.},
abstract = {Eye movement abnormalities have been effective biomarkers that provide the possibility of distinguishing patients with schizophrenia from healthy controls. The existing methods for measuring eye movement abnormalities mostly focus on synchronic parameters, such as fixation duration and saccade amplitude, which can be directly obtained from eye movement data, while lack of considering more thorough features. In this paper, to better characterize eye-tracking dysfunction, we create a dataset containing 100 images with eye movement data of 40 patients and 30 healthy controls via a free-viewing task, and propose two types of features for effective schizophrenia recognition, i.e. the hand-crafted discriminative eye movement features and the model-metric based features via utilizing the computational models of fixation prediction and the metrics of evaluating their prediction performance. Using the proposed features, two commonly used classifiers including support vector machine and random forest have been trained for classification between patients and controls. Experimental results demonstrate the effectiveness of the proposed features for improving classification performance, and the potential that our method can serve as an alternative and promising approach for the computer-aided diagnosis of schizophrenia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.patrec.2020.09.017

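To make the classification step above concrete, here is a minimal sketch (not the authors' code) that trains the two classifiers named in the abstract, a support vector machine and a random forest, on a per-participant eye movement feature matrix with cross-validation. The input files, their contents, and the choice of ROC AUC as the score are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs: rows = participants, columns = eye movement and
# model-metric based features; labels: 1 = patient, 0 = healthy control.
X = np.load("eye_movement_features.npy")
y = np.load("labels.npy")

classifiers = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")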

Sabrina E Twilhaar; Artem V Belopolsky; Jorrit F de Kieviet; Ruurd M van Elburg; Jaap Oosterlaan

Voluntary and involuntary control of attention in adolescents born very preterm: A study of eye movements Journal Article

Child Development, 91 (4), pp. 1272–1283, 2020.

Abstract | Links | BibTeX

@article{Twilhaar2020,
title = {Voluntary and involuntary control of attention in adolescents born very preterm: A study of eye movements},
author = {Sabrina E Twilhaar and Artem V Belopolsky and Jorrit F de Kieviet and Ruurd M van Elburg and Jaap Oosterlaan},
doi = {10.1111/cdev.13310},
year = {2020},
date = {2020-09-01},
journal = {Child Development},
volume = {91},
number = {4},
pages = {1272--1283},
publisher = {Blackwell Publishing Inc.},
abstract = {Very preterm birth is associated with attention deficits that interfere with academic performance. A better understanding of attention processes is necessary to support very preterm born children. This study examined voluntary and involuntary attentional control in very preterm born adolescents by measuring saccadic eye movements. Additionally, these control processes were related to symptoms of inattention, intelligence, and academic performance. Participants included 47 very preterm and 61 full-term born 13-years-old adolescents. Oculomotor control was assessed using the antisaccade and oculomotor capture paradigm. Very preterm born adolescents showed deficits in antisaccade but not in oculomotor capture performance, indicating impairments in voluntary but not involuntary attentional control. These impairments mediated the relation between very preterm birth and inattention, intelligence, and academic performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/cdev.13310

Aave Hannus; Harold Bekkering; Frans W Cornelissen

Preview of partial stimulus information in search prioritizes features and conjunctions, not locations Journal Article

Attention, Perception, and Psychophysics, 82 (1), pp. 140–152, 2020.

Abstract | Links | BibTeX

@article{Hannus2020,
title = {Preview of partial stimulus information in search prioritizes features and conjunctions, not locations},
author = {Aave Hannus and Harold Bekkering and Frans W Cornelissen},
doi = {10.3758/s13414-019-01841-1},
year = {2020},
date = {2020-09-01},
journal = {Attention, Perception, and Psychophysics},
volume = {82},
number = {1},
pages = {140--152},
publisher = {Attention, Perception, & Psychophysics},
abstract = {Visual search often requires combining information on distinct visual features such as color and orientation, but how the visual system does this is not fully understood. To better understand this, we showed observers a brief preview of part of a search stimulus—either its color or orientation—before they performed a conjunction search task. Our experimental questions were (1) whether observers would use such previews to prioritize either potential target locations or features, and (2) which neural mechanisms might underlie the observed effects. In two experiments, participants searched for a prespecified target in a display consisting of bar elements, each combining one of two possible colors and one of two possible orientations. Participants responded by making an eye movement to the selected bar. In our first experiment, we found that a preview consisting of colored bars with identical orientation improved saccadic target selection performance, while a preview of oriented gray bars substantially decreased performance. In a follow-up experiment, we found that previews consisting of discs of the same color as the bars (and thus without orientation information) hardly affected performance. Thus, performance improved only when the preview combined color and (noninformative) orientation information. Previews apparently result in a prioritization of features and conjunctions rather than of spatial locations (in the latter case, all previews should have had similar effects). Our results thus also indicate that search for, and prioritization of, combinations involve conjunctively tuned neural mechanisms. These probably reside at the level of the primary visual cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-019-01841-1

Francesca Capozzi; Lauren J Human; Jelena Ristic

Attention promotes accurate impression formation Journal Article

Journal of Personality, 88 (3), pp. 1–11, 2020.

Abstract | Links | BibTeX

@article{Capozzi2020a,
title = {Attention promotes accurate impression formation},
author = {Francesca Capozzi and Lauren J Human and Jelena Ristic},
doi = {10.1111/jopy.12509},
year = {2020},
date = {2020-09-01},
journal = {Journal of Personality},
volume = {88},
number = {3},
pages = {1--11},
publisher = {Wiley},
abstract = {Objective: An ability to form accurate impressions of others is vital for adaptive social behavior in humans. Here, we examined if attending to persons more is associated with greater accuracy in personality impressions. Method: We asked 42 observers (36 females; mean age = 21 years, age range = 18–28; expected power = 0.96) to form personality impressions of unacquainted individuals (i.e., targets) from video interviews while their attentional behavior was assessed using eye tracking. We examined whether (a) attending more to targets benefited accuracy, (b) attending to specific body parts (e.g., face vs. body) drove this association, and (c) targets' ease of personality readability modulated these effects. Results: Paying more attention to a target was associated with forming more accurate personality impressions. Attention to the whole person contributed to this effect, with this association occurring independently of targets' ease of readability. Conclusions: These findings show that attending more to a person is associated with increased accuracy and thus suggest that attention promotes social adaption by supporting accurate social perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/jopy.12509

Maxi Becker; Tobias Sommer; Simone Kühn

Verbal insight revisited: fMRI evidence for early processing in bilateral insulae for solutions with AHA! experience shortly after trial onset Journal Article

Human Brain Mapping, 41 (1), pp. 30–45, 2020.

Abstract | Links | BibTeX

@article{Becker2020c,
title = {Verbal insight revisited: fMRI evidence for early processing in bilateral insulae for solutions with AHA! experience shortly after trial onset},
author = {Maxi Becker and Tobias Sommer and Simone Kühn},
doi = {10.1002/hbm.24785},
year = {2020},
date = {2020-09-01},
journal = {Human Brain Mapping},
volume = {41},
number = {1},
pages = {30--45},
abstract = {In insight problem solving, solutions with AHA! experience have been assumed to be the consequence of restructuring of a problem which usually takes place shortly before the solution. However, evidence from priming studies suggests that solutions with AHA! are not spontaneously generated during the solution process but already relate to prior subliminal processing. We test this hypothesis by conducting an fMRI study using a modified compound remote associates paradigm which incorporates semantic priming. We observe stronger brain activity in bilateral anterior insulae already shortly after trial onset in problems that were later solved with than without AHA!. This early activity was independent of semantic priming but may be related to other lexical properties of attended words helping to reduce the amount of solutions to look for. In contrast, there was more brain activity in bilateral anterior insulae during solutions that were solved without than with AHA!. This timing (after trial start/during solution) x solution experience (with/without AHA!) interaction was significant. The results suggest that (a) solutions accompanied with AHA! relate to early solution-relevant processing and (b) both solution experiences differ in timing when solution-relevant processing takes place. In this context, we discuss the potential role of the anterior insula as part of the salience network involved in problem solving by allocating attentional resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Quan Wang; Carla A Wall; Erin C Barney; Jessica L Bradshaw; Suzanne L Macari; Katarzyna Chawarska; Frederick Shic

Promoting social attention in 3-year-olds with ASD through gaze-contingent eye tracking Journal Article

Autism Research, 13 (1), pp. 61–73, 2020.

Abstract | Links | BibTeX

@article{Wang2020l,
title = {Promoting social attention in 3-year-olds with ASD through gaze-contingent eye tracking},
author = {Quan Wang and Carla A Wall and Erin C Barney and Jessica L Bradshaw and Suzanne L Macari and Katarzyna Chawarska and Frederick Shic},
doi = {10.1002/aur.2199},
year = {2020},
date = {2020-08-01},
journal = {Autism Research},
volume = {13},
number = {1},
pages = {61--73},
publisher = {Wiley},
abstract = {Young children with autism spectrum disorder (ASD) look less toward faces compared to their non-ASD peers, limiting access to social learning. Currently, no technologies directly target these core social attention difficulties. This study examines the feasibility of automated gaze modification training for improving attention to faces in 3-year-olds with ASD. Using free-viewing data from typically developing (TD) controls (n = 41), we implemented gaze-contingent adaptive cueing to redirect children with ASD toward normative looking patterns during viewing of videos of an actress. Children with ASD were randomly assigned to either (a) an adaptive Cue condition (Cue},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
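
Gaze-contingent cueing of the kind described in this entry generally amounts to polling gaze position, checking it against a target region, and triggering a cue when gaze stays away from that region too long. The sketch below shows only that generic logic; it is not the authors' implementation, and the gaze-polling and cue-drawing callables, the AOI coordinates, and the timing threshold are all assumptions supplied for illustration.

```python
# Minimal sketch of a gaze-contingent trigger loop (illustrative only).
import time

FACE_AOI = (760, 340, 1160, 740)      # hypothetical x1, y1, x2, y2 face region in pixels
OFF_FACE_LIMIT = 1.0                  # hypothetical seconds off the face before cueing

def in_aoi(gaze, aoi):
    """True if an (x, y) gaze sample falls inside a rectangular AOI."""
    x, y = gaze
    return aoi[0] <= x <= aoi[2] and aoi[1] <= y <= aoi[3]

def run_trial(get_gaze_sample, show_cue, duration=10.0):
    """get_gaze_sample() -> (x, y) or None; show_cue(aoi) draws an attention cue.
    Both callables are supplied by the experiment code (hypothetical here)."""
    off_face_since = None
    start = time.time()
    while time.time() - start < duration:
        gaze = get_gaze_sample()
        if gaze is None:              # blink or track loss
            continue
        if in_aoi(gaze, FACE_AOI):
            off_face_since = None     # gaze back on the face: reset the timer
        elif off_face_since is None:
            off_face_since = time.time()
        elif time.time() - off_face_since > OFF_FACE_LIMIT:
            show_cue(FACE_AOI)        # pull attention back toward the face
            off_face_since = None
```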

Taylor R Hayes; John M Henderson

Center bias outperforms image salience but not semantics in accounting for attention during scene viewing Journal Article

Attention, Perception, and Psychophysics, 82 (3), pp. 985–994, 2020.

Abstract | Links | BibTeX

@article{Hayes2020,
title = {Center bias outperforms image salience but not semantics in accounting for attention during scene viewing},
author = {Taylor R Hayes and John M Henderson},
doi = {10.3758/s13414-019-01849-7},
year = {2020},
date = {2020-08-01},
journal = {Attention, Perception, and Psychophysics},
volume = {82},
number = {3},
pages = {985--994},
publisher = {Springer},
abstract = {How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is ‘pulled' to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743–747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R2) between scene fixation density and each image saliency model's center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
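
As a rough illustration of the comparison described in this abstract, the snippet below computes the squared correlation (R²) between a scene's fixation-density map and any predictor map (center bias, full saliency model, or meaning map), all treated as same-sized 2-D arrays. This is a minimal sketch, not the authors' code; the random maps at the end are placeholders for real data.

```python
# Illustrative sketch: R^2 between a fixation-density map and a predictor map.
import numpy as np

def map_r2(fixation_density, predictor_map):
    """Squared Pearson correlation between two 2-D maps of identical shape."""
    x = np.asarray(fixation_density, dtype=float).ravel()
    y = np.asarray(predictor_map, dtype=float).ravel()
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Example with random maps standing in for real data
rng = np.random.default_rng(0)
fix_map = rng.random((48, 64))
pred_map = fix_map + 0.5 * rng.random((48, 64))   # a predictor partly related to fixations
print(round(map_r2(fix_map, pred_map), 3))
```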

Guillaume Doucet; Roberto A Gulli; Benjamin W Corrigan; Lyndon R Duong; Julio C Martinez-Trujillo

Modulation of local field potentials and neuronal activity in primate hippocampus during saccades Journal Article

Hippocampus, 30 (3), pp. 192–209, 2020.

Abstract | Links | BibTeX

@article{Doucet2020,
title = {Modulation of local field potentials and neuronal activity in primate hippocampus during saccades},
author = {Guillaume Doucet and Roberto A Gulli and Benjamin W Corrigan and Lyndon R Duong and Julio C Martinez-Trujillo},
doi = {10.1002/hipo.23140},
year = {2020},
date = {2020-07-01},
journal = {Hippocampus},
volume = {30},
number = {3},
pages = {192--209},
publisher = {Wiley},
abstract = {Primates use saccades to gather information about objects and their relative spatial arrangement, a process essential for visual perception and memory. It has been proposed that signals linked to saccades reset the phase of local field potential (LFP) oscillations in the hippocampus, providing a temporal window for visual signals to activate neurons in this region and influence memory formation. We investigated this issue by measuring hippocampal LFPs and spikes in two macaques performing different tasks with unconstrained eye movements. We found that LFP phase clustering (PC) in the alpha/beta (8–16 Hz) frequencies followed foveation onsets, while PC in frequencies lower than 8 Hz followed spontaneous saccades, even on a homogeneous background. Saccades to a solid grey background were not followed by increases in local neuronal firing, whereas saccades toward appearing visual stimuli were. Finally, saccade parameters correlated with LFPs phase and amplitude: saccade direction correlated with delta (≤4 Hz) phase, and saccade amplitude with theta (4–8 Hz) power. Our results suggest that signals linked to saccades reach the hippocampus, producing synchronization of delta/theta LFPs without a general activation of local neurons. Moreover, some visual inputs co-occurring with saccades produce LFP synchronization in the alpha/beta bands and elevated neuronal firing. Our findings support the hypothesis that saccade-related signals enact sensory input-dependent plasticity and therefore memory formation in the primate hippocampus.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
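
The phase clustering (PC) measure mentioned in this abstract can be illustrated with a short sketch: saccade-aligned LFP trials are band-pass filtered, instantaneous phase is taken from the analytic signal, and PC at each time point is the length of the mean resultant phase vector across trials. This is a generic sketch under those assumptions, not the authors' pipeline; the synthetic trials are stand-ins for real recordings.

```python
# Illustrative sketch: inter-trial phase clustering of event-aligned LFPs.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_clustering(trials, fs, band):
    """trials: (n_trials, n_samples) saccade-aligned LFP; returns PC over time."""
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    filtered = filtfilt(b, a, trials, axis=1)          # zero-phase band-pass filter
    phase = np.angle(hilbert(filtered, axis=1))        # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))  # values in [0, 1]

# Example: 60 synthetic trials, 1 kHz sampling, alpha/beta band (8-16 Hz)
fs = 1000
t = np.arange(-0.5, 0.5, 1 / fs)
rng = np.random.default_rng(1)
trials = np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1.0, (60, t.size))
pc = phase_clustering(trials, fs, (8, 16))
print(f"peak phase clustering: {pc.max():.2f}")
```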

Stefan Dowiasch; Peter Wolf; Frank Bremmer

Quantitative comparison of a mobile and a stationary video-based eye-tracker Journal Article

Behavior Research Methods, 52 (2), pp. 667–680, 2020.

Abstract | Links | BibTeX

@article{Dowiasch2020a,
title = {Quantitative comparison of a mobile and a stationary video-based eye-tracker},
author = {Stefan Dowiasch and Peter Wolf and Frank Bremmer},
doi = {10.3758/s13428-019-01267-5},
year = {2020},
date = {2020-06-01},
journal = {Behavior Research Methods},
volume = {52},
number = {2},
pages = {667--680},
publisher = {Springer Science and Business Media LLC},
abstract = {Vision represents the most important sense of primates. To understand visual processing, various different methods are employed—for example, electrophysiology, psychophysics, or eye-tracking. For the latter method, researchers have recently begun to step outside the artificial environments of laboratory setups toward the more natural conditions we usually face in the real world. To get a better understanding of the advantages and limitations of modern mobile eye-trackers, we quantitatively compared one of the most advanced mobile eye-trackers available, the EyeSeeCam, with a commonly used laboratory eye-tracker, the EyeLink II, serving as a gold standard. We aimed to investigate whether or not fully mobile eye-trackers are capable of providing data that would be adequate for direct comparisons with data recorded by stationary eye-trackers. Therefore, we recorded three different, commonly used eye movements—fixations, saccades, and smooth-pursuit eye movements—with both eye-trackers, in successive standardized paradigms in a laboratory setting with eight human subjects. Despite major technical differences between the devices, most eye movement parameters were not statistically different between the two systems. Differences could only be found in overall gaze accuracy and for time-critical parameters such as saccade duration, for which a higher sample frequency is especially useful. Although the stationary EyeLink II system proved to be superior, especially on a single-subject or even a single-trial basis, the ESC showed similar performance for the averaged parameters across both trials and subjects. We concluded that modern mobile eye-trackers are well-suited to providing reliable oculomotor data at the required spatial and temporal resolutions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
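
Comparisons like the one in this entry often come down to paired tests on per-subject parameter estimates obtained with the two systems. The sketch below shows only that statistical step on made-up fixation-duration values; it is not the authors' analysis, and all numbers are placeholders.

```python
# Illustrative sketch: paired comparison of one eye-movement parameter
# (e.g., mean fixation duration in ms) measured with two trackers in the
# same eight subjects. Values are made up for illustration.
import numpy as np
from scipy import stats

mobile     = np.array([231.0, 248.5, 220.3, 260.1, 242.7, 235.9, 251.4, 228.8])
stationary = np.array([229.4, 252.0, 218.9, 257.6, 245.1, 233.2, 253.0, 226.5])

t, p = stats.ttest_rel(mobile, stationary)
print(f"paired t({len(mobile) - 1}) = {t:.2f}, p = {p:.3f}")
```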

Jaana Simola; Jarmo Kuisma; Johanna K Kaakinen

Attention, memory and preference for direct and indirect print advertisements Journal Article

Journal of Business Research, 111 , pp. 249–261, 2020.

Abstract | Links | BibTeX

@article{Simola2020,
title = {Attention, memory and preference for direct and indirect print advertisements},
author = {Jaana Simola and Jarmo Kuisma and Johanna K Kaakinen},
doi = {10.1016/j.jbusres.2019.06.028},
year = {2020},
date = {2020-04-01},
journal = {Journal of Business Research},
volume = {111},
pages = {249--261},
publisher = {Elsevier Inc.},
abstract = {We examined the effectiveness of direct and indirect advertising. Direct ads openly depict advertised products and brands. In indirect ads, the ad message requires elaboration. Eye movements were recorded while consumers viewed direct and indirect advertisements under fixed (5 s) or unlimited exposure time. Recognition of ads, brand logos and preference for brands were tested under two different delays (after 24 h or 45 min) from the ad exposure. The total viewing time was longer for the indirect ads when exposure time was unlimited. Overall, ad pictorials received more fixations and the brand preference was higher in the indirect condition. Recognition improved for brand logos of indirect ads when tested after the shorter delay. Consumers experienced indirect ads as more original, surprising, intellectually challenging and harder to interpret than direct ads. Current results indicate that indirect ads elicit cognitive elaboration that translates into higher preference and memorability for brands.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Francesca Beilharz; Andrea Phillipou; David J Castle; Susan L Rossell

Saccadic eye movements in body dysmorphic disorder Journal Article

Journal of Obsessive-Compulsive and Related Disorders, 25 , pp. 1–6, 2020.

Abstract | Links | BibTeX

@article{Beilharz2020,
title = {Saccadic eye movements in body dysmorphic disorder},
author = {Francesca Beilharz and Andrea Phillipou and David J Castle and Susan L Rossell},
doi = {10.1016/j.jocrd.2020.100526},
year = {2020},
date = {2020-04-01},
journal = {Journal of Obsessive-Compulsive and Related Disorders},
volume = {25},
pages = {1--6},
publisher = {Elsevier B.V.},
abstract = {Body dysmorphic disorder (BDD) is characterised by a preoccupation with perceived flaws in appearance, which significantly disrupts functioning and causes distress. The difference in self-perception characteristic of BDD has been related to a bias in visual processing across a variety of stimuli and tasks. However, it is unknown how BDD participants perform on basic saccade tasks using eye tracking. Eighteen BDD and 21 healthy control participants completed a battery of saccadic eye movement tasks (fixation, prosaccade, anti-saccade, and memory guided). No significant differences were noted between the groups regarding behavioural performance or patterns of eye movements; however, there was a trend for BDD participants to make increased anticipatory errors on the prosaccade task. Overall, BDD participants demonstrated largely intact saccadic eye movement characteristics which may differentiate BDD from other obsessive-compulsive related disorders, although future research using larger samples is required. It is consequently argued that abnormalities in visual processing apparent among people with BDD may reflect abnormalities in higher-order visual systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ming Ray Liao; Brian A Anderson

Reward learning biases the direction of saccades Journal Article

Cognition, 196 , pp. 1–9, 2020.

Abstract | Links | BibTeX

@article{Liao2020a,
title = {Reward learning biases the direction of saccades},
author = {Ming Ray Liao and Brian A Anderson},
doi = {10.1016/j.cognition.2019.104145},
year = {2020},
date = {2020-03-01},
journal = {Cognition},
volume = {196},
pages = {1--9},
publisher = {Elsevier B.V.},
abstract = {The role of associative reward learning in guiding feature-based attention and spatial attention is well established. However, no studies have looked at the extent to which reward learning can modulate the direction of saccades during visual search. Here, we introduced a novel reward learning paradigm to examine whether reward-associated directions of eye movements can modulate performance in different visual search tasks. Participants had to fixate a peripheral target before fixating one of four disks that subsequently appeared in each cardinal position. This was followed by reward feedback contingent upon the direction chosen, where one direction consistently yielded a high reward. Thus, reward was tied to the direction of saccades rather than the absolute location of the stimulus fixated. Participants selected the target in the high-value direction on the majority of trials, demonstrating robust learning of the task contingencies. In an untimed visual foraging task that followed, which was performed in extinction, initial saccades were reliably biased in the previously rewarded-associated direction. In a second experiment, following the same training procedure, eye movements in the previously high-value direction were facilitated in a saccade-to-target task. Our findings suggest that rewarding directional eye movements biases oculomotor search patterns in a manner that is robust to extinction and generalizes across stimuli and task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xiao Yang Sui; Hong Zhi Liu; Li Lin Rao

The timing of gaze-contingent decision prompts influences risky choice Journal Article

Cognition, 195 , pp. 1–11, 2020.

Abstract | Links | BibTeX

@article{Sui2020,
title = {The timing of gaze-contingent decision prompts influences risky choice},
author = {Xiao Yang Sui and Hong Zhi Liu and Li Lin Rao},
doi = {10.1016/j.cognition.2019.104077},
year = {2020},
date = {2020-02-01},
journal = {Cognition},
volume = {195},
pages = {1--11},
abstract = {Risky decisions are ubiquitous in daily life and are central to human behavior, but little attention has been devoted to exploring whether risky choice can be influenced by gaze direction. In the current study, we used gaze-contingent manipulation to manipulate an individual's gaze while he/she decided between two risky options, and we examined whether risky decisions could be biased toward a randomly determined target. We found that participants' risky choices were biased toward a randomly determined target when they were manipulated to gaze longer at the target option (Study 1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Muxuan Lyu; Kyoung Whan Choe; Omid Kardan; Hiroki P Kotabe; John M Henderson; Marc G Berman

Overt attentional correlates of memorability of scene images and their relationships to scene semantics Journal Article

Journal of Vision, 20 (9), pp. 1–17, 2020.

Abstract | Links | BibTeX

@article{Lyu2020b,
title = {Overt attentional correlates of memorability of scene images and their relationships to scene semantics},
author = {Muxuan Lyu and Kyoung Whan Choe and Omid Kardan and Hiroki P Kotabe and John M Henderson and Marc G Berman},
doi = {10.1167/jov.20.9.2},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {9},
pages = {1--17},
abstract = {Computer vision-based research has shown that scene semantics (e.g., presence of meaningful objects in a scene) can predict memorability of scene images. Here, we investigated whether and to what extent overt attentional correlates, such as fixation map consistency (also called inter-observer congruency of fixation maps) and fixation counts, mediate the relationship between scene semantics and scene memorability. First, we confirmed that the higher the fixation map consistency of a scene, the higher its memorability. Moreover, both fixation map consistency and its correlation to scene memorability were the highest in the first 2 seconds of viewing, suggesting that meaningful scene features that contribute to producing more consistent fixation maps early in viewing, such as faces and humans, may also be important for scene encoding. Second, we found that the relationship between scene semantics and scene memorability was partially (but not fully) mediated by fixation map consistency and fixation counts, separately as well as together. Third, we found that fixation map consistency, fixation counts, and scene semantics significantly and additively contributed to scene memorability. Together, these results suggest that eye-tracking measurements can complement computer vision-based algorithms and improve overall scene memorability prediction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
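
Fixation map consistency (inter-observer congruency) as used in this entry is commonly computed in a leave-one-out fashion; the sketch below is one minimal way to do it, not the authors' code. Each observer's fixations are smoothed into a density map and correlated with the average map of the remaining observers; the smoothing width and the random fixations are placeholder assumptions.

```python
# Illustrative sketch: leave-one-out fixation map consistency for one scene.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape, sigma=25):
    """fixations: iterable of (x, y) pixel coordinates -> smoothed density map."""
    m = np.zeros(shape)
    for x, y in fixations:
        m[int(y), int(x)] += 1
    return gaussian_filter(m, sigma)

def map_consistency(per_observer_fixations, shape):
    maps = np.stack([fixation_map(f, shape) for f in per_observer_fixations])
    rs = []
    for i in range(len(maps)):
        others = np.delete(maps, i, axis=0).mean(axis=0)   # average of remaining observers
        rs.append(np.corrcoef(maps[i].ravel(), others.ravel())[0, 1])
    return float(np.mean(rs))

# Example with random fixations from three observers on a 600 x 800 scene
rng = np.random.default_rng(2)
obs = [list(zip(rng.integers(0, 800, 40), rng.integers(0, 600, 40))) for _ in range(3)]
print(round(map_consistency(obs, (600, 800)), 3))
```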

Timo Kootstra; Jonas Teuwen; Jeroen Goudsmit; Tanja Nijboer; Michael Dodd; Stefan Van der Stigchel

Machine learning-based classification of viewing behavior using a wide range of statistical oculomotor features Journal Article

Journal of Vision, 20 (9), pp. 1–15, 2020.

Abstract | Links | BibTeX

@article{Kootstra2020,
title = {Machine learning-based classification of viewing behavior using a wide range of statistical oculomotor features},
author = {Timo Kootstra and Jonas Teuwen and Jeroen Goudsmit and Tanja Nijboer and Michael Dodd and Stefan {Van der Stigchel}},
doi = {10.1167/jov.20.9.1},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {9},
pages = {1--15},
abstract = {Since the seminal work of Yarbus, multiple studies have demonstrated the influence of task-set on oculomotor behavior and the current cognitive state. In more recent years, this field of research has expanded by evaluating the costs of abruptly switching between such different tasks. At the same time, the field of classifying oculomotor behavior has been moving toward more advanced, data-driven methods of decoding data. For the current study, we used a large dataset compiled over multiple experiments and implemented separate state-of-the-art machine learning methods for decoding both cognitive state and task-switching. We found that, by extracting a wide range of oculomotor features, we were able to implement robust classifier models for decoding both cognitive state and task-switching. Our decoding performance highlights the feasibility of this approach, even invariant of image statistics. Additionally, we present a feature ranking for both models, indicating the relative magnitude of different oculomotor features for both classifiers. These rankings indicate a separate set of important predictors for decoding each task, respectively. Finally, we discuss the implications of the current approach related to interpreting the decoding results.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
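
As a hedged illustration of the decoding approach summarized in this entry, the sketch below trains a cross-validated classifier on a table of per-trial oculomotor features and prints a feature ranking. It is not the authors' model; the feature names, labels, and random data are placeholders for real measurements.

```python
# Illustrative sketch: decoding task/state labels from oculomotor features
# and ranking feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

feature_names = ["fix_duration_mean", "fix_count", "saccade_amp_mean",
                 "saccade_vel_peak", "dispersion", "pupil_mean"]
rng = np.random.default_rng(3)
X_features = rng.normal(size=(300, len(feature_names)))   # 300 trials x 6 features (stand-ins)
task_labels = rng.integers(0, 3, size=300)                # e.g., three viewing tasks

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_features, task_labels, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

clf.fit(X_features, task_labels)
for importance, name in sorted(zip(clf.feature_importances_, feature_names), reverse=True):
    print(f"{name}: {importance:.3f}")
```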

Chou P Hung; Chloe Callahan-Flintoft; Paul D Fedele; Kim F Fluitt; Onyekachi Odoemene; Anthony J Walker; Andre V Harrison; Barry D Vaughan; Matthew S Jaswa; Min Wei

Abrupt darkening under high dynamic range (HDR) luminance invokes facilitation for high-contrast targets and grouping by luminance similarity Journal Article

Journal of Vision, 20 (7), pp. 1–16, 2020.

Abstract | Links | BibTeX

@article{Hung2020b,
title = {Abrupt darkening under high dynamic range (HDR) luminance invokes facilitation for high-contrast targets and grouping by luminance similarity},
author = {Chou P Hung and Chloe Callahan-Flintoft and Paul D Fedele and Kim F Fluitt and Onyekachi Odoemene and Anthony J Walker and Andre V Harrison and Barry D Vaughan and Matthew S Jaswa and Min Wei},
doi = {10.1167/jov.20.7.9},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {7},
pages = {1--16},
abstract = {When scanning across a scene, luminance can vary by up to 100,000-to-1 (high dynamic range, HDR), requiring multiple normalizing mechanisms spanning from the retina to the cortex to support visual acuity and recognition. Vision models based on standard dynamic range (SDR) luminance contrast ratios below 100-to-1 have limited ability to generalize to real-world scenes with HDR luminance. To characterize how orientation and luminance are linked in brain mechanisms for luminance normalization, we measured orientation discrimination of Gabor targets under HDR luminance dynamics. We report a novel phenomenon, that abrupt 10- to 100-fold darkening engages contextual facilitation, distorting the apparent orientation of a high-contrast central target. Surprisingly, facilitation was influenced by grouping by luminance similarity, as well as by the degree of luminance variability in the surround. These results challenge vision models based solely on activity normalization and raise new questions that will lead to models that perform better in real-world scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Shiva Kamkar; Hamid Abrishami Moghaddam; Reza Lashgari; Lauri Oksama; Jie Li; Jukka Hyönä

Effectiveness of "rescue saccades" on the accuracy of tracking multiple moving targets: An eye-tracking study on the effects of target occlusions Journal Article

Journal of Vision, 20 (12), pp. 1–15, 2020.

Abstract | Links | BibTeX

@article{Kamkar2020,
title = {Effectiveness of "rescue saccades" on the accuracy of tracking multiple moving targets: An eye-tracking study on the effects of target occlusions},
author = {Shiva Kamkar and Hamid {Abrishami Moghaddam} and Reza Lashgari and Lauri Oksama and Jie Li and Jukka Hyönä},
doi = {10.1167/jov.20.12.5},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {12},
pages = {1--15},
abstract = {Occlusion is one of the main challenges in tracking multiple moving objects. In almost all real-world scenarios, a moving object or a stationary obstacle occludes targets partially or completely for a short or long time during their movement. A previous study (Zelinsky & Todor, 2010) reported that subjects make timely saccades toward the object in danger of being occluded. Observers make these so-called "rescue saccades" to prevent target swapping. In this study, we examined whether these saccades are helpful. To this aim, we used as the stimuli recorded videos from natural movement of zebrafish larvae swimming freely in a circular container. We considered two main types of occlusion: object-object occlusions that naturally exist in the videos, and object-occluder occlusions created by adding a stationary doughnut-shape occluder in some videos. Four different scenarios were studied: (1) no occlusions, (2) only object-object occlusions, (3) only object-occluder occlusion, or (4) both object-object and object-occluder occlusions. For each condition, two set sizes (two and four) were applied. Participants' eye movements were recorded during tracking, and rescue saccades were extracted afterward. The results showed that rescue saccades are helpful in handling object-object occlusions but had no reliable effect on tracking through object-occluder occlusions. The presence of occlusions generally increased visual sampling of the scenes; nevertheless, tracking accuracy declined due to occlusion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Marcello Maniglia; Roshni Jogin; Kristina M Visscher; Aaron R Seitz

We don't all look the same; detailed examination of peripheral looking strategies after simulated central vision loss Journal Article

Journal of Vision, 20 (13), pp. 1–14, 2020.

Abstract | Links | BibTeX

@article{Maniglia2020b,
title = {We don't all look the same; detailed examination of peripheral looking strategies after simulated central vision loss},
author = {Marcello Maniglia and Roshni Jogin and Kristina M Visscher and Aaron R Seitz},
doi = {10.1167/jov.20.13.5},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {13},
pages = {1--14},
abstract = {Loss of central vision can be compensated for in part by increased use of peripheral vision. For example, patients with macular degeneration or those experiencing simulated central vision loss tend to develop eccentric viewing strategies for reading or other visual tasks. The factors driving this learning are still unclear and likely involve complex changes in oculomotor strategies that may differ among people and tasks. Although to date a number of studies have examined reliance on peripheral vision after simulated central vision loss, individual differences in developing peripheral viewing strategies and the extent to which they transfer to untrained tasks have received little attention. Here, we apply a recently published method of characterizing oculomotor strategies after central vision loss to understand the time course of changes in oculomotor strategies through training in 19 healthy individuals with a gaze-contingent display obstructing the central 10° of the visual field. After 10 days of training, we found mean improvements in saccadic re-referencing (the percentage of trials in which the first saccade placed the target outside the scotoma), latency of target acquisition (time interval between target presentation and a saccade putting the target outside the scotoma), and fixation stability. These results are consistent with participants developing compensatory oculomotor strategies as a result of training. However, we also observed substantial individual differences in the formation of eye movement strategies and the extent to which they transferred to an untrained task, likely reflecting both variations in learning rates and patterns of learning. This more complete characterization of peripheral looking strategies and how they change with training may help us understand individual differences in rehabilitation after central vision loss.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
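
Two of the compensatory-viewing measures named in this abstract, saccadic re-referencing and latency of target acquisition, reduce to simple summaries over per-trial records. The sketch below shows one hypothetical way to compute them; it is not the authors' code, and the trial records and field names are invented for illustration.

```python
# Illustrative sketch: saccadic re-referencing rate and target-acquisition latency
# from hypothetical per-trial records.
import numpy as np

trials = [  # made-up example data
    {"acquired_at_ms": 310, "first_saccade_outside": True},
    {"acquired_at_ms": 520, "first_saccade_outside": False},
    {"acquired_at_ms": None, "first_saccade_outside": False},  # target never acquired
    {"acquired_at_ms": 280, "first_saccade_outside": True},
]

# Percentage of trials in which the first saccade placed the target outside the scotoma
rereferencing_rate = np.mean([t["first_saccade_outside"] for t in trials])

# Mean time from target onset to a saccade that put the target outside the scotoma
latencies = [t["acquired_at_ms"] for t in trials if t["acquired_at_ms"] is not None]
acquisition_latency = np.mean(latencies)

print(f"saccadic re-referencing: {rereferencing_rate:.0%} of trials")
print(f"target acquisition latency: {acquisition_latency:.0f} ms")
```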

Raphael Vallat; Alain Nicolas; Perrine Ruby

Brain functional connectivity upon awakening from sleep predicts interindividual differences in dream recall frequency Journal Article

Sleep, 43 (2), pp. 1–11, 2020.

Abstract | Links | BibTeX

@article{Vallat2020,
title = {Brain functional connectivity upon awakening from sleep predicts interindividual differences in dream recall frequency},
author = {Raphael Vallat and Alain Nicolas and Perrine Ruby},
doi = {10.1093/sleep/zsaa116},
year = {2020},
date = {2020-01-01},
journal = {Sleep},
volume = {43},
number = {2},
pages = {1--11},
abstract = {Why do some individuals recall dreams every day while others hardly ever recall one? We hypothesized that sleep inertia—the transient period following awakening associated with brain and cognitive alterations—could be a key mechanism to explain interindividual differences in dream recall at awakening. To test this hypothesis, we measured the brain functional connectivity (combined electroencephalography–functional magnetic resonance imaging) and cognition (memory and mental calculation) of high dream recallers (HR},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jetro J Tuulari; Eeva Leena Kataja; Jukka M Leppänen; John D Lewis; Saara Nolvi; Tuomo Häikiö; Satu J Lehtola; Niloofar Hashempour; Jani Saunavaara; Noora M Scheinin; Riikka Korja; Linnea Karlsson; Hasse Karlsson

Newborn left amygdala volume associates with attention disengagement from fearful faces at eight months Journal Article

Developmental Cognitive Neuroscience, 45 , pp. 1–8, 2020.

Abstract | Links | BibTeX

@article{Tuulari2020,
title = {Newborn left amygdala volume associates with attention disengagement from fearful faces at eight months},
author = {Jetro J Tuulari and Eeva Leena Kataja and Jukka M Leppänen and John D Lewis and Saara Nolvi and Tuomo Häikiö and Satu J Lehtola and Niloofar Hashempour and Jani Saunavaara and Noora M Scheinin and Riikka Korja and Linnea Karlsson and Hasse Karlsson},
doi = {10.1016/j.dcn.2020.100839},
year = {2020},
date = {2020-01-01},
journal = {Developmental Cognitive Neuroscience},
volume = {45},
pages = {1--8},
abstract = {After 5 months of age, infants begin to prioritize attention to fearful over other facial expressions. One key proposition is that amygdala and related early-maturing subcortical network, is important for emergence of this attentional bias – however, empirical data to support these assertions are lacking. In this prospective longitudinal study, we measured amygdala volumes from MR images in 65 healthy neonates at 2–5 weeks of gestation corrected age and attention disengagement from fearful vs. non-fearful facial expressions at 8 months with eye tracking. Overall, infants were less likely to disengage from fearful than happy/neutral faces, demonstrating an age-typical bias for fear. Left, but not right, amygdala volume (corrected for intracranial volume) was positively associated with the likelihood of disengaging attention from fearful faces to a salient lateral distractor (r =.302},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nicolas Chevalier; Julie Anne Meaney; Hilary Joy Traut; Yuko Munakata

Adaptiveness in proactive control engagement in children and adults Journal Article

Developmental Cognitive Neuroscience, 46 , pp. 1–11, 2020.

Abstract | Links | BibTeX

@article{Chevalier2020,
title = {Adaptiveness in proactive control engagement in children and adults},
author = {Nicolas Chevalier and Julie Anne Meaney and Hilary Joy Traut and Yuko Munakata},
doi = {10.1016/j.dcn.2020.100870},
year = {2020},
date = {2020-01-01},
journal = {Developmental Cognitive Neuroscience},
volume = {46},
pages = {1--11},
publisher = {Elsevier Ltd},
abstract = {Age-related progress in cognitive control reflects more frequent engagement of proactive control during childhood. As proactive preparation for an upcoming task is adaptive only when the task can be reliably predicted, progress in proactive control engagement may rely on more efficient use of contextual cue reliability. Developmental progress may also reflect increasing efficiency in how proactive control is engaged, making this control mode more advantageous with age. To address these possibilities, 6-year-olds, 9-year-olds, and adults completed three versions of a cued task-switching paradigm in which contextual cue reliability was manipulated. When contextual cues were reliable (but not unreliable or uninformative), all age groups showed greater pupil dilation and a more pronounced (pre)cue-locked posterior positivity associated with faster response times, suggesting adaptive engagement of proactive task selection. However, adults additionally showed a larger contingent negative variation (CNV) predicting a further reduction in response times with reliable cues, suggesting motor preparation in adults but not children. Thus, early developing use of contextual cue reliability promotes adaptiveness in proactive control engagement from early childhood; yet, less efficient motor preparation in children makes this control mode overall less advantageous in childhood than adulthood.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anthony J Lambert; Tanvi Sharma; Nathan Ryckman

Accident vulnerability and vision for action: A pilot investigation Journal Article

Vision, 4 , pp. 1–13, 2020.

Abstract | Links | BibTeX

@article{Lambert2020a,
title = {Accident vulnerability and vision for action: A pilot investigation},
author = {Anthony J Lambert and Tanvi Sharma and Nathan Ryckman},
doi = {10.3390/vision4020026},
year = {2020},
date = {2020-01-01},
journal = {Vision},
volume = {4},
pages = {1--13},
abstract = {Many accidents, such as those involving collisions or trips, appear to involve failures of vision, but the association between accident risk and vision as conventionally assessed is weak or absent. We addressed this conundrum by embracing the distinction inspired by neuroscientific research, between vision for perception and vision for action. A dual-process perspective predicts that accident vulnerability will be associated more strongly with vision for action than vision for perception. In this preliminary investigation, older and younger adults, with relatively high and relatively low self-reported accident vulnerability (Accident Proneness Questionnaire), completed three behavioural assessments targeting vision for perception (Freiburg Visual Acuity Test); vision for action (Vision for Action Test—VAT); and the ability to perform physical actions involving balance, walking and standing (Short Physical Performance Battery). Accident vulnerability was not associated with visual acuity or with performance of physical actions but was associated with VAT performance. VAT assesses the ability to link visual input with a specific action—launching a saccadic eye movement as rapidly as possible, in response to shapes presented in peripheral vision. The predictive relationship between VAT performance and accident vulnerability was independent of age, visual acuity and physical performance scores. Applied implications of these findings are considered.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Darcy E Burgund

Looking at the own-race bias: Eye-tracking investigations of memory for different race faces Journal Article

Visual Cognition, pp. 1–13, 2020.

Abstract | Links | BibTeX

@article{Burgund2020,
title = {Looking at the own-race bias: Eye-tracking investigations of memory for different race faces},
author = {Darcy E Burgund},
doi = {10.1080/13506285.2020.1858216},
year = {2020},
date = {2020-01-01},
journal = {Visual Cognition},
pages = {1--13},
publisher = {Taylor & Francis},
abstract = {Humans remember the faces of members of their own race more accurately than the faces of members of other races, in an effect known as the own-race bias. Previous studies indicate that patterns of eye fixations play an important role in this bias, but the exact nature of their influence on face memory is not clear. The present study examined the role of eye fixations on memory for racially East Asian, Black, and White faces in East Asian and White participants. Results revealed greater looking at the eyes of East Asian and White faces than the eyes of Black faces, and greater looking at the nose/mouth of Black faces than the nose/mouth of East Asian and White faces. In addition, longer time looking at the eyes of all faces predicted better memory for all faces, and longer time looking at the nose/mouth of Black faces predicted better memory for Black faces. These findings are best characterized by a model of face memory in which the eyes are critical for all faces, but certain features (e.g., nose/mouth) may be additionally important for certain race faces (e.g., Black faces).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
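
As a generic illustration of how looking-time measures like those above are typically computed, the short Python sketch below derives the proportion of total fixation time falling inside rectangular areas of interest (AOIs) such as the eyes or nose/mouth region. It is not the study's analysis code; the AOI boxes and fixation list are made-up placeholders.

    # Minimal sketch: proportion of fixation time per rectangular AOI.
    # AOI boxes and fixations below are hypothetical, not from the study.
    AOIS = {
        "eyes":       (100, 260, 80, 140),    # (x_min, x_max, y_min, y_max) in pixels
        "nose_mouth": (140, 220, 150, 260),
    }
    fixations = [(150, 100, 240), (180, 200, 310), (400, 300, 150)]  # (x, y, duration_ms)

    def dwell_proportions(fixations, aois):
        total = sum(d for _, _, d in fixations)
        props = {}
        for name, (x0, x1, y0, y1) in aois.items():
            inside = sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)
            props[name] = inside / total if total else 0.0
        return props

    print(dwell_proportions(fixations, AOIS))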

Chiara Tortelli; Marco Turi; David C Burr; Paola Binda

Pupillary responses obey Emmert's law and co-vary with autistic traits Journal Article

Journal of Autism and Developmental Disorders, pp. 1–12, 2020.

Abstract | Links | BibTeX

@article{Tortelli2020,
title = {Pupillary responses obey Emmert's law and co-vary with autistic traits},
author = {Chiara Tortelli and Marco Turi and David C Burr and Paola Binda},
doi = {10.1007/s10803-020-04718-7},
year = {2020},
date = {2020-01-01},
journal = {Journal of Autism and Developmental Disorders},
pages = {1--12},
publisher = {Springer US},
abstract = {We measured the pupil response to a light stimulus subject to a size illusion and found that stimuli perceived as larger evoke a stronger pupillary response. The size illusion depends on combining retinal signals with contextual 3D information; contextual processing is thought to vary across individuals, being weaker in individuals with stronger autistic traits. Consistent with this theory, autistic traits correlated negatively with the magnitude of pupil modulations in our sample of neurotypical adults; however, psychophysical measurements of the illusion did not correlate with autistic traits, or with the pupil modulations. This shows that pupillometry provides an accurate objective index of complex perceptual processes, particularly useful for quantifying interindividual differences, and potentially more informative than standard psychophysical measures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Astar Lev; Yoram Braw; Tomer Elbaum; Michael Wagner; Yuri Rassovsky

Eye tracking during a continuous performance test: Utility for assessing ADHD patients Journal Article

Journal of Attention Disorders, pp. 1–11, 2020.

Abstract | Links | BibTeX

@article{Lev2020b,
title = {Eye tracking during a continuous performance test: Utility for assessing ADHD patients},
author = {Astar Lev and Yoram Braw and Tomer Elbaum and Michael Wagner and Yuri Rassovsky},
doi = {10.1177/1087054720972786},
year = {2020},
date = {2020-01-01},
journal = {Journal of Attention Disorders},
pages = {1--11},
abstract = {Objective: The use of continuous performance tests (CPTs) for assessing ADHD related cognitive impairment is ubiquitous. Novel psychophysiological measures may enhance the data that is derived from CPTs and thereby improve clinical decision-making regarding diagnosis and treatment. As part of the current study, we integrated an eye tracker with the MOXO-dCPT and assessed the utility of eye movement measures to differentiate ADHD patients and healthy controls. Method: Adult ADHD patients and gender/age-matched healthy controls performed the MOXO-dCPT while their eye movements were monitored (n = 33 per group). Results: ADHD patients spent significantly more time gazing at irrelevant regions, both on the screen and outside of it, than healthy controls. The eye movement measures showed adequate ability to classify ADHD patients. Moreover, a scale that combined eye movement measures enhanced group prediction, compared to the sole use of conventional MOXO-dCPT indices. Conclusions: Integrating an eye tracker with CPTs is a feasible way of enhancing diagnostic precision and shows initial promise for clarifying the cognitive profile of ADHD patients. Pending replication, these findings point toward a promising path for the evolution of existing CPTs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
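
The abstract above reports that a scale combining several eye-movement measures improved group prediction relative to conventional MOXO-dCPT indices, but does not specify the combination method here. The sketch below is therefore only an assumed illustration of one common way to combine such features, using logistic regression on placeholder data; the feature names and values are invented, not the authors' scale.

    # Assumed illustration only: combining eye-movement features for group
    # classification with logistic regression. Feature names and data are
    # invented placeholders, not the authors' scale.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 66                                        # e.g., 33 patients + 33 controls
    X = np.column_stack([
        rng.normal(0, 1, n),                      # time on irrelevant screen regions (z)
        rng.normal(0, 1, n),                      # time gazing off-screen (z)
    ])
    y = np.array([1] * 33 + [0] * 33)             # 1 = ADHD, 0 = control

    clf = LogisticRegression()
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"Cross-validated AUC (placeholder data): {auc:.2f}")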

Athina Manoli; Simon P Liversedge; Edmund J S Sonuga-Barke; Julie A Hadwin

The differential effect of anxiety and ADHD symptoms on inhibitory control and sustained attention for threat stimuli: A go/no-go eye-movement study Journal Article

Journal of Attention Disorders, pp. 1–12, 2020.

Abstract | Links | BibTeX

@article{Manoli2020,
title = {The differential effect of anxiety and ADHD symptoms on inhibitory control and sustained attention for threat stimuli: A go/no-go eye-movement study},
author = {Athina Manoli and Simon P Liversedge and Edmund J S Sonuga-Barke and Julie A Hadwin},
doi = {10.1177/1087054720930809},
year = {2020},
date = {2020-01-01},
journal = {Journal of Attention Disorders},
pages = {1--12},
abstract = {Objective: This study examined the synergistic effects of ADHD and anxiety symptoms on attention and inhibitory control depending on the emotional content of the stimuli. Method: Fifty-four typically developing individuals (27 children/adolescents and 27 adults) completed an eye-movement based emotional Go/No-Go task, using centrally presented (happy, angry) faces and neutral/symbolic stimuli. Sustained attention was measured through saccade latencies and saccadic omission errors (Go trials), and inhibitory control through saccadic commission errors (No-Go trials). ADHD and anxiety were assessed dimensionally. Results: Elevated ADHD symptoms were associated with more commission errors and slower saccade latencies for angry (vs. happy) faces. In contrast, angry faces were linked to faster saccade onsets when anxiety symptoms were high, and this effect prevailed when both anxiety and ADHD symptoms were high. Conclusion: Social threat impacted performance in individuals with sub-clinical anxiety and ADHD differently. The effects of anxiety on threat processing prevailed when both symptoms were high.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Rany Abend; Mira A Bajaj; Chika Matsumoto; Marissa Yetter; Anita Harrewijn; Elise M Cardinale; Katharina Kircanski; Eli R Lebowitz; Wendy K Silverman; Yair Bar-Haim; Amit Lazarov; Ellen Leibenluft; Melissa Brotman; Daniel S Pine

Converging multi-modal evidence for implicit threat-related bias in pediatric anxiety disorders Journal Article

Journal of Abnormal Child Psychology, pp. 1–14, 2020.

Abstract | Links | BibTeX

@article{Abend2020,
title = {Converging multi-modal evidence for implicit threat-related bias in pediatric anxiety disorders},
author = {Rany Abend and Mira A Bajaj and Chika Matsumoto and Marissa Yetter and Anita Harrewijn and Elise M Cardinale and Katharina Kircanski and Eli R Lebowitz and Wendy K Silverman and Yair Bar-Haim and Amit Lazarov and Ellen Leibenluft and Melissa Brotman and Daniel S Pine},
doi = {10.1007/s10802-020-00712-w},
year = {2020},
date = {2020-01-01},
journal = {Journal of Abnormal Child Psychology},
pages = {1--14},
publisher = {Springer US},
abstract = {This report examines the relationship between pediatric anxiety disorders and implicit bias evoked by threats. To do so, the report uses two tasks that assess implicit bias to negative-valence faces, the first by eye-gaze and the second by measuring body-movement parameters. The report contrasts task performance in 51 treatment-seeking, medication-free pediatric patients with anxiety disorders and 36 healthy peers. Among these youth, 53 completed an eye-gaze task, 74 completed a body-movement task, and 40 completed both tasks. On the eye-gaze task, patients displayed longer gaze duration on negative relative to non-negative valence faces than healthy peers, F(1, 174) = 8.27},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tatiana Malevich; Elena Rybina; Elizaveta Ivtushok; Liubov Ardasheva; Joseph W MacInnes

No evidence for an independent retinotopic reference frame for inhibition of return Journal Article

Acta Psychologica, 208, pp. 1–11, 2020.

Abstract | Links | BibTeX

@article{Malevich2020a,
title = {No evidence for an independent retinotopic reference frame for inhibition of return},
author = {Tatiana Malevich and Elena Rybina and Elizaveta Ivtushok and Liubov Ardasheva and Joseph W MacInnes},
doi = {10.1016/j.actpsy.2020.103107},
year = {2020},
date = {2020-01-01},
journal = {Acta Psychologica},
volume = {208},
pages = {1--11},
publisher = {Elsevier},
abstract = {Inhibition of return (IOR) represents a delay in responding to a previously inspected location and is viewed as a crucial mechanism that sways attention toward novelty in visual search. Although most visual processing occurs in retinotopic, eye-centered, coordinates, IOR must be coded in spatiotopic, environmental, coordinates to successfully serve its role as a foraging facilitator. Early studies supported this suggestion but recent results have shown that both spatiotopic and retinotopic reference frames of IOR may co-exist. The present study tested possible sources for IOR at the retinotopic location including being part of the spatiotopic IOR gradient, part of hemifield inhibition and being an independent source of IOR. We conducted four experiments that alternated the cue-target spatial distance (discrete and contiguous) and the response modality (manual and saccadic). In all experiments, we tested spatiotopic, retinotopic and neutral (neither spatiotopic nor retinotopic) locations. We did find IOR at both the retinotopic and spatiotopic locations but no evidence for an independent source of retinotopic IOR for either of the response modalities. In fact, we observed the spread of IOR across entire validly cued hemifield including at neutral locations. We conclude that these results indicate a strategy to inhibit the whole cued hemifield or suggest a large horizontal gradient around the spatiotopically cued location. Public significance statement: We perceive the visual world around us as stable despite constant shifts of the retinal image due to saccadic eye movements. In this study, we explore whether Inhibition of return (IOR), a mechanism preventing us from returning to previously attended locations, operates in spatiotopic, world-centered or in retinal, eye-centered coordinates. We tested both saccadic and manual IOR at spatiotopic, retinotopic, and control locations. We did not find an independent retinotopic source of IOR for either of the response modalities. The results suggest that IOR spreads over the whole previously attended visual hemifield or there is a large horizontal spatiotopic gradient. The current results are in line with the idea of IOR being a foraging facilitator in visual search and contribute to our understanding of spatiotopically organized aspects of visual and attentional systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
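
For readers new to the measure, inhibition of return is conventionally expressed as a reaction-time difference: mean RT at a previously cued location minus mean RT at an uncued control location, with positive values indicating inhibition. The sketch below simply illustrates that arithmetic for the spatiotopic and retinotopic locations against a neutral baseline; the RT values are invented examples, not data from this study.

    # Conventional IOR score: RT(previously cued location) - RT(control location).
    # Positive values = slower responses at the cued location (inhibition).
    # RTs below are invented example values in milliseconds.
    import numpy as np

    rt = {
        "spatiotopic": np.array([352, 360, 345, 371]),
        "retinotopic": np.array([340, 338, 352, 344]),
        "neutral":     np.array([330, 335, 328, 341]),
    }

    for frame in ("spatiotopic", "retinotopic"):
        ior = rt[frame].mean() - rt["neutral"].mean()
        print(f"IOR at {frame} location vs. neutral: {ior:.1f} ms")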

Aleksandra Mitrovic; Lisa Mira Hegelmaier; Helmut Leder; Matthew Pelowski

Does beauty capture the eye, even if it's not (overtly) adaptive? A comparative eye-tracking study of spontaneous attention and visual preference with VAST abstract art Journal Article

Acta Psychologica, 209, pp. 1–10, 2020.

Abstract | Links | BibTeX

@article{Mitrovic2020,
title = {Does beauty capture the eye, even if it's not (overtly) adaptive? A comparative eye-tracking study of spontaneous attention and visual preference with VAST abstract art},
author = {Aleksandra Mitrovic and Lisa Mira Hegelmaier and Helmut Leder and Matthew Pelowski},
doi = {10.1016/j.actpsy.2020.103133},
year = {2020},
date = {2020-01-01},
journal = {Acta Psychologica},
volume = {209},
pages = {1--10},
publisher = {Elsevier},
abstract = {Studies have routinely shown that individuals spend more time spontaneously looking at people or at mimetic scenes that they subsequently judge to be more aesthetically appealing. This “beauty demands longer looks” phenomenon is typically explained by biological relevance, personal utility, or other survival factors, with visual attraction often driven by structural features (symmetry, texture), which may signify fitness and to which most humans tend to respond similarly. However, what of objects that have less overtly adaptive relevance? Here, we consider whether people also look longer at abstract art with little associative/mimetic content that they subsequently rate for higher aesthetic appeal. We employed the “Visual aesthetic sensitivity test” (VAST), which consists of pairs of matched abstract designs with one example of each pair argued to be objectively ‘aesthetically better' in regards to low-level features, thus offering a potential contrast between ‘objective' (physical feature-based) and ‘subjective' (personal taste-based) assessments. Participants (29 women) first looked at image pairs without a specific task and then in three follow-up blocks indicated their preference within the pairs and rated the individual images for liking and for presumed ratings by an art expert. More preferred designs were looked at longer. However, longer looking only occurred in line with participants' subjective tastes. This suggests a general correlation of attention and visual beauty, which—in abstract art—may nonetheless be related to features that are not identified by experts as more generally appealing and thus may not directly map to other (more utility-related) stimuli types.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Marcus Nyström; Diederick C Niehorster; Richard Andersson; Ignace Hooge

The Tobii Pro Spectrum: A useful tool for studying microsaccades? Journal Article

Behavior Research Methods, pp. 1–19, 2020.

Abstract | Links | BibTeX

@article{Nystroem2020,
title = {The Tobii Pro Spectrum: A useful tool for studying microsaccades?},
author = {Marcus Nyström and Diederick C Niehorster and Richard Andersson and Ignace Hooge},
doi = {10.3758/s13428-020-01430-3},
year = {2020},
date = {2020-01-01},
journal = {Behavior Research Methods},
pages = {1--19},
abstract = {Due to its reported high sampling frequency and precision, the Tobii Pro Spectrum is of potential interest to researchers who want to study small eye movements during fixation. We test how suitable the Tobii Pro Spectrum is for research on microsaccades by computing data-quality measures and common properties of microsaccades and comparing these to the currently most used system in this field: the EyeLink 1000 Plus. Results show that the EyeLink data provide higher RMS precision and microsaccade rates compared with data acquired with the Tobii Pro Spectrum. However, both systems provide microsaccades with similar directions and shapes, as well as rates consistent with previous literature. Data acquired at 1200 Hz with the Tobii Pro Spectrum provide results that are more similar to the EyeLink, compared to data acquired at 600 Hz. We conclude that the Tobii Pro Spectrum is a useful tool for researchers investigating microsaccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
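
The comparison above rests on RMS precision as a data-quality metric. As a point of reference, the sketch below computes RMS sample-to-sample precision the way it is commonly defined: the root mean square of the Euclidean distances between successive gaze samples recorded during a steady fixation. It is a generic sketch, not the authors' code, and the gaze segment is simulated.

    # Common definition of RMS sample-to-sample (S2S) precision: the root mean
    # square of Euclidean distances between successive gaze samples recorded
    # during a steady fixation. Input data below are simulated placeholders.
    import numpy as np

    def rms_s2s_precision(x, y):
        dx = np.diff(np.asarray(x, dtype=float))
        dy = np.diff(np.asarray(y, dtype=float))
        return np.sqrt(np.mean(dx**2 + dy**2))

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.01, 1000)   # degrees of visual angle, simulated jitter
    y = rng.normal(0.0, 0.01, 1000)
    print(f"RMS-S2S precision: {rms_s2s_precision(x, y):.4f} deg")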

David Clewett; Camille Gasser; Lila Davachi

Pupil-linked arousal signals track the temporal organization of events in memory Journal Article

Nature Communications, 11, pp. 1–14, 2020.

Abstract | Links | BibTeX

@article{Clewett2020,
title = {Pupil-linked arousal signals track the temporal organization of events in memory},
author = {David Clewett and Camille Gasser and Lila Davachi},
doi = {10.1038/s41467-020-17851-9},
year = {2020},
date = {2020-01-01},
journal = {Nature Communications},
volume = {11},
pages = {1--14},
publisher = {Springer US},
abstract = {Everyday life unfolds continuously, yet we tend to remember past experiences as discrete event sequences or episodes. Although this phenomenon has been well documented, the neuromechanisms that support the transformation of continuous experience into distinct and memorable episodes remain unknown. Here, we show that changes in context, or event boundaries, elicit a burst of autonomic arousal, as indexed by pupil dilation. Event boundaries also lead to the segmentation of adjacent episodes in later memory, evidenced by changes in memory for the temporal duration, order, and perceptual details of recent event sequences. These subjective and objective changes in temporal memory are also related to distinct temporal features of pupil dilations to boundaries as well as to the temporal stability of more prolonged pupil-linked arousal states. Collectively, our findings suggest that pupil measures reflect both stability and change in ongoing mental context representations, which in turn shape the temporal structure of memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Vladislav I Zubov; Tatiana E Petrova

Lexically or grammatically adapted texts: What is easier to process for secondary school children? Journal Article

Procedia Computer Science, 176, pp. 2117–2124, 2020.

Abstract | Links | BibTeX

@article{Zubov2020,
title = {Lexically or grammatically adapted texts: What is easier to process for secondary school children?},
author = {Vladislav I Zubov and Tatiana E Petrova},
doi = {10.1016/j.procs.2020.09.248},
year = {2020},
date = {2020-01-01},
journal = {Procedia Computer Science},
volume = {176},
pages = {2117--2124},
publisher = {Elsevier B.V.},
abstract = {This article presents the results of an eye-tracking experiment on Russian language material, exploring the reading process in secondary school children with general speech underdevelopment. The objective of the study is to reveal what type of a text is better to use to make the reading and comprehension easier: lexically adapted text or grammatically adapted text? The data from Russian-speaking participants from the compulsory school (experimental group) and 28 secondary school children with normal speech development (control group) indicate that both types of adaptation proved to be efficient for recalling the information from the text. Though, we revealed that in teenagers with language disorders in anamnesis lower perceptual processes are partially compensated (parameters of eye movements), but higher comprehension processes remain affected.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tianlong Zu; John Hutson; Lester C Loschky; Sanjay N Rebello

Using eye movements to measure intrinsic, extraneous, and germane load in a multimedia learning environment Journal Article

Journal of Educational Psychology, 112 (7), pp. 1338–1352, 2020.

Abstract | Links | BibTeX

@article{Zu2020,
title = {Using eye movements to measure intrinsic, extraneous, and germane load in a multimedia learning environment},
author = {Tianlong Zu and John Hutson and Lester C Loschky and Sanjay N Rebello},
doi = {10.1037/edu0000441},
year = {2020},
date = {2020-01-01},
journal = {Journal of Educational Psychology},
volume = {112},
number = {7},
pages = {1338--1352},
abstract = {In a previous study, DeLeeuw and Mayer (2008) found support for the triarchic model of cognitive load (Sweller, Van Merrienboer, & Paas, 1998, 2019) by showing that three different metrics could be used to independently measure 3 hypothesized types of cognitive load: intrinsic, extraneous, and germane. However, 2 of the 3 metrics that the authors used were intrusive in nature because learning had to be stopped momentarily to complete the measures. The current study extends the design of DeLeeuw and Mayer (2008) by investigating whether learners' eye movement behavior can be used to measure the three proposed types of cognitive load without interrupting learning. During a 1-hr experiment, we presented a multimedia lesson explaining the mechanism of electric motors to participants who had low prior knowledge of this topic. First, we replicated the main results of DeLeeuw and Mayer (2008), providing further support for the triarchic structure of cognitive load. Second, we identified eye movement measures that differentiated the three types of cognitive load. These findings were independent of participants' working memory capacity. Together, these results provide further evidence for the triarchic nature of cognitive load (Sweller et al., 1998, 2019), and are a first step toward online measures of cognitive load that could potentially be implemented into computer assisted learning technologies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Joshua Zonca; Giorgio Coricelli; Luca Polonio

Gaze data reveal individual differences in relational representation processes Journal Article

Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (2), pp. 257–279, 2020.

Abstract | Links | BibTeX

@article{Zonca2020,
title = {Gaze data reveal individual differences in relational representation processes},
author = {Joshua Zonca and Giorgio Coricelli and Luca Polonio},
doi = {10.1037/xlm0000723},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {46},
number = {2},
pages = {257--279},
publisher = {American Psychological Association Inc.},
abstract = {In our everyday life, we often need to anticipate the potential occurrence of events and their consequences. In this context, the way we represent contingencies can determine our ability to adapt to the environment. However, it is not clear how agents encode and organize available knowledge about the future to react to possible states of the world. In the present study, we investigated the process of contingency representation with three eye-tracking experiments. In Experiment 1, we introduced a novel relational-inference task in which participants had to learn and represent conditional rules regulating the occurrence of interdependent future events. A cluster analysis on early gaze data revealed the existence of 2 distinct types of encoders. A group of (sophisticated) participants built exhaustive contingency models that explicitly linked states with each of their potential consequences. Another group of (unsophisticated) participants simply learned binary conditional rules without exploring the underlying relational complexity. Analyses of individual cognitive measures revealed that cognitive reflection is associated with the emergence of either sophisticated or unsophisticated representation behavior. In Experiment 2, we observed that unsophisticated participants switched toward the sophisticated strategy after having received information about its existence, suggesting that representation behavior was modulated by strategy generation mechanisms. In Experiment 3, we showed that the heterogeneity in representation strategy emerges also in conditional reasoning with verbal sequences, indicating the existence of a general disposition in building either sophisticated or unsophisticated models of contingencies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Artyom Zinchenko; Markus Conci; Thomas Töllner; Hermann J Müller; Thomas Geyer

Automatic guidance (and misguidance) of visuospatial attention by acquired scene memory: Evidence from an N1pc polarity reversal Journal Article

Psychological Science, 31 (12), pp. 1–13, 2020.

Abstract | Links | BibTeX

@article{Zinchenko2020a,
title = {Automatic guidance (and misguidance) of visuospatial attention by acquired scene memory: Evidence from an N1pc polarity reversal},
author = {Artyom Zinchenko and Markus Conci and Thomas Töllner and Hermann J Müller and Thomas Geyer},
doi = {10.1177/0956797620954815},
year = {2020},
date = {2020-01-01},
journal = {Psychological Science},
volume = {31},
number = {12},
pages = {1--13},
abstract = {Visual search is facilitated when the target is repeatedly encountered at a fixed position within an invariant (vs. randomly variable) distractor layout—that is, when the layout is learned and guides attention to the target, a phenomenon known as contextual cuing. Subsequently changing the target location within a learned layout abolishes contextual cuing, which is difficult to relearn. Here, we used lateralized event-related electroencephalogram (EEG) potentials to explore memory-based attentional guidance (N = 16). The results revealed reliable contextual cuing during initial learning and an associated EEG-amplitude increase for repeated layouts in attention-related components, starting with an early posterior negativity (N1pc, 80–180 ms). When the target was relocated to the opposite hemifield following learning, contextual cuing was effectively abolished, and the N1pc was reversed in polarity (indicative of persistent misguidance of attention to the original target location). Thus, once learned, repeated layouts trigger attentional-priority signals from memory that proactively interfere with contextual relearning after target relocation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
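
Lateralized components such as the N1pc reported above are typically derived as contralateral-minus-ipsilateral difference waves at posterior electrode pairs (for example PO7/PO8). The sketch below shows that computation on simulated single-trial data; the electrode labels, array shapes, and values are assumptions for illustration, not the authors' pipeline.

    # Contralateral-minus-ipsilateral difference wave for a lateralized ERP
    # component (e.g., N1pc). Data below are simulated placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_times = 200, 300
    po7 = rng.normal(0, 1, (n_trials, n_times))        # left posterior electrode (µV)
    po8 = rng.normal(0, 1, (n_trials, n_times))        # right posterior electrode (µV)
    target_side = rng.choice(["left", "right"], n_trials)
    left_t, right_t = target_side == "left", target_side == "right"

    # Contralateral = electrode opposite the target hemifield; ipsilateral = same side.
    contra = np.concatenate([po8[left_t], po7[right_t]]).mean(axis=0)
    ipsi = np.concatenate([po7[left_t], po8[right_t]]).mean(axis=0)
    difference_wave = contra - ipsi
    # An N1pc-like amplitude would then be the mean of difference_wave within an
    # 80-180 ms window relative to stimulus onset.
    print(difference_wave.shape)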

Artyom Zinchenko; Markus Conci; Johannes Hauser; Hermann J Müller; Thomas Geyer

Distributed attention beats the down-side of statistical context learning in visual search Journal Article

Journal of Vision, 20 (7), pp. 1–14, 2020.

Abstract | Links | BibTeX

@article{Zinchenko2020,
title = {Distributed attention beats the down-side of statistical context learning in visual search},
author = {Artyom Zinchenko and Markus Conci and Johannes Hauser and Hermann J Müller and Thomas Geyer},
doi = {10.1167/JOV.20.7.4},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {7},
pages = {1--14},
abstract = {Learnt target-distractor contexts guide visual search. However, updating a previously acquired target-distractor memory subsequent to a change of the target location has been found to be rather inefficient and slow. These results show that the imperviousness of contextual memory to incorporating relocated targets is particularly pronounced when observers adopt a narrow focus of attention to perform a rather difficult form-conjunction search task. By contrast, when they adopt a broad attentional distribution, context-based memories can be updated more readily because this mode promotes the acquisition of more global contextual representations that continue to provide effective cues even after target relocation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Josua Zimmermann; Dominik R Bach

Impact of a reminder/extinction procedure on threat-conditioned pupil size and skin conductance responses Journal Article

Learning & Memory, 27 (4), pp. 164–172, 2020.

Abstract | Links | BibTeX

@article{Zimmermann2020b,
title = {Impact of a reminder/extinction procedure on threat-conditioned pupil size and skin conductance responses},
author = {Josua Zimmermann and Dominik R Bach},
doi = {10.1101/lm.050211.119},
year = {2020},
date = {2020-01-01},
journal = {Learning & Memory},
volume = {27},
number = {4},
pages = {164--172},
abstract = {A reminder can render consolidated memory labile and susceptible to amnesic agents during a reconsolidation window. For the case of threat memory (also termed fear memory), it has been suggested that extinction training during this reconsolidation window has the same disruptive impact. This procedure could provide a powerful therapeutic principle for treatment of unwanted aversive memories. However, human research yielded contradictory results. Notably, all published positive replications quantified threat memory by conditioned skin conductance responses (SCR). Yet, other studies measuring SCR and/or fear-potentiated startle failed to observe an effect of a reminder/extinction procedure on the return of fear. Here we sought to shed light on this discrepancy by using a different autonomic response, namely, conditioned pupil dilation, in addition to SCR, in a replication of the original human study. N = 71 humans underwent a 3-d threat conditioning, reminder/extinction, and reinstatement, procedure with 2 CS+, of which one was reminded. Participants successfully learned the threat association on day 1, extinguished conditioned responding on day 2, and showed reinstatement on day 3. However, there was no difference in conditioned responding between the reminded and the nonreminded CS, neither in pupil size nor SCR. Thus, we found no evidence that a reminder trial before extinction prevents the return of threat-conditioned responding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Eckart Zimmermann; Marta Ghio; Giulio Pergola; Benno Koch; Michael Schwarz; Christian Bellebaum

Separate and overlapping functional roles for efference copies in the human thalamus Journal Article

Neuropsychologia, 147, pp. 1–9, 2020.

Abstract | Links | BibTeX

@article{Zimmermann2020a,
title = {Separate and overlapping functional roles for efference copies in the human thalamus},
author = {Eckart Zimmermann and Marta Ghio and Giulio Pergola and Benno Koch and Michael Schwarz and Christian Bellebaum},
doi = {10.1016/j.neuropsychologia.2020.107558},
year = {2020},
date = {2020-01-01},
journal = {Neuropsychologia},
volume = {147},
pages = {1--9},
publisher = {Elsevier Ltd},
abstract = {How the perception of space is generated from the multiple maps in the brain is still an unsolved mystery in neuroscience. A neural pathway ascending from the superior colliculus through the medio-dorsal (MD) nucleus of thalamus to the frontal eye field has been identified in monkeys that conveys efference copy information about the metrics of upcoming eye movements. Information sent through this pathway stabilizes vision across saccades. We investigated whether this motor plan information might also shape spatial perception even when no saccades are performed. We studied patients with medial or lateral thalamic lesions (likely involving either the MD or the ventrolateral (VL) nuclei). Patients performed a double-step task testing motor updating, a trans-saccadic localization task testing visual updating, and a localization task during fixation testing a general role of motor signals for visual space in the absence of eye movements. Single patients with medial or lateral thalamic lesions showed deficits in the double-step task, reflecting insufficient transfer of efference copy. However, only a patient with a medial lesion showed impaired performance in the trans-saccadic localization task, suggesting that different types of efference copies contribute to motor and visual updating. During fixation, the MD patient localized stationary stimuli more accurately than healthy controls, suggesting that patients compensate the deficit in visual prediction of saccades - induced by the thalamic lesion - by relying on stationary visual references. We conclude that partially separable efference copy signals contribute to motor and visual stability in company of purely visual signals that are equally effective in supporting trans-saccadic perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Eckart Zimmermann

Saccade suppression depends on context Journal Article

eLife, 9, pp. 1–16, 2020.

Abstract | Links | BibTeX

@article{Zimmermann2020,
title = {Saccade suppression depends on context},
author = {Eckart Zimmermann},
doi = {10.7554/eLife.49700},
year = {2020},
date = {2020-01-01},
journal = {eLife},
volume = {9},
pages = {1--16},
abstract = {Although our eyes are in constant movement, we remain unaware of the high-speed stimulation produced by the retinal displacement. Vision is drastically reduced at the time of saccades. Here, I investigated whether the reduction of the unwanted disturbance could be established through a saccade-contingent habituation to intra-saccadic displacements. In more than 100 context trials, participants were exposed either to an intra-saccadic or to a post-saccadic disturbance or to no disturbance at all. After induction of a specific context, I measured peri-saccadic suppression. Displacement discrimination thresholds of observers were high after participants were exposed to an intra-saccadic disturbance. However, after exposure to a post-saccadic disturbance or a context without any intra-saccadic stimulation, displacement discrimination improved such that observers were able to see shifts as during fixation. Saccade-contingent habituation might explain why we do not perceive trans-saccadic retinal stimulation during saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mengyan Zhu; Xiangling Zhuang; Guojie Ma

Readers extract semantic information from parafoveal two-character synonyms in Chinese reading Journal Article

Reading and Writing, pp. 1–18, 2020.

Abstract | Links | BibTeX

@article{Zhu2020b,
title = {Readers extract semantic information from parafoveal two-character synonyms in Chinese reading},
author = {Mengyan Zhu and Xiangling Zhuang and Guojie Ma},
doi = {10.1007/s11145-020-10092-8},
year = {2020},
date = {2020-01-01},
journal = {Reading and Writing},
pages = {1--18},
publisher = {Springer Netherlands},
abstract = {In Chinese reading, the possibility and mechanism of semantic parafoveal processing has been debated for a long time. To advance the topic, “semantic preview benefit” in Chinese reading was reexamined, with a specific focus on how it is affected by the semantic relatedness between preview and target words at the two-character word level. Eighty critical two-character words were selected as target words. Reading tasks with gaze-contingent boundary paradigms were used to study whether different semantic-relatedness preview conditions influenced parafoveal processing. The data showed that synonyms (the most closely related preview) produced significant preview benefit compared with the semantic-related (non-synonyms) condition, even when plausibility was controlled. This result indicates that the larger extent of semantic preview benefit is mainly caused by the larger semantic relatedness between preview and target words. Moreover, plausibility is not the only cause of semantic preview benefit in Chinese reading. These findings improve the current understanding of the mechanism of parafoveal processing in Chinese reading and the implications on modeling eye movement control are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
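
The gaze-contingent boundary paradigm used above swaps a parafoveal preview for the target word the moment the eyes cross an invisible boundary placed just before the target, so the change happens during the saccade. The sketch below shows only that display-change logic on a simulated gaze trace; the words, pixel values, and trial loop are made-up placeholders, not code for any real eye-tracker API.

    # Schematic, self-contained sketch of boundary-paradigm display-change logic.
    # Gaze samples are simulated; in an experiment they would stream from the
    # eye tracker. Words and pixel values are made-up placeholders.
    BOUNDARY_X = 512                       # invisible boundary (pixels) left of the target word
    PREVIEW, TARGET = "preview-word", "target-word"

    def run_trial(gaze_x_samples):
        shown = PREVIEW
        log = []
        for x in gaze_x_samples:
            # Swap preview for target the first time gaze crosses the boundary.
            if shown == PREVIEW and x > BOUNDARY_X:
                shown = TARGET
            log.append((x, shown))
        return log

    # Simulated trace: fixations left of the boundary, then a saccade across it.
    for x, word in run_trial([200, 205, 203, 480, 620, 625]):
        print(x, word)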
