
SR Research

Fast, Accurate, Reliable Eye Tracking


EyeLink Eye Tracking Publications Library

All EyeLink Publications

All 9,000+ peer-reviewed EyeLink research publications up to 2020 (with some early 2021s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye tracking paper, please email us!

All EyeLink publications are also available for download / import into reference management software as a single BibTeX (.bib) file.
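For quick programmatic filtering of the downloaded .bib file (for example, listing all citation keys before importing), a few lines of Python are enough. This is a minimal stdlib sketch, not a substitute for a full BibTeX parser; it assumes well-formed `@type{key,` entry headers, as in the entries shown on this page:

```python
import re

def bib_entry_keys(bibtex_text):
    """Return the citation keys (e.g. 'Zhu2020a') of all entries in a
    BibTeX string. Minimal sketch: matches '@type{key,' headers only."""
    return re.findall(r'@\w+\{([^,\s]+),', bibtex_text)

sample = """
@article{Zhu2020a,
title = {An improved classification model for depression detection},
year = {2020}
}
@article{Zhou2020,
title = {A deficit in using prosodic cues},
year = {2020}
}
"""
print(bib_entry_keys(sample))  # ['Zhu2020a', 'Zhou2020']
```

For anything beyond key extraction (string expansion, cross-references, comment handling), a dedicated BibTeX parser or a reference manager is the safer choice.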

 

9138 entries (page 2 of 92)

2020

Jing Zhu; Zihan Wang; Tao Gong; Shuai Zeng; Xiaowei Li; Bin Hu; Jianxiu Li; Shuting Sun; Lan Zhang

An improved classification model for depression detection using EEG and eye tracking data Journal Article

IEEE Transactions on Nanobioscience, 19 (3), pp. 527–537, 2020.


@article{Zhu2020a,
title = {An improved classification model for depression detection using EEG and eye tracking data},
author = {Jing Zhu and Zihan Wang and Tao Gong and Shuai Zeng and Xiaowei Li and Bin Hu and Jianxiu Li and Shuting Sun and Lan Zhang},
doi = {10.1109/TNB.2020.2990690},
year = {2020},
date = {2020-01-01},
journal = {IEEE Transactions on Nanobioscience},
volume = {19},
number = {3},
pages = {527--537},
abstract = {At present, depression has become a main health burden in the world. However, there are many problems with the diagnosis of depression, such as low patient cooperation, subjective bias and low accuracy. Therefore, reliable and objective evaluation method is needed to achieve effective depression detection. Electroencephalogram (EEG) and eye movements (EMs) data have been widely used for depression detection due to their advantages of easy recording and non-invasion. This research proposes a content based ensemble method (CBEM) to promote the depression detection accuracy, both static and dynamic CBEM were discussed. In the proposed model, EEG or EMs dataset was divided into subsets by the context of the experiments, and then a majority vote strategy was used to determine the subjects' label. The validation of the method is testified on two datasets which included free viewing eye tracking and resting-state EEG, and these two datasets have 36,34 subjects respectively. For these two datasets, CBEM achieves accuracies of 82.5% and 92.65% respectively. The results show that CBEM outperforms traditional classification methods. Our findings provide an effective solution for promoting the accuracy of depression identification, and provide an effective method for identification of depression, which in the future could be used for the auxiliary diagnosis of depression.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jiawen Zhu; Kara Dawson; Albert D Ritzhaupt; Pavlo Pasha Antonenko

Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention Journal Article

Journal of Educational Multimedia and Hypermedia, 29 (3), pp. 265–284, 2020.


@article{Zhu2020,
title = {Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention},
author = {Jiawen Zhu and Kara Dawson and Albert D Ritzhaupt and Pavlo Pasha Antonenko},
year = {2020},
date = {2020-01-01},
journal = {Journal of Educational Multimedia and Hypermedia},
volume = {29},
number = {3},
pages = {265--284},
abstract = {This study investigated the effects of multimedia and modality design principles using a learning intervention about Australia with a sample of college students and employing measures of learning outcomes, visual attention, satisfaction, and mental effort. Seventy-five college students were systematically assigned to one of four conditions: a) text with pictures, b) text without pictures, c) narration with pictures, or d) narration without pictures. No significant differences were found among the four groups in learning performance, satisfaction, or self-reported mental effort, and participants rarely focused their visual attention on the representational pictures provided in the intervention. Neither the multimedia nor the modality principles held true in this study. However, participants in narration environments focused significantly more visual attention on the “Next” button, a navigational aid included on all slides. This study contributes to the research on visual attention and navigational aids in multimedia learning, and it suggests such features may cause distractions, particularly when spoken text is provided without on-screen text. The paper also offers implications for the design of multimedia learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yan Zhou

Psychological analysis of online teaching in colleges based on eye-tracking technology Journal Article

Revista Argentina de Clinica Psicologica, 29 (2), pp. 523–529, 2020.


@article{Zhou2020a,
title = {Psychological analysis of online teaching in colleges based on eye-tracking technology},
author = {Yan Zhou},
doi = {10.24205/03276716.2020.272},
year = {2020},
date = {2020-01-01},
journal = {Revista Argentina de Clinica Psicologica},
volume = {29},
number = {2},
pages = {523--529},
abstract = {Eye-tracking technology has been widely adopted to capture the psychological changes of college students in the learning process. With the aid of eye-tracking technology this paper establishes a psychological analysis model for students in online teaching. Four eye movement parameters were selected for the model, including pupil diameter, fixation time, re-reading time and retrospective time. A total of 100 college students were selected for an eye movement test in online teaching environment. The test data were analyzed on the SPSS software. The results show that the eye movement parameters are greatly affected by the key points in teaching and the contents that interest the students; the two influencing factors can arouse and attract the students' attention in the teaching process. The research results provide an important reference for the psychological study of online teaching in colleges.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Peng Zhou; Weiyi Ma; Likan Zhan

A deficit in using prosodic cues to understand communicative intentions by children with autism spectrum disorders: An eye-tracking study Journal Article

First Language, 40 (1), pp. 41–63, 2020.


@article{Zhou2020,
title = {A deficit in using prosodic cues to understand communicative intentions by children with autism spectrum disorders: An eye-tracking study},
author = {Peng Zhou and Weiyi Ma and Likan Zhan},
doi = {10.1177/0142723719885270},
year = {2020},
date = {2020-01-01},
journal = {First Language},
volume = {40},
number = {1},
pages = {41--63},
abstract = {The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorders (ASD) were able to use prosodic cues to understand others' communicative intentions. Using the visual world eye-tracking paradigm, the study found that unlike typically developing (TD) 4-year-olds, both 4-year-olds with ASD and 5-year-olds with ASD exhibited an eye gaze pattern that reflected their inability to use prosodic cues to infer the intended meaning of the speaker. Their performance was relatively independent of their verbal IQ and mean length of utterance. In addition, the findings also show that there was no development in this ability from 4 years of age to 5 years of age. The findings indicate that Mandarin-speaking preschool children with ASD exhibit a deficit in using prosodic cues to understand the communicative intentions of the speaker, and this ability might be inherently impaired in ASD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Wei Zheng; Yizhen Wang; Xiaolu Wang

The effect of salience on Chinese pun comprehension: A visual world paradigm study Journal Article

Frontiers in Psychology, 11 , pp. 1–12, 2020.


@article{Zheng2020,
title = {The effect of salience on Chinese pun comprehension: A visual world paradigm study},
author = {Wei Zheng and Yizhen Wang and Xiaolu Wang},
doi = {10.3389/fpsyg.2020.00116},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {11},
pages = {1--12},
abstract = {The present study adopted the printed-word visual world paradigm to investigate the salience effect on Chinese pun comprehension. In such an experiment, participants listen to a spoken sentence while looking at a visual display of four printed words (including a semantic competitor, a phonological competitor, and two unrelated distractors). Previous studies based on alphabetic languages have found robust phonological effects (participants fixated more at phonological competitors than distractors during the unfolding of the spoken target words), while controversy remains regarding the existence of a similar semantic effect. A recent Chinese study reported reliable semantic effects in two experiments using this paradigm, suggesting that Chinese participants could actively map the semantic input from the auditory modality with the semantic information retrieved from printed words. In light of their study, we designed an experiment with two conditions: a replication condition to test the validity of using the printed-word world paradigm in Chinese semantic research, and a pun condition to assess the role played by salience during pun comprehension. Indeed, global analyses have revealed robust semantic effects in both experimental conditions, where participants were found more attracted to the semantic competitors than to the distractors with the emergence of target words. More importantly, the local analyses from the pun condition have shown that the participants were more attracted to the semantic competitors related to the salient meaning of the ambiguous word in a pun than to those related to the less salient meanings within 200 ms after target word offset. This finding suggests that the salient meaning of the ambiguous word in a pun is activated and assessed faster than its less salient counterpart. The initial advantage observed in the present study is consistent with the prediction of the graded salience hypothesis rather than the direct access model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chenzhu Zhao

Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments Journal Article

International Journal of Frontiers in Sociology, 2 (7), pp. 1–12, 2020.


@article{Zhao2020a,
title = {Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments},
author = {Chenzhu Zhao},
doi = {10.25236/IJFS.2020.020701},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Frontiers in Sociology},
volume = {2},
number = {7},
pages = {1--12},
abstract = {Online travel agencies (OTAs) depends on marketing clues to reduce the consumer uncertainty perceptions of online travel-related products. The latest booking time (LBT) provided by the consumer has a significant impact on purchasing decisions. This study aims to explore the effect of LBT on consumer visual attention and booking intention along with the moderation effect of online comment valence (OCV). Since eye movement is bound up with the transfer of visual attention, eye-tracking is used to record visual attention of consumer. Our research used a 3 (LBT: near vs. medium vs. far) × 3 (OCV: high vs. medium vs. low) design to conduct the experiments. The main findings showed the following:(1) LBT can obviously increase the visual attention to the whole advertisements and improve the booking intention;(2) OCV moderates the effect of LBT on both visual attention to the whole advertisements and booking intention. Only when OCV are medium and high, LBT can obviously improve attention to the whole advertisements and increase consumers' booking intention. The experiment results show that OTAs can improve the advertising effectiveness by adding LBT label, but LBT have no effect with low-level OCV.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Bin Zhao; Jinfeng Huang; Gaoyan Zhang; Jianwu Dang; Minbo Chen; Yingjian Fu; Longbiao Wang

Brain network reconstruction of speech production based on electro-encephalography and eye movement Journal Article

Acoustical Science and Technology, 41 (1), pp. 349–350, 2020.


@article{Zhao2020,
title = {Brain network reconstruction of speech production based on electro-encephalography and eye movement},
author = {Bin Zhao and Jinfeng Huang and Gaoyan Zhang and Jianwu Dang and Minbo Chen and Yingjian Fu and Longbiao Wang},
doi = {10.1250/ast.41.349},
year = {2020},
date = {2020-01-01},
journal = {Acoustical Science and Technology},
volume = {41},
number = {1},
pages = {349--350},
abstract = {To fully understand the brain mechanism associated with speech functions, it is necessary to unfold the spatiotemporal brain dynamics during the whole speech processing range [1]. However, previous functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies focused on cerebral activation patterns and their regional functions, while lacking information of the time courses [2]. In contrast, electroencephalography (EEG) and magneto- encephalography (MEG) with high temporal resolution are inferior in source localization, and are also easily buried in electromagnetic artifacts from muscular actions in articulation, thus interfering with the analysis. In this study, we introduced a novel multimodal data acquisition system to collect EEG, eye movement, and speech in an oral reading task. The behavior data (eye movement and speech) were used for segmenting cognitive stages. EEG data went through independent component analyses (ICA), component clustering, and time-varying (adaptive) multi-variate autoregressive modeling [3] for estimating the spatiotemporal causal interactions among brain regions in each cognitive and speech process. Statistical analyses and literature review were followed to interpret the brain dynamic results for better understanding the speech functions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Y Zhang; Q Yuan

Effect of the combination of biofeedback and sequential psychotherapy on the cognitive function of trauma patients based on the fusion of set theory model Journal Article

Indian Journal of Pharmaceutical Sciences, 82 , pp. 32–40, 2020.


@article{Zhang2020f,
title = {Effect of the combination of biofeedback and sequential psychotherapy on the cognitive function of trauma patients based on the fusion of set theory model},
author = {Y Zhang and Q Yuan},
doi = {10.36468/pharmaceutical-sciences.spl.78},
year = {2020},
date = {2020-01-01},
journal = {Indian Journal of Pharmaceutical Sciences},
volume = {82},
pages = {32--40},
abstract = {This study intended to take a special group of trauma patients as research subjects to propose a method analysing the effect of combination of biofeedback and sequential psychotherapy based on the fusion of the set theory model on the cognitive function of these patients with trauma. The occurrence and development of post-traumatic stress disorder and the cognitive function is investigated. The set theory model is used in this study to carry out a survey on the effect of the combination of biofeedback and sequential psychotherapy on patients with post-traumatic stress disorder to describe the occurrence, development, change trajectory and time course characteristics of post-traumatic stress disorder. The set theory model was employed to investigate the cognitive development characteristics of these trauma patients. In addition, through the set theory model, psychological behavior mechanism for the occurrence and development of post-traumatic stress disorder is revealed. The study of the combination of biofeedback and sequential psychotherapy is adopted to investigate the effect of the post-traumatic stress disorder on the cognitive function of the trauma patients. The results of this study could be used to provide scientific advice for the placement and psychological assistance of trauma patients in future, to provide a scientific basis for a targeted psychological intervention and overall planning of the intervention, and to provide scientific and objective indicators and methods for the diagnosis and assessment of intervention of traumatic psychology in patients with trauma in the future.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu

Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction Journal Article

British Journal of Educational Technology, pp. 1–13, 2020.


@article{Zhang2020eb,
title = {Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction},
author = {Xinru Zhang and Zhongling Pi and Chenyu Li and Weiping Hu},
doi = {10.1111/bjet.13045},
year = {2020},
date = {2020-01-01},
journal = {British Journal of Educational Technology},
pages = {1--13},
abstract = {Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote the interaction behavior in online groups. Practitioner Notes What is already known about this topic: The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds: Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy: Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to complete a creative task with a peer using online software. The peer was actually a fake participant programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation with those induced to have low intrinsic motivation. Results showed that, compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation can enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, the findings suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction in online groups.

Practitioner Notes

What is already known about this topic
  • The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members.
  • Intrinsic motivation has been shown to foster creativity in face-to-face groups, primarily due to the promotion of individual effort.
  • In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online.

What this paper adds
  • Creative performance in online groups benefits from intrinsic motivation.
  • Intrinsic motivation promotes creativity through an individual's own cognitive effort rather than interaction among members.

Implications for practice and/or policy
  • Improving students' intrinsic motivation is an effective way to promote creativity in online groups.
  • Teachers should take additional steps to encourage students to interact more with each other in online groups.

  • doi:10.1111/bjet.13045

Manman Zhang; Simon P Liversedge; Xuejun Bai; Guoli Yan; Chuanli Zang

The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading Journal Article

Acta Psychologica Sinica, 52 (8), pp. 1–11, 2020.


@article{Zhang2020d,
title = {The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading},
author = {Manman Zhang and Simon P Liversedge and Xuejun Bai and Guoli Yan and Chuanli Zang},
doi = {10.1037/xhp0000644},
year = {2020},
date = {2020-01-01},
journal = {Acta Psychologica Sinica},
volume = {52},
number = {8},
pages = {1--11},
abstract = {Parafoveal pre-processing contributes to highly efficient reading for skilled readers. Research has demonstrated that high-skilled or fast readers extract more parafoveal information from a wider parafoveal region more efficiently compared to less-skilled or slow readers. It is argued that individual differences in parafoveal preview are due to high-skilled or fast readers focusing less of their attention on foveal word processing than less-skilled or slow readers. In other words, foveal processing difficulty might modulate an individual's amount of parafoveal preview (i.e., Foveal Load Hypothesis). However, few studies have provided evidence in support of this claim. Therefore, the present study aimed to explore whether and how foveal lexical processing load modulates parafoveal preview of readers with different reading speeds (a commonly used measurement of reading skill or reading proficiency). By using a three-minute reading comprehension task, 28 groups of fast and slow readers were selected from 300 participants (234 were valid) according to their reading speed in the current study. Participants were then asked to read sentences while their eye movements were recorded using an EyeLink 1000 eye tracker. Each experimental sentence contained a pre-target word that varied in lexical frequency to manipulate foveal processing load (low load: high frequency; high load: low frequency), and a target word manipulated for preview (identical or pseudocharacter) within the boundary paradigm. Global analyses showed that, although fast readers had similar accuracy of reading comprehension to slow readers, they had shorter reading times, longer forward saccades, made fewer fixations and regressions, and had higher reading speeds compared to slow readers, indicating that our selection of fast and slow readers was highly effective.
The pre-target word analyses showed that there was a main effect of word frequency on first-pass reading times, indicating an effective manipulation of foveal load. Additionally, there were significant interactions of Reading Group × Word Frequency, and Reading Group × Word Frequency × Parafoveal Preview for first fixation and single fixation durations, showing that the frequency effects were reliable for fast readers rather than for slow readers with pseudocharacter previews, while the frequency effects were similar for the two groups with identical previews. However, the target word analyses did not show any three-way or two-way interactions for the first-pass reading times as well as for skipping probability. To be specific, the first-pass reading times were shorter at the target word with identical previews in relation to pseudocharacter previews (i.e., preview benefit effects); importantly, similarly sized effects occurred for both fast readers and slow readers. The findings in the present study suggest that lexical information from the currently fixated word can be extracted and used quickly by fast readers, while such information is used later by slow readers. This, however, does not result in more (or less) preview benefit for fast readers in relation to slow readers. In conclusion, foveal lexical processing does not modulate preview benefit for fast and slow readers, and the present results provide no support for the Foveal Load Hypothesis. Our findings of foveal load effects on parafoveal preview for fast and slow readers cannot be readily explained by current computational models (e.g., the E-Z Reader model and the SWIFT model).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/xhp0000644

Li Zhang; Guoli Yan; Li Zhou; Zebo Lan; Valerie Benson

The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm Journal Article

Journal of Autism and Developmental Disorders, 50 , pp. 500–512, 2020.


@article{Zhang2020e,
title = {The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm},
author = {Li Zhang and Guoli Yan and Li Zhou and Zebo Lan and Valerie Benson},
doi = {10.1007/s10803-019-04271-y},
year = {2020},
date = {2020-01-01},
journal = {Journal of Autism and Developmental Disorders},
volume = {50},
pages = {500--512},
publisher = {Springer US},
abstract = {The current study examined eye movement control in autistic (ASD) children. Simple targets were presented either in isolation or synchronously with central, parafoveal, or peripheral distractors. Sixteen children with ASD (47–81 months) and nineteen age- and IQ-matched typically developing children were instructed to look to the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (time taken to initiate eye movements), followed by parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, indicating evidence for potential atypical voluntary attentional control in ASD children.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s10803-019-04271-y

Hui Zhang; Ping Wang; Tinghu Kang

Aesthetic experience of field cognitive style in the appreciation of cursive and running scripts: An eye movement study Journal Article

Art and Design Review, 8 , pp. 215–227, 2020.


@article{Zhang2020c,
title = {Aesthetic experience of field cognitive style in the appreciation of cursive and running scripts: An eye movement study},
author = {Hui Zhang and Ping Wang and Tinghu Kang},
doi = {10.4236/adr.2020.84017},
year = {2020},
date = {2020-01-01},
journal = {Art and Design Review},
volume = {8},
pages = {215--227},
abstract = {This study compares the characteristics of the aesthetic experience of different cognitive styles in calligraphy appreciation. The study used a cursive script and running script as experimental materials and the EyeLink 1000 Plus eye tracker to record eye movements while viewing calligraphy. The results showed that, in the overall analysis, there were differences in the field cognitive style in total fixation counts, saccade amplitude, and saccade counts, and differences in the calligraphic style in total fixation counts and saccade counts. Further local analysis found significant differences in the field cognitive style in mean pupil diameter, fixation counts, and regression-in count, and that there were differences in fixation counts and regression-in count in the calligraphic style, as well as interactions with the area of interest. The results indicate that the field cognitive style is characterized by different aesthetic experiences in calligraphy appreciation and that there are aesthetic preferences in calligraphy style.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.4236/adr.2020.84017

Hanshu Zhang; Joseph W Houpt

Exaggerated prevalence effect with the explicit prevalence information: The description-experience gap in visual search Journal Article

Attention, Perception, and Psychophysics, 82 (7), pp. 3340–3356, 2020.


@article{Zhang2020b,
title = {Exaggerated prevalence effect with the explicit prevalence information: The description-experience gap in visual search},
author = {Hanshu Zhang and Joseph W Houpt},
doi = {10.3758/s13414-020-02045-8},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {82},
number = {7},
pages = {3340--3356},
publisher = {Attention, Perception, & Psychophysics},
abstract = {Despite the increasing focus on target prevalence in visual search research, few papers have thoroughly examined the effect of how target prevalence is communicated. Findings in the judgment and decision-making literature have demonstrated that people behave differently depending on whether probabilistic information is made explicit or learned through experience; hence there is potential for a similar difference when communicating prevalence in visual search. Our current research examined how visual search changes depending on whether the target prevalence information was explicitly given to observers or learned through experience, with additional manipulations of target reward and salience. We found that when target prevalence was low, learning prevalence from experience resulted in more target-present responses and longer search times before quitting compared to when observers were explicitly informed of the target probability. The discrepancy narrowed with increased prevalence and reversed in the high target prevalence condition. Eye-tracking results indicated that search with experience consistently resulted in longer fixation durations, with the largest difference in low-prevalence conditions. Longer search times were primarily due to observers re-visiting more items. Our work underscores the importance of examining how probability communication shapes prevalence effects in future visual search studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-020-02045-8

Han Zhang; Chuyan Qu; Kevin F Miller; Kai S Cortina

Missing the joke: Reduced rereading of garden-path jokes during mind-wandering Journal Article

Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (4), pp. 638–648, 2020.


@article{Zhang2020g,
title = {Missing the joke: Reduced rereading of garden-path jokes during mind-wandering},
author = {Han Zhang and Chuyan Qu and Kevin F Miller and Kai S Cortina},
doi = {10.1037/xlm0000745},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {46},
number = {4},
pages = {638--648},
abstract = {Mind-wandering (i.e., thoughts irrelevant to the current task) occurs frequently during reading. The current study examined whether mind-wandering was associated with reduced rereading when the reader read the so-called garden-path jokes. In a garden-path joke, the reader's initial interpretation is violated by the final punchline, and the violation creates a semantic incongruity that needs to be resolved (e.g., "My girlfriend has read so many negative things about smoking. Therefore, she decided to quit reading."). Rereading text prior to the punchline can help resolve the incongruity. In a main study and a preregistered replication, participants read jokes and nonfunny controls embedded in filler texts and responded to thought probes that assessed intentional and unintentional mind-wandering. Results were consistent across the two studies: When the reader was not mind-wandering, jokes elicited more rereading (from the punchline) than the nonfunny controls did, and had a recall advantage over the nonfunny controls. During mind-wandering, however, the additional eye movement processing and the recall advantage of jokes were generally reduced. These results show that mind-wandering is associated with reduced rereading, which is important for resolving higher level comprehension difficulties. (PsycInfo Database Record (c) 2020 APA, all rights reserved).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/xlm0000745

Bao Zhang; Shuhui Liu; Cenlou Hu; Ziwen Luo; Sai Huang; Jie Sui

Enhanced memory-driven attentional capture in action video game players Journal Article

Computers in Human Behavior, 107 , pp. 1–7, 2020.


@article{Zhang2020a,
title = {Enhanced memory-driven attentional capture in action video game players},
author = {Bao Zhang and Shuhui Liu and Cenlou Hu and Ziwen Luo and Sai Huang and Jie Sui},
doi = {10.1016/j.chb.2020.106271},
year = {2020},
date = {2020-01-01},
journal = {Computers in Human Behavior},
volume = {107},
pages = {1--7},
publisher = {Elsevier Ltd},
abstract = {Action video game players (AVGPs) have been shown to have an enhanced cognitive control ability to reduce stimulus-driven attentional capture (e.g., from an exogenous salient distractor) compared with non-action video game players (NVGPs). Here we examined whether these benefits could extend to the memory-driven attentional capture (i.e., working memory (WM) representations bias visual attention toward a matching distractor). AVGPs and NVGPs were instructed to complete a visual search task while actively maintaining 1, 2 or 4 items in WM. There was a robust advantage to the memory-driven attentional capture in reaction time and first eye movement fixation in the AVGPs compared to the NVGPs when they had to maintain one item in WM. Moreover, the effect of memory-driven attentional capture was maintained in the AVGPs when the WM load was increased, but it was eliminated in the NVGPs. The results suggest that AVGPs may devote more attentional resources to sustaining the cognitive control rather than to suppressing the attentional capture driven by the active WM representations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.chb.2020.106271

Zehui Zhan; Jun Wu; Hu Mei; Qianyi Wu; Patrick S W Fong

Individual difference on reading ability tested by eye-tracking: From perspective of gender Journal Article

Interactive Technology and Smart Education, 17 (3), pp. 267–283, 2020.


@article{Zhan2020,
title = {Individual difference on reading ability tested by eye-tracking: From perspective of gender},
author = {Zehui Zhan and Jun Wu and Hu Mei and Qianyi Wu and Patrick S W Fong},
doi = {10.1108/ITSE-12-2019-0082},
year = {2020},
date = {2020-01-01},
journal = {Interactive Technology and Smart Education},
volume = {17},
number = {3},
pages = {267--283},
abstract = {Purpose: This paper aims to investigate individual differences in digital reading by examining the eye-tracking records of male and female readers with different reading ability (including their pupil size, blink rate, fixation rate, fixation duration, saccade rate, saccade duration, saccade amplitude and regression rate). Design/methodology/approach: A total of 74 participants were selected according to 6,520 undergraduate students' university entrance exam scores and the follow-up reading assessments. Half of them are men and half are women, with the top 3% good readers and the bottom 3% poor readers, from different disciplines. Findings: Results indicated that the major gender differences in reading ability were reflected in saccade duration, regression rate and blink rate. The main effect of reading ability had a larger effect size than the main effect of gender. Among all the indicators examined, blink rate and regression rate were the most sensitive to the gender attribute, while fixation rate and saccade amplitude showed the least sensitivity. Originality/value: This finding could be helpful for user modeling with eye-tracking data in intelligent tutoring systems, where necessary adjustments might be needed according to users' individual differences. In this way, instructors would be able to provide purposeful guidance according to what the learners had seen and personalize the experience of digital reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1108/ITSE-12-2019-0082

David Zeugin; Michael P Notter; Jean François Knebel; Silvio Ionta

Temporo-parietal contribution to the mental representations of self/other face Journal Article

Brain and Cognition, 143 , pp. 1–6, 2020.


@article{Zeugin2020,
title = {Temporo-parietal contribution to the mental representations of self/other face},
author = {David Zeugin and Michael P Notter and Jean Fran{ç}ois Knebel and Silvio Ionta},
doi = {10.1016/j.bandc.2020.105600},
year = {2020},
date = {2020-01-01},
journal = {Brain and Cognition},
volume = {143},
pages = {1--6},
publisher = {Elsevier},
abstract = {Face recognition requires comparing the current visual input with stored mental representations of faces. Based on its role in visual recognition of faces and mental representation of the body, we hypothesized that the right temporo-parietal junction (rTPJ) could also be implicated in processing mental representations of faces. To test this hypothesis, we asked 30 neurotypical participants to perform mental rotation (laterality judgment of rotated pictures) of self- and other-face images, before and after the inhibition of rTPJ through repetitive transcranial magnetic stimulation. After inhibition of rTPJ, the mental rotation of the self-face was slower than that of the other-face. In the control condition the mental rotation of self/other faces was not significantly different. This supports the view that the role of rTPJ extends to the mental representation of faces, specifically for the self. Since the experimental task did not require participants to explicitly recognize identity, we propose that unconscious identity attribution also affects the mental representation of faces. The present study offers insights on the involvement of rTPJ in the mental representation of faces and proposes that the neural substrate dedicated to the mental representation of faces goes beyond the traditional visual and memory areas.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.bandc.2020.105600

Paul Zerr; José Pablo Ossandón; Idris Shareef; Stefan Van der Stigchel; Ramesh Kekunnaya; Brigitte Röder

Successful visually guided eye movements following sight restoration after congenital cataracts Journal Article

Journal of Vision, 20 (7), pp. 1–24, 2020.


@article{Zerr2020,
title = {Successful visually guided eye movements following sight restoration after congenital cataracts},
author = {Paul Zerr and José Pablo Ossandón and Idris Shareef and Stefan {Van der Stigchel} and Ramesh Kekunnaya and Brigitte Röder},
doi = {10.1167/JOV.20.7.3},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {7},
pages = {1--24},
abstract = {Sensitive periods have previously been identified for several human visual system functions. Yet, it is unknown to what degree the development of visually guided oculomotor control depends on early visual experience; for example, whether and to what degree humans whose sight was restored after a transient period of congenital visual deprivation are able to conduct visually guided eye movements. In the present study, we developed new calibration and analysis techniques for eye tracking data contaminated with pervasive nystagmus, which is typical for this population. We investigated visually guided eye movements in sight recovery individuals with long periods of visual pattern deprivation (3-36 years) following birth due to congenital, dense, total, bilateral cataracts. As controls we assessed (1) individuals with nystagmus due to causes other than cataracts, (2) individuals with developmental cataracts after cataract removal, and (3) individuals with normal vision. Congenital cataract reversal individuals were able to perform visually guided gaze shifts, even when their blindness had lasted for decades. The typical extensive nystagmus of this group distorted eye movement trajectories, but measures of latency and accuracy were as expected from their prevailing nystagmus; that is, not worse than in the nystagmus control group. To the best of our knowledge, the present quantitative study is the first to investigate the characteristics of oculomotor control in congenital cataract reversal individuals, and it indicates a remarkable effectiveness of visually guided eye movements despite long-lasting periods of visual deprivation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tao Zeng; Yarong Gao; Xiaoya Li

Priming effects of hierarchical graphics on Chinese ambiguous structures Journal Article

American Journal of Psychology and Cognitive Science, 5 (1), pp. 1–13, 2020.

@article{Zeng2020a,
title = {Priming effects of hierarchical graphics on Chinese ambiguous structures},
author = {Tao Zeng and Yarong Gao and Xiaoya Li},
year = {2020},
date = {2020-01-01},
journal = {American Journal of Psychology and Cognitive Science},
volume = {5},
number = {1},
pages = {1--13},
abstract = {Inspired by the researches concerning structural priming across cognitive domains, this study investigated the priming effects from hierarchical graphics to Chinese structures. Unlike syntactic priming, structural priming refers to the tendency to repeat or process a current sentence better due to its structural similarity to the previously experienced “prime”, which can be abstract structures and even independent of language, as long as the prime and the target share some aspects of abstract structural representation. Since both abstract graphics and specific structures share similar hierarchical structures, this research conducted the priming experiment with eye tracking technique to verify structural priming effects from hierarchical graphics to Chinese ambiguous structures. The study adopted the sentence comprehension task through EyelinkII which covered two variant ambiguous structures: Quantifier + NP1 + De + NP2 and NP1 + Kan/WangZhe + NP2 + AP. There were 24 sets of materials and every set contained three priming hierarchical graphics and a target sentence. The priming conditions were high-attachment prime condition, low-attachment prime condition and baseline prime condition respectively. The target sentences were ambiguous, for example, liangge xuesheng de jiazhang ‘two parents of the students' or ‘two students' parents'. Then, a question followed, for example, xuesheng jiazhang de shuliangshi? ‘What is the number of parents?' The different choice representing different comprehension of the target sentence, for choice A was liangge ‘two' resulting from high-attachment comprehension, while choice B was buqueding ‘uncertain' resulting from low-attachment comprehension. The comprehension task aimed to verify whether the structure of hierarchical graphics affected the tendency of target sentences comprehension. Results showed that there is priming effect from abstract graphics to Chinese ambiguous structures according to behavioral data and eye movement},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hong Zeng; Junjie Shen; Wenming Zheng; Aiguo Song; Jia Liu

Toward measuring target perception: First-order and second-order deep network pipeline for classification of fixation-related potentials Journal Article

Journal of Healthcare Engineering, pp. 1–15, 2020.

@article{Zeng2020,
title = {Toward measuring target perception: First-order and second-order deep network pipeline for classification of fixation-related potentials},
author = {Hong Zeng and Junjie Shen and Wenming Zheng and Aiguo Song and Jia Liu},
doi = {10.1155/2020/8829451},
year = {2020},
date = {2020-01-01},
journal = {Journal of Healthcare Engineering},
pages = {1--15},
abstract = {The top-down determined visual object perception refers to the ability of a person to identify a prespecified visual target. This paper studies the technical foundation for measuring the target-perceptual ability in a guided visual search task, using the EEG-based brain imaging technique. Specifically, it focuses on the feature representation learning problem for single-trial classification of fixation-related potentials (FRPs). The existing methods either capture only first-order statistics while ignoring second-order statistics in data, or directly extract second-order statistics with covariance matrices estimated with raw FRPs that suffer from low signal-to-noise ratio. In this paper, we propose a new representation learning pipeline involving a low-level convolution subnetwork followed by a high-level Riemannian manifold subnetwork, with a novel midlevel pooling layer bridging them. In this way, the discriminative power of the first-order features can be increased by the convolution subnetwork, while the second-order information in the convolutional features could further be deeply learned with the subsequent Riemannian subnetwork. In particular, the temporal ordering of FRPs is well preserved for the components in our pipeline, which is considered to be a valuable source of discriminant information. The experimental results show that proposed approach leads to improved classification performance and robustness to lack of data over the state-of-the-art ones, thus making it appealing for practical applications in measuring the target-perceptual ability of cognitively impaired patients with the FRP technique.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Chuanli Zang; Hong Du; Xuejun Bai; Guoli Yan; Simon P Liversedge

Word skipping in Chinese reading: The role of high-frequency preview and syntactic felicity Journal Article

Journal of Experimental Psychology: Learning Memory and Cognition, 46 (4), pp. 603–620, 2020.

@article{Zang2020b,
title = {Word skipping in Chinese reading: The role of high-frequency preview and syntactic felicity},
author = {Chuanli Zang and Hong Du and Xuejun Bai and Guoli Yan and Simon P Liversedge},
doi = {10.1037/xlm0000738},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Learning Memory and Cognition},
volume = {46},
number = {4},
pages = {603--620},
abstract = {Two experiments are reported to investigate whether Chinese readers skip a high-frequency preview word without taking the syntax of the sentence context into account. In Experiment 1, we manipulated target word syntactic category, frequency, and preview using the boundary paradigm (Rayner, 1975). For high-frequency verb targets, there were identity and pseudocharacter previews alongside a low-frequency noun preview. For low-frequency verb targets, there were identity and pseudocharacter previews alongside a high-frequency noun preview. Results showed that for high-frequency targets, skipping rates were higher for identical previews compared with the syntactically infelicitous alternative low-frequency preview and pseudocharacter previews, however for low-frequency targets, skipping rates were higher for high-frequency previews (even when they were syntactically infelicitous) compared with the other 2 previews. Furthermore, readers were more likely to skip the target when they had a high-frequency, syntactically felicitous preview compared to a high-frequency, syntactically infelicitous preview. The pattern of felicity effects was statistically robust when readers launched saccades from near the target. In Experiment 2, we assessed whether display change awareness influenced the patterns of results in Experiment 1. Results showed that the overall patterns held in Experiment 2 regardless of some readers being more likely to be aware of the display change than others. These results suggest that decisions to skip a word in Chinese reading are primarily based on parafoveal word familiarity, though the syntactic felicity of a parafoveal word also exerts a robust influence for high-frequency previews.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Polina Zamarashkina; Dina V Popovkina; Anitha Pasupathy

Timing of response onset and offset in macaque V4: stimulus and task dependence Journal Article

Journal of Neurophysiology, 123 (6), pp. 2311–2325, 2020.

@article{Zamarashkina2020,
title = {Timing of response onset and offset in macaque V4: stimulus and task dependence},
author = {Polina Zamarashkina and Dina V Popovkina and Anitha Pasupathy},
doi = {10.1152/jn.00586.2019},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neurophysiology},
volume = {123},
number = {6},
pages = {2311--2325},
abstract = {In the primate visual cortex, both the magnitude of the neuronal response and its timing can carry important information about the visual world, but studies typically focus only on response magnitude. Here, we examine the onset and offset latency of the responses of neurons in area V4 of awake, behaving macaques across several experiments in the context of a variety of stimuli and task paradigms. Our results highlight distinct contributions of stimuli and tasks to V4 response latency. We found that response onset latencies are shorter than typically cited (median = 75.5 ms), supporting a role for V4 neurons in rapid object and scene recognition functions. Moreover, onset latencies are longer for smaller stimuli and stimulus outlines, consistent with the hypothesis that longer latencies are associated with higher spatial frequency content. Strikingly, we found that onset latencies showed no significant dependence on stimulus occlusion, unlike in inferotemporal cortex, nor on task demands. Across the V4 population, onset latencies had a broad distribution, reflecting the diversity of feedforward, recurrent, and feedback connections that inform the responses of individual neurons. Response offset latencies, on the other hand, displayed the opposite tendency in their relationship to stimulus and task attributes: they are less influenced by stimulus appearance but are shorter in guided saccade tasks compared with fixation tasks. The observation that response latency is influenced by stimulus- and task-associated factors emphasizes a need to examine response timing alongside firing rate in determining the functional role of area V4. NEW & NOTEWORTHY Onset and offset timing of neuronal responses can provide information about visual environment and neuron's role in visual processing and its anatomical connectivity. In the first comprehensive examination of onset and offset latencies in the intermediate visual cortical area V4, we find neurons respond faster than previously reported, making them ideally suited to contribute to rapid object and scene recognition. While response onset reflects stimulus characteristics, timing of response offset is influenced more by behavioral task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mengxi Yun; Takashi Kawai; Masafumi Nejime; Hiroshi Yamada; Masayuki Matsumoto

Signal dynamics of midbrain dopamine neurons during economic decision-making in monkeys Journal Article

Science Advances, 6 , pp. 1–15, 2020.

@article{Yun2020,
title = {Signal dynamics of midbrain dopamine neurons during economic decision-making in monkeys},
author = {Mengxi Yun and Takashi Kawai and Masafumi Nejime and Hiroshi Yamada and Masayuki Matsumoto},
doi = {10.1126/sciadv.aba4962},
year = {2020},
date = {2020-01-01},
journal = {Science Advances},
volume = {6},
pages = {1--15},
abstract = {When we make economic choices, the brain first evaluates available options and then decides whether to choose them. Midbrain dopamine neurons are known to reinforce economic choices through their signal evoked by outcomes after decisions are made. However, although critical internal processing is executed while decisions are being made, little is known about the role of dopamine neurons during this period. We found that dopamine neurons exhibited dynamically changing signals related to the internal processing while rhesus monkeys were making decisions. These neurons encoded the value of an option immediately after it was offered and then gradually changed their activity to represent the animal's upcoming choice. Similar dynamics were observed in the orbitofrontal cortex, a center for economic decision-making, but the value-to-choice signal transition was completed earlier in dopamine neurons. Our findings suggest that dopamine neurons are a key component of the neural network that makes choices from values during ongoing decision-making processes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Harun Yörük; Lindsay A Santacroce; Benjamin J Tamber-Rosenau

Reevaluating the sensory recruitment model by manipulating crowding in visual working memory representations Journal Article

Psychonomic Bulletin & Review, 27 (6), pp. 1383–1396, 2020.

@article{Yoeruek2020,
title = {Reevaluating the sensory recruitment model by manipulating crowding in visual working memory representations},
author = {Harun Yörük and Lindsay A Santacroce and Benjamin J Tamber-Rosenau},
doi = {10.3758/s13423-020-01757-0},
year = {2020},
date = {2020-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {27},
number = {6},
pages = {1383--1396},
abstract = {The prominent sensory recruitment model argues that visual working memory (WM) is maintained via representations in the same early visual cortex brain regions that initially encode sensory stimuli, either in the identical neural populations as perceptual representations or in distinct neural populations. While recent research seems to reject the former (strong) sensory recruitment model, the latter (flexible) account remains plausible. Moreover, this flexibility could explain a recent result of high theoretical impact (Harrison & Bays, The Journal of Neuroscience, 38 (12), 3116-3123, 2018) – a failure to observe interactions between items held in visual WM – that has been taken to reject the sensory recruitment model. Harrison and Bays (The Journal of Neuroscience, 38 (12), 3116-3123, 2018) tested the sensory recruitment model by comparing the precision of memoranda in radially and tangentially oriented memory arrays. Because perceptual visual crowding effects are greater in radial than tangential arrays, they reasoned that a failure to observe such anisotropy in WM would reject the sensory recruitment model. In the present Registered Report or Replication, we replicated their study with greater sensitivity and extended their task by controlling a potential strategic confound. Specifically, participants might remap memory items to new locations, reducing interactions between proximal memoranda. To combat remapping, we cued participants to report either a memory item or its precise location – with this report cue presented only after a memory maintenance period. Our results suggest that, similar to visual perceptual crowding, location-bound visual memoranda interact with one another when remapping is prevented. Thus, our results support at least a flexible form of the sensory recruitment model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ashley York; Stefanie I Becker

Top-down modulation of gaze capture: Feature similarity, optimal tuning, or tuning to relative features? Journal Article

Journal of Vision, 20 (4), pp. 1–16, 2020.

@article{York2020a,
title = {Top-down modulation of gaze capture: Feature similarity, optimal tuning, or tuning to relative features?},
author = {Ashley York and Stefanie I Becker},
doi = {10.1167/jov.20.4.6},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {4},
pages = {1--16},
abstract = {It is well-known that we can tune attention to specific features (e.g., colors). Originally, it was believed that attention would always be tuned to the exact feature value of the sought-after target (e.g., orange). However, subsequent studies showed that selection is often geared towards target-dissimilar items, which was variably attributed to (1) tuning attention to the relative target feature that distinguishes the target from other items in the surround (e.g., reddest item; relational tuning), (2) tuning attention to a shifted target feature that allows more optimal target selection (e.g., reddish orange; optimal tuning), or (3) broad attentional tuning and selection of the most salient item that is still similar to the target (combined similarity/saliency). The present study used a color search task and assessed gaze capture by differently coloured distractors to distinguish between the three accounts. The results of the first experiment showed that a very target-dissimilar distractor that matched the relative color of the target but was outside of the area of optimal tuning still captured very strongly. As shown by a control condition and a control experiment, bottom-up saliency modulated capture only weakly, ruling out a combined similarity-saliency account. With this, the results support the relational account that attention is tuned to the relative target feature (e.g., reddest), not an optimal feature value or the target feature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ashley A York; David K Sewell; Stefanie I Becker

Dual target search: Attention tuned to relative features, both within and across feature dimensions Journal Article

Journal of Experimental Psychology: Human Perception and Performance, 46 (11), pp. 1368–1386, 2020.

@article{York2020,
title = {Dual target search: Attention tuned to relative features, both within and across feature dimensions},
author = {Ashley A York and David K Sewell and Stefanie I Becker},
doi = {10.1037/xhp0000851},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {46},
number = {11},
pages = {1368--1386},
abstract = {Current models of attention propose that we can tune attention in a top-down controlled manner to a specific feature value (e.g., shape, color) to find specific items (e.g., a red car; feature-specific search). However, subsequent research has shown that attention is often tuned in a context-dependent manner to the relative features that distinguish a sought-after target from other surrounding nontarget items (e.g., larger, bluer, and faster; relational search). Currently, it is unknown whether search will be featurespecific or relational in search for multiple targets with different attributes. In the present study, observers had to search for 2 targets that differed either across 2 stimulus dimensions (color, motion; Experiment 1) or within the same stimulus dimension (color; Experiment 2: orange/redder or aqua/bluer). We distinguished between feature-specific and relational search by measuring eye movements to different types of irrelevant distractors (e.g., relatively matching vs. feature-matching). The results showed that attention was biased to the 2 relative features of the targets, both across different feature dimensions (i.e., motion and color) and within a single dimension (i.e., 2 colors; bluer and redder). The results were not due to automatic intertrial effects (dimension weighting or feature priming), and we found only small effects for valid precueing of the target feature, indicating that relational search for two targets was conducted with relative ease. This is the first demonstration that attention is top-down biased to the relative target features in dual target search, which shows that the relational account generalizes to multiple target search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tehrim Yoon; Afareen Jaleel; Alaa A Ahmed; Reza Shadmehr

Saccade vigor and the subjective economic value of visual stimuli Journal Article

Journal of Neurophysiology, 123 (6), pp. 2161–2172, 2020.

@article{Yoon2020,
title = {Saccade vigor and the subjective economic value of visual stimuli},
author = {Tehrim Yoon and Afareen Jaleel and Alaa A Ahmed and Reza Shadmehr},
doi = {10.1152/jn.00700.2019},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neurophysiology},
volume = {123},
number = {6},
pages = {2161--2172},
abstract = {Decisions are made based on the subjective value that the brain assigns to options. However, subjective value is a mathematical construct that cannot be measured directly, but rather is inferred from choices. Recent results have demonstrated that reaction time, amplitude, and velocity of movements are modulated by reward, raising the possibility that there is a link between how the brain evaluates an option and how it controls movements toward that option. Here, we asked people to choose among risky options represented by abstract stimuli, some associated with gain (points in a game), and others with loss. From their choices we estimated the subjective value that they assigned to each stimulus. In probe trials, a single stimulus appeared at center, instructing subjects to make a saccade to a peripheral target. We found that the reaction time, peak velocity, and amplitude of the peripherally directed saccade varied roughly linearly with the subjective value that the participant had assigned to the central stimulus: reaction time was shorter, velocity was higher, and amplitude was larger for stimuli that the participant valued more. Naturally, participants differed in how much they valued a given stimulus. Remarkably, those who valued a stimulus more, as evidenced by their choices in decision trials, tended to move with shorter reaction time and greater velocity in response to that stimulus in probe trials. Overall, the reaction time of the saccade in response to a stimulus partly predicted the subjective value that the brain assigned to that stimulus.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Seng Bum Michael Yoo; Benjamin Y Hayden

The transition from evaluation to selection involves neural subspace reorganization in core reward regions Journal Article

Neuron, 105 (4), pp. 1–13, 2020.

@article{Yoo2020,
title = {The transition from evaluation to selection involves neural subspace reorganization in core reward regions},
author = {Seng Bum Michael Yoo and Benjamin Y Hayden},
doi = {10.1016/j.neuron.2019.11.013},
year = {2020},
date = {2020-01-01},
journal = {Neuron},
volume = {105},
number = {4},
pages = {1--13},
publisher = {Elsevier Inc.},
abstract = {Economic choice proceeds from evaluation, in which we contemplate options, to selection, in which we weigh options and choose one. These stages must be differentiated so that decision makers do not proceed to selection before evaluation is complete. We examined responses of neurons in two core reward regions, orbitofrontal (OFC) and ventromedial prefrontal cortex (vmPFC), during two-option choice with asynchronous offer presentation. Our data suggest that neurons selective during the first (presumed evaluation) and second (presumed comparison and selection) offer epochs come from a single pool. Stage transition is accompanied by a shift toward orthogonality in the low-dimensional population response manifold. Nonetheless, the relative position of each option in driving responses in the population subspace is preserved. The orthogonalization we observe supports the hypothesis that the transition from evaluation to selection leads to reorganization of response subspace and suggests a mechanism by which value-related signals are prevented from prematurely driving choice.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hörmet Yiltiz; David J Heeger; Michael S Landy

Contingent adaptation in masking and surround suppression Journal Article

Vision Research, 166 , pp. 72–80, 2020.

@article{Yiltiz2020,
title = {Contingent adaptation in masking and surround suppression},
author = {Hörmet Yiltiz and David J Heeger and Michael S Landy},
doi = {10.1016/j.visres.2019.11.004},
year = {2020},
date = {2020-01-01},
journal = {Vision Research},
volume = {166},
pages = {72--80},
publisher = {Elsevier},
abstract = {Adaptation is the process that changes a neuron's response based on recent inputs. In the traditional model, a neuron's state of adaptation depends on the recent input to that neuron alone, whereas in a recently introduced model (Hebbian normalization), adaptation depends on the structure of neural correlated firing. In particular, increased response products between pairs of neurons leads to increased mutual suppression. We test a psychophysical prediction of this model: adaptation should depend on 2nd-order statistics of input stimuli. That is, if two stimuli excite two distinct sub-populations of neurons, then presenting those stimuli simultaneously during adaptation should strengthen mutual suppression between those subpopulations. We confirm this prediction in two experiments. In the first, pairing two gratings synchronously during adaptation (i.e., a plaid) rather than asynchronously (interleaving the two gratings in time) leads to increased effectiveness of one pattern for masking the other. In the second, pairing the gratings in a center-surround configuration results in reduced apparent contrast for the central grating when paired with the same surround (as compared with a condition in which the central grating appears with a different surround at test than during adaptation). These results are consistent with the prediction that an increase in response covariance leads to greater mutual suppression between neurons. This effect is detectable both at threshold (masking) and well above threshold (apparent contrast).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Wei Yang; Xinyu Fu

Effects of English capitals on reading performance of Chinese learners: Evidence from eye tracking Journal Article

International Journal of Asian Language Processing, 30 (1), pp. 1–14, 2020.

@article{Yang2020b,
title = {Effects of English capitals on reading performance of Chinese learners: Evidence from eye tracking},
author = {Wei Yang and Xinyu Fu},
doi = {10.1142/s2717554520500046},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Asian Language Processing},
volume = {30},
number = {1},
pages = {1--14},
abstract = {Native English speakers need more time to recognize capital letters in reading, yet the influence of capitals upon Chinese learners' reading performance is seldom studied. We conducted an eye tracker experiment to explore the cognitive features of Chinese learners in reading texts containing capital letters. The effect of English proficiency on capital letter reading is also studied. The results showed that capitals significantly increase the cognitive load in Chinese learners' reading process, complicate their cognitive processing, and lower their reading efficiency. The perception of capital letters of Chinese learners is found to be an isolated event and may influence the word-superiority effect. English majors, who possess relatively stronger English logical thinking capability than non-English majors, face the same difficulty as the non-English majors do if no practice of capital letter reading has been done.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yang Xie-lan; He Wen-guang

Argument ambiguities make subject relative clause more difficult to process than object relative clause in Mandarin Journal Article

Journal of Literature and Art Studies, 10 (2), pp. 102–113, 2020.

@article{YangXie2020,
title = {Argument ambiguities make subject relative clause more difficult to process than object relative clause in Mandarin},
author = {Yang Xie-lan and He Wen-guang},
doi = {10.17265/2159-5836/2020.02.003},
year = {2020},
date = {2020-01-01},
journal = {Journal of Literature and Art Studies},
volume = {10},
number = {2},
pages = {102--113},
abstract = {Relative clause (RC) processing has been a topic of debate for decades. Studies of head-initial languages have found that subject relative clauses (SRCs) are easier to comprehend than object relative clauses (ORCs), and many different models have been constructed to account for the SRC preference. Chinese is a head-final language in which the head noun phrase follows the clause-internal material. This marked difference in syntactic structure makes Chinese an ideal test case for the models mentioned above. In this paper, two experiments using eye-movement tracking explored the difficulty of RC processing in Mandarin. Results showed that when the two noun phrases were both animate, SRCs were more difficult than ORCs, but when the clause-internal noun phrase was inanimate and the matrix noun phrase was animate, the difficulty with SRCs was greatly reduced. Based on these findings, we hold that the linear syntactic distance between the gap and the filler is not the key source of SRC difficulty; rather, ambiguity in the argument construction may be most important.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Shaorong Yan; Florian T Jaeger

Expectation adaptation during natural reading Journal Article

Language, Cognition and Neuroscience, 35 (10), pp. 1394–1422, 2020.

@article{Yan2020b,
title = {Expectation adaptation during natural reading},
author = {Shaorong Yan and Florian T Jaeger},
doi = {10.1080/23273798.2020.1784447},
year = {2020},
date = {2020-01-01},
journal = {Language, Cognition and Neuroscience},
volume = {35},
number = {10},
pages = {1394--1422},
publisher = {Taylor & Francis},
abstract = {Implicit expectations play a central role in sentence processing. These expectations are often assumed to be static or change only at relatively slow time scales. Some theoretical proposals, however, hold that comprehenders continuously adapt their expectations based on recent input. Existing evidence has relied heavily on self-paced reading, which requires familiarisation with a novel task. We instead employ eye-tracking reading to investigate the role of expectation adaptation during speeds and task demands more closely resembling natural reading. In two experiments, subjects read sentences that contained higher than expected proportions of a previously highly unexpected structure (reduced relative clauses). We test how this change in the statistics of structures within the experiment affects reading: if subjects adapt their expectations, reading times for the unexpected structure should decrease over the course of the experiment. This prediction is confirmed in both experiments. Significant effects of the changing statistics are observed for regression-related measures but not first-pass reading measures. We discuss possible accounts of this pattern in the eye-movement record.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ming Yan; Hong Li; Yongqiang Su; Yuqing Cao; Jinger Pan

The perceptual span and individual differences among Chinese children Journal Article

Scientific Studies of Reading, 24 (6), pp. 520–530, 2020.

@article{Yan2020a,
title = {The perceptual span and individual differences among Chinese children},
author = {Ming Yan and Hong Li and Yongqiang Su and Yuqing Cao and Jinger Pan},
doi = {10.1080/10888438.2020.1713789},
year = {2020},
date = {2020-01-01},
journal = {Scientific Studies of Reading},
volume = {24},
number = {6},
pages = {520--530},
publisher = {Routledge},
abstract = {In the present study, we explored the perceptual span of typically developing Chinese children in Grade 3 (G3) during their reading of age-appropriate sentences, utilizing the gaze contingent moving window paradigm. Overall, these Chinese children had a smaller perceptual span than adults, covering only one character leftward and two characters rightward of the currently fixated one. In addition, individual differences in reading ability (i.e., number of characters correctly read aloud per minute) influenced the size of the perceptual span. Fluent readers' reading and eye-movement parameters benefited from previewing the third upcoming characters, whereas non-fluent readers reached their asymptotic performances in a smaller window revealing rightwards by only two characters. These results suggest that the perceptual span is modulated dynamically by reading ability. Non-fluent readers need to focus their attention on foveal words, leading to narrowed perceptual span and reduced parafoveal processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Guoli Yan; Zebo Lan; Zhu Meng; Yingchao Wang; Valerie Benson

Phonological coding during sentence reading in Chinese deaf readers: An eye-tracking study Journal Article

Scientific Studies of Reading, pp. 1–17, 2020.

@article{Yan2020,
title = {Phonological coding during sentence reading in Chinese deaf readers: An eye-tracking study},
author = {Guoli Yan and Zebo Lan and Zhu Meng and Yingchao Wang and Valerie Benson},
doi = {10.1080/10888438.2020.1778000},
year = {2020},
date = {2020-01-01},
journal = {Scientific Studies of Reading},
pages = {1--17},
publisher = {Routledge},
abstract = {Phonological coding plays an important role in reading for hearing students. Experimental findings regarding phonological coding in deaf readers are controversial, and whether deaf readers are able to use phonological coding remains unclear. In the current study we examined whether Chinese deaf students could use phonological coding during sentence reading. Deaf middle school students, chronological age-matched hearing students, and reading ability-matched hearing students had their eye movements recorded as they read sentences containing correctly spelled characters, homophones, or unrelated characters. Both hearing groups had shorter total reading times on homophones than they did on unrelated characters. In contrast, no significant difference was found between homophones and unrelated characters for the deaf students. However, when the deaf group was divided into more-skilled and less-skilled readers according to their scores on reading fluency, the homophone advantage noted for the hearing controls was also observed for the more-skilled deaf students.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Shimpei Yamagishi; Shigeto Furukawa

Factors influencing saccadic reaction time: Effect of task modality, stimulus saliency, spatial congruency of stimuli, and pupil size Journal Article

Frontiers in Human Neuroscience, 14 , pp. 1–11, 2020.

@article{Yamagishi2020,
title = {Factors influencing saccadic reaction time: Effect of task modality, stimulus saliency, spatial congruency of stimuli, and pupil size},
author = {Shimpei Yamagishi and Shigeto Furukawa},
doi = {10.3389/fnhum.2020.571893},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {14},
pages = {1--11},
abstract = {It is often assumed that the reaction time of a saccade toward visual and/or auditory stimuli reflects the sensitivities of our oculomotor-orienting system to stimulus saliency. Endogenous factors, as well as stimulus-related factors, would also affect the saccadic reaction time (SRT). However, it was not clear how these factors interact and to what extent visual and auditory-targeting saccades are accounted for by common mechanisms. The present study examined the effect of, and the interaction between, stimulus saliency and audiovisual spatial congruency on the SRT for visual- and for auditory-target conditions. We also analyzed pre-target pupil size to examine the relationship between saccade preparation and pupil size. Pupil size is considered to reflect arousal states coupling with locus-coeruleus (LC) activity during a cognitive task. The main findings were that (1) the pattern of the examined effects on the SRT varied between visual- and auditory-target conditions, (2) the effect of stimulus saliency was significant for the visual-target condition, but not significant for the auditory-target condition, (3) pupil velocity, not absolute pupil size, was sensitive to task set (i.e., visual-targeting saccade vs. auditory-targeting saccade), and (4) there was a significant correlation between the pre-saccade absolute pupil size and the SRTs for the visual-target condition but not for the auditory-target condition. The discrepancy between target modalities for the effect of pupil velocity and between the absolute pupil size and pupil velocity for the correlation with SRT may imply that the pupil effect for the visual-target condition was caused by a modality-specific link between pupil size modulation and the superior colliculus (SC) rather than by the LC-NE (locus coeruleus-norepinephrine) system. These results support the idea that different threshold mechanisms in the SC may be involved in the initiation of saccades toward visual and auditory targets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Shuwei Xue; Arthur M Jacobs; Jana Lüdtke

What is the difference? Rereading Shakespeare's sonnets — An eye tracking study Journal Article

Frontiers in Psychology, 11 , pp. 1–14, 2020.

@article{Xue2020a,
title = {What is the difference? Rereading Shakespeare's sonnets — An eye tracking study},
author = {Shuwei Xue and Arthur M Jacobs and Jana Lüdtke},
doi = {10.3389/fpsyg.2020.00421},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {11},
pages = {1--14},
abstract = {Texts are often reread in everyday life, but most studies of rereading have been based on expository texts, not on literary ones such as poems, though literary texts may be reread more often than others. To correct this bias, the present study is based on two of Shakespeare's sonnets. Eye movements were recorded, as participants read a sonnet then read it again after a few minutes. After each reading, comprehension and appreciation were measured with the help of a questionnaire. In general, compared to the first reading, rereading improved the fluency of reading (shorter total reading times, shorter regression times, and lower fixation probability) and the depth of comprehension. Contrary to the other rereading studies using literary texts, no increase in appreciation was apparent. Moreover, results from a predictive modeling analysis showed that readers' eye movements were determined by the same critical psycholinguistic features throughout the two sessions. Apparently, even in the case of poetry, the eye movement control in reading is determined mainly by surface features of the text, unaffected by repetition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Cheng Xue; Antonino Calapai; Julius Krumbiegel; Stefan Treue

Sustained spatial attention accounts for the direction bias of human microsaccades Journal Article

Scientific Reports, 10 , pp. 1–10, 2020.


@article{Xue2020,
title = {Sustained spatial attention accounts for the direction bias of human microsaccades},
author = {Cheng Xue and Antonino Calapai and Julius Krumbiegel and Stefan Treue},
doi = {10.1038/s41598-020-77455-7},
year = {2020},
date = {2020-01-01},
journal = {Scientific Reports},
volume = {10},
pages = {1--10},
publisher = {Nature Publishing Group UK},
abstract = {Small ballistic eye movements, so-called microsaccades, occur even while foveating an object. Previous studies using covert attention tasks have shown that shortly after a symbolic spatial cue, specifying a behaviorally relevant location, microsaccades tend to be directed toward the cued location. This suggests that microsaccades can serve as an index for the covert orientation of spatial attention. However, this hypothesis faces two major challenges: First, effects associated with visual spatial attention are hard to distinguish from those associated with the contemplation of foveating a peripheral stimulus. Second, it is less clear whether endogenously sustained attention alone can bias microsaccade directions without a spatial cue on each trial. To address the first issue, we investigated the direction of microsaccades in human subjects while they attended to a behaviorally relevant location and prepared a response eye movement either toward or away from this location. We find that directions of microsaccades are biased toward the attended location rather than toward the saccade target. To tackle the second issue, we verbally indicated the location to attend before the start of each block of trials, to exclude potential visual cue-specific effects on microsaccades. Our results indicate that sustained spatial attention alone reliably produces the microsaccade direction effect. Overall, our findings demonstrate that sustained spatial attention alone, even in the absence of saccade planning or a spatial cue, is sufficient to explain the direction bias observed in microsaccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Hongge Xu; Jing Samantha Pan; Xiaoye Michael Wang; Geoffrey P Bingham

Information for perceiving blurry events: Optic flow and color are additive Journal Article

Attention, Perception, and Psychophysics, pp. 1–10, 2020.


@article{Xu2020,
title = {Information for perceiving blurry events: Optic flow and color are additive},
author = {Hongge Xu and Jing Samantha Pan and Xiaoye Michael Wang and Geoffrey P Bingham},
doi = {10.3758/s13414-020-02135-7},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, and Psychophysics},
pages = {1--10},
publisher = {Springer},
abstract = {Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static blurry grayscale displays have been shown to be recognized only when and after presented with optic flow. In this study, we investigate the effects of optic flow and color on identifying blurry events by studying the identification accuracy and eye-movement patterns. Three types of color displays were tested: grayscale, original colors, or rearranged colors (where the RGB values of the original colors were adjusted). In each color condition, participants identified 12 blurry events in five experimental phases. In the first two phases, static blurry images were presented alone or sequentially with a motion mask between consecutive frames, and identification was poor. In Phase 3, where optic flow was added, identification was comparably good. In Phases 4 and 5, motion was removed, but identification remained good. Thus, optic flow improved event identification during and after its presentation. Color also improved performance, where participants were consistently better at identifying color displays than grayscale or rearranged color displays. Importantly, the effects of optic flow and color were additive. Finally, in both motion and postmotion phases, a significant portion of eye fixations fell in strong optic flow areas, suggesting that participants continued to look where flow was available even after it stopped. We infer that optic flow specified depth structure in the blurry image structure and yielded an improvement in identification from static blurry images.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jianping Xiong; Xiaokang Jin; Weili Li

The influence of situational regulation on the information processing of promotional and preventive self-regulatory individuals: Evidence from eye movements Journal Article

Frontiers in Psychology, 11 , pp. 1–11, 2020.


@article{Xiong2020,
title = {The influence of situational regulation on the information processing of promotional and preventive self-regulatory individuals: Evidence from eye movements},
author = {Jianping Xiong and Xiaokang Jin and Weili Li},
doi = {10.3389/fpsyg.2020.531147},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {11},
pages = {1--11},
abstract = {Regulatory focus theory uses two different motivation focus systems—promotional and preventive—to describe how individuals approach positive goals and avoid negative goals. Moreover, the regulatory focus can manifest as chronic personality characteristics and can be situationally induced by tasks or the environment. The current study employed eye-tracking methodology to investigate how individuals who differ in their chronic regulatory focus (promotional vs. preventive) process information (Experiment 1) and whether an induced experimental situation could modulate features of their information processing (Experiment 2). Both experiments used a 3 × 3 grid information-processing task, containing eight information cells and a fixation cell; half the information cells were characterized by attribute-based information, and the other half by alternative-based information. We asked the subjects to view the grid based on their personal preferences and choose one of the virtual products presented in this grid to "purchase" by the end of each trial. Results of Experiment 1 show that promotional individuals do not exhibit a clear preference between the two types of information, whereas preventive individuals tend to fixate longer on the alternative-based information. In Experiment 2, we induced the situational regulatory focus via experimental tasks before the information-processing task. The results demonstrate that the behavioral motivation is significantly enhanced, thereby increasing the depth of the preferred mode of information processing, when the chronic regulatory focus matches the situational focus. In contrast, individuals process information more thoroughly, using both processing modes, in the non-fit condition, i.e., when the focuses do not match.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xin Yu Xie; Xing Nan Zhao; Cong Yu

Perceptual learning of motion direction discrimination: Location specificity and the uncertain roles of dorsal and ventral areas Journal Article

Vision Research, 175 , pp. 51–57, 2020.


@article{Xie2020d,
title = {Perceptual learning of motion direction discrimination: Location specificity and the uncertain roles of dorsal and ventral areas},
author = {Xin Yu Xie and Xing Nan Zhao and Cong Yu},
doi = {10.1016/j.visres.2020.06.003},
year = {2020},
date = {2020-01-01},
journal = {Vision Research},
volume = {175},
pages = {51--57},
publisher = {Elsevier},
abstract = {One interesting observation of perceptual learning is the asymmetric transfer between stimuli at different external noise levels: learning at zero/low noise can transfer significantly to the same stimulus at high noise, but not vice versa. The mechanisms underlying this asymmetric transfer have been investigated by psychophysical, neurophysiological, brain imaging, and computational modeling studies. One study (PNAS 113 (2016) 5724–5729) reported that rTMS stimulations of dorsal and ventral areas impair motion direction discrimination of moving dot stimuli at 40% coherent (“noisy”) and 100% coherent (zero-noise) levels, respectively. However, after direction training at 100% coherence, only rTMS stimulation of the ventral cortex is effective, disturbing direction discrimination at both coherence levels. These results were interpreted as learning-induced changes of functional specializations of visual areas. We have concerns with the behavioral data of this study. First, contrary to the report of highly location-specific motion direction learning, our replicating experiment showed substantial learning transfer (e.g., transfer/learning ratio = 81.9% vs. 14.8% at 100% coherence). Second and more importantly, we found complete transfer of direction learning from 40% to 100% coherence, a critical baseline that is missing in this study. The transfer effect suggests that similar brain mechanisms underlie motion direction processing at two coherence levels. Therefore, this study's conclusions regarding the roles of dorsal and ventral areas in motion direction processing at two coherence levels, as well as the effects of perceptual learning, are not supported by proper experimental evidence. It remains unexplained why distinct impacts of dorsal and ventral rTMS stimulations on motion direction discrimination were observed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xin Yu Xie; Cong Yu

A new format of perceptual learning based on evidence abstraction from multiple stimuli Journal Article

Journal of Vision, 20 (2), pp. 1–9, 2020.


@article{Xie2020c,
title = {A new format of perceptual learning based on evidence abstraction from multiple stimuli},
author = {Xin Yu Xie and Cong Yu},
doi = {10.1167/jov.20.2.5},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {2},
pages = {1--9},
abstract = {Perceptual learning, which improves stimulus discrimination, typically results from training with a single stimulus condition. Two major learning mechanisms, early cortical neural plasticity and response reweighting, have been proposed. Here we report a new format of perceptual learning that by design may have bypassed these mechanisms. Instead, it is more likely based on abstracted stimulus evidence from multiple stimulus conditions. Specifically, we had observers practice orientation discrimination with Gabors or symmetric dot patterns at up to 47 random or rotating location × orientation conditions. Although each condition received sparse trials (12 trials/session), the practice produced significant orientation learning. Learning also transferred to a Gabor at a single untrained condition with two- to three-times lower orientation thresholds. Moreover, practicing a single stimulus condition with matched trial frequency (12 trials/session) failed to produce significant learning. These results suggest that learning with multiple stimulus conditions may not come from early cortical plasticity or response reweighting with each particular condition. Rather, it may materialize through a new format of perceptual learning, in which orientation evidence invariant to particular orientations and locations is first abstracted from multiple stimulus conditions and then reweighted by later learning mechanisms. The coarse-to-fine transfer of orientation learning from multiple Gabors or symmetric dot patterns to a single Gabor also suggests the involvement of orientation concept learning by the learning mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xin Yu Xie; Lei Liu; Cong Yu

A new perceptual training strategy to improve vision impaired by central vision loss Journal Article

Vision Research, 174 , pp. 69–76, 2020.


@article{Xie2020b,
title = {A new perceptual training strategy to improve vision impaired by central vision loss},
author = {Xin Yu Xie and Lei Liu and Cong Yu},
doi = {10.1016/j.visres.2020.05.010},
year = {2020},
date = {2020-01-01},
journal = {Vision Research},
volume = {174},
pages = {69--76},
abstract = {Patients with central vision loss depend on peripheral vision for everyday functions. A preferred retinal locus (PRL) on the intact retina is commonly trained as a new “fovea” to help. However, reprogramming the fovea-centered oculomotor control is difficult, so saccades often bring the defunct fovea to block the target. Aligning the PRL with distant targets also requires multiple saccades and sometimes head movements. To overcome these problems, we attempted to train normal-sighted observers to form a preferred retinal annulus (PRA) around a simulated scotoma, so that they could rely on the same fovea-centered oculomotor system and make short saccades to align the PRA with the target. Observers with an invisible simulated central scotoma (5° radius) practiced making saccades to see a tumbling-E target at 10° eccentricity. The otherwise blurred E target became clear when saccades brought a scotoma-abutting clear window (2° radius) to it. The location of the clear window was either fixed for PRL training, or changing among 12 locations for PRA training. Various cues aided the saccades through training. Practice quickly established a PRL or PRA. Compared with PRL-trained observers, whose first saccade persistently blocked the target with the scotoma, PRA-trained observers produced more accurate first saccades. The benefits of more accurate PRA-based saccades also outweighed the costs of longer latency. PRA training may provide an efficient strategy to cope with central vision loss, especially for aging patients who have major difficulties adapting to a PRL.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Fang Xie; Jingxin Wang; Lisha Hao; Xue Zhang; Kayleigh L Warrington

Perceptual Span Is Independent of Font Size for Older and Young Readers: Evidence From Chinese Journal Article

Psychology and Aging, 2020.


@article{Xie2020a,
title = {Perceptual Span Is Independent of Font Size for Older and Young Readers: Evidence From Chinese},
author = {Fang Xie and Jingxin Wang and Lisha Hao and Xue Zhang and Kayleigh L Warrington},
doi = {10.1037/pag0000549},
year = {2020},
date = {2020-01-01},
journal = {Psychology and Aging},
abstract = {Research suggests that visual acuity plays a more important role in parafoveal processing in Chinese reading than in spaced alphabetic languages, such that in Chinese, as the font size increases, the size of the perceptual span decreases. The lack of spaces and the complexity of written Chinese may make characters in eccentric positions particularly hard to process. Older adults generally have poorer visual capabilities than young adults, particularly in parafoveal vision, and so may find large characters in the parafovea particularly hard to process compared with smaller characters because of their greater eccentricity. Therefore, the effect of font size on the perceptual span may be larger for older readers. Crucially, this possibility has not previously been investigated; however, this may represent a unique source of age-related reading difficulty in logographic languages. Accordingly, to explore the relationship between font size and parafoveal processing for both older and young adult readers, we manipulated font size and the amount of parafoveal information available with different masking stimuli in 2 silent-reading experiments. The results show that decreasing the font size disrupted reading behavior more for older readers, such that reading times were longer for smaller characters, but crucially, the influence of font size on the perceptual span was absent for both age groups. These findings provide new insight into age-related reading difficulty in Chinese by revealing that older adults can successfully process substantial parafoveal information across a range of font sizes. This indicates that older adults' parafoveal processing may be more robust than previously considered.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Fang Xie; Victoria A McGowan; Min Chang; Lin Li; Sarah J White; Kevin B Paterson; Jingxin Wang; Kayleigh L Warrington

Revealing similarities in the perceptual span of young and older Chinese readers Journal Article

Quarterly Journal of Experimental Psychology, 73 (8), pp. 1189–1205, 2020.


@article{Xie2020,
title = {Revealing similarities in the perceptual span of young and older Chinese readers},
author = {Fang Xie and Victoria A McGowan and Min Chang and Lin Li and Sarah J White and Kevin B Paterson and Jingxin Wang and Kayleigh L Warrington},
doi = {10.1177/1747021819899826},
year = {2020},
date = {2020-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {73},
number = {8},
pages = {1189--1205},
abstract = {Older readers (aged 65+ years) of both alphabetic languages and character-based languages like Chinese read more slowly than their younger counterparts (aged 18–30 years). A possible explanation for this slowdown is that, due to age-related visual and cognitive declines, older readers have a smaller perceptual span and so acquire less information on each fixational pause. However, although aging effects on the perceptual span have been investigated for alphabetic languages, no such studies have been reported to date for character-based languages like Chinese. Accordingly, we investigated this issue in three experiments that used different gaze-contingent moving window paradigms to assess the perceptual span of young and older Chinese readers. In these experiments, text was shown either entirely as normal or normal only within a narrow region (window) comprising either the fixated word alone, the fixated word and one word to its left, or the fixated word and either one or two words to its right. Characters outside these windows were replaced using a pattern mask (Experiment 1) or a visually similar character (Experiment 2), or blurred to render them unidentifiable (Experiment 3). Sentence reading times were overall longer for the older compared with the younger adults and differed systematically across display conditions. Crucially, however, the effects of display condition were essentially the same across the two age groups, indicating that the perceptual span for Chinese does not differ substantially for the older and young adults. We discuss these findings in relation to other evidence suggesting the perceptual span is preserved in older adulthood.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
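Every record in this library is a self-contained `@article` entry with one `field = {value},` pair per line, so the downloadable .bib file can be read without external dependencies. A minimal sketch in Python (the regexes are simplifications that assume this library's one-field-per-line, brace-delimited style, not general BibTeX):

```python
import re

def parse_bibtex_entry(entry: str) -> dict:
    """Parse a single-entry BibTeX string into a dict of fields.

    Assumes the one-field-per-line, brace-delimited style used in
    this library's exported .bib file; not a general BibTeX parser.
    """
    fields = {}
    # Entry type and citation key, e.g. "@article{Xia2020a,"
    head = re.match(r"@(\w+)\{([^,]+),", entry.strip())
    fields["ENTRYTYPE"] = head.group(1)
    fields["ID"] = head.group(2)
    # One "name = {value}," pair per line; "." does not cross newlines
    for name, value in re.findall(r"(\w+)\s*=\s*\{(.*)\}", entry):
        fields[name] = value
    return fields

record = parse_bibtex_entry("""@article{Xia2020a,
title = {Visual crowding in driving},
author = {Ye Xia and Mauro Manassi and Ken Nakayama and Karl Zipser and David Whitney},
doi = {10.1167/jov.20.6.1},
year = {2020},
journal = {Journal of Vision},
volume = {20},
number = {6},
pages = {1--17}
}""")
print(record["ID"], record["journal"], record["year"])
# → Xia2020a Journal of Vision 2020
```

For real-world use, a dedicated package such as bibtexparser handles nested braces, multi-line fields, and string macros that this sketch ignores.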

Ye Xia; Mauro Manassi; Ken Nakayama; Karl Zipser; David Whitney

Visual crowding in driving Journal Article

Journal of Vision, 20 (6), pp. 1–17, 2020.

@article{Xia2020a,
title = {Visual crowding in driving},
author = {Ye Xia and Mauro Manassi and Ken Nakayama and Karl Zipser and David Whitney},
doi = {10.1167/jov.20.6.1},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {6},
pages = {1--17},
abstract = {Visual crowding-the deleterious influence of nearby objects on object recognition-is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation where crowding may play a critical role. In order to investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that the saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). In order to further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yanfang Xia; Filip Melinscak; Dominik R Bach

Saccadic scanpath length: an index for human threat conditioning Journal Article

Behavior Research Methods, pp. 1–14, 2020.

@article{Xia2020,
title = {Saccadic scanpath length: an index for human threat conditioning},
author = {Yanfang Xia and Filip Melinscak and Dominik R Bach},
doi = {10.3758/s13428-020-01490-5},
year = {2020},
date = {2020-01-01},
journal = {Behavior Research Methods},
pages = {1--14},
publisher = {Springer},
abstract = {Threat-conditioned cues are thought to capture overt attention in a bottom-up process. Quantification of this phenomenon typically relies on cue competition paradigms. Here, we sought to exploit gaze patterns during exclusive presentation of a visual conditioned stimulus, in order to quantify human threat conditioning. To this end, we capitalized on a summary statistic of visual search during CS presentation, scanpath length. During a simple delayed threat conditioning paradigm with full-screen monochrome conditioned stimuli (CS), we observed shorter scanpath length during CS+ compared to CS- presentation. Retrodictive validity, i.e., effect size to distinguish CS+ and CS-, was maximized by considering a 2-s time window before US onset. Taking into account the shape of the scan speed response resulted in similar retrodictive validity. The mechanism underlying shorter scanpath length appeared to be longer fixation duration and more fixation on the screen center during CS+ relative to CS- presentation. These findings were replicated in a second experiment with similar setup, and further confirmed in a third experiment using full-screen patterns as CS. This experiment included an extinction session during which scanpath differences appeared to extinguish. In a fourth experiment with auditory CS and instruction to fixate screen center, no scanpath length differences were observed. In conclusion, our study suggests scanpath length as a visual search summary statistic, which may be used as complementary measure to quantify threat conditioning with retrodictive validity similar to that of skin conductance responses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jordana S Wynn; Jennifer D Ryan; Morris Moscovitch

Effects of prior knowledge on active vision and memory in younger and older adults Journal Article

Journal of Experimental Psychology: General, 149 (3), pp. 518–529, 2020.

@article{Wynn2020b,
title = {Effects of prior knowledge on active vision and memory in younger and older adults},
author = {Jordana S Wynn and Jennifer D Ryan and Morris Moscovitch},
doi = {10.1037/xge0000657},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {149},
number = {3},
pages = {518--529},
abstract = {In our daily lives we rely on prior knowledge to make predictions about the world around us such as where to search for and locate common objects. Yet, equally important in visual search is the ability to inhibit such processes when those predictions fail. Mounting evidence suggests that relative to younger adults, older adults have difficulty retrieving episodic memories and inhibiting prior knowledge, even when that knowledge is detrimental to the task at hand. However, the consequences of these age-related changes for visual search remain unclear. In the present study, we used eye movement monitoring to investigate whether overreliance on prior knowledge alters the gaze patterns and performance of older adults during visual search. Younger and older adults searched for target objects in congruent or incongruent locations in real-world scenes. As predicted, targets in congruent locations were detected faster than targets in incongruent locations, and this effect was enhanced in older adults. Analysis of viewing behavior revealed that prior knowledge effects emerged early in search, as evidenced by initial saccades, and continued throughout search, with greater viewing of congruent regions by older relative to younger adults, suggesting that schema biasing of online processing increases with age. Finally, both younger and older adults showed enhanced memory for the location of congruent targets and the identity of incongruent targets, with schema-guided viewing during search predicting poor memory for schema-incongruent targets in younger adults on both tasks. Our results provide novel evidence that older adults' overreliance on prior knowledge has consequences for both active vision and memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jordana S Wynn; Jennifer D Ryan; Bradley R Buchsbaum

Eye movements support behavioral pattern completion Journal Article

Proceedings of the National Academy of Sciences, 117 (11), pp. 6246–6254, 2020.

@article{Wynn2020a,
title = {Eye movements support behavioral pattern completion},
author = {Jordana S Wynn and Jennifer D Ryan and Bradley R Buchsbaum},
doi = {10.1073/pnas.1917586117},
year = {2020},
date = {2020-01-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {117},
number = {11},
pages = {6246--6254},
abstract = {The ability to recall a detailed event from a simple reminder is supported by pattern completion, a cognitive operation performed by the hippocampus wherein existing mnemonic representations are retrieved from incomplete input. In behavioral studies, pattern completion is often inferred through the false endorsement of lure (i.e., similar) items as old. However, evidence that such a response is due to the specific retrieval of a similar, previously encoded item is severely lacking. We used eye movement (EM) monitoring during a partial-cue recognition memory task to index reinstatement of lure images behaviorally via the recapitulation of encoding-related EMs or gaze reinstatement. Participants reinstated encoding-related EMs following degraded retrieval cues and this reinstatement was negatively correlated with accuracy for lure images, suggesting that retrieval of existing representations (i.e., pattern completion) underlies lure false alarms. Our findings provide evidence linking gaze reinstatement and pattern completion and advance a functional role for EMs in memory retrieval.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Chao Jung Wu; Chia Yu Liu; Chung Hsuan Yang; Yu Cin Jian

Eye-movements reveal children's deliberative thinking and predict performance on arithmetic word problems Journal Article

European Journal of Psychology of Education, pp. 1–18, 2020.

@article{Wu2020,
title = {Eye-movements reveal children's deliberative thinking and predict performance on arithmetic word problems},
author = {Chao Jung Wu and Chia Yu Liu and Chung Hsuan Yang and Yu Cin Jian},
doi = {10.1007/s10212-020-00461-w},
year = {2020},
date = {2020-01-01},
journal = {European Journal of Psychology of Education},
pages = {1--18},
abstract = {Despite decades of research on the close link between eye movements and human cognitive processes, the exact nature of the link between eye movements and deliberative thinking in problem-solving remains unknown. Thus, this study explored the critical eye-movement indicators of deliberative thinking and investigated whether visual behaviors could predict performance on arithmetic word problems of various difficulties. An eye tracker and test were employed to collect 69 sixth-graders' eye-movement behaviors and responses. No significant difference was found between the successful and unsuccessful groups on the simple problems, but on the difficult problems, the successful problem-solvers demonstrated significantly greater gaze aversion, longer fixations, and spontaneous reflections. Notably, the model incorporating RT-TFD, NOF of 500 ms, and pupil size indicators could best predict participants' performance, with an overall hit rate of 74%, rising to 80% when reading comprehension screening test scores were included. These results reveal the solvers' engagement strategies or show that successful problem-solvers were well aware of problem difficulty and could regulate their cognitive resources efficiently. This study sheds light on the development of an adapted learning system with embedded eye tracking to further predict students' visual behaviors, provide real-time feedback, and improve their problem-solving performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Karlijn Woutersen; Anna C Geuzebroek; Albert V van den Berg; Jeroen Goossens

Useful field of view performance in the intact visual field of hemianopia patients Journal Article

Investigative Ophthalmology and Visual Science, 61 (5), pp. 1–11, 2020.

@article{Woutersen2020,
title = {Useful field of view performance in the intact visual field of hemianopia patients},
author = {Karlijn Woutersen and Anna C Geuzebroek and Albert V van den Berg and Jeroen Goossens},
doi = {10.1167/IOVS.61.5.43},
year = {2020},
date = {2020-01-01},
journal = {Investigative Ophthalmology and Visual Science},
volume = {61},
number = {5},
pages = {1--11},
abstract = {PURPOSE. Postchiasmatic brain damage commonly results in an area of reduced visual sensitivity or blindness in the contralesional hemifield. Previous studies have shown that the ipsilesional visual field can be impaired too. Here, we examine whether assessing visual functioning of the “intact” ipsilesional visual field can be useful to understand difficulties experienced by patients with visual field defects. METHODS. We compared the performance of 14 patients on a customized version of the useful field of view test that presents stimuli in both hemifields but only assesses functioning of their intact visual half-field (iUFOV) with that of equivalent hemifield assessments in 17 age-matched healthy control participants. In addition, we mapped visual field sensitivity with the Humphrey Field Analyzer. Last, we used an adapted version of the National Eye Institute Visual Quality of Life-25 to measure their experienced visual quality of life. RESULTS. We found that patients performed worse on the second and third iUFOV subtests, but not on the first subtest. Furthermore, patients scored significantly worse on almost every subscale, except ocular pain. Summed iUFOV scores (assessing the intact hemifield only) and Humphrey field analyzer scores (assessing both hemifields combined) showed almost similar correlations with the subscale scores of the adapted National Eye Institute Visual Quality of Life-25. CONCLUSIONS. The iUFOV test is sensitive to deficits in the visual field that are not picked up by traditional perimetry. We therefore believe this task is of interest for patients with postchiasmatic brain lesions and should be investigated further.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Clifford I Workman; Keith J Yoder; Jean Decety

The dark side of morality–neural mechanisms underpinning moral convictions and support for violence Journal Article

AJOB Neuroscience, 11 (4), pp. 269–284, 2020.

@article{Workman2020,
title = {The dark side of morality–neural mechanisms underpinning moral convictions and support for violence},
author = {Clifford I Workman and Keith J Yoder and Jean Decety},
doi = {10.1080/21507740.2020.1811798},
year = {2020},
date = {2020-01-01},
journal = {AJOB Neuroscience},
volume = {11},
number = {4},
pages = {269--284},
abstract = {People are motivated by shared social values that, when held with moral conviction, can serve as compelling mandates capable of facilitating support for ideological violence. The current study examined this dark side of morality by identifying specific cognitive and neural mechanisms associated with beliefs about the appropriateness of sociopolitical violence, and determining the extent to which the engagement of these mechanisms was predicted by moral convictions. Participants reported their moral convictions about a variety of sociopolitical issues prior to undergoing functional MRI scanning. During scanning, they were asked to evaluate the appropriateness of violent protests that were ostensibly congruent or incongruent with their views about sociopolitical issues. Complementary univariate and multivariate analytical strategies comparing neural responses to congruent and incongruent violence identified neural mechanisms implicated in processing salience and in the encoding of subjective value. As predicted, neuro-hemodynamic response was modulated parametrically by individuals' beliefs about the appropriateness of congruent relative to incongruent sociopolitical violence in ventromedial prefrontal cortex, and by moral conviction in ventral striatum. Overall moral conviction was predicted by neural response to congruent relative to incongruent violence in amygdala. Together, these findings indicate that moral conviction about sociopolitical issues serves to increase their subjective value, overriding natural aversion to interpersonal harm.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Brent Wolter; Junko Yamashita; Chi Yui Leung

Conceptual transfer and lexical development in adjectives of space: Evidence from judgments, reaction times, and eye tracking Journal Article

Applied Psycholinguistics, 41 (3), pp. 595–625, 2020.

@article{Wolter2020,
title = {Conceptual transfer and lexical development in adjectives of space: Evidence from judgments, reaction times, and eye tracking},
author = {Brent Wolter and Junko Yamashita and Chi Yui Leung},
doi = {10.1017/S0142716420000107},
year = {2020},
date = {2020-01-01},
journal = {Applied Psycholinguistics},
volume = {41},
number = {3},
pages = {595--625},
abstract = {This study investigated conceptual transfer and lexical development for spatial adjectives using participant judgments, reaction times, and eye-tracking measures. The study focused on the Japanese adjective semai and its partially equivalent English translation narrow. The study presented participants with images depicting two rooms with slight differences in height and width and asked them to identify which room was narrower. The only variation was the language in which the instructions were given: native language (L1) instructions for two L1 control groups, second language (L2) instructions for the experimental group (L1 Japanese speakers of L2 English). The results showed fundamental differences in processing between the control groups in respect to the judgments and reaction times, but not for the eye-tracking measures. Furthermore, the experimental group's behavior indicated a conceptual understanding of narrow that was in line with developments in proficiency, but also limited to the judgment and reaction time measures. Based on these findings, we conclude that (a) conceptual transfer affects processing on receptive language tasks, and (b) L2 conceptual representations come to resemble those of native speakers as learners develop their lexical knowledge. However, we also suggest that (c) although conceptualizations likely affect cognitive functions, our eye-tracking data were too crude to capture this.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Luca Wollenberg; Nina M Hanning; Heiner Deubel

Visual attention and eye movement control during oculomotor competition Journal Article

Journal of Vision, 20 (9), pp. 1–17, 2020.

@article{Wollenberg2020,
title = {Visual attention and eye movement control during oculomotor competition},
author = {Luca Wollenberg and Nina M Hanning and Heiner Deubel},
doi = {10.1167/JOV.20.9.16},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {9},
pages = {1--17},
abstract = {Saccadic eye movements are typically preceded by selective shifts of visual attention. Recent evidence, however, suggests that oculomotor selection can occur in the absence of attentional selection when saccades erroneously land in between nearby competing objects (saccade averaging). This study combined a saccade task with a visual discrimination task to investigate saccade target selection during episodes of competition between a saccade target and a nearby distractor. We manipulated the spatial predictability of target and distractor locations and asked participants to execute saccades upon variably delayed go-signals. This allowed us to systematically investigate the capacity to exert top-down eye movement control (as reflected in saccade endpoints) based on the spatiotemporal dynamics of visual attention during movement preparation (measured as visual sensitivity). Our data demonstrate that the predictability of target and distractor locations, despite not affecting the deployment of visual attention prior to movement preparation, largely improved the accuracy of short-latency saccades. Under spatial uncertainty, a short go-signal delay likewise enhanced saccade accuracy substantially, which was associated with a more selective deployment of attentional resources to the saccade target. Moreover, we observed a systematic relationship between the deployment of visual attention and saccade accuracy, with visual discrimination performance being significantly enhanced at the saccade target relative to the distractor only before the execution of saccades accurately landing at the saccade target. Our results provide novel insights linking top-down eye movement control to the operation of selective visual attention during movement preparation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christian Wolf; Markus Lappe

Top-down control of saccades requires inhibition of suddenly appearing stimuli Journal Article

Attention, Perception, and Psychophysics, 82 (8), pp. 3863–3877, 2020.

@article{Wolf2020,
title = {Top-down control of saccades requires inhibition of suddenly appearing stimuli},
author = {Christian Wolf and Markus Lappe},
doi = {10.3758/s13414-020-02101-3},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {82},
number = {8},
pages = {3863--3877},
abstract = {Humans scan their visual environment using saccade eye movements. Where we look is influenced by bottom-up salience and top-down factors, like value. For reactive saccades in response to suddenly appearing stimuli, it has been shown that short-latency saccades are biased towards salience, and that top-down control increases with increasing latency. Here, we show, in a series of six experiments, that this transition towards top-down control is not determined by the time it takes to integrate value information into the saccade plan, but by the time it takes to inhibit suddenly appearing salient stimuli. Participants made consecutive saccades to three fixation crosses and a vertical bar consisting of a high-salient and a rewarded low-salient region. Endpoints on the bar were biased towards salience whenever it appeared or reappeared shortly before the last saccade was initiated. This was also true when the eye movement was already planned. When the location of the suddenly appearing salient region was predictable, saccades were aimed in the opposite direction to nullify this sudden onset effect. Successfully inhibiting salience, however, could only be achieved by previewing the target. These findings highlight the importance of inhibition for top-down eye-movement control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lisa Wirz; Lars Schwabe

Prioritized attentional processing: Acute stress, memory and stimulus emotionality facilitate attentional disengagement Journal Article

Neuropsychologia, 138 , pp. 1–13, 2020.

@article{Wirz2020,
title = {Prioritized attentional processing: Acute stress, memory and stimulus emotionality facilitate attentional disengagement},
author = {Lisa Wirz and Lars Schwabe},
doi = {10.1016/j.neuropsychologia.2020.107334},
year = {2020},
date = {2020-01-01},
journal = {Neuropsychologia},
volume = {138},
pages = {1--13},
publisher = {Elsevier Ltd},
abstract = {Rapid attentional orienting toward relevant stimuli and efficient disengagement from irrelevant stimuli are critical for survival. Here, we examined the roles of memory processes, emotional arousal and acute stress in attentional disengagement. To this end, 64 healthy participants encoded negative and neutral facial expressions and, after being exposed to a stress or control manipulation, performed an attention task in which they had to disengage from these previously encoded as well as novel face stimuli. During the attention task, electroencephalography (EEG) and pupillometry data were recorded. Our results showed overall faster reaction times after acute stress and when participants had to disengage from emotionally negative or old facial expressions. Further, pupil dilations were larger in response to neutral faces. During disengagement, our EEG data revealed a reduced N2pc amplitude when participants disengaged from neutral compared to negative facial expressions when these were not presented before, as well as earlier onset latencies for the N400f (for disengagement from negative and old faces), the N2pc, and the LPP (for disengagement from negative faces). In addition, early visual processing of negative faces, as reflected in the P1 amplitude, was enhanced specifically in stressed participants. Our findings indicate that attentional disengagement is improved for negative and familiar stimuli and that stress facilitates not only attentional disengagement but also emotional processing in general. Together, these processes may represent important mechanisms enabling efficient performance and rapid threat detection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Heather Winskel; Theeraporn Ratitamkul

The initial functional unit when naming words and pseudowords in Thai: Evidence from masked priming Journal Article

Journal of Psycholinguistic Research, 49 (2), pp. 275–290, 2020.

@article{Winskel2020,
title = {The initial functional unit when naming words and pseudowords in Thai: Evidence from masked priming},
author = {Heather Winskel and Theeraporn Ratitamkul},
doi = {10.1007/s10936-020-09687-7},
year = {2020},
date = {2020-01-01},
journal = {Journal of Psycholinguistic Research},
volume = {49},
number = {2},
pages = {275--290},
publisher = {Springer US},
abstract = {Cross-linguistic research indicates that the initial unit used to build an ortho-phonological representation can vary between languages and is related to the particular characteristics of the language. Thai is particularly interesting as it has both syllabic and phonemic characteristics. Using the masked priming paradigm, we examined the functional unit that is initially activated when naming monosyllabic Thai words (Experiment 1) and pseudowords (Experiment 2). In Experiment 1, the response times to the onset prime and identity (onset + vowel) conditions were not significantly different but were both significantly faster than the control prime (onset different). In Experiment 2, pseudowords were used so that the effects of orthographic vowel position could be examined. In Thai, vowels can precede the consonant in writing but phonologically follow it in speech (e.g., the written word ‘odg’ would be spoken as /dog/) whereas other vowels are spoken in the order that they are written. Similar results were found as in Experiment 1, as the identity prime did not have a greater facilitatory effect than the onset consonant prime. Notably, there were no orthographic effects due to orthographic vowel position. These results support the view that the onset is the initial functional unit that is activated when naming Thai visual words/pseudowords using the masked priming paradigm.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Elliott G Wimmer; Yunzhe Liu; Neža Vehar; Timothy E J Behrens; Raymond J Dolan

Episodic memory retrieval success is associated with rapid replay of episode content Journal Article

Nature Neuroscience, 23 (8), pp. 1025–1033, 2020.

@article{Wimmer2020,
title = {Episodic memory retrieval success is associated with rapid replay of episode content},
author = {Elliott G Wimmer and Yunzhe Liu and Ne{ž}a Vehar and Timothy E J Behrens and Raymond J Dolan},
doi = {10.1038/s41593-020-0649-z},
year = {2020},
date = {2020-01-01},
journal = {Nature Neuroscience},
volume = {23},
number = {8},
pages = {1025--1033},
publisher = {Springer US},
abstract = {Retrieval of everyday experiences is fundamental for informing our future decisions. The fine-grained neurophysiological mechanisms that support such memory retrieval are largely unknown. We studied participants who first experienced, without repetition, unique multicomponent 40–80-s episodes. One day later, they engaged in cued retrieval of these episodes while undergoing magnetoencephalography. By decoding individual episode elements, we found that trial-by-trial successful retrieval was supported by the sequential replay of episode elements, with a temporal compression factor of >60. The direction of replay supporting retrieval, either backward or forward, depended on whether the task goal was to retrieve elements of an episode that followed or preceded, respectively, a retrieval cue. This sequential replay was weaker in very-high-performing participants, in whom instead we found evidence for simultaneous clustered reactivation. Our results demonstrate that memory-mediated decisions are supported by a rapid replay mechanism that can flexibly shift in direction in response to task goals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Vanessa A D Wilson; Carolin Kade; Sebastian Moeller; Stefan Treue; Igor Kagan; Julia Fischer

Macaque gaze responses to the primatar: A virtual macaque head for social cognition research Journal Article

Frontiers in Psychology, 11 , pp. 1–13, 2020.

@article{Wilson2020a,
title = {Macaque gaze responses to the primatar: A virtual macaque head for social cognition research},
author = {Vanessa A D Wilson and Carolin Kade and Sebastian Moeller and Stefan Treue and Igor Kagan and Julia Fischer},
doi = {10.3389/fpsyg.2020.01645},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {11},
pages = {1--13},
abstract = {Following the expanding use and applications of virtual reality in everyday life, realistic virtual stimuli are of increasing interest in cognitive studies. They allow for control of features such as gaze, expression, appearance, and movement, which may help to overcome limitations of using photographs or video recordings to study social responses. In using virtual stimuli however, one must be careful to avoid the uncanny valley effect, where realistic stimuli can be perceived as eerie, and induce an aversion response. At the same time, it is important to establish whether responses to virtual stimuli mirror responses to depictions of a real conspecific. In the current study, we describe the development of a new virtual monkey head with realistic facial features for experiments with nonhuman primates, the “Primatar.” As a first step toward validation, we assessed how monkeys respond to facial images of a prototype of this Primatar compared to images of real monkeys (RMs), and an unrealistic model. We also compared gaze responses between original images and scrambled as well as obfuscated versions of these images. We measured looking time to images in six freely moving long-tailed macaques (Macaca fascicularis) and gaze exploration behavior in three rhesus macaques (Macaca mulatta). Both groups showed more signs of overt attention to original images than scrambled or obfuscated images. In addition, we found no evidence for an uncanny valley effect; since for both groups, looking times did not differ between real, realistic, or unrealistic images. These results provide important data for further development of our Primatar for use in social cognition studies and more generally for cognitive research with virtual stimuli in nonhuman primates. Future research on the absence of an uncanny valley effect in macaques is needed, to elucidate the roots of this mechanism in humans.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tommy J Wilson; John J Foxe

Cross-frequency coupling of alpha oscillatory power to the entrainment rhythm of a spatially attended input stream Journal Article

Cognitive Neuroscience, 11 (1-2), pp. 71–91, 2020.

@article{Wilson2020,
title = {Cross-frequency coupling of alpha oscillatory power to the entrainment rhythm of a spatially attended input stream},
author = {Tommy J Wilson and John J Foxe},
doi = {10.1080/17588928.2019.1627303},
year = {2020},
date = {2020-01-01},
journal = {Cognitive Neuroscience},
volume = {11},
number = {1-2},
pages = {71--91},
publisher = {Routledge},
abstract = {Neural entrainment and alpha oscillatory power (8–14 Hz) are mechanisms of selective attention. The extent to which these two mechanisms interact, especially in the context of visuospatial attention, is unclear. Here, we show that spatial attention to a delta-frequency, rhythmic visual stimulus in one hemifield results in phase-amplitude coupling between the delta-phase of an entrained frontal source and alpha power generated by ipsilateral visuocortical regions. The driving of ipsilateral alpha power by frontal delta also correlates with task performance. Our analyses suggest that neural entrainment may serve a previously underappreciated role in coordinating macroscale brain networks and that inhibition of processing by alpha power can be coupled to an attended temporal structure. Finally, we note that the observed coupling bolsters one dominant hypothesis of modern cognitive neuroscience, that macroscale brain networks and distributed neural computation are coordinated by oscillatory synchrony and cross-frequency interactions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Niklas Wilming; Peter R Murphy; Florent Meyniel; Tobias H Donner

Large-scale dynamics of perceptual decision information across human cortex Journal Article

Nature Communications, 11 , pp. 1–14, 2020.

@article{Wilming2020,
title = {Large-scale dynamics of perceptual decision information across human cortex},
author = {Niklas Wilming and Peter R Murphy and Florent Meyniel and Tobias H Donner},
doi = {10.1038/s41467-020-18826-6},
year = {2020},
date = {2020-01-01},
journal = {Nature Communications},
volume = {11},
pages = {1--14},
publisher = {Springer US},
abstract = {Perceptual decisions entail the accumulation of sensory evidence for a particular choice towards an action plan. An influential framework holds that sensory cortical areas encode the instantaneous sensory evidence and downstream, action-related regions accumulate this evidence. The large-scale distribution of this computation across the cerebral cortex has remained largely elusive. Here, we develop a regionally-specific magnetoencephalography decoding approach to exhaustively map the dynamics of stimulus- and choice-specific signals across the human cortical surface during a visual decision. Comparison with the evidence accumulation dynamics inferred from behavior disentangles stimulus-dependent and endogenous components of choice-predictive activity across the visual cortical hierarchy. We find such an endogenous component in early visual cortex (including V1), which is expressed in a low (<20 Hz) frequency band and tracks, with delay, the build-up of choice-predictive activity in (pre-) motor regions. Our results are consistent with choice- and frequency-specific cortical feedback signaling during decision formation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Louis Williams; Eugene McSorley; Rachel McCloy

Enhanced associations with actions of the artist influence gaze behaviour Journal Article

i-Perception, 11 (2), pp. 1–25, 2020.

@article{Williams2020a,
title = {Enhanced associations with actions of the artist influence gaze behaviour},
author = {Louis Williams and Eugene McSorley and Rachel McCloy},
doi = {10.1177/2041669520911059},
year = {2020},
date = {2020-01-01},
journal = {i-Perception},
volume = {11},
number = {2},
pages = {1--25},
abstract = {The aesthetic experience of the perceiver of art has been suggested to relate to the art-making process of the artist. The artist's gestures during the creation process have been stated to influence the perceiver's art-viewing experience. However, limited studies explore the art-viewing experience in relation to the creative process of the artist. We introduced eye-tracking measures to further establish how congruent actions with the artist influence perceiver's gaze behaviour. Experiments 1 and 2 showed that simultaneous congruent and incongruent actions do not influence gaze behaviour. However, brushstroke paintings were found to be more pleasing than pointillism paintings. In Experiment 3, participants were trained to associate painting actions with hand primes to enhance visuomotor and visuovisual associations with the artist's actions. A greater amount of time was spent fixating brushstroke paintings when presented with a congruent prime compared with an incongruent prime, and fewer fixations were made to these styles of paintings when presented with an incongruent prime. The results suggest that explicit links that allow perceivers to resonate with the artist's actions lead to greater exploration of preferred artwork styles.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lauren Williams; Ann Carrigan; William Auffermann; Megan Mills; Anina Rich; Joann Elmore; Trafton Drew

The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology Journal Article

Psychonomic Bulletin & Review, pp. 1–9, 2020.

@article{Williams2020,
title = {The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology},
author = {Lauren Williams and Ann Carrigan and William Auffermann and Megan Mills and Anina Rich and Joann Elmore and Trafton Drew},
doi = {10.3758/s13423-020-01826-4},
year = {2020},
date = {2020-01-01},
journal = {Psychonomic Bulletin & Review},
pages = {1--9},
publisher = {Psychonomic Bulletin & Review},
abstract = {Retrospectively obvious events are frequently missed when attention is engaged in another task—a phenomenon known as inattentional blindness. Although the task characteristics that predict inattentional blindness rates are relatively well understood, the observer characteristics that predict inattentional blindness rates are largely unknown. Previously, expert radiologists showed a surprising rate of inattentional blindness to a gorilla photoshopped into a CT scan during lung-cancer screening. However, inattentional blindness rates were higher for a group of naïve observers performing the same task, suggesting that perceptual expertise may provide protection against inattentional blindness. Here, we tested whether expertise in radiology predicts inattentional blindness rates for unexpected abnormalities that were clinically relevant. Fifty radiologists evaluated CT scans for lung cancer. The final case contained a large (9.1 cm) breast mass and lymphadenopathy. When their attention was focused on searching for lung nodules, 66% of radiologists did not detect breast cancer and 30% did not detect lymphadenopathy. In contrast, only 3% and 10% of radiologists (N = 30), respectively, missed these abnormalities in a follow-up study when searching for a broader range of abnormalities. Neither experience, primary task performance, nor search behavior predicted which radiologists missed the unexpected abnormalities. These findings suggest perceptual expertise does not protect against inattentional blindness, even for unexpected stimuli that are within the domain of expertise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Thomas D W Wilcockson; Edwin J Burns; Baiqiang Xia; Jeremy Tree; Trevor J Crawford

Atypically heterogeneous vertical first fixations to faces in a case series of people with developmental prosopagnosia Journal Article

Visual Cognition, 28 (4), pp. 311–323, 2020.

@article{Wilcockson2020,
title = {Atypically heterogeneous vertical first fixations to faces in a case series of people with developmental prosopagnosia},
author = {Thomas D W Wilcockson and Edwin J Burns and Baiqiang Xia and Jeremy Tree and Trevor J Crawford},
doi = {10.1080/13506285.2020.1797968},
year = {2020},
date = {2020-01-01},
journal = {Visual Cognition},
volume = {28},
number = {4},
pages = {311--323},
publisher = {Taylor & Francis},
abstract = {When people recognize faces, they normally move their eyes so that their first fixation is in the optimal location for efficient perceptual processing. This location is found just below the centre-point between the eyes. This type of attentional bias could be partly innate, but also an inevitable developmental process that aids our ability to recognize faces. We investigated whether a group of people with developmental prosopagnosia would also demonstrate neurotypical first fixation locations when recognizing faces during an eye-tracking task. We found evidence that adults with prosopagnosia had atypically heterogeneous first fixations in comparison to controls. However, differences were limited to the vertical, but not horizontal, plane of the face. We interpret these findings by suggesting that subtle changes to face-based eye movement patterns in developmental prosopagnosia may underpin their face recognition impairments, and suggest future work is still needed to address this possibility.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Steven Wiesner; Ian W Baumgart; Xin Huang

Spatial arrangement drastically changes the neural representation of multiple visual stimuli that compete in more than one feature domain Journal Article

Journal of Neuroscience, 40 (9), pp. 1834–1848, 2020.

@article{Wiesner2020,
title = {Spatial arrangement drastically changes the neural representation of multiple visual stimuli that compete in more than one feature domain},
author = {Steven Wiesner and Ian W Baumgart and Xin Huang},
doi = {10.1523/JNEUROSCI.1950-19.2020},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neuroscience},
volume = {40},
number = {9},
pages = {1834--1848},
abstract = {Natural scenes often contain multiple objects and surfaces. However, how neurons in the visual cortex represent multiple visual stimuli is not well understood. Previous studies have shown that, when multiple stimuli compete in one feature domain, the evoked neuronal response is biased toward the stimulus that has a stronger signal strength. We recorded from two male macaques to investigate how neurons in the middle temporal cortex (MT) represent multiple stimuli that compete in more than one feature domain. Visual stimuli were two random-dot patches moving in different directions. One stimulus had low luminance contrast and moved with high coherence, whereas the other had high contrast and moved with low coherence. We found that how MT neurons represent multiple stimuli depended on the spatial arrangement. When two stimuli were overlapping, MT responses were dominated by the stimulus component that had high contrast. When two stimuli were spatially separated within the receptive fields, the contrast dominance was abolished. We found the same results when using contrast to compete with motion speed. Our neural data and computer simulations using a V1-MT model suggest that the contrast dominance found with overlapping stimuli is due to normalization occurring at an input stage fed to MT, and MT neurons cannot overturn this bias based on their own feature selectivity. The interaction between spatially separated stimuli can largely be explained by normalization within MT. Our results revealed new rules on stimulus competition and highlighted the impact of hierarchical processing on representing multiple stimuli in the visual cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jonathon Whitlock; Yi Pei Lo; Yi Chieh Chiu; Lili Sahakyan

Eye movement analyses of strong and weak memories and goal-driven forgetting Journal Article

Cognition, 204, pp. 1–15, 2020.

@article{Whitlock2020,
title = {Eye movement analyses of strong and weak memories and goal-driven forgetting},
author = {Jonathon Whitlock and Yi Pei Lo and Yi Chieh Chiu and Lili Sahakyan},
doi = {10.1016/j.cognition.2020.104391},
year = {2020},
date = {2020-01-01},
journal = {Cognition},
volume = {204},
pages = {1--15},
publisher = {Elsevier},
abstract = {Research indicates that eye movements can reveal expressions of memory for previously studied relationships. Specifically, eye movements are disproportionately drawn to test items that were originally studied with the test scene, compared to other equally familiar items in the test display – an effect known as preferential viewing (e.g., Hannula, Ryan, Tranel, & Cohen, 2007). Across four studies we assessed how strength-based differences in memory are reflected in preferential viewing. Participants studied objects superimposed on background scenes and were tested with three-object displays superimposed on the scenes viewed previously. Eye movements were monitored at test. In Experiment 1 we employed an item-method directed forgetting (DF) procedure to manipulate memory strength. In Experiment 2, viewing patterns were examined across differences in memory strength assessed through subjective confidence ratings. In Experiment 3, we used spaced repetitions to objectively strengthen items, and Experiment 4 involved a list-method DF manipulation. Across all experiments, eye movements consistently differentiated the effect of DF from other strength-based differences in memory, producing different viewing patterns. They also differentiated between incidental and successful intentional forgetting. Finally, despite a null effect in recognition accuracy in list-method DF, viewing patterns revealed both common as well as critical differences between list-method DF and item-method DF. We discuss the eye movement findings from the perspective of theoretical accounts of DF and other strength-based differences in memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Alex L White; John Palmer; Geoffrey M Boynton

Visual word recognition: Evidence for a serial bottleneck in lexical access Journal Article

Attention, Perception, and Psychophysics, 82 (4), pp. 2000–2017, 2020.

@article{White2020,
title = {Visual word recognition: Evidence for a serial bottleneck in lexical access},
author = {Alex L White and John Palmer and Geoffrey M Boynton},
doi = {10.3758/s13414-019-01916-z},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {82},
number = {4},
pages = {2000--2017},
publisher = {Attention, Perception, & Psychophysics},
abstract = {Reading is a demanding task, constrained by inherent processing capacity limits. Do those capacity limits allow for multiple words to be recognized in parallel? In a recent study, we measured semantic categorization accuracy for nouns presented in pairs. The words were replaced by post-masks after an interval that was set to each subject's threshold, such that with focused attention they could categorize one word with ~80% accuracy. When subjects tried to divide attention between both words, their accuracy was so impaired that it supported a serial processing model: on each trial, subjects could categorize one word but had to guess about the other. In the experiments reported here, we investigated how our previous result generalizes across two tasks that require lexical access but vary in the depth of semantic processing (semantic categorization and lexical decision), and across different masking stimuli, word lengths, lexical frequencies and visual field positions. In all cases, the serial processing model was supported by two effects: (1) a sufficiently large accuracy deficit with divided compared to focused attention; and (2) a trial-by-trial stimulus processing tradeoff, meaning that the response to one word was more likely to be correct if the response to the other was incorrect. However, when the task was to detect colored letters, neither of those effects occurred, even though the post-masks limited accuracy in the same way. Altogether, the results are consistent with the hypothesis that visual processing of words is parallel but lexical access is serial.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Nicole Wetzel; Wolfgang Einhäuser; Andreas Widmann

Picture-evoked changes in pupil size predict learning success in children Journal Article

Journal of Experimental Child Psychology, 192, pp. 1–18, 2020.

@article{Wetzel2020,
title = {Picture-evoked changes in pupil size predict learning success in children},
author = {Nicole Wetzel and Wolfgang Einhäuser and Andreas Widmann},
doi = {10.1016/j.jecp.2019.104787},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Child Psychology},
volume = {192},
pages = {1--18},
publisher = {Elsevier Inc.},
abstract = {Episodic memory, the ability to remember past events in time and place, develops during childhood. Much knowledge about the underlying neuronal mechanisms has been gained from methods not suitable for children. We applied pupillometry to study memory encoding and recognition mechanisms. Children aged 8 and 9 years (n = 24) and adults (n = 24) studied a set of visual scenes to later distinguish them from new pictures. Children performed worse than adults, demonstrating immature episodic memory. During memorization, picture-related changes in pupil diameter predicted later successful recognition. This prediction effect was also observed on a single-trial level. During retrieval, novel pictures showed stronger pupil constriction than familiar pictures in both age groups. The statistically independent effects of objective familiarity (previously presented pictures) versus subjective familiarity (pictures evaluated as familiar independent of the prior presentation) suggest dissociable underlying brain mechanisms. In addition, we isolated principal components of the picture-related pupil response that were differently affected by the memorization and retrieval effects. Results are discussed in the context of the maturation of the medial temporal lobe and prefrontal networks. Our results demonstrate the dissociation of distinct contributions to episodic memory with a psychophysiological method that is suitable for a wide age spectrum.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kimberly B Weldon; Alexandra Woolgar; Anina N Rich; Mark A Williams

Late disruption of central visual field disrupts peripheral perception of form and color Journal Article

PLoS ONE, 15 (1), pp. 1–17, 2020.

@article{Weldon2020,
title = {Late disruption of central visual field disrupts peripheral perception of form and color},
author = {Kimberly B Weldon and Alexandra Woolgar and Anina N Rich and Mark A Williams},
doi = {10.1371/journal.pone.0219725},
year = {2020},
date = {2020-01-01},
journal = {PLoS ONE},
volume = {15},
number = {1},
pages = {1--17},
abstract = {Evidence from neuroimaging and brain stimulation studies suggests that visual information about objects in the periphery is fed back to foveal retinotopic cortex in a separate representation that is essential for peripheral perception. The characteristics of this phenomenon have important theoretical implications for the role fovea-specific feedback might play in perception. In this work, we employed a recently developed behavioral paradigm to explore whether late disruption to central visual space impaired perception of color. In the first experiment, participants performed a shape discrimination task on colored novel objects in the periphery while fixating centrally. Consistent with the results from previous work, a visual distractor presented at fixation ~100ms after presentation of the peripheral stimuli impaired sensitivity to differences in peripheral shapes more than a visual distractor presented at other stimulus onset asynchronies. In a second experiment, participants performed a color discrimination task on the same colored objects. In a third experiment, we further tested for this foveal distractor effect with stimuli restricted to a low-level feature by using homogenous color patches. These two latter experiments resulted in a similar pattern of behavior: a central distractor presented at the critical stimulus onset asynchrony impaired sensitivity to peripheral color differences, but, importantly, the magnitude of the effect was stronger when peripheral objects contained complex shape information. These results show a behavioral effect consistent with disrupting feedback to the fovea, in line with the foveal feedback suggested by previous neuroimaging studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Katharina Weiß

Exogeneous spatial cueing beyond the near periphery: Cueing effects in a discrimination paradigm at large eccentricities Journal Article

Vision, 4, pp. 1–13, 2020.

@article{Weiss2020,
title = {Exogeneous spatial cueing beyond the near periphery: Cueing effects in a discrimination paradigm at large eccentricities},
author = {Katharina Weiß},
doi = {10.3390/vision4010013},
year = {2020},
date = {2020-01-01},
journal = {Vision},
volume = {4},
pages = {1--13},
abstract = {Although visual attention is one of the most thoroughly investigated topics in experimental psychology and vision science, most of this research tends to be restricted to the near periphery. Eccentricities used in attention studies usually do not exceed 20° to 30°, but most studies even make use of considerably smaller maximum eccentricities. Thus, empirical knowledge about attention beyond this range is sparse, probably due to a previous lack of suitable experimental devices to investigate attention in the far periphery. This is currently changing due to the development of temporal high-resolution projectors and head-mounted displays (HMDs) that allow displaying experimental stimuli at far eccentricities. In the present study, visual attention was investigated beyond the near periphery (15°, 30°, 56° Exp. 1) and (15°, 35°, 56° Exp. 2) in a peripheral Posner cueing paradigm using a discrimination task with placeholders. Interestingly, cueing effects were revealed for the whole range of eccentricities although the inhomogeneity of the visual field and its functional subdivisions might lead one to suspect otherwise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Anke Weidmann; Laura Richert; Maximilian Bernecker; Miriam Knauss; Kathlen Priebe; Benedikt Reuter; Martin Bohus; Meike Müller-Engelmann; Thomas Fydrich

Dwelling on verbal but not pictorial threat cues: An eye-tracking study with adult survivors of childhood interpersonal violence Journal Article

Psychological Trauma: Theory, Research, Practice, and Policy, 12 (1), pp. 46–54, 2020.

@article{Weidmann2020,
title = {Dwelling on verbal but not pictorial threat cues: An eye-tracking study with adult survivors of childhood interpersonal violence},
author = {Anke Weidmann and Laura Richert and Maximilian Bernecker and Miriam Knauss and Kathlen Priebe and Benedikt Reuter and Martin Bohus and Meike Müller-Engelmann and Thomas Fydrich},
doi = {10.1037/tra0000424},
year = {2020},
date = {2020-01-01},
journal = {Psychological Trauma: Theory, Research, Practice, and Policy},
volume = {12},
number = {1},
pages = {46--54},
abstract = {Objective: Previous studies have found evidence of an attentional bias for trauma-related stimuli in posttraumatic stress disorder (PTSD) using eye-tracking (ET) technology. However, it is unclear whether findings for PTSD after traumatic events in adulthood can be transferred to PTSD after interpersonal trauma in childhood. The latter is often accompanied by more complex symptom features, including, for example, affective dysregulation and has not yet been studied using ET. The aim of this study was to explore which components of attention are biased in adult victims of childhood trauma with PTSD compared to those without PTSD. Method: Female participants with (n = 27) or without (n = 27) PTSD who had experienced interpersonal violence in childhood or adolescence watched different trauma-related stimuli (Experiment 1: words, Experiment 2: facial expressions). We analyzed whether trauma-related stimuli were primarily detected (vigilance bias) and/or dwelled on longer (maintenance bias) compared to stimuli of other emotional qualities. Results: For trauma-related words, there was evidence of a maintenance bias but not of a vigilance bias. For trauma-related facial expressions, there was no evidence of any bias. Conclusions: At present, an attentional bias to trauma-related stimuli cannot be considered as robust in PTSD following trauma in childhood compared to that of PTSD following trauma in adulthood. The findings are discussed with respect to difficulties attributing effects specifically to PTSD in this highly comorbid though understudied population.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kira Wegner-Clemens; Johannes Rennig; Michael S Beauchamp

A relationship between Autism-Spectrum Quotient and face viewing behavior in 98 participants Journal Article

PLoS ONE, 15 (4), pp. 1–17, 2020.

@article{WegnerClemens2020,
title = {A relationship between Autism-Spectrum Quotient and face viewing behavior in 98 participants},
author = {Kira Wegner-Clemens and Johannes Rennig and Michael S Beauchamp},
doi = {10.1371/journal.pone.0230866},
year = {2020},
date = {2020-01-01},
journal = {PLoS ONE},
volume = {15},
number = {4},
pages = {1--17},
abstract = {Faces are one of the most important stimuli that we encounter, but humans vary dramatically in their behavior when viewing a face: some individuals preferentially fixate the eyes, others fixate the mouth, and still others show an intermediate pattern. The determinants of these large individual differences are unknown. However, individuals with Autism Spectrum Disorder (ASD) spend less time fixating the eyes of a viewed face than controls, suggesting the hypothesis that autistic traits in healthy adults might explain individual differences in face viewing behavior. Autistic traits were measured in 98 healthy adults recruited from an academic setting using the Autism-Spectrum Quotient, a validated 50-statement questionnaire. Fixations were measured using a video-based eye tracker while participants viewed two different types of audiovisual movies: short videos of talkers speaking single syllables and longer videos of talkers speaking sentences in a social context. For both types of movies, there was a positive correlation between Autism-Spectrum Quotient score and percent of time fixating the lower half of the face that explained from 4% to 10% of the variance in individual face viewing behavior. This effect suggests that in healthy adults, autistic traits are one of many factors that contribute to individual differences in face viewing behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Signy Wegener; Hua Chen Wang; Kate Nation; Anne Castles

Tracking the evolution of orthographic expectancies over building visual experience Journal Article

Journal of Experimental Child Psychology, 199 , pp. 1–17, 2020.

@article{Wegener2020,
title = {Tracking the evolution of orthographic expectancies over building visual experience},
author = {Signy Wegener and Hua Chen Wang and Kate Nation and Anne Castles},
doi = {10.1016/j.jecp.2020.104912},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Child Psychology},
volume = {199},
pages = {1--17},
publisher = {Elsevier Inc.},
abstract = {Literate children can generate expectations about the spellings of newly learned words that they have not yet seen in print. These initial spelling expectations, or orthographic skeletons, have previously been observed at the first orthographic exposure to known spoken words. Here, we asked what happens to the orthographic skeleton over repeated visual exposures. Children in Grade 4 (N = 38) were taught the pronunciations and meanings of one set of 16 novel words, whereas another set were untrained. Spellings of half the items were predictable from their phonology (e.g., nesh), whereas the other half were less predictable (e.g., koyb). Trained and untrained items were subsequently shown in print, embedded in sentences, and eye movements were monitored as children silently read all items over three exposures. A larger effect of spelling predictability for orally trained items compared with untrained items was observed at the first and second orthographic exposures, consistent with the notion that oral vocabulary knowledge had facilitated the formation of spelling expectations. By the third orthographic exposure, this interaction was no longer significant, suggesting that visual experience had begun to update children's spelling expectations. Delayed follow-up testing revealed that when visual exposure was equated, oral training provided a strong persisting benefit to children's written word recognition. Findings suggest that visual exposure can alter children's developing orthographic representations and that this process can be captured dynamically as children read novel words over repeated visual exposures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sara Jane Webb; Frederick Shic; Michael Murias; Catherine A Sugar; Adam J Naples; Erin Barney; Heather Borland; Gerhard Hellemann; Scott Johnson; Minah Kim; April R Levin; Maura Sabatos-DeVito; Megha Santhosh; Damla Senturk; James Dziura; Raphael A Bernier; Katarzyna Chawarska; Geraldine Dawson; Susan Faja; Shafali Jeste; James McPartland

Biomarker acquisition and quality control for multi-site studies: The Autism Biomarkers Consortium for Clinical Trials Journal Article

Frontiers in Integrative Neuroscience, 13 , pp. 1–15, 2020.

@article{Webb2020,
title = {Biomarker acquisition and quality control for multi-site studies: The Autism Biomarkers Consortium for Clinical Trials},
author = {Sara Jane Webb and Frederick Shic and Michael Murias and Catherine A Sugar and Adam J Naples and Erin Barney and Heather Borland and Gerhard Hellemann and Scott Johnson and Minah Kim and April R Levin and Maura Sabatos-DeVito and Megha Santhosh and Damla Senturk and James Dziura and Raphael A Bernier and Katarzyna Chawarska and Geraldine Dawson and Susan Faja and Shafali Jeste and James McPartland},
doi = {10.3389/fnint.2019.00071},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Integrative Neuroscience},
volume = {13},
pages = {1--15},
abstract = {The objective of the Autism Biomarkers Consortium for Clinical Trials (ABC-CT) is to evaluate a set of lab-based behavioral video tracking (VT), electroencephalography (EEG), and eye tracking (ET) measures for use in clinical trials with children with autism spectrum disorder (ASD). Within the larger organizational structure of the ABC-CT, the Data Acquisition and Analytic Core (DAAC) oversees the standardization of VT, EEG, and ET data acquisition, data processing, and data analysis. This includes designing and documenting data acquisition and analytic protocols and manuals; facilitating site training in acquisition; data acquisition quality control (QC); derivation and validation of dependent variables (DVs); and analytic deliverables including preparation of data for submission to the National Database for Autism Research (NDAR). To oversee consistent application of scientific standards and methodological rigor for data acquisition, processing, and analytics, we developed standard operating procedures that reflect the logistical needs of multi-site research, and the need for well-articulated, transparent processes that can be implemented in future clinical trials. This report details the methodology of the ABC-CT related to acquisition and QC in our Feasibility and Main Study phases. Based on our acquisition metrics from a preplanned interim analysis, we report high levels of acquisition success utilizing VT, EEG, and ET experiments in a relatively large sample of children with ASD and typical development (TD), with data acquired across multiple sites and use of a manualized training and acquisition protocol.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ruaridh Weaterton; Shinn Tan; John Adam; Harneet Kaur; Katherine Rennie; Matt Dunn; Sean Ewings; Maria Theodorou; Dan Osborne; Megan Evans; Helena Lee; James Self

Beyond visual acuity: Development of a simple test of the slow-to-see phenomenon in children with infantile nystagmus syndrome Journal Article

Current Eye Research, pp. 1–8, 2020.

@article{Weaterton2020,
title = {Beyond visual acuity: Development of a simple test of the slow-to-see phenomenon in children with infantile nystagmus syndrome},
author = {Ruaridh Weaterton and Shinn Tan and John Adam and Harneet Kaur and Katherine Rennie and Matt Dunn and Sean Ewings and Maria Theodorou and Dan Osborne and Megan Evans and Helena Lee and James Self},
doi = {10.1080/02713683.2020.1784438},
year = {2020},
date = {2020-01-01},
journal = {Current Eye Research},
pages = {1--8},
publisher = {Taylor & Francis},
abstract = {Purpose: Conventional static visual acuity testing profoundly underestimates the impact of infantile nystagmus on functional vision. The slow-to-see phenomenon explains why many patients with nystagmus perform well in non-time restricted acuity tests but experience difficulty in certain situations. This is often observed by parents when their child struggles to recognise familiar faces in crowded scenes. A test measuring more than visual acuity could permit a more real-world assessment of visual impact and provide a robust outcome measure for clinical trials. Methods: Children with nystagmus, and age- and acuity-matched controls, attending Southampton General Hospital were recruited for two tasks. In the first, eye-tracking measured the time participants spent looking at an image of their mother when alongside a stranger; this was then repeated with a sine grating and a homogeneous grey box. Next, a tablet-based app was developed where participants had to find and press either their mother or a target face from up to 16 faces. Here, response time was measured. The tablet task was refined over multiple iterations. Results: In the eye-tracking task, controls spent significantly longer looking at their mother and the grating (P < 0.05). Interestingly, children with nystagmus looked significantly longer at the grating (P < 0.05) but not their mother (P > 0.05). This confirmed a facial target was key to further development. The tablet-based task demonstrated that children with nystagmus take significantly longer to identify the target; this was most pronounced using a 3-minute test with 12-face displays. Conclusion: This study has shown a facial target is key to identifying the time-to-see deficit in infantile nystagmus and provides the basis for an outcome measure for use in clinical treatment trials.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

David E Warren; Tanja C Roembke; Natalie V Covington; Bob McMurray; Melissa C Duff

Cross-situational statistical learning of new words despite bilateral hippocampal damage and severe amnesia Journal Article

Frontiers in Human Neuroscience, 13 , pp. 1–13, 2020.

@article{Warren2020,
title = {Cross-situational statistical learning of new words despite bilateral hippocampal damage and severe amnesia},
author = {David E Warren and Tanja C Roembke and Natalie V Covington and Bob McMurray and Melissa C Duff},
doi = {10.3389/fnhum.2019.00448},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {13},
pages = {1--13},
abstract = {Word learning requires learners to bind together arbitrarily-related phonological, visual, and conceptual information. Prior work suggests that this binding can be robustly achieved via incidental cross-situational statistical exposure to words and referents. When cross-situational statistical learning (CSSL) is tested in the laboratory, there is no information on any given trial to identify the referent of a novel word. However, by tracking which objects co-occur with each word across trials, learners may acquire mappings through statistical association. While CSSL behavior is well-characterized, its brain correlates are not. The arbitrary nature of CSSL mappings suggests hippocampal involvement, but the incremental, statistical nature of the learning raises the possibility of neocortical or procedural learning systems. Prior studies have shown that neurological patients with hippocampal pathology have word-learning impairments, but this has not been tested in a statistical learning paradigm. Here, we used a neuropsychological approach to test whether patients with bilateral hippocampal pathology (N = 3) could learn new words in a CSSL paradigm. In the task, patients and healthy comparison participants completed a CSSL word-learning task in which they acquired eight word/object mappings. During each trial of the CSSL task, participants saw two objects on a computer display, heard one novel word, and selected the most likely referent. Across trials, words were 100% likely to co-occur with their referent, but only 14.3% likely with non-referents. Two of three amnesic patients learned the associations between objects and word forms, although performance was impaired relative to healthy comparison participants. Our findings show that the hippocampus is not strictly necessary for CSSL for words, although it may facilitate such learning. This is consistent with a hybrid account of CSSL supported by implicit and explicit memory systems, and may have translational applications for remediation of (word-) learning deficits in neurological populations with hippocampal pathology.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yongchun Wang; Meilin Di; Jingjing Zhao; Saisai Hu; Zhao Yao; Yonghui Wang

Attentional modulation of unconscious inhibitory visuomotor processes: An EEG study Journal Article

Psychophysiology, 57 , pp. 1–12, 2020.

@article{Wang2020k,
title = {Attentional modulation of unconscious inhibitory visuomotor processes: An EEG study},
author = {Yongchun Wang and Meilin Di and Jingjing Zhao and Saisai Hu and Zhao Yao and Yonghui Wang},
doi = {10.1111/psyp.13561},
year = {2020},
date = {2020-01-01},
journal = {Psychophysiology},
volume = {57},
pages = {1--12},
abstract = {The present study examined the role of attention in unconscious inhibitory visuomotor processes in three experiments that employed a mixed paradigm including a spatial cueing task and masked prime task. Spatial attention to the prime was manipulated. Specifically, the valid-cue condition (in which the prime obtained more attentional resources) and invalid-cue condition (in which the prime obtained fewer attentional resources) were included. The behavioral results showed that the negative compatibility effect (a behavioral indicator of inhibitory visuomotor processing) in the valid-cue condition was larger than that in the invalid-cue condition. Most importantly, lateralized readiness potential results indicated that the prime-related activation was stronger in the valid-cue condition than in the invalid-cue condition and that the subsequent inhibition in the compatible trials was also stronger in the valid-cue condition than in the invalid-cue condition. In line with the proposed attentional modulation model, unconscious visuomotor inhibitory processing is modulated by attentional resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xiaoming Wang; Xinbo Zhao; Yanning Zhang

Deep-learning-based reading eye-movement analysis for aiding biometric recognition Journal Article

Neurocomputing, 2020.

@article{Wang2020j,
title = {Deep-learning-based reading eye-movement analysis for aiding biometric recognition},
author = {Xiaoming Wang and Xinbo Zhao and Yanning Zhang},
doi = {10.1016/j.neucom.2020.06.137},
year = {2020},
date = {2020-01-01},
journal = {Neurocomputing},
publisher = {Elsevier},
abstract = {Eye-movement recognition is a new type of biometric recognition technology. Without considering the characteristics of the stimuli, the existing eye-movement recognition technology is based on eye-movement trajectory similarity measurements and uses more eye-movement features. Related studies on reading psychology have shown that when reading text, human eye-movements are different between individuals yet stable for a given individual. This paper proposes a type of technology for aiding biometric recognition based on reading eye-movement. By introducing a deep-learning framework, a computational model for reading eye-movement recognition (REMR) was constructed. The model takes the text, fixation, and text-based linguistic feature sequences as inputs and identifies a human subject by measuring the similarity distance between the predicted fixation sequence and the actual one (to be identified). The experimental results show that the fixation sequence similarity recognition algorithm obtained an equal error rate of 19.4% on the test set, and the model obtained an 86.5% Rank-1 recognition rate on the test set.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xi Wang; Andreas Ley; Sebastian Koch; James Hays; Kenneth Holmqvist; Marc Alexa

Computational discrimination between natural images based on gaze during mental imagery Journal Article

Scientific Reports, 10 , pp. 1–11, 2020.

@article{Wang2020i,
title = {Computational discrimination between natural images based on gaze during mental imagery},
author = {Xi Wang and Andreas Ley and Sebastian Koch and James Hays and Kenneth Holmqvist and Marc Alexa},
doi = {10.1038/s41598-020-69807-0},
year = {2020},
date = {2020-01-01},
journal = {Scientific Reports},
volume = {10},
pages = {1--11},
publisher = {Nature Publishing Group UK},
abstract = {When retrieving an image from memory, humans usually move their eyes spontaneously as if the image were in front of them. Such eye movements correlate strongly with the spatial layout of the recalled image content and function as memory cues facilitating the retrieval procedure. However, how close the correlation is between imagery eye movements and the eye movements made while looking at the original image has so far been unclear. In this work we first quantify the similarity of eye movements between recalling an image and encoding the same image, followed by an investigation of whether comparing such pairs of eye movements can be used for computational image retrieval. Our results show that computational image retrieval based on eye movements during spontaneous imagery is feasible. Furthermore, we show that such a retrieval approach can be generalized to unseen images.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tianyu Wang; Qingting Tang; Xin Wu; Xu Chen

Attachment anxiety moderates the effect of oxytocin on negative emotion recognition: Evidence from eye-movement data Journal Article

Pharmacology Biochemistry and Behavior, 198 , pp. 1–8, 2020.

@article{Wang2020h,
title = {Attachment anxiety moderates the effect of oxytocin on negative emotion recognition: Evidence from eye-movement data},
author = {Tianyu Wang and Qingting Tang and Xin Wu and Xu Chen},
doi = {10.1016/j.pbb.2020.173015},
year = {2020},
date = {2020-01-01},
journal = {Pharmacology Biochemistry and Behavior},
volume = {198},
pages = {1--8},
publisher = {Elsevier},
abstract = {Valence-specific effects of oxytocin have been revealed in a selection of preceding studies, while others report that oxytocin could improve facial recognition regardless of emotion valence. The reported effect was mediated by increased eye gaze during face processing, and attachment style proved to moderate the effect of oxytocin administration on social behavior and cognition. In this study, we used eye tracking to test whether attachment style moderates the effect of oxytocin on negative emotion recognition, which is crucial for social cognition. We employed a placebo-controlled, double-blind, within-participants design. The participants were 73 healthy individuals (41 men) who received a single dose of intranasal oxytocin (24 IU) on one occasion and a placebo dose on another occasion. Visual attention to the eye region was assessed on both occasions, through the completion of an emotion recognition task. Our results showed that oxytocin increased participants' eye gaze towards facial expressions. Among participants who received oxytocin, as opposed to a placebo, only individuals with high attachment anxiety displayed more eye gaze and less mouth gaze towards facial expressions, regardless of emotion valence. Our findings confirmed that oxytocin increases gaze to the eye region, thus improving facial recognition regardless of emotion valence; this relationship was moderated by attachment anxiety. Further, our results highlighted the importance of considering individual differences when evaluating the effects of oxytocin on emotion recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tianlu Wang; Ronald Peeters; Dante Mantini; Céline R Gillebert

Modulating the interhemispheric activity balance in the intraparietal sulcus using real-time fMRI neurofeedback: Development and proof-of-concept Journal Article

NeuroImage: Clinical, 28, pp. 1–13, 2020.

@article{Wang2020g,
title = {Modulating the interhemispheric activity balance in the intraparietal sulcus using real-time fMRI neurofeedback: Development and proof-of-concept},
author = {Tianlu Wang and Ronald Peeters and Dante Mantini and Céline R Gillebert},
doi = {10.1016/j.nicl.2020.102513},
year = {2020},
date = {2020-01-01},
journal = {NeuroImage: Clinical},
volume = {28},
pages = {1--13},
abstract = {The intraparietal sulcus (IPS) plays a key role in the distribution of attention across the visual field. In stroke patients, an imbalance between left and right IPS activity has been related to a spatial bias in visual attention characteristic of hemispatial neglect. In this study, we describe the development and implementation of a real-time functional magnetic resonance imaging neurofeedback protocol to noninvasively and volitionally control the interhemispheric IPS activity balance in neurologically healthy participants. Six participants performed three neurofeedback training sessions across three weeks. Half of them trained to voluntarily increase brain activity in left relative to right IPS, while the other half trained to regulate the IPS activity balance in the opposite direction. Before and after the training, we estimated the distribution of attention across the visual field using a whole and partial report task. Over the course of the training, two of the three participants in the left-IPS group increased the activity in the left relative to the right IPS, while the participants in the right-IPS group were not able to regulate the interhemispheric IPS activity balance. We found no evidence for a decrease in resting-state functional connectivity between left and right IPS, and the spatial distribution of attention did not change over the course of the experiment. This study indicates the possibility to voluntarily modulate the interhemispheric IPS activity balance. Further research is warranted to examine the effectiveness of this technique in the rehabilitation of post-stroke hemispatial neglect.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Shanshan Wang; Dong Yang

The wealth state awareness effect on attention allocation in people from impoverished and affluent groups Journal Article

Frontiers in Psychology, 11, pp. 1–9, 2020.

@article{Wang2020f,
title = {The wealth state awareness effect on attention allocation in people from impoverished and affluent groups},
author = {Shanshan Wang and Dong Yang},
doi = {10.3389/fpsyg.2020.566375},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {11},
pages = {1--9},
abstract = {Previous studies have shown that poverty influences cognitive abilities and that those who have a negative living environment exhibit worse cognitive performance. In addition, eye measures vary following the manipulation of cognitive processing. We examined the distinctive changes in impoverished and affluent persons during tasks that require a high level of concentration using eye-tracking measures. Based on the poverty effect in impoverished people, this study explored how wealth state awareness (WSA) influences them. It was found that the pupillary state indexes of the impoverished participants significantly changed when their WSA was regarding poverty. The results suggest that awareness of poverty may cause impoverished individuals to engage in tasks with more attention allocation and more concentration in the more difficult tasks but that a WSA regarding wealth does not have such effect on them. WSA has no significant effects on their more affluent peers. The findings of this study can contribute to research on WSA effects on impoverished individuals from the perspective of eye measures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Quan Wang; Joseph Chang; Katarzyna Chawarska

Atypical value-driven selective attention in young children with autism spectrum disorder Journal Article

JAMA Network Open, 3 (5), pp. e204928, 2020.

@article{Wang2020e,
title = {Atypical value-driven selective attention in young children with autism spectrum disorder},
author = {Quan Wang and Joseph Chang and Katarzyna Chawarska},
doi = {10.1001/jamanetworkopen.2020.4928},
year = {2020},
date = {2020-01-01},
journal = {JAMA Network Open},
volume = {3},
number = {5},
pages = {e204928},
abstract = {Importance: Enhanced selective attention toward nonsocial objects and impaired attention to social stimuli constitute key clinical features of autism spectrum disorder (ASD). Yet, the mechanisms associated with atypical selective attention in ASD are poorly understood, which limits the development of more effective interventions. In typically developing individuals, selective attention to social and nonsocial stimuli is associated with the informational value of the stimuli, which is typically learned over the course of repeated interactions with the stimuli. Objective: To examine value learning (VL) of social and nonsocial stimuli and its association with selective attention in preschoolers with and without ASD. Design, Setting, and Participants: This case-control study compared children with ASD vs children with developmental delay (DD) and children with typical development (TD) recruited between March 3, 2017, and June 13, 2018, at a university-based research laboratory. Participants were preschoolers with ASD, DD, or TD. Main Outcomes and Measures: Procedure consisted of an eye-tracking gaze-contingent VL task involving social (faces) and nonsocial (fractals) stimuli and consisting of baseline, training, and choice test phases. Outcome measures were preferential attention to stimuli reinforced (high value) vs not reinforced (low value) during training. The hypotheses were stated before data collection. Results: Included were 115 preschoolers with ASD (n = 48; mean [SD] age, 38.30 [15.55] months; 37 [77%] boys), DD (n = 31; mean [SD] age, 45.73 [19.49] months; 19 [61%] boys), or TD (n = 36; mean [SD] age, 36.53 [12.39] months; 22 [61%] boys). The groups did not differ in sex distribution; participants with ASD or TD had similar chronological age; and participants with ASD or DD had similar verbal IQ and nonverbal IQ. 
After training, the ASD group showed preference for the high-value nonsocial stimuli (mean proportion, 0.61 [95% CI, 0.56-0.65]; P < .001) but not for the high-value social stimuli (mean proportion, 0.51 [95% CI, 0.46-0.56]; P = .58). In contrast, the DD and TD groups demonstrated preference for the high-value social stimuli (DD mean proportion, 0.59 [95% CI, 0.54-0.64]; P = .001 and TD mean proportion, 0.57 [95% CI, 0.53-0.61]; P = .002) but not for the high-value nonsocial stimuli (DD mean proportion, 0.52 [95% CI, 0.44-0.59]; P = .64 and TD mean proportion, 0.50 [95% CI, 0.44-0.57]; P = .91). Controlling for age and nonverbal IQ, autism severity was positively correlated with enhanced learning in the nonsocial domain (r = 0.22; P = .03) and with poorer learning in the social domain (r = -0.26; P = .01). Conclusions and Relevance: Increased attention to objects in preschoolers with ASD may be associated with enhanced VL in the nonsocial domain. When paired with poor VL in the social domain, enhanced value-driven attention to objects may play a formative role in the emergence of autism symptoms by altering attentional priorities and thus learning opportunities in affected children.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ping Wang; Pan Ya; Deshun Li; Shuangjun Lv; Dongsheng Yang

Nystagmus with pendular low amplitude, high frequency components (PLAHF) in association with retinal disease Journal Article

Strabismus, 28 (1), pp. 3–6, 2020.

@article{Wang2020d,
title = {Nystagmus with pendular low amplitude, high frequency components (PLAHF) in association with retinal disease},
author = {Ping Wang and Pan Ya and Deshun Li and Shuangjun Lv and Dongsheng Yang},
doi = {10.1080/09273972.2019.1707237},
year = {2020},
date = {2020-01-01},
journal = {Strabismus},
volume = {28},
number = {1},
pages = {3--6},
publisher = {Taylor & Francis},
abstract = {Purposes: To establish a relation between pendular low amplitude high frequency (PLAHF) components and congenital retinal disorders. Methods: Patients who showed PLAHF components in their eye-movement recording between January 2016 and January 2019 were included. Best corrected visual acuity (BCVA), refraction, strabismus assessment, fundus photograph, spectral domain-optical coherence tomography (SD-OCT), full-field electroretinography (f-ERG), clinical ophthalmological examination, and gene tests were used to determine their clinical conditions, especially their retina conditions in all patients. Results: Among 136 patients there were 76 males and 60 females with mean age of 11.4.5 ± 4.5 years. Pure PLAHF waveforms were found in 38 patients (28%), the amplitude of the PLAHF was 2°±1.6° and frequency was 5–10 Hz. Superimposed PLAHF waveforms were found in 98 patients (72%). BCVA was worse than LogMAR 1.0 in 94 patients (69%), between LogMAR 0.5–1.0 (20/63-20/200) in 30 cases (22%); higher than LogMAR 0.5 (20/63) in 12 cases (9%). Fifty-eight patients were diagnosed with exotropia and six patients with esotropia. Abnormal fundus was found in 71 cases (52%), fovea hypoplasia was identified with OCT in 95 cases (70%) and retinal thinning in 92 cases (68%). Abnormal on-off VEP were found in 116 cases (85%). The f-ERG responses were reduced in all patients. In 46 patients, gene mutations were found to be related to retinal disease, including 3 congenital stationary night blindness (CSNB), 14 achromatopsia (ACHM), 5 Aland Island eye disease (AIED), 7 Alstrom syndrome (AS), 11 Leber congenital amaurosis (LCA), 6 cone-rod dystrophy (CRD). Conclusions: Patients presenting with PLAHF usually had retinal disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jingxin Wang; Fang Xie; Liyuan He; Katie L Meadmore; Kevin B Paterson; Valerie Benson

Eye movements reveal a similar positivity effect in Chinese and UK older adults Journal Article

Quarterly Journal of Experimental Psychology, 73 (11), pp. 1921–1929, 2020.

@article{Wang2020cb,
title = {Eye movements reveal a similar positivity effect in Chinese and UK older adults},
author = {Jingxin Wang and Fang Xie and Liyuan He and Katie L Meadmore and Kevin B Paterson and Valerie Benson},
doi = {10.1177/1747021820935861},
year = {2020},
date = {2020-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {73},
number = {11},
pages = {1921--1929},
abstract = {The “positivity effect” (PE) reflects an age-related increase in the preference for positive over negative information in attention and memory. The present experiment investigated whether Chinese and UK participants produce a similar PE. In one experiment, we presented pleasant, unpleasant, and neutral pictures simultaneously and participants decided which picture they liked or disliked on a third of trials, respectively. We recorded participants' eye movements during this task and compared time looking at, and memory for, pictures. The results suggest that older but not younger adults from both China and UK participant groups showed a preference to focus on and remember pleasant pictures, providing evidence of a PE in both cultures. Bayes Factor analysis supported these observations. These findings are consistent with the view that older people preferentially focus on positive emotional information, and that this effect is observed cross-culturally.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jiahui Wang; Pavlo Antonenko; Kara Dawson

Does visual attention to the instructor in online video affect learning and learner perceptions? An eye-tracking analysis Journal Article

Computers and Education, 146, pp. 1–16, 2020.

@article{Wang2020b,
title = {Does visual attention to the instructor in online video affect learning and learner perceptions? An eye-tracking analysis},
author = {Jiahui Wang and Pavlo Antonenko and Kara Dawson},
doi = {10.1016/j.compedu.2019.103779},
year = {2020},
date = {2020-01-01},
journal = {Computers and Education},
volume = {146},
pages = {1--16},
publisher = {Elsevier Ltd},
abstract = {An increasing number of instructional videos online integrate a real instructor on the video screen. So far, the empirical evidence from previous studies has been limited and conflicting, and none of the studies have explored how learners' allocation of visual attention to the on-screen instructor influences learning and learner perceptions. Therefore, this study aimed to disentangle a) how instructor presence in online videos affects learning, learner perceptions (i.e., cognitive load, judgment of learning, satisfaction, situational interest), and visual attention distribution and b) to what extent visual attention patterns in instructor-present videos predict learning and learner perceptions. Sixty college students each watched two videos on Statistics, one on an easy topic and the other one on a difficult topic, with each in one of the two video formats: instructor-present or instructor-absent. Their eye movements were simultaneously registered using a desktop-mounted eye tracker. Afterwards, participants self-reported their cognitive load, judgment of learning, satisfaction, and situational interest for both videos, and feelings toward seeing the instructor for the instructor-present videos. Learning from the two videos was measured using retention and transfer questions. Findings indicated instructor presence a) improved transfer performance for the difficult topic, b) reduced cognitive load for the difficult topic, c) increased judgment of learning for the difficult topic, and d) enhanced satisfaction and situational interest for both topics. Most participants expressed a positive feeling toward the instructor. Results also showed the instructor attracted a considerable amount of overt visual attention in both videos, and the amount of attention allocated to the instructor positively predicted participants' satisfaction level for both topics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Chin An Wang; Jeff Huang; Donald C Brien; Douglas P Munoz

Saliency and priority modulation in a pop-out paradigm: Pupil size and microsaccades Journal Article

Biological Psychology, 153, pp. 1–9, 2020.

@article{Wang2020ab,
title = {Saliency and priority modulation in a pop-out paradigm: Pupil size and microsaccades},
author = {Chin An Wang and Jeff Huang and Donald C Brien and Douglas P Munoz},
doi = {10.1016/j.biopsycho.2020.107901},
year = {2020},
date = {2020-01-01},
journal = {Biological Psychology},
volume = {153},
pages = {1--9},
abstract = {A salient stimulus can trigger a coordinated orienting response consisting of a saccade, pupil, and microsaccadic responses. Saliency models predict that the degree of visual conspicuity of all visual stimuli guides visual orienting. By presenting a multiple-item array that included an oddball colored item (pop-out), randomly mixed colored items (mixed-color), or single-color items (single-color), we examined the effects of saliency and priority (saliency + relevancy) on pupil size and microsaccade responses. Larger pupil responses were produced in the pop-out compared to the mixed-color or single-color conditions after stimulus presentation. However, the saliency modulation on microsaccades was not significant. Furthermore, although goal-relevancy information did not modulate pupil responses and microsaccade rate, microsaccade direction was biased toward the pop-out item when it was the subsequent saccadic target. Together, our results demonstrate saliency modulation on pupil size and priority effects on microsaccade direction during visual pop-out.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Aiping Wang; Ming Yan; Bei Wang; Gaoding Jia; Albrecht W Inhoff

The perceptual span in Tibetan reading Journal Article

Psychological Research, pp. 1–10, 2020.

@article{Wang2020c,
title = {The perceptual span in Tibetan reading},
author = {Aiping Wang and Ming Yan and Bei Wang and Gaoding Jia and Albrecht W Inhoff},
doi = {10.1007/s00426-020-01313-4},
year = {2020},
date = {2020-01-01},
journal = {Psychological Research},
pages = {1--10},
publisher = {Springer Berlin Heidelberg},
abstract = {Tibetan script differs from other alphabetic writing systems in that word forms can be composed of horizontally and vertically arrayed characters. To examine information extraction during the reading of this script, eye movements of native readers were recorded and used to control the size of a window of legible text that moved in synchrony with the eyes. Letters outside the window were masked, and no viewing constraints were imposed in a control condition. Comparisons of window conditions with the control condition showed that reading speed and oculomotor activity matched the control condition, when windows revealed three letters to the left and seven to eight letters to the right of a fixated letter location. Cross-script comparisons indicate that this perceptual span is smaller than for English and larger than for Chinese script. We suggest that the information density of a writing system influences the perceptual span during reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00426-020-01313-4

Quan Wan; Ying Cai; Jason Samaha; Bradley R Postle

Tracking stimulus representation across a 2-back visual working memory task Journal Article

Royal Society Open Science, 7, pp. 1–18, 2020.

@article{Wan2020,
title = {Tracking stimulus representation across a 2-back visual working memory task},
author = {Quan Wan and Ying Cai and Jason Samaha and Bradley R Postle},
doi = {10.1098/rsos.190228},
year = {2020},
date = {2020-01-01},
journal = {Royal Society Open Science},
volume = {7},
pages = {1--18},
abstract = {How does the neural representation of visual working memory content vary with behavioural priority? To address this, we recorded electroencephalography (EEG) while subjects performed a continuous-performance 2-back working memory task with oriented-grating stimuli. We tracked the transition of the neural representation of an item (n) from its initial encoding, to the status of 'unprioritized memory item' (UMI), and back to 'prioritized memory item', with multivariate inverted encoding modelling. Results showed that the representational format was remapped from its initially encoded format into a distinctive 'opposite' representational format when it became a UMI and then mapped back into its initial format when subsequently prioritized in anticipation of its comparison with item n + 2. Thus, contrary to the default assumption that the activity representing an item in working memory might simply get weaker when it is deprioritized, it may be that a process of priority-based remapping helps to protect remembered information when it is not in the focus of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
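The page header notes that this whole library can be downloaded as a single .bib file. For readers who want to work with records like the one above programmatically, here is a minimal, pure-Python sketch of a parser for the flat, one-field-per-line layout used on this page. This is a naive regex approach under stated assumptions (no nested braces, no @string macros); real .bib files can contain both, so a dedicated library such as bibtexparser is the robust choice. The sample record is an abridged Wan2020 entry from this page.

```python
import re

# Abridged copy of the Wan2020 record above: one "field = {value}," per line.
ENTRY = """@article{Wan2020,
title = {Tracking stimulus representation across a 2-back visual working memory task},
author = {Quan Wan and Ying Cai and Jason Samaha and Bradley R Postle},
doi = {10.1098/rsos.190228},
year = {2020},
journal = {Royal Society Open Science},
volume = {7},
pages = {1--18},
}"""

def parse_flat_bibtex(entry: str) -> dict:
    """Parse a flat (non-nested) BibTeX record into a dict of fields."""
    # Citation key sits between "@article{" and the first comma.
    fields = {"key": re.match(r"@\w+\{([^,]+),", entry).group(1)}
    # Each field is "name = {value}"; values here contain no inner braces.
    for name, value in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry):
        fields[name] = value
    return fields

record = parse_flat_bibtex(ENTRY)
print(record["author"].split(" and "))  # BibTeX separates authors with " and "
```

Splitting the `author` field on `" and "` recovers the individual author names exactly as the citation headers on this page list them.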

  • doi:10.1098/rsos.190228

Kerri Walter; Yesenia Taveras-Cruz; Peter Bex

Transfer and retention of oculomotor alignment rehabilitation training Journal Article

Journal of Vision, 20 (8), pp. 1–16, 2020.

@article{Walter2020,
title = {Transfer and retention of oculomotor alignment rehabilitation training},
author = {Kerri Walter and Yesenia Taveras-Cruz and Peter Bex},
doi = {10.1167/JOV.20.8.9},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {8},
pages = {1--16},
abstract = {Ocular alignment defects such as strabismus affect around 5% of people and are associated with binocular vision impairments. Current nonsurgical treatments are controversial and have high levels of recidivism. In this study, we developed a rehabilitation method for ocular alignment training and examined the rate of learning, transfer to untrained alignments, and retention over time. Ocular alignment was controlled with a real-time dichoptic feedback paradigm where a static fixation target and white gaze-contingent ring were presented to the dominant eye and a black gaze-contingent ring with no fixation target was presented to the nondominant eye. Observers were required to move their eyes to center the rings on the target, with real-time feedback provided by the size of the rings. Offsetting the ring of the nondominant temporal or nasal visual field required convergent or divergent ocular deviation, respectively, to center the ring on the fixation target. Learning was quantified as the time taken to achieve target deviation of 2° (easy, E) or 4° (hard, H) for convergence (CE, CH) or divergence (DE, DH) over 40 trials. Thirty-two normally sighted observers completed two training sequences separated by one week. Subjects were randomly assigned to a training sequence: CE-CH-DE, CH-CE-DE, DE-DH-CE, or DH-DE-CE. The results showed that training was retained over the course of approximately one week across all conditions. Training on an easy deviation angle transferred to untrained hard angles within convergence or divergence but not between these directions. We conclude that oculomotor alignment can be rapidly trained, retained, and transferred with a feedback-based dichoptic paradigm. Feedback-based oculomotor training may therefore provide a noninvasive method for the rehabilitation of ocular alignment defects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/JOV.20.8.9

Calen R Walshe; Wilson S Geisler

Detection of occluding targets in natural backgrounds Journal Article

Journal of Vision, 20 (13:14), pp. 1–20, 2020.

@article{Walshe2020,
title = {Detection of occluding targets in natural backgrounds},
author = {Calen R Walshe and Wilson S Geisler},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {13:14},
pages = {1--20},
abstract = {Detection of target objects in the surrounding environment is a common visual task. There is a vast psychophysical and modeling literature concerning the detection of targets in artificial and natural backgrounds. Most studies involve detection of additive targets or of some form of image distortion. Although much has been learned from these studies, the targets that most often occur under natural conditions are neither additive nor distorting; rather, they are opaque targets that occlude the backgrounds behind them. Here, we describe our efforts to measure and model detection of occluding targets in natural backgrounds. To systematically vary the properties of the backgrounds, we used the constrained sampling approach of Sebastian, Abrams, and Geisler (2017). Specifically, millions of calibrated gray-scale natural-image patches were sorted into a 3D histogram along the dimensions of luminance, contrast, and phase-invariant similarity to the target. Eccentricity psychometric functions (accuracy as a function of retinal eccentricity) were measured for four different occluding targets and 15 different combinations of background luminance, contrast, and similarity, with a different randomly sampled background on each trial. The complex pattern of results was consistent across the three subjects, and was largely explained by a principled model observer (with only a single efficiency parameter) that combines three image cues (pattern, silhouette, and edge) and four well-known properties of the human visual system (optical blur, blurring and downsampling by the ganglion cells, divisive normalization, intrinsic position uncertainty). The model also explains the thresholds for additive foveal targets in natural backgrounds reported in Sebastian et al. (2017).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Stephen C Walenchok; Stephen D Goldinger; Michael C Hout

The confirmation and prevalence biases in visual search reflect separate underlying processes Journal Article

Journal of Experimental Psychology: Human Perception and Performance, 46 (3), pp. 274–291, 2020.

@article{Walenchok2020,
title = {The confirmation and prevalence biases in visual search reflect separate underlying processes},
author = {Stephen C Walenchok and Stephen D Goldinger and Michael C Hout},
doi = {10.1037/xhp0000714},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {46},
number = {3},
pages = {274--291},
abstract = {Research by Rajsic, Wilson, and Pratt (2015, 2017) suggests that people are biased to use a target-confirming strategy when performing simple visual search. In 3 experiments, we sought to determine whether another stubborn phenomenon in visual search, the low-prevalence effect (Wolfe, Horowitz, & Kenner, 2005), would modulate this confirmatory bias. We varied the reliability of the initial cue: For some people, targets usually occurred in the cued color (high prevalence). For others, targets rarely matched the cues (low prevalence). High cue-target prevalence exacerbated the confirmation bias, indexed via search response times (RTs) and eye-tracking measures. Surprisingly, given low cue-target prevalence, people remained biased to examine cue-colored letters, even though cue-colored targets were exceedingly rare. At the same time, people were more fluent at detecting the more common, cue-mismatching targets. The findings suggest that attention is guided to "confirm" the more available cued target template, but prevalence learning over time determines how fluently objects are perceptually appreciated.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/xhp0000714

Daniel Voyer; Jean Saint-Aubin; Katelyn Altman; Randi A Doyle

Sex differences in tests of mental rotation: Direct manipulation of strategies with eye-tracking Journal Article

Journal of Experimental Psychology: Human Perception and Performance, 46 (9), pp. 871–889, 2020.

@article{Voyer2020,
title = {Sex differences in tests of mental rotation: Direct manipulation of strategies with eye-tracking},
author = {Daniel Voyer and Jean Saint-Aubin and Katelyn Altman and Randi A Doyle},
doi = {10.1037/xhp0000752},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {46},
number = {9},
pages = {871--889},
abstract = {We conducted what is likely the first large-scale comprehensive eye tracking investigation of the cognitive processes involved in the psychometric mental rotation task with three experiments comparing the performance of men and women on tests of mental rotation with blocks and human figures as stimuli. In all 3 experiments, men achieved higher mean accuracy than women on both tests and all participants showed improved performance on the human figures compared with the blocks. Experiment 1 used a moving window paradigm to elicit a piecemeal processing strategy, whereas Experiment 2 utilized that approach to encourage a holistic processing strategy. In these 2 experiments the pattern of eye fixations suggested that differences in processing between blocks and human figures can be accounted for by the greater difficulty of rotating block compared with human figures. Results also produced little support for the hypothesis that men favor a holistic strategy whereas women favor a piecemeal approach. In addition, these experiments did not support the notion that using human figures as stimuli promotes a holistic strategy whereas block figures invoke a piecemeal strategy. As a follow up, in Experiment 3 we used a free viewing procedure and examined 4 possible explanations of sex differences in mental rotation predicting different patterns of eye tracking (cognitive processing style, leaping, ocular efficiency) or offline processing (working memory). Results provided partial support for variations of the cognitive processing style hypotheses. The implications for common explanations of sex differences in mental rotation are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/xhp0000752

Christoph J Völter; Sabrina Karl; Ludwig Huber

Dogs accurately track a moving object on a screen and anticipate its destination Journal Article

Scientific Reports, 10, pp. 1–10, 2020.

@article{Voelter2020,
title = {Dogs accurately track a moving object on a screen and anticipate its destination},
author = {Christoph J Völter and Sabrina Karl and Ludwig Huber},
doi = {10.1038/s41598-020-72506-5},
year = {2020},
date = {2020-01-01},
journal = {Scientific Reports},
volume = {10},
pages = {1--10},
publisher = {Nature Publishing Group UK},
abstract = {The prediction of upcoming events is of importance not only to humans and non-human primates but also to other animals that live in complex environments with lurking threats or moving prey. In this study, we examined motion tracking and anticipatory looking in dogs in two eye-tracking experiments. In Experiment 1, we presented pet dogs (N = 14) with a video depicting how two players threw a Frisbee back and forth multiple times. The horizontal movement of the Frisbee explained a substantial amount of variance of the dogs' horizontal eye movements. With increasing duration of the video, the dogs looked at the catcher before the Frisbee arrived. In Experiment 2, we showed the dogs (N = 12) the same video recording. This time, however, we froze and rewound parts of the video to examine how the dogs would react to surprising events (i.e., the Frisbee hovering in midair and reversing its direction). The Frisbee again captured the dogs' attention, particularly when the video was frozen and rewound for the first time. Additionally, the dogs looked faster at the catcher when the video moved forward compared to when it was rewound. We conclude that motion tracking and anticipatory looking paradigms provide promising tools for future cognitive research with canids.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-020-72506-5

Benjamin Voloh; Mariann Oemisch; Thilo Womelsdorf

Phase of firing coding of learning variables across the fronto-striatal network during feature-based learning Journal Article

Nature Communications, 11, pp. 1–16, 2020.

@article{Voloh2020,
title = {Phase of firing coding of learning variables across the fronto-striatal network during feature-based learning},
author = {Benjamin Voloh and Mariann Oemisch and Thilo Womelsdorf},
doi = {10.1038/s41467-020-18435-3},
year = {2020},
date = {2020-01-01},
journal = {Nature Communications},
volume = {11},
pages = {1--16},
publisher = {Springer US},
abstract = {The prefrontal cortex and striatum form a recurrent network whose spiking activity encodes multiple types of learning-relevant information. This spike-encoded information is evident in average firing rates, but finer temporal coding might allow multiplexing and enhanced readout across the connected network. We tested this hypothesis in the fronto-striatal network of nonhuman primates during reversal learning of feature values. We found that populations of neurons encoding choice outcomes, outcome prediction errors, and outcome history in their firing rates also carry significant information in their phase-of-firing at a 10–25 Hz band-limited beta frequency at which they synchronize across lateral prefrontal cortex, anterior cingulate cortex and anterior striatum when outcomes were processed. The phase-of-firing code exceeds information that can be obtained from firing rates alone and is evident for inter-areal connections between anterior cingulate cortex, lateral prefrontal cortex and anterior striatum. For the majority of connections, the phase-of-firing information gain is maximal at phases of the beta cycle that were offset from the preferred spiking phase of neurons. Taken together, these findings document enhanced information of three important learning variables at specific phases of firing in the beta cycle at an inter-areally shared beta oscillation frequency during goal-directed behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41467-020-18435-3

Margreet Vogelzang; Francesca Foppolo; Maria Teresa Guasti; Hedderik van Rijn; Petra Hendriks

Reasoning about alternative forms is costly: The processing of null and overt pronouns in Italian using pupillary responses Journal Article

Discourse Processes, 57 (2), pp. 158–183, 2020.

@article{Vogelzang2020,
title = {Reasoning about alternative forms is costly: The processing of null and overt pronouns in Italian using pupillary responses},
author = {Margreet Vogelzang and Francesca Foppolo and Maria Teresa Guasti and Hedderik van Rijn and Petra Hendriks},
doi = {10.1080/0163853X.2019.1591127},
year = {2020},
date = {2020-01-01},
journal = {Discourse Processes},
volume = {57},
number = {2},
pages = {158--183},
publisher = {Routledge},
abstract = {Different words generally have different meanings. However, some words seemingly share similar meanings. An example are null and overt pronouns in Italian, which both refer to an individual in the discourse. Is the interpretation and processing of a form affected by the existence of another form with a similar meaning? With a pupillary response study, we show that null and overt pronouns are processed differently. Specifically, null pronouns are found to be less costly to process than overt pronouns. We argue that this difference is caused by an additional reasoning step that is needed to process marked overt pronouns but not unmarked null pronouns. A comparison with data from Dutch, a language with overt but no null pronouns, demonstrates that Italian pronouns are processed differently from Dutch pronouns. These findings suggest that the processing of a marked form is influenced by alternative forms within the same language, making its processing costly.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/0163853X.2019.1591127

Jorrig Vogels; David M Howcroft; Elli Tourtouri; Vera Demberg

How speakers adapt object descriptions to listeners under load Journal Article

Language, Cognition and Neuroscience, 35 (1), pp. 78–92, 2020.

@article{Vogels2020,
title = {How speakers adapt object descriptions to listeners under load},
author = {Jorrig Vogels and David M Howcroft and Elli Tourtouri and Vera Demberg},
doi = {10.1080/23273798.2019.1648839},
year = {2020},
date = {2020-01-01},
journal = {Language, Cognition and Neuroscience},
volume = {35},
number = {1},
pages = {78--92},
publisher = {Taylor & Francis},
abstract = {A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener's needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener's reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener's cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener's needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/23273798.2019.1648839

Lena Vogelgesang; Christoph Reichert; Hermann Hinrichs; Hans Jochen Heinze; Stefan Dürschmid

Early shift of attention is not regulated by mind wandering in visual search Journal Article

Frontiers in Neuroscience, 14, pp. 1–12, 2020.

@article{Vogelgesang2020,
title = {Early shift of attention is not regulated by mind wandering in visual search},
author = {Lena Vogelgesang and Christoph Reichert and Hermann Hinrichs and Hans Jochen Heinze and Stefan Dürschmid},
doi = {10.3389/fnins.2020.552637},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Neuroscience},
volume = {14},
pages = {1--12},
abstract = {Unique to humans is the ability to report subjective awareness of a broad repertoire of external and internal events. Even when asked to focus on external information, the human's mind repeatedly wanders to task-unrelated thoughts, which limits reading comprehension or the ability to withhold automated manual responses. This led to the attentional decoupling account of mind wandering (MW). However, manual responses are not an ideal parameter to study attentional decoupling, given that during MW, the online adjustment of manual motor responses is impaired. Hence, whether early attentional mechanisms are indeed downregulated during MW or only motor responses being slowed is not clear. In contrast to manual motor responses, eye movements are considered a sensitive proxy of attentional shifts. Using a simple target detection task, we asked subjects to indicate whether a target was presented within a visual search display by pressing a button while we recorded eye movements and unpredictably asked the subjects to rate their actual level of MW. Generally, manual reaction times increased with MW, both in target absent and present trials. But importantly, even in trials with MW, subjects detected earlier a presented than an absent target. The decoupling account would predict more fixations of the target before pressing the button during MW. However, our results did not corroborate this assumption. Most importantly, subject's time to direct gaze at the target was equally fast in trials with and without MW. Our results corroborate our hypothesis that during MW early, bottom–up driven attentional processes are not decoupled but selectively manual motor responses are slowed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kasper Vinken; Xavier Boix; Gabriel Kreiman

Incorporating neuronal fatigue in deep neural networks captures dynamics of adaptation in neurophysiology and perception Journal Article

Science Advances, 6, pp. 1–13, 2020.

@article{Vinken2020,
title = {Incorporating neuronal fatigue in deep neural networks captures dynamics of adaptation in neurophysiology and perception},
author = {Kasper Vinken and Xavier Boix and Gabriel Kreiman},
doi = {10.1101/642777},
year = {2020},
date = {2020-01-01},
journal = {Science Advances},
volume = {6},
pages = {1--13},
abstract = {Adaptation is a fundamental property of the visual system that molds how an object is processed and perceived in its temporal context. It is unknown whether adaptation requires a circuit-level implementation or whether it emerges from neuronally intrinsic biophysical processes. Here we combined neurophysiological recordings, psychophysics, and deep convolutional neural network computational models to test the hypothesis that a neuronally intrinsic, biophysically plausible, fatigue mechanism is sufficient to account for the hallmark properties of adaptation. The proposed model captured neural signatures of adaptation including repetition suppression and novelty detection. At the behavioral level, the proposed model was consistent with perceptual aftereffects. Furthermore, adapting to prevailing but irrelevant inputs improves object recognition, and the adaptation computations can be learned in a network trained to maximize recognition performance. These results show that an intrinsic fatigue mechanism can account for key neurophysiological and perceptual properties and enhance visual processing by incorporating temporal context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Emma Vilarem; Jorge L Armony; Julie Grèzes

Action opportunities modulate attention allocation under social threat Journal Article

Emotion, 20 (5), pp. 890–903, 2020.

@article{Vilarem2020,
title = {Action opportunities modulate attention allocation under social threat},
author = {Emma Vilarem and Jorge L Armony and Julie Gr{è}zes},
doi = {10.1037/emo0000598},
year = {2020},
date = {2020-01-01},
journal = {Emotion},
volume = {20},
number = {5},
pages = {890--903},
abstract = {When entering a subway car affording multiple targets for action, how do we decide, very quickly, where to sit, particularly when in the presence of a potential danger? It is unclear, from existing motor and emotion theories, whether our attention would be allocated toward the seat on which we intend to sit or whether it would be oriented toward an individual that signals the presence of potential danger. To address this question, we explored spontaneous action choices and attention allocation in a realistic context, where a threat-related signal (an angry or fearful individual) and the target for action in that situation could compete for attentional priority. Results showed that participants chose the actions that avoided angry individuals and were more confident when approaching those with a fearful expression. In addition, covert and overt measures of attention showed a stronger avoidance effect for angry, compared to fearful, individuals. Crucially, these effects of anger and fear on attention allocation required the presence of action possibilities in the scene. Taken together, our findings show that in a realistic context offering competing action possibilities, threat-related distractors shape both action selection and attention allocation according to their social function.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Pedro G Vieira; Matthew R Krause; Christopher C Pack

tACS entrains neural activity while somatosensory input is blocked Journal Article

PLoS Biology, 18 (10), pp. 1–14, 2020.

@article{Vieira2020,
title = {tACS entrains neural activity while somatosensory input is blocked},
author = {Pedro G Vieira and Matthew R Krause and Christopher C Pack},
doi = {10.1371/journal.pbio.3000834},
year = {2020},
date = {2020-01-01},
journal = {PLoS Biology},
volume = {18},
number = {10},
pages = {1--14},
abstract = {Transcranial alternating current stimulation (tACS) modulates brain activity by passing electrical current through electrodes that are attached to the scalp. Because it is safe and noninvasive, tACS holds great promise as a tool for basic research and clinical treatment. However, little is known about how tACS ultimately influences neural activity. One hypothesis is that tACS affects neural responses directly, by producing electrical fields that interact with the brain's endogenous electrical activity. By controlling the shape and location of these electric fields, one could target brain regions associated with particular behaviors or symptoms. However, an alternative hypothesis is that tACS affects neural activity indirectly, via peripheral sensory afferents. In particular, it has often been hypothesized that tACS acts on sensory fibers in the skin, which in turn provide rhythmic input to central neurons. In this case, there would be little possibility of targeted brain stimulation, as the regions modulated by tACS would depend entirely on the somatosensory pathways originating in the skin around the stimulating electrodes. Here, we directly test these competing hypotheses by recording single-unit activity in the hippocampus and visual cortex of alert monkeys receiving tACS. We find that tACS entrains neuronal activity in both regions, so that cells fire synchronously with the stimulation. Blocking somatosensory input with a topical anesthetic does not significantly alter these neural entrainment effects. These data are therefore consistent with the direct stimulation hypothesis and suggest that peripheral somatosensory stimulation is not required for tACS to entrain neurons.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


