Fast, Accurate, Reliable Eye Tracking



EyeLink Eye-Tracking Publications Library

All EyeLink Publications

As of 2021, all 10,000+ peer-reviewed EyeLink research publications (including some from early 2022) are listed below by year. You can search the publication library using keywords such as visual search, smooth pursuit, or Parkinson's, and you can also search for individual author names. Eye-tracking research grouped by area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!

10162 entries (page 2 of 102)

2021

Victoria Yaneva; Brian E. Clauser; Amy Morales; Miguel Paniagua

Using eye‐tracking data as part of the validity argument for multiple‐choice questions: A demonstration Journal Article

In: Journal of Educational Measurement, pp. 1–23, 2021.


@article{Yaneva2021,
title = {Using eye‐tracking data as part of the validity argument for multiple‐choice questions: A demonstration},
author = {Victoria Yaneva and Brian E. Clauser and Amy Morales and Miguel Paniagua},
doi = {10.1111/jedm.12304},
year = {2021},
date = {2021-01-01},
journal = {Journal of Educational Measurement},
pages = {1--23},
abstract = {Eye-tracking technology can create a record of the location and duration of visual fixations as a test-taker reads test questions. Although the cognitive process the test-taker is using cannot be directly observed, eye-tracking data can support inferences about these unobserved cognitive processes. This type of information has the potential to support improved test design and to contribute to an overall validity argument for the inferences and uses made based on test scores. Although several authors have referred to the potential usefulness of eye-tracking data, there are relatively few published studies that provide examples of that use. In this paper, we report the results of an eye-tracking study designed to evaluate how the presence of the options in multiple-choice questions impacts the way medical students responded to questions designed to evaluate clinical reasoning. Examples of the types of data that can be extracted are presented. We then discuss the implications of these results for evaluating the validity of inferences made based on the type of items used in this study.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/jedm.12304

Guoli Yan; Zebo Lan; Zhu Meng; Yingchao Wang; Valerie Benson

Phonological coding during sentence reading in Chinese deaf readers: An eye-tracking study Journal Article

In: Scientific Studies of Reading, vol. 25, no. 4, pp. 287–303, 2021.


@article{Yan2021a,
title = {Phonological coding during sentence reading in Chinese deaf readers: An eye-tracking study},
author = {Guoli Yan and Zebo Lan and Zhu Meng and Yingchao Wang and Valerie Benson},
doi = {10.1080/10888438.2020.1778000},
year = {2021},
date = {2021-01-01},
journal = {Scientific Studies of Reading},
volume = {25},
number = {4},
pages = {287--303},
publisher = {Routledge},
abstract = {Phonological coding plays an important role in reading for hearing students. Experimental findings regarding phonological coding in deaf readers are controversial, and whether deaf readers are able to use phonological coding remains unclear. In the current study we examined whether Chinese deaf students could use phonological coding during sentence reading. Deaf middle school students, chronological age-matched hearing students, and reading ability-matched hearing students had their eye movements recorded as they read sentences containing correctly spelled characters, homophones, or unrelated characters. Both hearing groups had shorter total reading times on homophones than they did on unrelated characters. In contrast, no significant difference was found between homophones and unrelated characters for the deaf students. However, when the deaf group was divided into more-skilled and less-skilled readers according to their scores on reading fluency, the homophone advantage noted for the hearing controls was also observed for the more-skilled deaf students.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/10888438.2020.1778000

Chuyao Yan; Tao He; Zhiguo Wang

Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps Journal Article

In: Psychonomic Bulletin & Review, vol. 28, no. 4, pp. 1243–1251, 2021.


@article{Yan2021,
title = {Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps},
author = {Chuyao Yan and Tao He and Zhiguo Wang},
doi = {10.3758/s13423-021-01893-1},
year = {2021},
date = {2021-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {28},
number = {4},
pages = {1243--1251},
abstract = {How does the brain maintain spatial attention despite the retinal displacement of objects by saccades? A possible solution is to use the vector of an upcoming saccade to compensate for the shift of objects on eye-centered (retinotopic) brain maps. In support of this hypothesis, previous studies have revealed attentional effects at the future retinal locus of an attended object, just before the onset of saccades. A critical yet unresolved theoretical issue is whether predictively remapped attentional effects persist long enough on eye-centered brain maps that no external input (goal, expectation, reward, memory, etc.) is needed to maintain spatial attention immediately following saccades. The present study examined this issue with inhibition of return (IOR), an attentional effect that reveals itself in both world-centered and eye-centered coordinates, and predictively remaps before saccades. In the first task, a saccade was introduced to a cueing task (“nonreturn-saccade task”) to show that IOR is coded in world-centered coordinates following saccades. In a second cueing task, two consecutive saccades were executed to trigger remapping and to dissociate the retinal locus relevant to remapping from the cued retinal locus (“return-saccade” task). IOR was observed at the remapped retinal locus 430 ms following the (first) saccade that triggered remapping. A third cueing task (“no-remapping” task) further revealed that the lingering IOR effect left by remapping was not confounded by attention spillover. These results together show that predictive remapping leaves a robust attentional trace on eye-centered brain maps. This retinotopic trace is sufficient to sustain spatial attention for a few hundred milliseconds following saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13423-021-01893-1

Jumpei Yamashita; Hiroki Terashima; Makoto Yoneya; Kazushi Maruya; Hidetaka Koya; Haruo Oishi; Hiroyuki Nakamura; Takatsune Kumada

Pupillary fluctuation amplitude before target presentation reflects short-term vigilance level in Psychomotor Vigilance Tasks Journal Article

In: PLoS ONE, vol. 16, no. 9, pp. 1–22, 2021.


@article{Yamashita2021,
title = {Pupillary fluctuation amplitude before target presentation reflects short-term vigilance level in Psychomotor Vigilance Tasks},
author = {Jumpei Yamashita and Hiroki Terashima and Makoto Yoneya and Kazushi Maruya and Hidetaka Koya and Haruo Oishi and Hiroyuki Nakamura and Takatsune Kumada},
doi = {10.1371/journal.pone.0256953},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {9},
pages = {1--22},
abstract = {Our daily activities require vigilance. Therefore, it is useful to externally monitor and predict our vigilance level using a straightforward method. The vigilance level is known to be linked to pupillary fluctuations via the Locus Coeruleus-Norepinephrine (LC-NE) system. However, previous methods of estimating long-term vigilance require monitoring pupillary fluctuations at rest over a long period. We developed a method of predicting the short-term vigilance level by monitoring pupillary fluctuation over a shorter period of several seconds. LC activity also fluctuates on a timescale of seconds. We therefore hypothesized that the short-term vigilance level could be estimated from pupillary fluctuations in a short period, and quantified their amplitude as the Micro-Pupillary Unrest Index (M-PUI). We found an intra-individual trial-by-trial positive correlation between Reaction Time (RT), reflecting the short-term vigilance level, and M-PUI in the period immediately before target onset in a Psychomotor Vigilance Task (PVT). This relationship was most evident when the fluctuation was smoothed by a Hanning window of approximately 50 to 100 ms (including data down-sampled to 100 and 50 Hz) and M-PUI was calculated over the one to two seconds before target onset. These results suggest that M-PUI can monitor and predict fluctuating levels of vigilance. M-PUI is also useful for examining pupillary fluctuations over short periods to elucidate the psychophysiological mechanisms of short-term vigilance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0256953
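The M-PUI measure described in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the smoothing parameters follow the abstract (a Hanning window of roughly 50-100 ms applied to the one to two seconds of pupil data before target onset), but the exact amplitude formula — here, the summed absolute successive differences of the smoothed trace, in the spirit of the classic Pupillary Unrest Index — is an assumption, as are the function and variable names.

```python
import numpy as np

def m_pui(pupil, fs=1000, win_ms=75, window_s=1.0):
    """Sketch of an M-PUI-style fluctuation-amplitude measure.

    Smooths a pupil-diameter trace with a Hanning window (~50-100 ms,
    per Yamashita et al., 2021), then sums the absolute successive
    differences over the last `window_s` seconds (the pre-target
    period). The amplitude formula is an assumption, not the paper's
    authoritative definition.
    """
    n = max(3, int(round(win_ms / 1000 * fs)))  # taps in the Hanning window
    kernel = np.hanning(n)
    kernel /= kernel.sum()                      # unit-gain smoothing kernel
    smoothed = np.convolve(pupil, kernel, mode="same")
    tail = smoothed[-int(window_s * fs):]       # segment before target onset
    return np.abs(np.diff(tail)).sum()          # fluctuation amplitude

# Example: a noisy (restless) trace scores higher than a calm one.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 1000)                   # 2 s of data at 1000 Hz
calm = 4 + 0.01 * np.sin(2 * np.pi * 0.3 * t)   # slow, smooth drift
restless = calm + 0.05 * rng.standard_normal(t.size)
print(m_pui(calm) < m_pui(restless))            # True
```

The index is relative, not absolute: it is only meaningful when comparing trials from the same participant at the same sampling rate and smoothing width, which is how the paper uses it (intra-individual, trial-by-trial correlation with RT).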

Hongge Xu; Jing Samantha Pan; Xiaoye Michael Wang; Geoffrey P. Bingham

Information for perceiving blurry events: Optic flow and color are additive Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 1, pp. 389–398, 2021.


@article{Xu2021,
title = {Information for perceiving blurry events: Optic flow and color are additive},
author = {Hongge Xu and Jing Samantha Pan and Xiaoye Michael Wang and Geoffrey P. Bingham},
doi = {10.3758/s13414-020-02135-7},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {1},
pages = {389--398},
abstract = {Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static blurry grayscale displays have been shown to be recognized only when and after presented with optic flow. In this study, we investigate the effects of optic flow and color on identifying blurry events by studying the identification accuracy and eye-movement patterns. Three types of color displays were tested: grayscale, original colors, or rearranged colors (where the RGB values of the original colors were adjusted). In each color condition, participants identified 12 blurry events in five experimental phases. In the first two phases, static blurry images were presented alone or sequentially with a motion mask between consecutive frames, and identification was poor. In Phase 3, where optic flow was added, identification was comparably good. In Phases 4 and 5, motion was removed, but identification remained good. Thus, optic flow improved event identification during and after its presentation. Color also improved performance, where participants were consistently better at identifying color displays than grayscale or rearranged color displays. Importantly, the effects of optic flow and color were additive. Finally, in both motion and postmotion phases, a significant portion of eye fixations fell in strong optic flow areas, suggesting that participants continued to look where flow was available even after it stopped. We infer that optic flow specified depth structure in the blurry image structure and yielded an improvement in identification from static blurry images.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-020-02135-7

Jia Qiong Xie; Detlef H. Rost; Fu Xing Wang; Jin Liang Wang; Rebecca L. Monk

The association between excessive social media use and distraction: An eye movement tracking study Journal Article

In: Information and Management, vol. 58, no. 2, pp. 1–12, 2021.


@article{Xie2021a,
title = {The association between excessive social media use and distraction: An eye movement tracking study},
author = {Jia Qiong Xie and Detlef H. Rost and Fu Xing Wang and Jin Liang Wang and Rebecca L. Monk},
doi = {10.1016/j.im.2020.103415},
year = {2021},
date = {2021-01-01},
journal = {Information and Management},
volume = {58},
number = {2},
pages = {1--12},
publisher = {Elsevier B.V.},
abstract = {Drawing on the scan-and-shift hypothesis and the scattered attention hypothesis, this article explores the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. In study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users performing a modified Stroop task. The results showed that excessive microblog users had more difficulty suppressing interference information than non-microblog users, resulting in poorer performance. Theoretical and practical implications are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.im.2020.103415

Guangming Xie; Wenbo Du; Hongping Yuan; Yushi Jiang

Promoting reviewer-related attribution: Moderately complex presentation of mixed opinions activates the analytic process Journal Article

In: Sustainability, vol. 13, no. 2, pp. 1–28, 2021.


@article{Xie2021,
title = {Promoting reviewer-related attribution: Moderately complex presentation of mixed opinions activates the analytic process},
author = {Guangming Xie and Wenbo Du and Hongping Yuan and Yushi Jiang},
doi = {10.3390/su13020441},
year = {2021},
date = {2021-01-01},
journal = {Sustainability},
volume = {13},
number = {2},
pages = {1--28},
abstract = {Using metacognition and dual process theories, this paper studied the role of the presentation format of mixed opinions in mitigating the negative impact of online word-of-mouth (WOM) dispersion on consumers' purchasing decisions. Two studies were conducted. Employing an eye-tracking approach, study 1 recorded consumers' attention to WOM dispersion. The results show that activation of the analytic system can improve reviewer-related attribution choices. In study 2, three presentation formats for mixed opinions originating from China's leading online platform were compared. The results demonstrated that mixed opinions expressed in a moderately complex form, integrating average ratings and reviewers' impressions of products, were effective in promoting reviewer-related attribution choices. However, overly complicated presentations of WOM dispersion impose an excessive cognitive load on consumers and fail to activate the analytic system that promotes reviewer-related attribution choices. The main contribution of this paper is to supplement research on consumer attribution-related choices, providing new insights into information consistency in consumer research. The managerial and theoretical significance of the paper is discussed in order to better understand the purchasing decisions of consumers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/su13020441

Xue-Zhen Xiao; Gaoding Jia; Aiping Wang

Semantic preview benefit of Tibetan-Chinese bilinguals during Chinese reading Journal Article

In: Language Learning and Development, pp. 1–15, 2021.


@article{Xiao2021a,
title = {Semantic preview benefit of Tibetan-Chinese bilinguals during Chinese reading},
author = {Xue-Zhen Xiao and Gaoding Jia and Aiping Wang},
doi = {10.1080/15475441.2021.2003198},
year = {2021},
date = {2021-01-01},
journal = {Language Learning and Development},
pages = {1--15},
publisher = {Psychology Press},
abstract = {When reading Chinese, skilled native readers regularly gain a preview benefit (PB) when the parafoveal word is orthographically or semantically related to the target word. Evidence shows that non-native, beginning Chinese readers can obtain an orthographic PB during Chinese reading, which indicates parafoveal processing of low-level visual information. However, whether non-native Chinese readers who are more proficient in Chinese can make use of high-level parafoveal information remains unknown. Therefore, this study examined parafoveal processing during Chinese reading among Tibetan-Chinese bilinguals with high Chinese proficiency and compared their PB effects with those of native Chinese readers. Tibetan-Chinese bilinguals demonstrated both orthographic and semantic PB but did not show a phonological PB, and differed from native Chinese readers only in the identical PB, when preview characters were identical to the targets. These findings demonstrate that non-native Chinese readers can extract semantic information from the parafoveal preview during Chinese reading and highlight the modulation of parafoveal processing efficiency by reading proficiency. The results are in line with a direct route to the mental lexicon for visual Chinese characters among non-native Chinese speakers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/15475441.2021.2003198

Jingmei Xiao; Jing Huang; Yujun Long; Xiaoyi Wang; Ying Wang; Ye Yang; Gangrui Hei; Mengxi Sun; Jin Zhao; Li Li; Tiannan Shao; Weiyan Wang; Dongyu Kang; Chenchen Liu; Peng Xie; Yuyan Huang; Renrong Wu; Jingping Zhao

Optimizing and individualizing the pharmacological treatment of first-episode Schizophrenic patients: Study protocol for a multicenter clinical trial Journal Article

In: Frontiers in Psychiatry, vol. 12, no. February, pp. 1–10, 2021.


@article{Xiao2021,
title = {Optimizing and individualizing the pharmacological treatment of first-episode Schizophrenic patients: Study protocol for a multicenter clinical trial},
author = {Jingmei Xiao and Jing Huang and Yujun Long and Xiaoyi Wang and Ying Wang and Ye Yang and Gangrui Hei and Mengxi Sun and Jin Zhao and Li Li and Tiannan Shao and Weiyan Wang and Dongyu Kang and Chenchen Liu and Peng Xie and Yuyan Huang and Renrong Wu and Jingping Zhao},
doi = {10.3389/fpsyt.2021.611070},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Psychiatry},
volume = {12},
number = {February},
pages = {1--10},
abstract = {Introduction: Affecting $\sim$1% of the world population, schizophrenia is known as one of the costliest and most burdensome diseases worldwide. Antipsychotic medications are the main treatment for schizophrenia to control psychotic symptoms and efficiently prevent new crises. However, due to poor compliance, 74% of patients with schizophrenia discontinue medication within 1.5 years, which severely affects recovery and prognosis. Through research on intra- and interindividual variability based on a psychopathology–neuropsychology–neuroimage–genetics–physiology–biochemistry model, our main objective is to investigate an optimized and individualized antipsychotic-treatment regimen and precision treatment for first-episode schizophrenic patients. Methods and Analysis: The study is performed in 20 representative hospitals in China. Three subprojects are included. In subproject 1, 1,800 first-episode patients with schizophrenia are randomized into six antipsychotic monotherapy groups (olanzapine, risperidone, aripiprazole, ziprasidone, amisulpride, and haloperidol) for an 8-week treatment. By identifying a set of potential biomarkers associated with antipsychotic treatment response, we intend to build a prediction model that includes neuroimaging, epigenetics, environmental stress, neurocognition, eye movement, electrophysiology, and neurological biochemistry indexes. In subproject 2, apart from verifying the prediction model established in subproject 1 in an independent cohort of 1,800 first-episode patients with schizophrenia, we recruit patients from the verification cohort who did not respond to an 8-week antipsychotic treatment into a randomized double-blind controlled trial with minocycline (200 mg per day) and sulforaphane (3 tablets per day) to explore add-on treatment for patients with schizophrenia. Two hundred forty participants are anticipated to be enrolled for each group. In subproject 3, we intend to carry out one trial to construct an intervention strategy for metabolic syndrome induced by antipsychotic treatment, and another to build a prevention strategy, combining metformin and lifestyle intervention, for patients at high risk of metabolic syndrome. Two hundred participants are anticipated to be enrolled for each group. Ethics and Dissemination: The study protocol has been approved by the Medical Ethics Committee of the Second Xiangya Hospital of Central South University (No. 2017027). Results will be disseminated in peer-reviewed journals and at international conferences. Trial Registration: This trial has been registered on ClinicalTrials.gov (NCT03451734). The protocol version is V.1.0 (April 23, 2017).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yanfang Xia; Filip Melinscak; Dominik R. Bach

Saccadic scanpath length: an index for human threat conditioning Journal Article

In: Behavior Research Methods, vol. 53, no. 4, pp. 1426–1439, 2021.


@article{Xia2021,
title = {Saccadic scanpath length: an index for human threat conditioning},
author = {Yanfang Xia and Filip Melinscak and Dominik R. Bach},
doi = {10.3758/s13428-020-01490-5},
year = {2021},
date = {2021-01-01},
journal = {Behavior Research Methods},
volume = {53},
number = {4},
pages = {1426--1439},
publisher = {Behavior Research Methods},
abstract = {Threat-conditioned cues are thought to capture overt attention in a bottom-up process. Quantification of this phenomenon typically relies on cue competition paradigms. Here, we sought to exploit gaze patterns during exclusive presentation of a visual conditioned stimulus, in order to quantify human threat conditioning. To this end, we capitalized on a summary statistic of visual search during CS presentation, scanpath length. During a simple delayed threat conditioning paradigm with full-screen monochrome conditioned stimuli (CS), we observed shorter scanpath length during CS+ compared to CS- presentation. Retrodictive validity, i.e., effect size to distinguish CS+ and CS-, was maximized by considering a 2-s time window before US onset. Taking into account the shape of the scan speed response resulted in similar retrodictive validity. The mechanism underlying shorter scanpath length appeared to be longer fixation durations and more fixations on the screen center during CS+ relative to CS- presentation. These findings were replicated in a second experiment with a similar setup, and further confirmed in a third experiment using full-screen patterns as CS. This experiment included an extinction session during which scanpath differences appeared to extinguish. In a fourth experiment with auditory CS and instruction to fixate the screen center, no scanpath length differences were observed. In conclusion, our study suggests scanpath length as a visual search summary statistic, which may be used as a complementary measure to quantify threat conditioning with retrodictive validity similar to that of skin conductance responses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
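The Xia et al. entry above quantifies threat conditioning via scanpath length, a summary statistic of gaze behavior. As a minimal illustrative sketch (not the authors' implementation), scanpath length over a trial can be computed as the summed Euclidean distance between successive fixation positions; the `scanpath_length` helper and the sample coordinates below are hypothetical:

```python
import math

def scanpath_length(fixations):
    """Total Euclidean distance travelled between successive fixations.

    fixations: sequence of (x, y) gaze coordinates (e.g., in pixels
    or degrees of visual angle). Returns 0 for fewer than two fixations.
    """
    # Sum the straight-line distance of each saccade between fixations.
    return sum(math.dist(a, b) for a, b in zip(fixations, fixations[1:]))

# Example: three fixations forming two saccades of length 5 and 4.
fixes = [(0, 0), (3, 4), (3, 0)]
print(scanpath_length(fixes))  # 9.0
```

In this framing, shorter scanpath length during CS+ trials would correspond to less exploratory gaze (longer fixations, more central fixation), consistent with the mechanism the abstract describes.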


Jordana S. Wynn; Bradley R. Buchsbaum; Jennifer D. Ryan

Encoding and retrieval eye movements mediate age differences in pattern completion Journal Article

In: Cognition, pp. 1–13, 2021.


@article{Wynn2021,
title = {Encoding and retrieval eye movements mediate age differences in pattern completion},
author = {Jordana S. Wynn and Bradley R. Buchsbaum and Jennifer D. Ryan},
doi = {10.1016/j.cognition.2021.104746},
year = {2021},
date = {2021-01-01},
journal = {Cognition},
pages = {1--13},
publisher = {Elsevier B.V.},
abstract = {Older adults often mistake new information as ‘old', yet the mechanisms underlying this response bias remain unclear. Typically, false alarms by older adults are thought to reflect pattern completion – the retrieval of a previously encoded stimulus in response to partial input. However, other work suggests that age-related retrieval errors can be accounted for by deficient encoding processes. In the present study, we used eye movement monitoring to quantify age-related changes in behavioral pattern completion as a function of eye movements during both encoding and partially cued retrieval. Consistent with an age-related encoding deficit, older adults executed more gaze fixations and more similar eye movements across repeated image presentations than younger adults, and such effects were predictive of subsequent recognition memory. Analysis of eye movements at retrieval further indicated that in response to partial lure cues, older adults reactivated the similar studied image, indexed by the similarity between encoding and retrieval gaze patterns, and did so more than younger adults. Critically, reactivation of encoded image content via eye movements was associated with lure false alarms in older adults, providing direct evidence for a pattern completion bias. Together, these findings suggest that age-related changes in both encoding and retrieval processes, indexed by eye movements, underlie older adults' increased vulnerability to memory errors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jessica Wunderlich; Anna Behler; Jens Dreyhaupt; Albert C. Ludolph; Elmar H. Pinkhardt; Jan Kassubek

Diagnostic value of video-oculography in progressive supranuclear palsy: a controlled study in 100 patients Journal Article

In: Journal of Neurology, vol. 268, no. 9, pp. 3467–3475, 2021.


@article{Wunderlich2021,
title = {Diagnostic value of video-oculography in progressive supranuclear palsy: a controlled study in 100 patients},
author = {Jessica Wunderlich and Anna Behler and Jens Dreyhaupt and Albert C. Ludolph and Elmar H. Pinkhardt and Jan Kassubek},
doi = {10.1007/s00415-021-10522-9},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neurology},
volume = {268},
number = {9},
pages = {3467--3475},
publisher = {Springer Berlin Heidelberg},
abstract = {Background: The eponymous feature of progressive supranuclear palsy (PSP) is oculomotor impairment, which is one of the relevant domains in the Movement Disorder Society diagnostic criteria. Objective: We aimed to investigate the value of specific video-oculographic parameters for use as diagnostic markers in PSP. Methods: An analysis of video-oculography recordings of 100 PSP patients and 49 age-matched healthy control subjects was performed. Gain of smooth pursuit eye movement and latency, gain, peak eye velocity, asymmetry of downward and upward velocities of saccades as well as rate of saccadic intrusions were analyzed. Results: Vertical saccade velocity and saccadic intrusions allowed for the classification of about 70% and 56% of the patients, respectively. By combining both parameters, almost 80% of the PSP patients were covered, while vertical velocity asymmetry was observed in approximately 34%. All parameters had a specificity above 95%. The sensitivities were lower, at around 50–60% for velocity and saccadic intrusions and only 27% for vertical asymmetry. Conclusions: In accordance with the oculomotor features in the current PSP diagnostic criteria, video-oculographic assessment of vertical saccade velocity and saccadic intrusions resulted in very high specificity. Asymmetry of vertical saccade velocities, in contrast, did not prove to be useful for diagnostic purposes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yu Wu; Zhixiong Zhuo; Qunyue Liu; Kunyong Yu; Qitang Huang; Jian Liu

The relationships between perceived design intensity, preference, restorativeness and eye movements in designed urban green space Journal Article

In: International Journal of Environmental Research and Public Health, vol. 18, no. 20, pp. 1–16, 2021.


@article{Wu2021e,
title = {The relationships between perceived design intensity, preference, restorativeness and eye movements in designed urban green space},
author = {Yu Wu and Zhixiong Zhuo and Qunyue Liu and Kunyong Yu and Qitang Huang and Jian Liu},
doi = {10.3390/ijerph182010944},
year = {2021},
date = {2021-01-01},
journal = {International Journal of Environmental Research and Public Health},
volume = {18},
number = {20},
pages = {1--16},
abstract = {Recent research has demonstrated that landscape design intensity impacts individuals' landscape preferences, which may influence their eye movement. Due to the close relationship between restorativeness and landscape preference, we further explore the relationships between design intensity, preference, restorativeness and eye movements. Specifically, using manipulated images as stimuli for 200 students as participants, the effect of urban green space (UGS) design intensity on landscapes' preference, restorativeness, and eye movement was examined. The results demonstrate that landscape design intensity could contribute to preference and restorativeness and that there is a significant positive relationship between design intensity and eye-tracking metrics, including dwell time percent, fixation percent, fixation count, and visited ranking. Additionally, preference was positively related to restorativeness, dwell time percent, fixation percent, and fixation count, and there is a significant positive relationship between restorativeness and fixation percent. We obtained the most feasible regression equations between design intensity and preference, restorativeness, and eye movement. These results provide a set of guidelines for improving UGS design to achieve its greatest restorative potential and shed new light on the use of eye-tracking technology in landscape perception studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yingying Wu; Zhenxing Wang; Wanru Lin; Zengyan Ye; Rong Lian

Visual salience accelerates lexical processing and subsequent integration: an eye-movement study Journal Article

In: Journal of Cognitive Psychology, vol. 33, no. 2, pp. 146–156, 2021.


@article{Wu2021d,
title = {Visual salience accelerates lexical processing and subsequent integration: an eye-movement study},
author = {Yingying Wu and Zhenxing Wang and Wanru Lin and Zengyan Ye and Rong Lian},
doi = {10.1080/20445911.2021.1879817},
year = {2021},
date = {2021-01-01},
journal = {Journal of Cognitive Psychology},
volume = {33},
number = {2},
pages = {146--156},
publisher = {Taylor & Francis},
abstract = {This study examined how visual salience affects the processing of salient information it highlights (hereafter called visually salient information), as well as its connection with associated content during online reading. Participants were asked to read descriptive concepts that contained a two-character key concept term with a short definition, and subsequently complete a memory test. The visual salience of the key concept terms was manipulated. The results show that visual salience shortened the reading times of key concept terms, as well as the go-past times of concept definitions. In addition, improving the visual salience of the key concept terms helped subjects in the subsequent memory test to make quicker and more accurate judgments regarding incorrect concepts. These results indicate that visual salience accelerates the lexical processing of visually salient information and helps readers build faster and more elaborate connections between visually salient information and associated content in the subsequent integration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xiuyun Wu; Austin C. Rothwell; Miriam Spering; Anna Montagnini

Expectations about motion direction affect perception and anticipatory smooth pursuit differently Journal Article

In: Journal of Neurophysiology, vol. 125, no. 3, pp. 1–41, 2021.


@article{Wu2021c,
title = {Expectations about motion direction affect perception and anticipatory smooth pursuit differently},
author = {Xiuyun Wu and Austin C. Rothwell and Miriam Spering and Anna Montagnini},
doi = {10.1152/jn.00630.2020},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neurophysiology},
volume = {125},
number = {3},
pages = {1--41},
abstract = {Smooth pursuit eye movements and visual motion perception rely on the integration of current sensory signals with past experience. Experience shapes our expectation of current visual events and can drive eye movement responses made in anticipation of a target, such as anticipatory pursuit. Previous research revealed consistent effects of expectation on anticipatory pursuit—eye movements follow the expected target direction or speed—and contrasting effects on motion perception, but most studies considered either eye movement or perceptual responses. The current study directly compared effects of direction expectation on perception and anticipatory pursuit within the same direction discrimination task to investigate whether both types of responses are affected similarly or differently. Observers (n = 10) viewed high-coherence random-dot kinematograms (RDKs) moving rightward and leftward with a probability of 50%, 70%, or 90% in a given block of trials to build up an expectation of motion direction. They were asked to judge motion direction of interleaved low-coherence RDKs (0%–15%). Perceptual judgements were compared with changes in anticipatory pursuit eye movements as a function of probability. Results show that anticipatory pursuit velocity scaled with probability and followed direction expectation (attraction bias), whereas perceptual judgments were biased opposite to direction expectation (repulsion bias). Control experiments suggest that the repulsion bias in perception was not caused by retinal slip induced by anticipatory pursuit, or by motion adaptation. We conclude that direction expectation can be processed differently for perception and anticipatory pursuit. NEW & NOTEWORTHY We show that expectations about motion direction that are based on long-term trial history affect perception and anticipatory pursuit differently. Whereas anticipatory pursuit direction was coherent with the expected motion direction (attraction bias), perception was biased opposite to the expected direction (repulsion bias). These opposite biases potentially reveal different ways in which perception and action utilize prior information and support the idea of different information processing for perception and pursuit.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xiaogang Wu; Aijun Wang; Ming Zhang

How the size of exogenous attentional cues alters visual performance: From response gain to contrast gain Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 74, no. 10, pp. 1773–1783, 2021.


@article{Wu2021b,
title = {How the size of exogenous attentional cues alters visual performance: From response gain to contrast gain},
author = {Xiaogang Wu and Aijun Wang and Ming Zhang},
doi = {10.1177/17470218211024829},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {74},
number = {10},
pages = {1773--1783},
abstract = {The normalisation model of attention (NMoA) predicts that the attention gain pattern is mediated by changes in the size of the attentional field and stimuli. However, existing studies have not measured gain patterns when the relative sizes of stimuli are changed. To investigate the NMoA, the present study manipulated the attentional field size, namely, the exogenous cue size. Moreover, we assessed whether the relative rather than the absolute size of the attentional field matters, either by holding the target size constant and changing the cue size (Experiments 1–3) or by holding the cue size constant and changing the target size (Experiment 4), in a spatial cueing paradigm of psychophysical procedures. The results show that the gain modulations changed from response gain to contrast gain when the precue size changed from small to large relative to the target size (Experiments 1–3). Moreover, when the target size was once again made larger than the precue size, there was still a change in response gain (Experiment 4). These results suggest that the size of exogenous cues plays an important role in adjusting the attentional field and that relative changes rather than absolute changes to exogenous cue size determine gain modulation. These results are consistent with the prediction of the NMoA and provide novel insights into gain modulations of visual selective attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sanmei Wu; Liangsu Tian; Jiaqiao Chen; Guangyao Chen; Jingxin Wang

Exploring the cognitive mechanism of irrelevant speech effect in chinese reading: Evidence from eye movements Journal Article

In: Acta Psychologica Sinica, vol. 53, no. 7, pp. 729–745, 2021.


@article{Wu2021a,
title = {Exploring the cognitive mechanism of irrelevant speech effect in chinese reading: Evidence from eye movements},
author = {Sanmei Wu and Liangsu Tian and Jiaqiao Chen and Guangyao Chen and Jingxin Wang},
doi = {10.3724/SP.J.1041.2021.00729},
year = {2021},
date = {2021-01-01},
journal = {Acta Psychologica Sinica},
volume = {53},
number = {7},
pages = {729--745},
abstract = {A wealth of research shows that irrelevant background speech can interfere with reading behavior. This effect is often described as the irrelevant speech effect (ISE). Two key theories have been proposed to account for this effect; namely, the Phonological-Interference Hypothesis and the Semantic-Interference Hypothesis. Few studies have investigated the irrelevant speech effect in Chinese reading. Moreover, the underlying mechanisms for the effect also remain unclear. Accordingly, with the present research we examined the irrelevant speech effect in Chinese using eye movement measures. Three experiments were conducted to explore the effects of different kinds of background speech. Experiment 1 used simple sentences, Experiment 2 used complex sentences, and Experiment 3 used paragraphs. The participants in each experiment were skilled readers, undergraduates recruited from the university, who read the sentences while their eye movements were recorded using an EyeLink 1000 eye-tracker (SR Research Inc.). The three experiments used the same background speech conditions. In an unintelligible background speech condition, participants heard irrelevant speech in Spanish (which none of the participants could understand), while in an intelligible background speech condition, they heard irrelevant speech in Chinese. Finally, in a third condition, the participants read in silence, with no background speech present. The results showed no significant difference in key eye movement measures (total reading time, average fixation duration, number of fixations, number of regressions, total fixation time, and regression path reading time) for the silent compared to the unintelligible background speech condition across all three experiments. In Experiment 1, which used simple sentences as stimuli, there was also no significant difference between the silent and intelligible background speech conditions. However, in Experiment 2, which used more complex sentences, normal reading was disrupted in the intelligible background speech condition compared to silence, revealing an ISE for these more difficult sentences. Compared with the silent condition, intelligible background speech produced longer reading times and average fixation durations, more fixations and regressions, longer regression path reading times and longer total fixation times. Finally, Experiment 3 also produced evidence for an ISE, with longer total reading times, more fixations, and longer regression path reading times in the intelligible background speech condition compared with silence. To sum up, the results of the current three experiments suggest that: (1) unintelligible speech does not disrupt normal reading significantly, contrary to the Phonological-Interference Hypothesis; (2) intelligible background speech can disrupt the reading of complex (but not simpler) sentences and also paragraph reading, supporting the Semantic-Interference Hypothesis. Such findings suggest that irrelevant speech might disrupt later stages of lexical processing and semantic integration in reading, and that this effect is modulated by the difficulty of the reading task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ching-Lin Wu; Shu-Ling Peng; Hsueh-Chih Chen

Why Can People Effectively Access Remote Associations? Eye Movements during Chinese Remote Associates Problem Solving Journal Article

In: Creativity Research Journal, vol. 33, no. 2, pp. 158–167, 2021.


@article{Wu2021,
title = {Why Can People Effectively Access Remote Associations? Eye Movements during Chinese Remote Associates Problem Solving},
author = {Ching-Lin Wu and Shu-Ling Peng and Hsueh-Chih Chen},
doi = {10.1080/10400419.2020.1856579},
year = {2021},
date = {2021-01-01},
journal = {Creativity Research Journal},
volume = {33},
number = {2},
pages = {158--167},
publisher = {Routledge},
abstract = {An increasing number of studies have explored the process of how subjects solve problems through remote association. Most research has investigated the relationship between an individual's response to semantic search during the think-aloud operation and the individual's reply performance. Few studies, however, have examined this process using objective physiological indices. Eye-tracking technology is a powerful tool with which to dissect the process of problem solving, with tracked fixation indices that reflect an individual's internal cognitive mechanisms. This study, based on participants' fixation order for various stimulus words, was the first to introduce the concept of association search span, a concept that can be further divided into distributed association and centralized association. This study recorded 62 participants' eye movement indices in an eye-tracking experiment. The results showed that participants with higher remote association ability used more distributed associations and fewer centralized associations. The results indicated that the stronger remote association ability a participant has, the more likely that participant is to form associations with different stimulus words. It was also found that flexible thinking plays a vital role in the generation of remote associations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chao Jung Wu; Chia Yu Liu; Chung Hsuan Yang; Yu Cin Jian

Eye-movements reveal children's deliberative thinking and predict performance on arithmetic word problems Journal Article

In: European Journal of Psychology of Education, vol. 36, no. 1, pp. 91–108, 2021.


@article{Wu2021f,
title = {Eye-movements reveal children's deliberative thinking and predict performance on arithmetic word problems},
author = {Chao Jung Wu and Chia Yu Liu and Chung Hsuan Yang and Yu Cin Jian},
doi = {10.1007/s10212-020-00461-w},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Psychology of Education},
volume = {36},
number = {1},
pages = {91--108},
abstract = {Despite decades of research on the close link between eye movements and human cognitive processes, the exact nature of the link between eye movements and deliberative thinking in problem-solving remains unknown. Thus, this study explored the critical eye-movement indicators of deliberative thinking and investigated whether visual behaviors could predict performance on arithmetic word problems of various difficulties. An eye tracker and test were employed to collect 69 sixth-graders' eye-movement behaviors and responses. No significant difference was found between the successful and unsuccessful groups on the simple problems, but on the difficult problems, the successful problem-solvers demonstrated significantly greater gaze aversion, longer fixations, and spontaneous reflections. Notably, the model incorporating RT-TFD, NOF of 500 ms, and pupil size indicators could best predict participants' performance, with an overall hit rate of 74%, rising to 80% when reading comprehension screening test scores were included. These results reveal the solvers' engagement strategies or show that successful problem-solvers were well aware of problem difficulty and could regulate their cognitive resources efficiently. This study sheds light on the development of an adapted learning system with embedded eye tracking to further predict students' visual behaviors, provide real-time feedback, and improve their problem-solving performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maren-Isabel Wolf; Maximilian Bruchmann; Gilles Pourtois; Sebastian Schindler; Thomas Straube

Top-down Modulation of early visual processing in V1: Dissociable neurophysiological effects of spatial attention, attentional load and task-relevance Journal Article

In: Cerebral Cortex, pp. 1–17, 2021.


@article{Wolf2021a,
title = {Top-down Modulation of early visual processing in V1: Dissociable neurophysiological effects of spatial attention, attentional load and task-relevance},
author = {Maren-Isabel Wolf and Maximilian Bruchmann and Gilles Pourtois and Sebastian Schindler and Thomas Straube},
doi = {10.1093/cercor/bhab342},
year = {2021},
date = {2021-01-01},
journal = {Cerebral Cortex},
pages = {1--17},
abstract = {There is ongoing discussion as to whether attention processes interact with the information processing stream as early as the C1, the earliest visual electrophysiological response of the cortex. We used two highly powered experiments (each N = 52) and examined the effects of task relevance, spatial attention, and attentional load on individual C1 amplitudes for the upper or lower visual hemifield. Bayesian models revealed evidence for the absence of load effects but substantial modulations by task-relevance and spatial attention. When the C1-eliciting stimulus was a task-irrelevant, interfering distracter, we observed increased C1 amplitudes for spatially unattended stimuli. For spatially attended stimuli, different effects of task-relevance for the two experiments were found. Follow-up exploratory single-trial analyses revealed that subtle but systematic deviations from the eye-gaze position at stimulus onset between conditions substantially influenced the effects of attention and task relevance on C1 amplitudes, especially for the upper visual field. For the subsequent P1 component, attentional modulations were clearly expressed and remained unaffected by these deviations. Collectively, these results suggest that spatial attention, unlike load or task relevance, can exert dissociable top-down modulatory effects at the C1 and P1 levels.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Christian Wolf; Markus Lappe

Salient objects dominate the central fixation bias when orienting toward images Journal Article

In: Journal of Vision, vol. 21, no. 8, pp. 1–21, 2021.


@article{Wolf2021,
title = {Salient objects dominate the central fixation bias when orienting toward images},
author = {Christian Wolf and Markus Lappe},
doi = {10.1167/jov.21.8.23},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {8},
pages = {1--21},
abstract = {Short-latency saccades are often biased toward salient objects or toward the center of images, for example, when inspecting photographs of natural scenes. Here, we measured the contribution of salient objects and central fixation bias to visual selection over time. Participants made saccades to images containing one salient object on a structured background and were instructed to either look at (i) the image center, (ii) the salient object, or (iii) at a cued position halfway in between the two. Results revealed, first, an early involuntary bias toward the image center irrespective of strategic behavior or the location of objects in the image. Second, the salient object bias was stronger than the center bias and prevailed over the latter when they directly competed for visual selection. In a second experiment, we tested whether the center bias depends on how well the image can be segregated from the monitor background. We asked participants to explore images that either did or did not contain a salient object while we manipulated the contrast between image background and monitor background to make the image borders more or less visible. The initial orienting toward the image was not affected by the image-monitor contrast, but only by the presence of objects—with a strong bias toward the center of images containing no object. Yet, a low image-monitor contrast reduced this center bias during the subsequent image exploration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Toby Wise; Yunzhe Liu; Fatima Chowdhury; Raymond J. Dolan

Model-based aversive learning in humans is supported by preferential task state reactivation Journal Article

In: Science Advances, vol. 7, no. 31, pp. 1–15, 2021.


@article{Wise2021,
title = {Model-based aversive learning in humans is supported by preferential task state reactivation},
author = {Toby Wise and Yunzhe Liu and Fatima Chowdhury and Raymond J. Dolan},
doi = {10.1126/sciadv.abf9616},
year = {2021},
date = {2021-01-01},
journal = {Science Advances},
volume = {7},
number = {31},
pages = {1--15},
abstract = {Harm avoidance is critical for survival, yet little is known regarding the neural mechanisms supporting avoidance in the absence of trial-and-error experience. Flexible avoidance may be supported by a mental model (i.e., model-based), a process for which neural reactivation and sequential replay have emerged as candidate mechanisms. During an aversive learning task, combined with magnetoencephalography, we show prospective and retrospective reactivation during planning and learning, respectively, coupled to evidence for sequential replay. Specifically, when individuals plan in an aversive context, we find preferential reactivation of subsequently chosen goal states. Stronger reactivation is associated with greater hippocampal theta power. At outcome receipt, unchosen goal states are reactivated regardless of outcome valence. Replay of paths leading to goal states was modulated by outcome valence, with aversive outcomes associated with stronger reverse replay than safe outcomes. Our findings are suggestive of avoidance involving simulation of unexperienced states through hippocampally mediated reactivation and replay.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Matthew B. Winn; Katherine H. Teece

Listening effort is not the same as speech intelligibility score Journal Article

In: Trends in Hearing, vol. 25, pp. 1–26, 2021.


@article{Winn2021a,
title = {Listening effort is not the same as speech intelligibility score},
author = {Matthew B. Winn and Katherine H. Teece},
doi = {10.1177/23312165211027688},
year = {2021},
date = {2021-01-01},
journal = {Trends in Hearing},
volume = {25},
pages = {1--26},
abstract = {Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks where specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size among 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Matthew B. Winn; Katherine H. Teece

Slower speaking rate reduces listening effort among listeners with cochlear implants Journal Article

In: Ear and Hearing, vol. 42, no. 3, pp. 584–595, 2021.


@article{Winn2021,
title = {Slower speaking rate reduces listening effort among listeners with cochlear implants},
author = {Matthew B. Winn and Katherine H. Teece},
doi = {10.1097/AUD.0000000000000958},
year = {2021},
date = {2021-01-01},
journal = {Ear and Hearing},
volume = {42},
number = {3},
pages = {584--595},
abstract = {OBJECTIVES: Slowed speaking rate was examined for its effects on speech intelligibility, its interaction with the benefit of contextual cues, and the impact of these factors on listening effort in adults with cochlear implants. DESIGN: Participants (n = 21 cochlear implant users) heard high- and low-context sentences that were played at the original speaking rate, as well as a slowed (1.4× duration) speaking rate, using uniform pitch-synchronous time warping. In addition to intelligibility measures, changes in pupil dilation were measured as a time-varying index of processing load or listening effort. Slope of pupil size recovery to baseline after the sentence was used as an index of resolution of perceptual ambiguity. RESULTS: Speech intelligibility was better for high-context compared to low-context sentences and slightly better for slower compared to original-rate speech. Speech rate did not affect magnitude and latency of peak pupil dilation relative to sentence offset. However, baseline pupil size recovered more substantially for slower-rate sentences, suggesting easier processing in the moment after the sentence was over. The effect of slowing speech rate was comparable to changing a sentence from low context to high context. The effect of context on pupil dilation was not observed until after the sentence was over, and one of two analyses suggested that context had greater beneficial effects on listening effort when the speaking rate was slower. These patterns were maintained even at perfect sentence intelligibility, suggesting that correct speech repetition does not guarantee efficient or effortless processing. With slower speaking rates, there was less variability in pupil dilation slopes following the sentence, implying mitigation of some of the difficulties shown by individual listeners who would otherwise demonstrate prolonged effort after a sentence is heard. 
CONCLUSIONS: Slowed speaking rate provides release from listening effort when hearing an utterance, particularly relieving effort that would have lingered after a sentence is over. Context arguably provides even more release from listening effort when speaking rate is slower. The pattern of prolonged pupil dilation for faster speech is consistent with increased need to mentally correct errors, although that exact interpretation cannot be verified with intelligibility data alone or with pupil data alone. A pattern of needing to dwell on a sentence to disambiguate misperceptions likely contributes to difficulty in running conversation where there are few opportunities to pause and resolve recently heard utterances.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Lena Wimmer; Gregory Currie; Stacie Friend; Heather Jane Ferguson

Testing correlates of lifetime exposure to print fiction following a multi-method approach: Evidence from young and older readers Journal Article

In: Imagination, Cognition and Personality, vol. 41, no. 1, pp. 54–86, 2021.

@article{Wimmer2021,
title = {Testing correlates of lifetime exposure to print fiction following a multi-method approach: Evidence from young and older readers},
author = {Lena Wimmer and Gregory Currie and Stacie Friend and Heather Jane Ferguson},
doi = {10.1177/0276236621996244},
year = {2021},
date = {2021-01-01},
journal = {Imagination, Cognition and Personality},
volume = {41},
number = {1},
pages = {54--86},
abstract = {Two pre-registered studies investigated associations of lifetime exposure to fiction, applying a battery of self-report, explicit and implicit indicators. Study 1 (N = 150 university students) tested the relationships between exposure to fiction and social and moral cognitive abilities in a lab setting, using a correlational design. Results failed to reveal evidence for enhanced social or moral cognition with increasing lifetime exposure to narrative fiction. Study 2 followed a cross-sectional design and compared 50–80-year-old fiction experts (N = 66), non-fiction experts (N = 53), and infrequent readers (N = 77) regarding social cognition, general knowledge, imaginability, and creativity in an online setting. Fiction experts outperformed the remaining groups regarding creativity, but not regarding social cognition or imaginability. In addition, both fiction and non-fiction experts demonstrated higher general knowledge than infrequent readers. Taken together, the present results do not support theories postulating benefits of narrative fiction for social cognition, but suggest that reading fiction may be associated with a specific gain in creativity, and that print (fiction or non-fiction) exposure has a general enhancement effect on world knowledge.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

James P. Wilmott; Melchi M. Michel

Transsaccadic integration of visual information is predictive, attention-based, and spatially precise Journal Article

In: Journal of Vision, vol. 21, no. 8, pp. 1–26, 2021.

@article{Wilmott2021,
title = {Transsaccadic integration of visual information is predictive, attention-based, and spatially precise},
author = {James P. Wilmott and Melchi M. Michel},
doi = {10.1167/jov.21.8.14},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {8},
pages = {1--26},
abstract = {Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal “psychophysical kernel” characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

James R. Williamson; Doug Sturim; Trina Vian; Joseph Lacirignola; Trey E. Shenk; Sophia Yuditskaya; Hrishikesh M. Rao; Thomas M. Talavage; Kristin J. Heaton; Thomas F. Quatieri

Using dynamics of eye movements, speech articulation and brain activity to predict and track mTBI screening outcomes Journal Article

In: Frontiers in Neurology, vol. 12, pp. 665338, 2021.

@article{Williamson2021,
title = {Using dynamics of eye movements, speech articulation and brain activity to predict and track mTBI screening outcomes},
author = {James R. Williamson and Doug Sturim and Trina Vian and Joseph Lacirignola and Trey E. Shenk and Sophia Yuditskaya and Hrishikesh M. Rao and Thomas M. Talavage and Kristin J. Heaton and Thomas F. Quatieri},
doi = {10.3389/fneur.2021.665338},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Neurology},
volume = {12},
pages = {665338},
abstract = {Repeated subconcussive blows to the head during sports or other contact activities may have a cumulative and long lasting effect on cognitive functioning. Unobtrusive measurement and tracking of cognitive functioning is needed to enable preventative interventions for people at elevated risk of concussive injury. The focus of the present study is to investigate the potential for using passive measurements of fine motor movements (smooth pursuit eye tracking and read speech) and resting state brain activity (measured using fMRI) to complement existing diagnostic tools, such as the Immediate Post-concussion Assessment and Cognitive Testing (ImPACT), that are used for this purpose. Thirty-one high school American football and soccer athletes were tracked through the course of a sports season. Hypotheses were that (1) measures of complexity of fine motor coordination and of resting state brain activity are predictive of cognitive functioning measured by the ImPACT test, and (2) within-subject changes in these measures over the course of a sports season are predictive of changes in ImPACT scores. The first principal component of the six ImPACT composite scores was used as a latent factor that represents cognitive functioning. This latent factor was positively correlated with four of the ImPACT composites: verbal memory, visual memory, visual motor speed and reaction speed. Strong correlations, ranging between r = 0.26 and r = 0.49, were found between this latent factor and complexity features derived from each sensor modality. Based on a regression model, the complexity features were combined across sensor modalities and used to predict the latent factor on out-of-sample subjects. The predictions correlated with the true latent factor with r = 0.71. Within-subject changes over time were predicted with r = 0.34. 
These results indicate the potential to predict cognitive performance from passive monitoring of fine motor movements and brain activity, offering initial support for future application in detection of performance deficits associated with subconcussive events.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lauren Williams; Ann Carrigan; William Auffermann; Megan Mills; Anina Rich; Joann Elmore; Trafton Drew

The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology Journal Article

In: Psychonomic Bulletin & Review, vol. 28, no. 2, pp. 503–511, 2021.

@article{Williams2021a,
title = {The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology},
author = {Lauren Williams and Ann Carrigan and William Auffermann and Megan Mills and Anina Rich and Joann Elmore and Trafton Drew},
doi = {10.3758/s13423-020-01826-4},
year = {2021},
date = {2021-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {28},
number = {2},
pages = {503--511},
publisher = {Psychonomic Bulletin & Review},
abstract = {Retrospectively obvious events are frequently missed when attention is engaged in another task—a phenomenon known as inattentional blindness. Although the task characteristics that predict inattentional blindness rates are relatively well understood, the observer characteristics that predict inattentional blindness rates are largely unknown. Previously, expert radiologists showed a surprising rate of inattentional blindness to a gorilla photoshopped into a CT scan during lung-cancer screening. However, inattentional blindness rates were higher for a group of naïve observers performing the same task, suggesting that perceptual expertise may provide protection against inattentional blindness. Here, we tested whether expertise in radiology predicts inattentional blindness rates for unexpected abnormalities that were clinically relevant. Fifty radiologists evaluated CT scans for lung cancer. The final case contained a large (9.1 cm) breast mass and lymphadenopathy. When their attention was focused on searching for lung nodules, 66% of radiologists did not detect breast cancer and 30% did not detect lymphadenopathy. In contrast, only 3% and 10% of radiologists (N = 30), respectively, missed these abnormalities in a follow-up study when searching for a broader range of abnormalities. Neither experience, primary task performance, nor search behavior predicted which radiologists missed the unexpected abnormalities. These findings suggest perceptual expertise does not protect against inattentional blindness, even for unexpected stimuli that are within the domain of expertise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lauren H. Williams; Ann J. Carrigan; Megan Mills; William F. Auffermann; Anina N. Rich; Trafton Drew

Characteristics of expert search behavior in volumetric medical image interpretation Journal Article

In: Journal of Medical Imaging, vol. 8, no. 04, pp. 1–24, 2021.

@article{Williams2021,
title = {Characteristics of expert search behavior in volumetric medical image interpretation},
author = {Lauren H. Williams and Ann J. Carrigan and Megan Mills and William F. Auffermann and Anina N. Rich and Trafton Drew},
doi = {10.1117/1.jmi.8.4.041208},
year = {2021},
date = {2021-01-01},
journal = {Journal of Medical Imaging},
volume = {8},
number = {04},
pages = {1--24},
abstract = {Purpose: Experienced radiologists have enhanced global processing ability relative to novices, allowing experts to rapidly detect medical abnormalities without performing an exhaustive search. However, evidence for global processing models is primarily limited to two-dimensional image interpretation, and it is unclear whether these findings generalize to volumetric images, which are widely used in clinical practice. We examined whether radiologists searching volumetric images use methods consistent with global processing models of expertise. In addition, we investigated whether search strategy (scanning/drilling) differs with experience level.
Approach: Fifty radiologists with a wide range of experience evaluated chest computed-tomography scans for lung nodules while their eye movements and scrolling behaviors were tracked. Multiple linear regressions were used to determine: (1) how search behaviors differed with years of experience and the number of chest CTs evaluated per week and (2) which search behaviors predicted better performance.
Results: Contrary to global processing models based on 2D images, experience was unrelated to measures of global processing (saccadic amplitude, coverage, time to first fixation, search time, and depth passes) in this task. Drilling behavior was associated with better accuracy than scanning behavior when controlling for observer experience. Greater image coverage was a strong predictor of task accuracy.
Conclusions: Global processing ability may play a relatively small role in volumetric image interpretation, where global scene statistics are not available to radiologists in a single glance. Rather, in volumetric images, it may be more important to engage in search strategies that support a more thorough search of the image.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Benedict Wild; Stefan Treue

Comparing the influence of stimulus size and contrast on the perception of moving gratings and random dot patterns-A registered report protocol Journal Article

In: PLoS ONE, vol. 16, no. 6, pp. 1–10, 2021.

@article{Wild2021,
title = {Comparing the influence of stimulus size and contrast on the perception of moving gratings and random dot patterns-A registered report protocol},
author = {Benedict Wild and Stefan Treue},
doi = {10.1371/journal.pone.0253067},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {6},
pages = {1--10},
abstract = {Modern accounts of visual motion processing in the primate brain emphasize a hierarchy of different regions within the dorsal visual pathway, especially primary visual cortex (V1) and the middle temporal area (MT). However, recent studies have called the idea of a processing pipeline with fixed contributions to motion perception from each area into doubt. Instead, the role that each area plays appears to depend on properties of the stimulus as well as perceptual history. We propose to test this hypothesis in human subjects by comparing motion perception of two commonly used stimulus types: Drifting sinusoidal gratings (DSGs) and random dot patterns (RDPs). To avoid potential biases in our approach we are pre-registering our study. We will compare the effects of size and contrast levels on the perception of the direction of motion for DSGs and RDPs. In addition, based on intriguing results in a pilot study, we will also explore the effects of a post-stimulus mask. Our approach will offer valuable insights into how motion is processed by the visual system and guide further behavioral and neurophysiological research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Thomas D. W. Wilcockson; Emmanuel M. Pothos; Ashley M. Osborne; Trevor J. Crawford

Top-down and bottom-up attentional biases for smoking-related stimuli: Comparing dependent and non-dependent smokers Journal Article

In: Addictive Behaviors, vol. 118, pp. 1–7, 2021.

@article{Wilcockson2021,
title = {Top-down and bottom-up attentional biases for smoking-related stimuli: Comparing dependent and non-dependent smokers},
author = {Thomas D. W. Wilcockson and Emmanuel M. Pothos and Ashley M. Osborne and Trevor J. Crawford},
doi = {10.1016/j.addbeh.2021.106886},
year = {2021},
date = {2021-01-01},
journal = {Addictive Behaviors},
volume = {118},
pages = {1--7},
publisher = {Elsevier Ltd},
abstract = {Introduction: Substance use causes attentional biases for substance-related stimuli. Both bottom-up (preferential processing) and top-down (inhibitory control) processes are involved in attentional biases. We explored these aspects of attentional bias by using dependent and non-dependent cigarette smokers in order to see whether these two groups would differ in terms of general inhibitory control, bottom-up attentional bias, and top-down attentional biases. This enables us to see whether consumption behaviour would affect these cognitive responses to smoking-related stimuli. Methods: Smokers were categorised as either dependent (N = 26) or non-dependent (N = 34) smokers. A further group of non-smokers (N = 32) were recruited to act as controls. Participants then completed a behavioural inhibition task with general stimuli, a smoking-related eye tracking version of the dot-probe task, and an eye-tracking inhibition task with smoking-related stimuli. Results: Results indicated that dependent smokers had decreased inhibition and increased attentional bias for smoking-related stimuli (and not control stimuli). By contrast, a decreased inhibition for smoking-related stimuli (in comparison to control stimuli) was not observed for non-dependent smokers. Conclusions: Preferential processing of substance-related stimuli may indicate usage of a substance, whereas poor inhibitory control for substance-related stimuli may only emerge if dependence develops. The results suggest that how people engage with substance abuse is important for top-down attentional biases.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anne Wienholz; Derya Nuhbalaoglu; Markus Steinbach; Annika Herrmann; Nivedita Mani

Phonological priming in German sign language: An eye tracking study using the visual world paradigm Journal Article

In: Sign Language & Linguistics, vol. 24, no. 1, pp. 1–32, 2021.

@article{Wienholz2021,
title = {Phonological priming in German sign language: An eye tracking study using the visual world paradigm},
author = {Anne Wienholz and Derya Nuhbalaoglu and Markus Steinbach and Annika Herrmann and Nivedita Mani},
doi = {10.1075/sll.19011.wie},
year = {2021},
date = {2021-01-01},
journal = {Sign Language & Linguistics},
volume = {24},
number = {1},
pages = {1--32},
abstract = {A number of studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters and that the specific phonological parameters modulated in the priming effect can influence the robustness of this effect. This eye tracking study on German Sign Language examined phonological priming effects at the sentence level, while varying the phonological relationship between prime-target sign pairs. We recorded participants' eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, and pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in location parameter. Taken together, the data suggest a difference in the contribution of sign parameters to sign recognition and that sub-lexical features influence sign language processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anne Wienholz; Derya Nuhbalaoglu; Markus Steinbach; Annika Herrmann; Nivedita Mani

Phonological priming in German Sign Language Journal Article

In: Sign Language & Linguistics, vol. 24, no. 1, pp. 4–35, 2021.

@article{Wienholz2021a,
title = {Phonological priming in German Sign Language},
author = {Anne Wienholz and Derya Nuhbalaoglu and Markus Steinbach and Annika Herrmann and Nivedita Mani},
doi = {10.1075/sll.19011.wie},
year = {2021},
date = {2021-01-01},
journal = {Sign Language & Linguistics},
volume = {24},
number = {1},
pages = {4--35},
abstract = {A number of studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters and that the specific phonological parameters modulated in the priming effect can influence the robustness of this effect. This eye tracking study on German Sign Language examined phonological priming effects at the sentence level, while varying the phonological relationship between prime-target sign pairs. We recorded participants' eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, and pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in location parameter. Taken together, the data suggest a difference in the contribution of sign parameters to sign recognition and that sub-lexical features influence sign language processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Marlee Whybird; Rachel Coats; Tessa Vuister; Sophie Harrison; Samantha Booth; Melanie Burke

The role of the posterior parietal cortex on cognition: An exploratory study Journal Article

In: Brain Research, vol. 1764, pp. 1–11, 2021.

@article{Whybird2021,
title = {The role of the posterior parietal cortex on cognition: An exploratory study},
author = {Marlee Whybird and Rachel Coats and Tessa Vuister and Sophie Harrison and Samantha Booth and Melanie Burke},
doi = {10.1016/j.brainres.2021.147452},
year = {2021},
date = {2021-01-01},
journal = {Brain Research},
volume = {1764},
pages = {1--11},
publisher = {Elsevier B.V.},
abstract = {Theta burst stimulation (TBS) is a form of repetitive transcranial magnetic stimulation (rTMS) that can be used to increase (intermittent TBS) or reduce (continuous TBS) cortical excitability. The current study provides a preliminary report of the effects of iTBS and cTBS in healthy young adults, to investigate the causal role of the posterior parietal cortex (PPC) during the performance of four cognitive functions: attention, inhibition, sequence learning and working memory. A 2 × 2 repeated measures design was incorporated using hemisphere (left/right) and TBS type (iTBS/cTBS) as the independent variables. 20 participants performed the cognitive tasks both before and after TBS stimulation in 4 counterbalanced experimental sessions (left cTBS, right cTBS, left iTBS and right iTBS) spaced 1 week apart. No change in performance was identified for the attentional cueing task after TBS stimulation, however TBS applied to the left PPC decreased reaction time when inhibiting a reflexive response. The sequence learning task revealed differential effects for encoding of the sequence versus the learnt items. cTBS on the right hemisphere resulted in faster responses to learnt sequences, and iTBS on the right hemisphere reduced reaction times during the initial encoding of the sequence. The reaction times in the 2-back working memory task were increased when TBS stimulation was applied to the right hemisphere. Results reveal clear differential effects for tasks explored, and more specifically where TBS stimulation on right PPC could provide a potential for further investigation into improving oculomotor learning by inducing plasticity-like mechanisms in the brain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Stephen Whitmarsh; Christophe Gitton; Veikko Jousmäki; Jérôme Sackur; Catherine Tallon-Baudry

Neuronal correlates of the subjective experience of attention Journal Article

In: European Journal of Neuroscience, no. January, pp. 1–18, 2021.

@article{Whitmarsh2021,
title = {Neuronal correlates of the subjective experience of attention},
author = {Stephen Whitmarsh and Christophe Gitton and Veikko Jousmäki and Jérôme Sackur and Catherine Tallon-Baudry},
doi = {10.1111/ejn.15395},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Neuroscience},
number = {January},
pages = {1--18},
abstract = {The effect of top–down attention on stimulus-evoked responses and alpha oscillations and the association between arousal and pupil diameter are well established. However, the relationship between these indices, and their contribution to the subjective experience of attention, remains largely unknown. Participants performed a sustained (10–30 s) attention task in which rare (10%) targets were detected within continuous tactile stimulation (16 Hz). Trials were followed by attention ratings on an 8-point visual scale. Attention ratings correlated negatively with contralateral somatosensory alpha power and positively with pupil diameter. The effect of pupil diameter on attention ratings extended into the following trial, reflecting a sustained aspect of attention related to vigilance. The effect of alpha power did not carry over to the next trial and furthermore mediated the association between pupil diameter and attention ratings. Variations in steady-state amplitude reflected stimulus processing under the influence of alpha oscillations but were only weakly related to subjective ratings of attention. Together, our results show that both alpha power and pupil diameter are reflected in the subjective experience of attention, albeit on different time spans, while continuous stimulus processing might not contribute to the experience of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Veronica Whitford; Marc F. Joanisse

Eye movement measures of within-language and cross-language activation during reading in monolingual and bilingual children and adults: A focus on neighborhood density effects Journal Article

In: Frontiers in Psychology, vol. 12, pp. 674007, 2021.

@article{Whitford2021,
title = {Eye movement measures of within-language and cross-language activation during reading in monolingual and bilingual children and adults: A focus on neighborhood density effects},
author = {Veronica Whitford and Marc F. Joanisse},
doi = {10.3389/fpsyg.2021.674007},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Psychology},
volume = {12},
pages = {674007},
abstract = {We used eye movement measures of first-language (L1) and second-language (L2) paragraph reading to investigate how the activation of multiple lexical candidates, both within and across languages, influences visual word recognition in four different age and language groups: (1) monolingual children; (2) monolingual young adults; (3) bilingual children; and (4) bilingual young adults. More specifically, we focused on within-language and cross-language orthographic neighborhood density effects, while controlling for the potentially confounding effects of orthographic neighborhood frequency. We found facilitatory within-language orthographic neighborhood density effects (i.e., words were easier to process when they had many vs. few orthographic neighbors, evidenced by shorter fixation durations) across the L1 and L2, with larger effects in children vs. adults (especially the bilingual ones) during L1 reading. Similarly, we found facilitatory cross-language neighborhood density effects across the L1 and L2, with no modulatory influence of age or language group. Taken together, our findings suggest that word recognition benefits from the simultaneous activation of visually similar word forms during naturalistic reading, with some evidence of larger effects in children and particularly those whose words may have differentially lower baseline activation levels and/or weaker links between word-related information due to divided language exposure: bilinguals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Peter S. Whitehead; Younis Mahmoud; Paul Seli; Tobias Egner

Mind wandering at encoding, but not at retrieval, disrupts one-shot stimulus-control learning Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 7, pp. 2968–2982, 2021.

@article{Whitehead2021,
title = {Mind wandering at encoding, but not at retrieval, disrupts one-shot stimulus-control learning},
author = {Peter S. Whitehead and Younis Mahmoud and Paul Seli and Tobias Egner},
doi = {10.3758/s13414-021-02343-9},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {7},
pages = {2968--2982},
abstract = {The one-shot pairing of a stimulus with a specific cognitive control process, such as task switching, can bind the two together in memory. The episodic control-binding hypothesis posits that the formation of temporary stimulus-control bindings, which are held in event-files supported by episodic memory, can guide the contextually appropriate application of cognitive control. Across two experiments, we sought to examine the role of task-focused attention in the encoding and implementation of stimulus-control bindings in episodic event-files. In Experiment 1, we obtained self-reports of mind wandering during encoding and implementation of stimulus-control bindings. Results indicated that, whereas mind wandering during the implementation of stimulus-control bindings does not decrease their efficacy, mind wandering during the encoding of these control-state associations interferes with their successful deployment at a later point. In Experiment 2, we complemented these results by using trial-by-trial pupillometry to measure attention, again demonstrating that attention levels at encoding predict the subsequent implementation of stimulus-control bindings better than attention levels at implementation. These results suggest that, although encoding stimulus-control bindings in episodic memory requires active attention and engagement, once encoded, these bindings are automatically deployed to guide behavior when the stimulus recurs. These findings expand our understanding of how cognitive control processes are integrated into episodic event files.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Wen Wen; Yangming Zhang; Sheng Li

Gaze dynamics of feature-based distractor inhibition under prior-knowledge and expectations Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 6, pp. 2430–2440, 2021.

@article{Wen2021,
title = {Gaze dynamics of feature-based distractor inhibition under prior-knowledge and expectations},
author = {Wen Wen and Yangming Zhang and Sheng Li},
doi = {10.3758/s13414-021-02308-y},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {6},
pages = {2430--2440},
publisher = {Attention, Perception, & Psychophysics},
abstract = {Prior information about distractor facilitates selective attention to task-relevant items and helps the optimization of oculomotor planning. In the present study, we capitalized on gaze-position decoding to examine the dynamics of attentional deployment in a feature-based attentional task that involved two groups of dots (target/distractor dots) moving toward different directions. In Experiment 1, participants were provided with target cues indicating the moving direction of target dots. The results showed that participants were biased toward the cued direction and tracked the target dots throughout the task period. In Experiment 2 and Experiment 3, participants were provided with cues that informed the moving direction of distractor dots. When the distractor cue varied on a trial-by-trial basis (Experiment 2), participants continuously monitored the distractor's direction. However, when the to-be-ignored distractor direction remained constant (Experiment 3), participants would strategically bias their attention to the distractor's direction before the cue onset to reduce the cost of redeployment of attention between trials and reactively suppress further attraction evoked by distractors during the stimulus-on stage. This functional dissociation reflected the distinct influence that expectation produced on ocular control. Taken together, these results suggest that monitoring the distractor's feature is a prerequisite for feature-based attentional inhibition, and this process is facilitated by the predictability of the distractor's feature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Aurélien Weiss; Valérian Chambon; Junseok K. Lee; Jan Drugowitsch; Valentin Wyart

Interacting with volatile environments stabilizes hidden-state inference and its brain signatures Journal Article

In: Nature Communications, vol. 12, pp. 2228, 2021.

@article{Weiss2021,
title = {Interacting with volatile environments stabilizes hidden-state inference and its brain signatures},
author = {Aurélien Weiss and Valérian Chambon and Junseok K. Lee and Jan Drugowitsch and Valentin Wyart},
doi = {10.1038/s41467-021-22396-6},
year = {2021},
date = {2021-01-01},
journal = {Nature Communications},
volume = {12},
pages = {2228},
publisher = {Springer US},
abstract = {Making accurate decisions in uncertain environments requires identifying the generative cause of sensory cues, but also the expected outcomes of possible actions. Although both cognitive processes can be formalized as Bayesian inference, they are commonly studied using different experimental frameworks, making their formal comparison difficult. Here, by framing a reversal learning task either as cue-based or outcome-based inference, we found that humans perceive the same volatile environment as more stable when inferring its hidden state by interaction with uncertain outcomes than by observation of equally uncertain cues. Multivariate patterns of magnetoencephalographic (MEG) activity reflected this behavioral difference in the neural interaction between inferred beliefs and incoming evidence, an effect originating from associative regions in the temporal lobe. Together, these findings indicate that the degree of control over the sampling of volatile environments shapes human learning and decision-making under uncertainty.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yipu Wei; Jacqueline Evers-Vermeul; Ted M. Sanders; Willem M. Mak

The role of connectives and stance markers in the processing of subjective causal relations Journal Article

In: Discourse Processes, vol. 58, no. 8, pp. 766–786, 2021.

@article{Wei2021,
title = {The role of connectives and stance markers in the processing of subjective causal relations},
author = {Yipu Wei and Jacqueline Evers-Vermeul and Ted M. Sanders and Willem M. Mak},
doi = {10.1080/0163853X.2021.1893551},
year = {2021},
date = {2021-01-01},
journal = {Discourse Processes},
volume = {58},
number = {8},
pages = {766--786},
publisher = {Routledge},
abstract = {Interpreting subjectivity in causal relations takes effort: Subjective, claim-argument relations are read slower than objective, cause-consequence relations. In an eye-tracking-while-reading experiment, we investigated whether connectives and stance markers can play a facilitative role. Sixty-five Chinese participants read sentences expressing a subjective causal relation, systematically varied in the use of stance markers (no, attitudinal, epistemic) in the first clause and connectives (neutral suoyi “so”, subjective kejian “so”) in the second clause. Results showed that processing subjectivity proceeds highly incrementally: The interplay of the subjectivity markers is visible as the sentence unfolds. Subjective connectives increased reading times, irrespective of the type of stance marker being used. Stance markers did, however, facilitate the processing of modal verbs in subjective relations. We conclude that processing subjectivity involves evaluating how the argument supports the claim and that connectives, modal verbs, and stance markers function as processing instructions that help readers achieve this evaluation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Leila Wehbe; Idan Asher Blank; Cory Shain; Richard Futrell; Roger Levy; Titus Von Der Malsburg; Nathaniel Smith; Edward Gibson; Evelina Fedorenko

Incremental language comprehension difficulty predicts activity in the language network but not the multiple demand network Journal Article

In: Cerebral Cortex, vol. 31, no. 9, pp. 4006–4023, 2021.

@article{Wehbe2021,
title = {Incremental language comprehension difficulty predicts activity in the language network but not the multiple demand network},
author = {Leila Wehbe and Idan Asher Blank and Cory Shain and Richard Futrell and Roger Levy and Titus Von Der Malsburg and Nathaniel Smith and Edward Gibson and Evelina Fedorenko},
doi = {10.1093/cercor/bhab065},
year = {2021},
date = {2021-01-01},
journal = {Cerebral Cortex},
volume = {31},
number = {9},
pages = {4006--4023},
abstract = {What role do domain-general executive functions play in human language comprehension? To address this question, we examine the relationship between behavioral measures of comprehension and neural activity in the domain-general "multiple demand" (MD) network, which has been linked to constructs like attention, working memory, inhibitory control, and selection, and implicated in diverse goal-directed behaviors. Specifically, functional magnetic resonance imaging data collected during naturalistic story listening are compared with theory-neutral measures of online comprehension difficulty and incremental processing load (reading times and eye-fixation durations). Critically, to ensure that variance in these measures is driven by features of the linguistic stimulus rather than reflecting participant- or trial-level variability, the neuroimaging and behavioral datasets were collected in nonoverlapping samples. We find no behavioral-neural link in functionally localized MD regions; instead, this link is found in the domain-specific, fronto-temporal "core language network," in both left-hemispheric areas and their right hemispheric homotopic areas. These results argue against strong involvement of domain-general executive circuits in language comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Thomas G. G. Wegner; Jan Grenzebach; Alexandra Bendixen; Wolfgang Einhäuser

Parameter dependence in visual pattern-component rivalry at onset and during prolonged viewing Journal Article

In: Vision Research, vol. 182, pp. 69–88, 2021.

@article{Wegner2021,
title = {Parameter dependence in visual pattern-component rivalry at onset and during prolonged viewing},
author = {Thomas G. G. Wegner and Jan Grenzebach and Alexandra Bendixen and Wolfgang Einhäuser},
doi = {10.1016/j.visres.2020.12.006},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {182},
pages = {69--88},
abstract = {In multistability, perceptual interpretations (“percepts”) of ambiguous stimuli alternate over time. There is considerable debate as to whether similar regularities govern the first percept after stimulus onset and percepts during prolonged presentation. We address this question in a visual pattern-component rivalry paradigm by presenting two overlaid drifting gratings, which participants perceived as individual gratings passing in front of each other (“segregated”) or as a plaid (“integrated”). We varied the enclosed angle (“opening angle”) between the gratings (experiments 1 and 2) and stimulus orientation (experiment 2). The relative number of integrated percepts increased monotonically with opening angle. The point of equality, where half of the percepts were integrated, was at a smaller opening angle at onset than during prolonged viewing. The functional dependence of the relative number of integrated percepts on opening angle showed a steeper curve at onset than during prolonged viewing. Dominance durations of integrated percepts were longer at onset than during prolonged viewing and increased with opening angle. The general pattern persisted when stimuli were rotated (experiment 2), despite some perceptual preference for cardinal motion directions over oblique directions. Analysis of eye movements, specifically the slow phase of the optokinetic nystagmus (OKN), confirmed the veridicality of participants' reports and provided a temporal characterization of percept formation after stimulus onset. Together, our results show that the first percept after stimulus onset exhibits a different dependence on stimulus parameters than percepts during prolonged viewing. This underlines the distinct role of the first percept in multistability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yuehua Wang; Shulan Lu; Derek Harter

Towards collaborative and intelligent learning environments based on eye tracking data and learning analytics: A Survey Journal Article

In: IEEE Access, vol. 9, pp. 137991–138002, 2021.


@article{Wang2021m,
title = {Towards collaborative and intelligent learning environments based on eye tracking data and learning analytics: A Survey},
author = {Yuehua Wang and Shulan Lu and Derek Harter},
doi = {10.1109/ACCESS.2021.3117780},
year = {2021},
date = {2021-01-01},
journal = {IEEE Access},
volume = {9},
pages = {137991--138002},
publisher = {IEEE},
abstract = {The current pandemic has significantly impacted educational practices, modifying many aspects of how and when we learn. In particular, remote learning and the use of digital platforms have greatly increased in importance. Online teaching and e-learning provide many benefits for information retention and schedule flexibility in our on-demand world while breaking down barriers caused by geographic location, physical facilities, transportation issues, or physical impediments. However, educators and researchers have noticed that students have faced a decline in learning and performance as a result of this sudden shift from classroom teaching to online teaching and e-learning around the world. In this paper, we focus on reviewing eye-tracking techniques and systems, data collection and management methods, datasets, and multimodal learning data analytics for promoting pervasive and proactive learning in educational environments. We then describe and discuss the crucial challenges and open issues of current learning environments and learning data analytics methods. The review and discussion show the potential of transforming traditional ways of teaching and learning in the classroom, and the feasibility of adaptively driving learning processes using eye tracking, data science, multimodal learning analytics, and artificial intelligence. These findings call for further attention and research on collaborative and intelligent learning systems, plug-and-play devices and software modules, data science, and learning analytics methods for promoting the evolution of face-to-face learning and e-learning environments and enhancing student collaboration, engagement, and success.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Youxi Wang; Xuelian Zang; Hua Zhang; Wei Shen

The processing of the second syllable in recognizing Chinese disyllabic spoken words: Evidence from eye tracking Journal Article

In: Frontiers in Psychology, vol. 12, pp. 681337, 2021.


@article{Wang2021l,
title = {The processing of the second syllable in recognizing Chinese disyllabic spoken words: Evidence from eye tracking},
author = {Youxi Wang and Xuelian Zang and Hua Zhang and Wei Shen},
doi = {10.3389/fpsyg.2021.681337},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Psychology},
volume = {12},
pages = {681337},
abstract = {In the current study, two experiments were conducted to investigate the processing of the second syllable (which was considered the rhyme at the word level) during Chinese disyllabic spoken word recognition using a printed-word paradigm. In Experiment 1, participants heard a spoken target word and were simultaneously presented with a visual display of four printed words: a target word, a phonological competitor, and two unrelated distractors. The phonological competitors were manipulated to share either full phonemic overlap of the second syllable with targets (the syllabic overlap condition; e.g., 小篆, xiao3zhuan4, “calligraphy” vs. 公转, gong1zhuan4, “revolution”) or initial phonemic overlap of the second syllable with targets (the sub-syllabic overlap condition; e.g., 圆柱, yuan2zhu4, “cylinder” vs. 公转, gong1zhuan4, “revolution”). Participants were asked to select the target words, and their eye movements were simultaneously recorded. The results did not show any phonological competition effect in either the syllabic overlap condition or the sub-syllabic overlap condition. In Experiment 2, to maximize the likelihood of observing the phonological competition effect, a target-absent version of the printed-word paradigm was adopted, in which target words were removed from the visual display. The results of Experiment 2 showed significant phonological competition effects in both conditions, i.e., more fixations were made to the phonological competitors than to the distractors. Moreover, the phonological competition effect was found to be larger in the syllabic overlap condition than in the sub-syllabic overlap condition. These findings shed light on the effect of second-syllable competition at the word level during spoken word recognition and, more importantly, showed that the initial phonemes of the second syllable at the syllabic level are also accessed during Chinese disyllabic spoken word recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yiheng Wang; Yanping Liu

Can longer gaze duration determine risky investment decisions? An interactive perspective Journal Article

In: Journal of Eye Movement Research, vol. 14, no. 4, pp. 1–8, 2021.


@article{Wang2021k,
title = {Can longer gaze duration determine risky investment decisions? An interactive perspective},
author = {Yiheng Wang and Yanping Liu},
doi = {10.16910/JEMR.14.4.3},
year = {2021},
date = {2021-01-01},
journal = {Journal of Eye Movement Research},
volume = {14},
number = {4},
pages = {1--8},
abstract = {Can longer gaze duration determine risky investment decisions? Recent studies have tested how gaze influences people's decisions and the boundaries of the gaze effect. The current experiment used an adaptive gaze-contingent manipulation, adding a self-determined option, to test whether longer gaze duration can determine risky investment decisions. The results showed that both the expected value of each option and the gaze duration influenced people's decisions. This result is consistent with the attentional drift-diffusion model (aDDM) proposed by Krajbich et al. (2010), which suggests that gaze can influence the choice process by amplifying the value of the choice. Therefore, gaze duration influences the decision when people do not have a clear preference. The results also showed that the similarity between options and the computational difficulty influence the gaze effect. This result is inconsistent with prior research that used option similarity to represent difficulty, suggesting that similarity between options and computational difficulty induce different underlying mechanisms of decision difficulty.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xiangling Wang; Tingting Wang; Ricardo Muñoz Martín; Yanfang Jia

Investigating usability in postediting neural machine translation: Evidence from translation trainees' self-perception and performance Journal Article

In: Across Languages and Cultures, vol. 22, no. 1, pp. 100–123, 2021.


@article{Wang2021j,
title = {Investigating usability in postediting neural machine translation: Evidence from translation trainees' self-perception and performance},
author = {Xiangling Wang and Tingting Wang and Ricardo Muñoz Martín and Yanfang Jia},
doi = {10.1556/084.2021.00006},
year = {2021},
date = {2021-01-01},
journal = {Across Languages and Cultures},
volume = {22},
number = {1},
pages = {100--123},
abstract = {This is a report on an empirical study of the usability of neural machine translation systems for translation trainees when post-editing (MTPE). Sixty Chinese translation trainees completed a questionnaire on their perceptions of MTPE's usability. Fifty of them later performed both a post-editing task and a regular translation task, designed to examine MTPE's usability by comparing their performance in terms of text processing speed, effort, and translation quality. Contrasting data collected by questionnaire, keylogging, eye tracking, and retrospective reports, we found that, compared with regular, unaided translation, MTPE's usefulness in performance was remarkable: (1) it increased translation trainees' text processing speed and also improved their translation quality; (2) MTPE's ease of use in performance was partly supported in that it significantly reduced informants' effort as measured by (a) fixation duration and fixation counts; (b) total task time; and (c) the number of insertion keystrokes and total keystrokes. However, (3) translation trainees generally perceived MTPE to be useful for increasing productivity, but they were skeptical about its use to improve quality. They were neutral towards the ease of use of MTPE.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xi Wang; Kenneth Holmqvist; Marc Alexa

A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm Journal Article

In: Behavior Research Methods, vol. 53, no. 5, pp. 2049–2068, 2021.


@article{Wang2021i,
title = {A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm},
author = {Xi Wang and Kenneth Holmqvist and Marc Alexa},
doi = {10.3758/s13428-020-01513-1},
year = {2021},
date = {2021-01-01},
journal = {Behavior Research Methods},
volume = {53},
number = {5},
pages = {2049--2068},
publisher = {Behavior Research Methods},
abstract = {We present an algorithmic method for aligning recall fixations with encoding fixations, to be used in looking-at-nothing paradigms that either record recall eye movements during silence or aim to speed up data analysis with recordings of recall data during speech. The algorithm utilizes a novel consensus-based elastic matching procedure to estimate which encoding fixations correspond to later recall fixations. This is not a scanpath comparison method, as fixation sequence order is ignored and only position configurations are used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We then evaluate the performance of our algorithm by investigating whether the recalled objects identified by the algorithm correspond with independent assessments of which objects in the image are marked as subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. This result is exemplified in four groups of use cases: investigating the roles of low-level visual features, faces, signs and text, and people of different sizes in recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions in the images. Examples also illustrate how the algorithm can differentiate between image objects that were fixated during silent recall vs. those that were not visually attended, even though they were fixated during encoding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Wendy Wang; Meaghan Clough; Owen White; Neil Shuey; Anneke Van Der Walt; Joanne Fielding

Detecting cognitive impairment in idiopathic intracranial hypertension using ocular motor and neuropsychological testing Journal Article

In: Frontiers in Neurology, vol. 12, pp. 772513, 2021.


@article{Wang2021h,
title = {Detecting cognitive impairment in idiopathic intracranial hypertension using ocular motor and neuropsychological testing},
author = {Wendy Wang and Meaghan Clough and Owen White and Neil Shuey and Anneke Van Der Walt and Joanne Fielding},
doi = {10.3389/fneur.2021.772513},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Neurology},
volume = {12},
pages = {772513},
abstract = {Objective: To determine whether cognitive impairments in patients with Idiopathic Intracranial Hypertension (IIH) are correlated with changes in visual processing, weight, waist circumference, mood or headache, and whether they change over time. Methods: Twenty-two newly diagnosed IIH patients participated, with a subset assessed longitudinally at 3 and 6 months. Both conventional and novel ocular motor tests of cognition were included: Symbol Digit Modalities Test (SDMT), Stroop Colour and Word Test (SCWT), Digit Span, California Verbal Learning Test (CVLT), prosaccade (PS) task, antisaccade (AS) task, and interleaved antisaccade-prosaccade (AS-PS) task. Patients also completed headache, mood, and visual functioning questionnaires. Results: IIH patients performed more poorly than controls on the SDMT (p < 0.001), SCWT (p = 0.021), Digit Span test (p < 0.001) and CVLT (p = 0.004) at baseline, and generated a higher proportion of AS errors in both the AS (p < 0.001) and AS-PS tasks (p = 0.007). Further, IIH patients exhibited prolonged latencies on the cognitively complex AS-PS task (p = 0.034). While weight, waist circumference, headache and mood did not predict performance on any experimental measure, increased retinal nerve fibre layer (RNFL) was associated with AS error rate on both the block [F(3, 19)=3.22},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tianlu Wang; Lena M. Hofbauer; Dante Mantini; Céline R. Gillebert

Behavioural and neural effects of eccentricity and visual field during covert visuospatial attention Journal Article

In: Neuroimage: Reports, vol. 1, no. 3, pp. 100039, 2021.


@article{Wang2021g,
title = {Behavioural and neural effects of eccentricity and visual field during covert visuospatial attention},
author = {Tianlu Wang and Lena M. Hofbauer and Dante Mantini and Céline R. Gillebert},
doi = {10.1016/j.ynirp.2021.100039},
year = {2021},
date = {2021-01-01},
journal = {Neuroimage: Reports},
volume = {1},
number = {3},
pages = {100039},
abstract = {The attentional priority map plays a key role in the distribution of attention, and is modulated by bottom-up sensory as well as top-down task-dependent factors. The intraparietal sulcus (IPS) is a key candidate to hold a neural representation of the attentional priority map. In the current study, we examined the role of the IPS during covert attention to spatial locations with high or low eccentricity in one or both visual hemifields. To this end, eighteen neurologically healthy participants performed a cued letter discrimination task in which they were endogenously cued to attend to a location at 5° or 10° eccentricity in the left and/or right visual field. We briefly presented a four-letter target array and subsequently probed perceptual performance while acquiring event-related functional MRI data. While behavioural results showed greater letter discrimination performance at the low-eccentricity location compared to the high-eccentricity location, no neural effect of eccentricity was observed. The results further showed that attending to one visual hemifield produced higher activation in the left parietal and occipital cortex compared to attending bilaterally. Future studies may consider increasing the involvement of top-down control of attention to the cued location to study the neural effect of eccentricity, e.g., through manipulating the task difficulty.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mengsi Wang; Hazel I. Blythe; Simon P. Liversedge

Eye-movement control during learning and scanning of English pseudoword stimuli: Exposure frequency effects and spacing effects in a visual search task Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 8, pp. 3146–3161, 2021.


@article{Wang2021f,
title = {Eye-movement control during learning and scanning of English pseudoword stimuli: Exposure frequency effects and spacing effects in a visual search task},
author = {Mengsi Wang and Hazel I. Blythe and Simon P. Liversedge},
doi = {10.3758/s13414-021-02322-0},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {8},
pages = {3146--3161},
publisher = {Attention, Perception, & Psychophysics},
abstract = {Wang et al. (Attention, Perception, and Psychophysics, in press, 2021) reported a Landolt-C learning and scanning experiment. In a learning session, they successfully simulated exposure frequency effects by training participants to learn target Landolt-C clusters with different numbers of exposures. The rate of learning high-frequency (HF) targets was greater than that of learning low-frequency (LF) targets. In a subsequent scanning session, participants were required to scan text-like Landolt-C strings to detect whether any pre-learnt target was embedded in the strings. The Landolt-C strings were displayed under different spacing formats (i.e., spaced, unspaced, and unspaced shaded). However, the simulated exposure frequency effect did not occur in the scanning session. Wang et al. argued that one straightforward reason for this might be that participants failed to maintain memory of the pre-learnt targets into the scanning session. In the current study, we employed the same learning and scanning paradigm to investigate whether exposure frequency effects would occur in a target search task using easier learning materials – pseudoword stimuli. The learning of pseudoword stimuli was much more successful than that of Landolt-C stimuli. Interestingly, however, we found a very different rate-of-learning effect, such that the rate of learning LF targets was greater than that of HF targets. To our surprise, we did not find any influence of exposure frequency on eye movements during scanning, even when participants were able to identify pre-learnt pseudowords in strings. The learning rate effect, exposure frequency effects, and saccadic targeting during the scanning of strings under different spacing formats are discussed in this paper.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.3758/s13414-021-02322-0


Jinxia Wang; Xiaoying Sun; Jiachen Lu; Hao Ran Dou; Yi Lei

Generalization gradients for fear and disgust in human associative learning Journal Article

In: Scientific Reports, vol. 11, pp. 14210, 2021.


@article{Wang2021e,
title = {Generalization gradients for fear and disgust in human associative learning},
author = {Jinxia Wang and Xiaoying Sun and Jiachen Lu and Hao Ran Dou and Yi Lei},
doi = {10.1038/s41598-021-93544-7},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {14210},
publisher = {Nature Publishing Group UK},
abstract = {Previous research indicates that excessive fear is a critical feature in anxiety disorders; however, recent studies suggest that disgust may also contribute to the etiology and maintenance of some anxiety disorders. It remains unclear if differences exist between these two threat-related emotions in conditioning and generalization. Evaluating different patterns of fear and disgust learning would facilitate a deeper understanding of how anxiety disorders develop. In this study, 32 college students completed threat conditioning tasks, including conditioned stimuli paired with frightening or disgusting images. Fear and disgust were divided into two randomly ordered blocks to examine differences by recording subjective US expectancy ratings and eye movements in the conditioning and generalization process. During conditioning, differing US expectancy ratings (fear vs. disgust) were found only on CS-, which may demonstrate that fear is associated with inferior discrimination learning. During the generalization test, participants exhibited greater US expectancy ratings to fear-related GS1 (generalized stimulus) and GS2 relative to disgust GS1 and GS2. Fear led to longer reaction times than disgust in both phases, and the pupil size and fixation duration for fear stimuli were larger than for disgust stimuli, suggesting that disgust generalization has a steeper gradient than fear generalization. These findings provide preliminary evidence for differences between fear- and disgust-related stimuli in conditioning and generalization, and suggest insights into treatment for anxiety and other fear- or disgust-related disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jingwen Wang; Bernhard Angele; Guojie Ma; Xingshan Li

Repetition causes confusion: Insights to word segmentation during Chinese reading Journal Article

In: Journal of Experimental Psychology: Learning Memory and Cognition, vol. 47, no. 1, pp. 147–156, 2021.


@article{Wang2021d,
title = {Repetition causes confusion: Insights to word segmentation during Chinese reading},
author = {Jingwen Wang and Bernhard Angele and Guojie Ma and Xingshan Li},
doi = {10.1037/xlm0000817},
year = {2021},
date = {2021-01-01},
journal = {Journal of Experimental Psychology: Learning Memory and Cognition},
volume = {47},
number = {1},
pages = {147--156},
abstract = {Since there are no spaces between words to mark word boundaries in Chinese, it is common to see 2 identical neighboring characters in natural text. Usually, this occurs when there are 2 adjacent words containing the same character (we will call such a coincidental sequence of 2 identical characters repeated characters). In the present study, we examined how Chinese readers process words when there are repeated characters. In 3 experiments, we compared how Chinese readers process 4-character strings including 2 repeated characters (e.g. 行动动机, pinyin: xíngdòng dòngjī, meaning behavioral motivation) with a control condition where none of the characters repeat (e.g. 行动欲望, pinyin: xíngdòng yùwàng, meaning behavioral desire). In Experiment 1, the 4-character strings were presented for 40 ms and participants were asked to report as many characters as possible. Participants reported the second and third characters less accurately in the repeated condition than the control condition. In Experiments 2A and 2B, we embedded 2 different types of 4-character strings, compound Chinese characters and simple Chinese characters, into the same sentence frames, and asked participants to read these sentences normally. Gaze duration and total time on the second word were significantly longer in the repeated condition. These results suggest that the repeated characters increased the difficulty of word processing. Moreover, the results are consistent with the predictions of serial models, which assume that words are processed serially in reading. (PsycInfo Database Record (c) 2021 APA, all rights reserved)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jie Z. Wang; Eileen Kowler

Micropursuit and the control of attention and eye movements in dynamic environments Journal Article

In: Journal of Vision, vol. 21, no. 8, pp. 1–27, 2021.


@article{Wang2021c,
title = {Micropursuit and the control of attention and eye movements in dynamic environments},
author = {Jie Z. Wang and Eileen Kowler},
doi = {10.1167/jov.21.8.6},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {8},
pages = {1--27},
abstract = {It is more challenging to plan eye movements during perceptual tasks performed in dynamic displays than in static displays. Decisions about the timing of saccades become more critical, and decisions must also involve smooth eye movements, as well as saccades. The present study examined eye movements when judging which of two moving discs would arrive first, or collide, at a common meeting point. Perceptual discrimination after training was precise (Weber fractions < 6%). Strategies reflected a combined contribution of saccades and smooth eye movements. The preferred strategy was to look near the meeting point when strategies were freely chosen. When strategies were assigned, looking near the meeting point produced better performance than switching between the discs. Smooth eye movements were engaged in two ways: (a) low-velocity smooth eye movements correlated with the motion of each disc (micropursuit) were found while the line of sight remained between the discs; and (b) spontaneous smooth pursuit of the pair of discs occurred after the perceptual report, when the discs moved as a pair along a common path. The results show clear preferences and advantages for those eye movement strategies during dynamic perceptual tasks that require minimal management or effort. In addition, smooth eye movements, whose involvement during perceptual tasks within dynamic displays may have previously escaped notice, provide useful indicators of the strategies used to select information and distribute attention during the performance of dynamic perceptual tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chin An Wang; Kien Trong Nguyen; Chi Hung Juan

Linking pupil size modulated by global luminance and motor preparation to saccade behavior Journal Article

In: Neuroscience, vol. 476, pp. 90–101, 2021.


@article{Wang2021b,
title = {Linking pupil size modulated by global luminance and motor preparation to saccade behavior},
author = {Chin An Wang and Kien Trong Nguyen and Chi Hung Juan},
doi = {10.1016/j.neuroscience.2021.09.014},
year = {2021},
date = {2021-01-01},
journal = {Neuroscience},
volume = {476},
pages = {90--101},
publisher = {IBRO},
abstract = {Saccades are rapid eye movements that are used to move the high acuity fovea in a serial manner in the exploration of the visual scene. Stimulus contrast is known to modulate saccade latency and metrics possibly via changing visual activity in the superior colliculus (SC), a midbrain structure causally involved in saccade generation. However, the quality of visual signals should also be modulated by the amount of light projected onto the retina, which is gated by the size of the pupil. Although absolute pupil size should modulate visual signals and in turn affect saccade responses, research examining this relationship is very limited. In addition, pupil size is associated with motor preparation. However, the role of pupil dilation in saccade metrics remains unexplored. Through varying peripheral background luminance level and target visual contrast in the saccade task, we investigated the role of absolute pupil size and baseline-corrected pupil dilation in saccade latency and metrics. Higher target detection accuracy was obtained with lower background luminance level, and larger absolute pupil diameter correlated with smaller saccade amplitude and higher saccade peak velocities. More interestingly, comparable modulation by pupil dilation and by stimulus contrast was obtained, with larger pupil dilation (or higher contrast stimuli) correlating with faster saccade latencies, larger amplitude, higher peak velocities, and smaller endpoint deviation. Together, our results demonstrated the influence of absolute pupil size induced by global luminance level and of baseline-corrected pupil dilation associated with motor preparation on saccade latency and metrics, implicating the role of the SC in this behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chin An Wang; Douglas P. Munoz

Differentiating global luminance, arousal and cognitive signals on pupil size and microsaccades Journal Article

In: European Journal of Neuroscience, vol. 54, no. 10, pp. 7560–7574, 2021.


@article{Wang2021,
title = {Differentiating global luminance, arousal and cognitive signals on pupil size and microsaccades},
author = {Chin An Wang and Douglas P. Munoz},
doi = {10.1111/ejn.15508},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Neuroscience},
volume = {54},
number = {10},
pages = {7560--7574},
abstract = {Pupil size provides a proxy for neural activity associated with global luminance, arousal and cognitive processing. Microsaccades are also modulated by arousal and cognitive processing. Are these effects of arousal and cognitive signals on pupil size and microsaccades coordinated? If so, via what neural mechanisms? We hypothesized that if pupil size and microsaccades are coordinately modulated by these processes, pupil size immediately before microsaccade onset, as an index for ongoing cognitive and arousal processing, should correlate with microsaccade responses during tasks alternating these signals. Here, we examined the relationship between pupil size and microsaccade responses in tasks that included variations in global luminance, arousal and inhibitory control. Higher microsaccade peak velocities correlated with larger pre-microsaccade pupil response related to arousal and inhibitory control signals. In contrast, pupil responses evoked by global luminance signals did not correlate with microsaccade responses. Given the central role of the superior colliculus in microsaccade generation, these results suggest the critical involvement of the superior colliculus in coordinating pupil and microsaccade responses under arousal and inhibitory control modulations, but not under pupil luminance modulation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chin An Wang; Douglas P. Munoz

Coordination of pupil and saccade responses by the superior colliculus Journal Article

In: Journal of Cognitive Neuroscience, vol. 33, no. 5, pp. 1–35, 2021.


@article{Wang2021a,
title = {Coordination of pupil and saccade responses by the superior colliculus},
author = {Chin An Wang and Douglas P. Munoz},
doi = {10.1162/jocn_a_01688},
year = {2021},
date = {2021-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {33},
number = {5},
pages = {1--35},
abstract = {The appearance of a salient stimulus evokes saccadic eye movements and pupil dilation as part of the orienting response. Although the role of the superior colliculus (SC) in saccade and pupil dilation has been established separately, whether and how these responses are coordinated remains unknown. The SC also receives global luminance signals from the retina, but whether global luminance modulates saccade and pupil responses coordinated by the SC remains unknown. Here, we used microstimulation to causally determine how the SC coordinates saccade and pupil responses and whether global luminance modulates these responses by varying stimulation frequency and global luminance in male monkeys. Stimulation frequency modulated saccade and pupil responses, with trial-by-trial correlations between the two responses. Global luminance only modulated pupil, but not saccade, responses. Our results demonstrate an integrated role of the SC in coordinating saccade and pupil responses, characterizing luminance-independent modulation in the SC, together elucidating the differentiated pathways underlying this behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Aiping Wang; Ming Yan; Bei Wang; Gaoding Jia; Albrecht W. Inhoff

The perceptual span in Tibetan reading Journal Article

In: Psychological Research, vol. 85, no. 3, pp. 1307–1316, 2021.


@article{Wang2021n,
title = {The perceptual span in Tibetan reading},
author = {Aiping Wang and Ming Yan and Bei Wang and Gaoding Jia and Albrecht W. Inhoff},
doi = {10.1007/s00426-020-01313-4},
year = {2021},
date = {2021-01-01},
journal = {Psychological Research},
volume = {85},
number = {3},
pages = {1307--1316},
publisher = {Springer Berlin Heidelberg},
abstract = {Tibetan script differs from other alphabetic writing systems in that word forms can be composed of horizontally and vertically arrayed characters. To examine information extraction during the reading of this script, eye movements of native readers were recorded and used to control the size of a window of legible text that moved in synchrony with the eyes. Letters outside the window were masked, and no viewing constraints were imposed in a control condition. Comparisons of window conditions with the control condition showed that reading speed and oculomotor activity matched the control condition, when windows revealed three letters to the left and seven to eight letters to the right of a fixated letter location. Cross-script comparisons indicate that this perceptual span is smaller than for English and larger than for Chinese script. We suggest that the information density of a writing system influences the perceptual span during reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
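The gaze-contingent moving-window paradigm described in this abstract can be sketched as follows. This is an illustrative simplification (the function and parameter names are hypothetical, not from the study): the window spans three letters left and eight letters right of the fixated position, and letters outside it are replaced by a mask character; in the actual experiment the window moves in synchrony with the eyes, whereas here a single fixation is rendered.

```python
def moving_window(text, fixation, left=3, right=8, mask="x"):
    """Render one fixation of a gaze-contingent moving window.

    Characters within `left` positions before and `right` positions
    after the fixated index stay legible; all others are masked.
    """
    return "".join(
        ch if fixation - left <= i <= fixation + right else mask
        for i, ch in enumerate(text)
    )
```

For example, with the default window, fixating index 5 of a 16-character string leaves positions 2 through 13 legible and masks the rest.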


Kerri Walter; Peter Bex

Cognitive load influences oculomotor behavior in natural scenes Journal Article

In: Scientific Reports, vol. 11, pp. 12405, 2021.


@article{Walter2021,
title = {Cognitive load influences oculomotor behavior in natural scenes},
author = {Kerri Walter and Peter Bex},
doi = {10.1038/s41598-021-91845-5},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {12405},
publisher = {Nature Publishing Group UK},
abstract = {Cognitive neuroscience researchers have identified relationships between cognitive load and eye movement behavior that are consistent with oculomotor biomarkers for neurological disorders. We develop an adaptive visual search paradigm that manipulates task difficulty and examine the effect of cognitive load on oculomotor behavior in healthy young adults. Participants (N = 30) free-viewed a sequence of 100 natural scenes for 10 s each, while their eye movements were recorded. After each image, participants completed a 4 alternative forced choice task in which they selected a target object from one of the previously viewed scenes, among 3 distracters of the same object type but from alternate scenes. Following two correct responses, the target object was selected from an image increasingly farther back (N-back) in the image stream; following an incorrect response, N decreased by 1. N-back thus quantifies and individualizes cognitive load. The results show that response latencies increased as N-back increased, and pupil diameter increased with N-back, before decreasing at very high N-back. These findings are consistent with previous studies and confirm that this paradigm was successful in actively engaging working memory, and successfully adapts task difficulty to individual subject's skill levels. We hypothesized that oculomotor behavior would covary with cognitive load. We found that as cognitive load increased, there was a significant decrease in the number of fixations and saccades. Furthermore, the total duration of saccades decreased with the number of events, while the total duration of fixations remained constant, suggesting that as cognitive load increased, subjects made fewer, longer fixations. These results suggest that cognitive load can be tracked with an adaptive visual search task, and that oculomotor strategies are affected as a result of greater cognitive demand in healthy adults.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
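The adaptive rule described in this abstract (two correct responses increase N-back by 1; an incorrect response decreases it by 1) amounts to a 2-up/1-down staircase. A minimal per-trial sketch, with hypothetical names and an assumed floor of N = 1 (the abstract does not state a lower bound):

```python
def update_nback(n, streak, correct):
    """One-trial update of the adaptive N-back staircase.

    Two consecutive correct responses -> N + 1 (correct streak resets);
    an incorrect response -> N - 1, floored at 1 (streak resets).
    Returns (new_n, new_streak).
    """
    if not correct:
        return max(1, n - 1), 0
    if streak + 1 == 2:
        return n + 1, 0
    return n, streak + 1
```

Tracking the streak outside the function keeps the update stateless, so the same rule can be replayed over a recorded response sequence when estimating a participant's cognitive load.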


R. Calen Walshe; Wilson Geisler

Efficient allocation of attentional sensitivity gain in visual cortex reduces foveal sensitivity in visual search Journal Article

In: Current Biology, pp. 1–11, 2021.


@article{Walshe2021,
title = {Efficient allocation of attentional sensitivity gain in visual cortex reduces foveal sensitivity in visual search},
author = {R. Calen Walshe and Wilson Geisler},
doi = {10.1016/j.cub.2021.10.011},
year = {2021},
date = {2021-01-01},
journal = {Current Biology},
pages = {1--11},
publisher = {Elsevier Ltd.},
abstract = {The human visual system has a high-resolution fovea and a low-resolution periphery. When actively searching for a target, humans perform a covert search during each fixation, and then shift fixation (the fovea) to probable target locations. Previous studies of covert search under carefully controlled conditions provide strong evidence that for simple and small search displays, humans process all potential target locations with the same efficiency that they process those locations when individually cued on each trial. Here, we extend these studies to the case of large displays, in which the target can appear anywhere within the display. These more natural conditions reveal an attentional effect in which sensitivity in the fovea and parafovea is greatly diminished. We show that this "foveal neglect" is the expected consequence of efficiently allocating a fixed total attentional sensitivity gain across the retinotopic map in the visual cortex. We present a formal theory that explains our findings and the previous findings.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Josefine Waldthaler; Lena Stock; Charlotte Krüger-Zechlin; Lars Timmermann

Age at Parkinson's disease onset modulates the effect of levodopa on response inhibition: Support for the dopamine overdose hypothesis from the antisaccade task Journal Article

In: Neuropsychologia, vol. 163, pp. 108082, 2021.

Abstract | Links | BibTeX

@article{Waldthaler2021,
title = {Age at Parkinson's disease onset modulates the effect of levodopa on response inhibition: Support for the dopamine overdose hypothesis from the antisaccade task},
author = {Josefine Waldthaler and Lena Stock and Charlotte Krüger-Zechlin and Lars Timmermann},
doi = {10.1016/j.neuropsychologia.2021.108082},
year = {2021},
date = {2021-01-01},
journal = {Neuropsychologia},
volume = {163},
pages = {108082},
publisher = {Elsevier Ltd},
abstract = {The antisaccade task is an established eye-tracking paradigm to explore response inhibition. While many studies showed that antisaccade performance is impaired in Parkinson's disease (PD), the effect of dopaminergic medication is still an area of debate. According to the dopamine overdose hypothesis, intrinsic basal dopamine levels in ventral parts of the striatum determine whether levodopa intake has beneficial or detrimental effects on dopamine-dependent cognitive tasks. The objective of this study was therefore to explore the effect of several disease-related factors on changes in antisaccade performance after levodopa intake in PD. Thirty-five individuals with PD (and 30 healthy controls) performed antisaccades in OFF and ON medication state. Multiple linear regressions were calculated to predict the change in antisaccade latency, directive errors and express saccade rate based on age at PD onset, disease duration, levodopa-equivalent daily dose, motor symptom severity and executive functions. Levodopa intake did not alter antisaccade performance on a group level. However, the effect of levodopa was differentially modulated by age at PD onset and motor symptom severity. Earlier disease onset and milder motor symptoms in OFF medication state were associated with reduced response inhibition capacity after levodopa intake measured as increased express saccade and error rates. Our results indicate that levodopa may have opposing effects on oculomotor response inhibition dependent on the age at PD onset and motor disease severity. Assuming less dopaminergic loss in ventral parts of the striatum in early compared to late onset PD, these findings support the dopamine overdose hypothesis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

The antisaccade task is an established eye-tracking paradigm to explore response inhibition. While many studies showed that antisaccade performance is impaired in Parkinson's disease (PD), the effect of dopaminergic medication is still an area of debate. According to the dopamine overdose hypothesis, intrinsic basal dopamine levels in ventral parts of the striatum determine whether levodopa intake has beneficial or detrimental effects on dopamine-dependent cognitive tasks. The objective of this study was therefore to explore the effect of several disease-related factors on changes in antisaccade performance after levodopa intake in PD. Thirty-five individuals with PD (and 30 healthy controls) performed antisaccades in OFF and ON medication state. Multiple linear regressions were calculated to predict the change in antisaccade latency, directive errors and express saccade rate based on age at PD onset, disease duration, levodopa-equivalent daily dose, motor symptom severity and executive functions. Levodopa intake did not alter antisaccade performance on a group level. However, the effect of levodopa was differentially modulated by age at PD onset and motor symptom severity. Earlier disease onset and milder motor symptoms in OFF medication state were associated with reduced response inhibition capacity after levodopa intake measured as increased express saccade and error rates. Our results indicate that levodopa may have opposing effects on oculomotor response inhibition dependent on the age at PD onset and motor disease severity. Assuming less dopaminergic loss in ventral parts of the striatum in early compared to late onset PD, these findings support the dopamine overdose hypothesis.

Close

  • doi:10.1016/j.neuropsychologia.2021.108082

Close

Ilja Wagner; Christian Wolf; Alexander C. Schütz

Motor learning by selection in visual working memory Journal Article

In: Scientific Reports, vol. 11, pp. 1–12, 2021.

Abstract | Links | BibTeX

@article{Wagner2021,
title = {Motor learning by selection in visual working memory},
author = {Ilja Wagner and Christian Wolf and Alexander C. Schütz},
doi = {10.1038/s41598-021-87572-6},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {1--12},
publisher = {Nature Publishing Group UK},
abstract = {Motor adaptation maintains movement accuracy over the lifetime. Saccadic eye movements have been used successfully to study the mechanisms and neural basis of adaptation. Using behaviorally irrelevant targets, it has been shown that saccade adaptation is driven by errors only in a brief temporal interval after movement completion. However, under natural conditions, eye movements are used to extract information from behaviorally relevant objects and to guide actions manipulating these objects. In this case, the action outcome often becomes apparent only long after movement completion, outside the supposed temporal window of error evaluation. Here, we show that saccade adaptation can be driven by error signals long after the movement when using behaviorally relevant targets. Adaptation occurred when a task-relevant target appeared two seconds after the saccade, or when a retro-cue indicated which of two targets, stored in visual working memory, was task-relevant. Our results emphasize the important role of visual working memory for optimal movement control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Motor adaptation maintains movement accuracy over the lifetime. Saccadic eye movements have been used successfully to study the mechanisms and neural basis of adaptation. Using behaviorally irrelevant targets, it has been shown that saccade adaptation is driven by errors only in a brief temporal interval after movement completion. However, under natural conditions, eye movements are used to extract information from behaviorally relevant objects and to guide actions manipulating these objects. In this case, the action outcome often becomes apparent only long after movement completion, outside the supposed temporal window of error evaluation. Here, we show that saccade adaptation can be driven by error signals long after the movement when using behaviorally relevant targets. Adaptation occurred when a task-relevant target appeared two seconds after the saccade, or when a retro-cue indicated which of two targets, stored in visual working memory, was task-relevant. Our results emphasize the important role of visual working memory for optimal movement control.

Close

  • doi:10.1038/s41598-021-87572-6

Close

Cécile Vullings; Preeti Verghese

Mapping the binocular scotoma in macular degeneration Journal Article

In: Journal of Vision, vol. 21, no. 3, pp. 1–12, 2021.

Abstract | Links | BibTeX

@article{Vullings2021,
title = {Mapping the binocular scotoma in macular degeneration},
author = {Cécile Vullings and Preeti Verghese},
doi = {10.1167/jov.21.3.9},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {3},
pages = {1--12},
abstract = {When the scotoma is binocular in macular degeneration (MD), it often obscures objects of interest, causing individuals to miss information. To map the binocular scotoma as precisely as current methods that map the monocular scotoma, we propose an iterative eye-tracker method. Study participants included nine individuals with MD and four age-matched controls. We measured the extent of the monocular scotomata using a scanning laser ophthalmoscope/optical coherence tomography (SLO/OCT). Then, we precisely mapped monocular and binocular scotomata with an eye tracker, while fixation was monitored. Participants responded whenever they detected briefly flashed dots, which were first presented on a coarse grid, and then at manually selected points to refine the shape and edges of the scotoma. Monocular scotomata measured in the SLO and eye tracker are highly similar, validating the eye-tracking method for scotoma mapping. Moreover, all participants used clustered fixation loci corresponding to their dominant preferred fixation locus. Critically, for individuals with binocular scotomata, the binocular map from the eye tracker was consistent with the overlap of the monocular scotoma profiles from the SLO. Thus, eye-tracker-based perimetry offers a reliable and sensitive tool for measuring both monocular and binocular scotomata, unlike the SLO/OCT that is limited to monocular viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

When the scotoma is binocular in macular degeneration (MD), it often obscures objects of interest, causing individuals to miss information. To map the binocular scotoma as precisely as current methods that map the monocular scotoma, we propose an iterative eye-tracker method. Study participants included nine individuals with MD and four age-matched controls. We measured the extent of the monocular scotomata using a scanning laser ophthalmoscope/optical coherence tomography (SLO/OCT). Then, we precisely mapped monocular and binocular scotomata with an eye tracker, while fixation was monitored. Participants responded whenever they detected briefly flashed dots, which were first presented on a coarse grid, and then at manually selected points to refine the shape and edges of the scotoma. Monocular scotomata measured in the SLO and eye tracker are highly similar, validating the eye-tracking method for scotoma mapping. Moreover, all participants used clustered fixation loci corresponding to their dominant preferred fixation locus. Critically, for individuals with binocular scotomata, the binocular map from the eye tracker was consistent with the overlap of the monocular scotoma profiles from the SLO. Thus, eye-tracker-based perimetry offers a reliable and sensitive tool for measuring both monocular and binocular scotomata, unlike the SLO/OCT that is limited to monocular viewing.

Close

  • doi:10.1167/jov.21.3.9

Close

Stella D. Voulgaropoulou; Fasya Fauzani; Janine Pfirrmann; Claudia Vingerhoets; Thérèse Amelsvoort; Dennis Hernaus

Asymmetric effects of acute stress on cost and benefit learning Journal Article

In: Psychoneuroendocrinology, pp. 105646, 2021.

Abstract | Links | BibTeX

@article{Voulgaropoulou2021,
title = {Asymmetric effects of acute stress on cost and benefit learning},
author = {Stella D. Voulgaropoulou and Fasya Fauzani and Janine Pfirrmann and Claudia Vingerhoets and Thérèse Amelsvoort and Dennis Hernaus},
doi = {10.1016/j.psyneuen.2021.105646},
year = {2021},
date = {2021-01-01},
journal = {Psychoneuroendocrinology},
pages = {105646},
publisher = {Elsevier},
abstract = {Background: Humans are continuously exposed to stressful challenges in everyday life. Such stressful events trigger a complex physiological reaction – the fight-or-flight response – that can hamper flexible decision-making and learning. Inspired by key neural and peripheral characteristics of the fight-or-flight response, here we ask whether acute stress changes how humans learn about costs and benefits. Methods: Healthy adults were randomly exposed to an acute stress (age mean=23.48, 21/40 female) or no-stress control (age mean=23.80, 22/40 female) condition, after which they completed areinforcement learning task in which they minimize cost (physical effort) and maximize benefits (monetary rewards). During the task pupillometry data were collected. A computational model of cost-benefit reinforcement learning was employed to investigate the effect of acute stress on cost and benefit learning and decision-making. Results: Acute stress improved learning to maximize rewards relative to minimizing physical effort (Condition-by-Trial Type interaction: F(1,78)=6.53},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Background: Humans are continuously exposed to stressful challenges in everyday life. Such stressful events trigger a complex physiological reaction – the fight-or-flight response – that can hamper flexible decision-making and learning. Inspired by key neural and peripheral characteristics of the fight-or-flight response, here we ask whether acute stress changes how humans learn about costs and benefits. Methods: Healthy adults were randomly exposed to an acute stress (age mean=23.48, 21/40 female) or no-stress control (age mean=23.80, 22/40 female) condition, after which they completed areinforcement learning task in which they minimize cost (physical effort) and maximize benefits (monetary rewards). During the task pupillometry data were collected. A computational model of cost-benefit reinforcement learning was employed to investigate the effect of acute stress on cost and benefit learning and decision-making. Results: Acute stress improved learning to maximize rewards relative to minimizing physical effort (Condition-by-Trial Type interaction: F(1,78)=6.53

Close

  • doi:10.1016/j.psyneuen.2021.105646

Close

Christoph J Völter; Ludwig Huber; Christoph J Völter

Dogs ' looking times and pupil dilation response reveal expectations about contact causality Journal Article

In: Biology Letters, vol. 17, pp. 1–5, 2021.

Abstract | BibTeX

@article{Voelter2021,
title = {Dogs ' looking times and pupil dilation response reveal expectations about contact causality},
author = {Christoph J Völter and Ludwig Huber and Christoph J Völter},
year = {2021},
date = {2021-01-01},
journal = {Biology Letters},
volume = {17},
pages = {1--5},
abstract = {Contact causality is one of the fundamental principles allowing us to make sense of our physical environment. From an early age, humans perceive spatio-temporally contiguous launching events as causal. Surprisingly little is known about causal perception in non-human animals, particularly outside the primate order. Violation-of-expectation paradigms in combination with eye-tracking and pupillometry have been used to study physical expectations in human infants. In the current study, we establish this approach for dogs (Canis familiaris). We presented dogs with realistic three-dimensional animations of launching events with contact (regular launching event) or without contact between the involved objects. In both conditions, the objects moved with the same timing and kinematic properties. The dogs tracked the object movements closely throughout the study but their pupils were larger in the no-contact condition and they looked longer at the object initiating the launch after the no-contact event compared to the contact event. We conclude that dogs have implicit expectations about contact causality.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Contact causality is one of the fundamental principles allowing us to make sense of our physical environment. From an early age, humans perceive spatio-temporally contiguous launching events as causal. Surprisingly little is known about causal perception in non-human animals, particularly outside the primate order. Violation-of-expectation paradigms in combination with eye-tracking and pupillometry have been used to study physical expectations in human infants. In the current study, we establish this approach for dogs (Canis familiaris). We presented dogs with realistic three-dimensional animations of launching events with contact (regular launching event) or without contact between the involved objects. In both conditions, the objects moved with the same timing and kinematic properties. The dogs tracked the object movements closely throughout the study but their pupils were larger in the no-contact condition and they looked longer at the object initiating the launch after the no-contact event compared to the contact event. We conclude that dogs have implicit expectations about contact causality.

Close

Lucy Vivash; Kelly L Bertram; Charles B Malpas; Cassandra Marotta; Ian H Harding; Scott Kolbe; Joanne Fielding; Meaghan Clough; Simon J G Lewis; Stephen Tisch; Andrew H Evans; John D O'Sullivan; Thomas Kimber; David Darby; Leonid Churilov; Meng Law; Christopher M Hovens; Dennis Velakoulis; Terence J O'Brien

Sodium selenate as a disease-modifying treatment for progressive supranuclear palsy: protocol for a phase 2, randomised, double-blind, placebo-controlled trial Journal Article

In: BMJ Open, vol. 11, no. 12, pp. 1–9, 2021.

Abstract | Links | BibTeX

@article{Vivash2021,
title = {Sodium selenate as a disease-modifying treatment for progressive supranuclear palsy: protocol for a phase 2, randomised, double-blind, placebo-controlled trial},
author = {Lucy Vivash and Kelly L Bertram and Charles B Malpas and Cassandra Marotta and Ian H Harding and Scott Kolbe and Joanne Fielding and Meaghan Clough and Simon J G Lewis and Stephen Tisch and Andrew H Evans and John D O'Sullivan and Thomas Kimber and David Darby and Leonid Churilov and Meng Law and Christopher M Hovens and Dennis Velakoulis and Terence J O'Brien},
doi = {10.1136/bmjopen-2021-055019},
year = {2021},
date = {2021-01-01},
journal = {BMJ Open},
volume = {11},
number = {12},
pages = {1--9},
abstract = {Introduction: Progressive supranuclear palsy (PSP) is a neurodegenerative disorder for which there are currently no disease-modifying therapies. The neuropathology of PSP is associated with the accumulation of hyperphosphorylated tau in the brain. We have previously shown that protein phosphatase 2 activity in the brain is upregulated by sodium selenate, which enhances dephosphorylation. Therefore, the objective of this study is to evaluate the efficacy and safety of sodium selenate as a disease-modifying therapy for PSP. Methods and analysis This will be a multi-site, phase 2b, double-blind, placebo-controlled trial of sodium selenate. 70 patients will be recruited at six Australian academic hospitals and research institutes. Following the confirmation of eligibility at screening, participants will be randomised (1:1) to receive 52 weeks of active treatment (sodium selenate; 15 mg three times a day) or matching placebo. Regular safety and efficacy visits will be completed throughout the study period. The primary study outcome is change in an MRI volume composite (frontal lobe+midbrain–3rd ventricle) over the treatment period. Analysis will be with a general linear model (GLM) with the MRI composite at 52 weeks as the dependent variable, treatment group as an independent variable and baseline MRI composite as a covariate. Secondary outcomes are change in PSP rating scale, clinical global impression of change (clinician) and change in midbrain mean diffusivity. These outcomes will also be analysed with a GLM as above, with the corresponding baseline measure entered as a covariate. Secondary safety and tolerability outcomes are frequency of serious adverse events, frequency of down- titration occurrences and frequency of study discontinuation. Additional, as yet unplanned, exploratory outcomes will include analyses of other imaging, cognitive and biospecimen measures. 
Ethics and dissemination The study was approved by the Alfred Health Ethics Committee (594/20). Each participant their study partner will provide written informed consent at trial commencement. The results of the study will be presented at national and international conferences and published in peer- reviewed journals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Introduction: Progressive supranuclear palsy (PSP) is a neurodegenerative disorder for which there are currently no disease-modifying therapies. The neuropathology of PSP is associated with the accumulation of hyperphosphorylated tau in the brain. We have previously shown that protein phosphatase 2 activity in the brain is upregulated by sodium selenate, which enhances dephosphorylation. Therefore, the objective of this study is to evaluate the efficacy and safety of sodium selenate as a disease-modifying therapy for PSP. Methods and analysis This will be a multi-site, phase 2b, double-blind, placebo-controlled trial of sodium selenate. 70 patients will be recruited at six Australian academic hospitals and research institutes. Following the confirmation of eligibility at screening, participants will be randomised (1:1) to receive 52 weeks of active treatment (sodium selenate; 15 mg three times a day) or matching placebo. Regular safety and efficacy visits will be completed throughout the study period. The primary study outcome is change in an MRI volume composite (frontal lobe+midbrain–3rd ventricle) over the treatment period. Analysis will be with a general linear model (GLM) with the MRI composite at 52 weeks as the dependent variable, treatment group as an independent variable and baseline MRI composite as a covariate. Secondary outcomes are change in PSP rating scale, clinical global impression of change (clinician) and change in midbrain mean diffusivity. These outcomes will also be analysed with a GLM as above, with the corresponding baseline measure entered as a covariate. Secondary safety and tolerability outcomes are frequency of serious adverse events, frequency of down- titration occurrences and frequency of study discontinuation. Additional, as yet unplanned, exploratory outcomes will include analyses of other imaging, cognitive and biospecimen measures. Ethics and dissemination The study was approved by the Alfred Health Ethics Committee (594/20). 
Each participant their study partner will provide written informed consent at trial commencement. The results of the study will be presented at national and international conferences and published in peer- reviewed journals.

Close

  • doi:10.1136/bmjopen-2021-055019

Close

Renée M. Visser; Joe Bathelt; H. Steven Scholte; Merel Kindt

Robust BOLD responses to faces but not to conditioned threat: Challenging the amygdala's reputation in human fear and extinction learning Journal Article

In: Journal of Neuroscience, vol. 41, no. 50, pp. 10278–10292, 2021.

Abstract | Links | BibTeX

@article{Visser2021,
title = {Robust BOLD responses to faces but not to conditioned threat: Challenging the amygdala's reputation in human fear and extinction learning},
author = {Renée M. Visser and Joe Bathelt and H. Steven Scholte and Merel Kindt},
doi = {10.1523/jneurosci.0857-21.2021},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neuroscience},
volume = {41},
number = {50},
pages = {10278--10292},
abstract = {Most of our knowledge about human emotional memory comes from animal research. Based on this work, the amygdala is often labelled the brain's "fear center", but it is unclear to what degree neural circuitries underlying fear and extinction learning are conserved across species. Neuroimaging studies in humans yield conflicting findings, with many studies failing to show amygdala activation in response to learned threat. Such null-findings are often treated as resulting from MRI-specific problems related to measuring deep brain structures. Here we test this assumption in a mega-analysis of three studies on fear acquisition (n=98; 68 female) and extinction learning (n=79; 53 female). The conditioning procedure involved presentation of two pictures of faces and two pictures of houses: one of each pair was followed by an electric shock (CS+), the other one was never followed by a shock (CS-), and participants were instructed to learn these contingencies. Results revealed widespread responses to the CS+ compared to CS- in the fear network, including anterior insula, midcingulate cortex, thalamus and bed nucleus of the stria terminalis, but not the amygdala, which actually responded stronger to the CS-. Results were independent of spatial smoothing, and individual differences in trait anxiety and conditioned pupil responses. In contrast, robust amygdala activation distinguished faces from houses, refuting the idea that poor signal could account for the absence of effects. Moving forward, we suggest that apart from imaging larger samples at higher resolution, alternative statistical approaches may be employed to identify cross-species similarities in fear and extinction learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Most of our knowledge about human emotional memory comes from animal research. Based on this work, the amygdala is often labelled the brain's "fear center", but it is unclear to what degree neural circuitries underlying fear and extinction learning are conserved across species. Neuroimaging studies in humans yield conflicting findings, with many studies failing to show amygdala activation in response to learned threat. Such null-findings are often treated as resulting from MRI-specific problems related to measuring deep brain structures. Here we test this assumption in a mega-analysis of three studies on fear acquisition (n=98; 68 female) and extinction learning (n=79; 53 female). The conditioning procedure involved presentation of two pictures of faces and two pictures of houses: one of each pair was followed by an electric shock (CS+), the other one was never followed by a shock (CS-), and participants were instructed to learn these contingencies. Results revealed widespread responses to the CS+ compared to CS- in the fear network, including anterior insula, midcingulate cortex, thalamus and bed nucleus of the stria terminalis, but not the amygdala, which actually responded stronger to the CS-. Results were independent of spatial smoothing, and individual differences in trait anxiety and conditioned pupil responses. In contrast, robust amygdala activation distinguished faces from houses, refuting the idea that poor signal could account for the absence of effects. Moving forward, we suggest that apart from imaging larger samples at higher resolution, alternative statistical approaches may be employed to identify cross-species similarities in fear and extinction learning.

Close

  • doi:10.1523/jneurosci.0857-21.2021

Close

Chiara Visentin; Chiara Valzolgher; Matteo Pellegatti; Paola Potente; Francesco Pavani; Nicola Prodi

A comparison of simultaneously-obtained measures of listening effort: pupil dilation, verbal response time and self-rating Journal Article

In: International Journal of Audiology, pp. 1–13, 2021.

Abstract | Links | BibTeX

@article{Visentin2021,
title = {A comparison of simultaneously-obtained measures of listening effort: pupil dilation, verbal response time and self-rating},
author = {Chiara Visentin and Chiara Valzolgher and Matteo Pellegatti and Paola Potente and Francesco Pavani and Nicola Prodi},
doi = {10.1080/14992027.2021.1921290},
year = {2021},
date = {2021-01-01},
journal = {International Journal of Audiology},
pages = {1--13},
publisher = {Taylor & Francis},
abstract = {Objective: The aim of this study was to assess to what extent simultaneousl-obtained measures of listening effort (task-evoked pupil dilation, verbal response time [RT], and self-rating) could be sensitive to auditory and cognitive manipulations in a speech perception task. The study also aimed to explore the possible relationship between RT and pupil dilation. Design: A within-group design was adopted. All participants were administered the Matrix Sentence Test in 12 conditions (signal-to-noise ratios [SNR] of −3, −6, −9 dB; attentional resources focussed vs divided; spatial priors present vs absent). Study sample: Twenty-four normal-hearing adults, 20–41 years old (M = 23.5), were recruited in the study. Results: A significant effect of the SNR was found for all measures. However, pupil dilation discriminated only partially between the SNRs. Neither of the cognitive manipulations were effective in modulating the measures. No relationship emerged between pupil dilation, RT and self-ratings. Conclusions: RT, pupil dilation, and self-ratings can be obtained simultaneously when administering speech perception tasks, even though some limitations remain related to the absence of a retention period after the listening phase. The sensitivity of the three measures to changes in the auditory environment differs. RTs and self-ratings proved most sensitive to changes in SNR.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Objective: The aim of this study was to assess to what extent simultaneousl-obtained measures of listening effort (task-evoked pupil dilation, verbal response time [RT], and self-rating) could be sensitive to auditory and cognitive manipulations in a speech perception task. The study also aimed to explore the possible relationship between RT and pupil dilation. Design: A within-group design was adopted. All participants were administered the Matrix Sentence Test in 12 conditions (signal-to-noise ratios [SNR] of −3, −6, −9 dB; attentional resources focussed vs divided; spatial priors present vs absent). Study sample: Twenty-four normal-hearing adults, 20–41 years old (M = 23.5), were recruited in the study. Results: A significant effect of the SNR was found for all measures. However, pupil dilation discriminated only partially between the SNRs. Neither of the cognitive manipulations were effective in modulating the measures. No relationship emerged between pupil dilation, RT and self-ratings. Conclusions: RT, pupil dilation, and self-ratings can be obtained simultaneously when administering speech perception tasks, even though some limitations remain related to the absence of a retention period after the listening phase. The sensitivity of the three measures to changes in the auditory environment differs. RTs and self-ratings proved most sensitive to changes in SNR.

Close

  • doi:10.1080/14992027.2021.1921290

Close

Inês S. Veríssimo; Stefanie Hölsken; Christian N. L. Olivers

Individual differences in crowding predict visual search performance Journal Article

In: Journal of Vision, vol. 21, no. 1, pp. 1–17, 2021.

Abstract | Links | BibTeX

@article{Verissimo2021,
title = {Individual differences in crowding predict visual search performance},
author = {Inês S. Veríssimo and Stefanie Hölsken and Christian N. L. Olivers},
doi = {10.1167/jov.21.5.29},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {1},
pages = {1--17},
abstract = {Visual search is an integral part of human behavior and has proven important to understanding mechanisms of perception, attention, memory, and oculomotor control. Thus far, the dominant theoretical framework posits that search is mainly limited by covert attentional mechanisms, comprising a central bottleneck in visual processing. A different class of theories seeks the cause in the inherent limitations of peripheral vision, with search being constrained by what is known as the functional viewing field (FVF). One of the major factors limiting peripheral vision, and thus the FVF, is crowding. We adopted an individual differences approach to test the prediction from FVF theories that visual search performance is determined by the efficacy of peripheral vision, in particular crowding. Forty-four participants were assessed with regard to their sensitivity to crowding (as measured by critical spacing) and their search efficiency (as indicated by manual responses and eye movements). This revealed substantial correlations between the two tasks, as stronger susceptibility to crowding was predictive of slower search, more eye movements, and longer fixation durations. Our results support FVF theories in showing that peripheral vision is an important determinant of visual search efficiency.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Aaron Veldre; Roslyn Wong; Sally Andrews

Predictability effects and parafoveal processing in older readers. Journal Article

In: Psychology and Aging, pp. 1–17, 2021.

@article{Veldre2021,
title = {Predictability effects and parafoveal processing in older readers.},
author = {Aaron Veldre and Roslyn Wong and Sally Andrews},
doi = {10.1037/pag0000659},
year = {2021},
date = {2021-01-01},
journal = {Psychology and Aging},
pages = {1--17},
abstract = {Normative aging is accompanied by visual and cognitive changes that impact the systems that are critical for fluent reading. The patterns of eye movements during reading displayed by older adults have been characterized as demonstrating a trade-off between longer forward saccades and more word skipping versus higher rates of regressions back to previously read text. This pattern is assumed to reflect older readers' reliance on top-down contextual information to compensate for reduced uptake of parafoveal information from yet-to-be fixated words. However, the empirical evidence for these assumptions is equivocal. This study investigated the depth of older readers' parafoveal processing as indexed by sensitivity to the contextual plausibility of parafoveal words in both neutral and highly constraining sentence contexts. The eye movements of 65 cognitively intact older adults (61–87 years) were compared with data previously collected from young adults in two sentence reading experiments in which critical target words were replaced by valid, plausible, related, or implausible previews until the reader fixated on the target word location. Older and younger adults showed equivalent plausibility preview benefits on first-pass reading measures of both predictable and unpredictable words. However, older readers did not show the benefit of preview orthographic relatedness that was observed in young adults and showed significantly attenuated preview validity effects. Taken together, the data suggest that older readers are specifically impaired in the integration of parafoveal and foveal information but do not show deficits in the depth of parafoveal processing. The implications for understanding the effects of aging on reading are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Martin R. Vasilev; Mark Yates; Ethan Prueitt; Timothy J. Slattery

Parafoveal degradation during reading reduces preview costs only when it is not perceptually distinct Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 74, no. 2, pp. 254–276, 2021.

@article{Vasilev2021b,
title = {Parafoveal degradation during reading reduces preview costs only when it is not perceptually distinct},
author = {Martin R. Vasilev and Mark Yates and Ethan Prueitt and Timothy J. Slattery},
doi = {10.1177/1747021820959661},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {74},
number = {2},
pages = {254--276},
abstract = {There is a growing understanding that the parafoveal preview effect during reading may represent a combination of preview benefits and preview costs due to interference from parafoveal masks. It has been suggested that visually degrading the parafoveal masks may reduce their costs, but adult readers were later shown to be highly sensitive to degraded display changes. Four experiments examined how preview benefits and preview costs are influenced by the perception of distinct parafoveal degradation at the target word location. Participants read sentences with four preview types (identity, orthographic, phonological, and letter-mask preview) and two levels of visual degradation (0% vs. 20%). The distinctiveness of the target word degradation was either eliminated by degrading all words in the sentence (Experiments 1a–2a) or remained present, as in previous research (Experiments 1b–2b). Degrading the letter masks resulted in a reduction in preview costs, but only when all words in the sentence were degraded. When degradation at the target word location was perceptually distinct, it induced costs of its own, even for orthographically and phonologically related previews. These results confirm previous reports that traditional parafoveal masks introduce preview costs that overestimate the size of the true benefit. However, they also show that parafoveal degradation has the unintended consequence of introducing additional costs when participants are aware of distinct degradation on the target word. Parafoveal degradation appears to be easily perceived and may temporarily orient attention away from the reading task, thus delaying word processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Martin R. Vasilev; Fabrice B. R. Parmentier; Julie A. Kirkby

Distraction by auditory novelty during reading: Evidence for disruption in saccade planning, but not saccade execution Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 74, no. 5, pp. 826–842, 2021.

@article{Vasilev2021a,
title = {Distraction by auditory novelty during reading: Evidence for disruption in saccade planning, but not saccade execution},
author = {Martin R. Vasilev and Fabrice B. R. Parmentier and Julie A. Kirkby},
doi = {10.1177/1747021820982267},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {74},
number = {5},
pages = {826--842},
abstract = {Novel or unexpected sounds that deviate from an otherwise repetitive sequence of the same sound cause behavioural distraction. Recent work has suggested that distraction also occurs during reading as fixation durations increased when a deviant sound was presented at the fixation onset of words. The present study tested the hypothesis that this increase in fixation durations occurs due to saccadic inhibition. This was done by manipulating the temporal onset of sounds relative to the fixation onset of words in the text. If novel sounds cause saccadic inhibition, they should be more distracting when presented during the second half of fixations when saccade programming usually takes place. Participants read single sentences and heard a 120 ms sound when they fixated five target words in the sentence. On most occasions (p =.9), the same sine wave tone was presented (“standard”), while on the remaining occasions (p =.1) a new sound was presented (“novel”). Critically, sounds were played, on average, either during the first half of the fixation (0 ms delay) or during the second half of the fixation (120 ms delay). Consistent with the saccadic inhibition hypothesis (SIH), novel sounds led to longer fixation durations in the 120 ms compared to the 0 ms delay condition. However, novel sounds did not generally influence the execution of the subsequent saccade. These results suggest that unexpected sounds have a rapid influence on saccade planning, but not saccade execution.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Martin R. Vasilev; Victoria I. Adedeji; Calvin Laursen; Marcin Budka; Timothy J. Slattery

Do readers use character information when programming return-sweep saccades? Journal Article

In: Vision Research, vol. 183, pp. 30–40, 2021.

@article{Vasilev2021,
title = {Do readers use character information when programming return-sweep saccades?},
author = {Martin R. Vasilev and Victoria I. Adedeji and Calvin Laursen and Marcin Budka and Timothy J. Slattery},
doi = {10.1016/j.visres.2021.01.003},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {183},
pages = {30--40},
publisher = {Elsevier Ltd},
abstract = {Reading saccades that occur within a single line of text are guided by the size of letters. However, readers occasionally need to make longer saccades (known as return-sweeps) that take their eyes from the end of one line of text to the beginning of the next. In this study, we tested whether return-sweep saccades are also guided by font size information and whether this guidance depends on visual acuity of the return-sweep target area. To do this, we manipulated the font size of letters (0.29 vs 0.39° per character) and the length of the first line of text (16 vs 26°). The larger font resulted in return-sweeps that landed further to the right of the line start and in a reduction of under-sweeps compared to the smaller font. This suggests that font size information is used when programming return-sweeps. Return-sweeps in the longer line condition landed further to the right of the line start and the proportion of under-sweeps increased compared to the short line condition. This likely reflects an increase in saccadic undershoot error with the increase in intended saccade size. Critically, there was no interaction between font size and line length. This suggests that when programming return-sweeps, the use of font size information does not depend on visual acuity at the saccade target. Instead, it appears that readers rely on global typographic properties of the text in order to maintain an optimal number of characters to the left of their first fixation on a new line.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Bram Vanroy; Moritz Schaeffer; Lieve Macken

Comparing the effect of product-based metrics on the translation process Journal Article

In: Frontiers in Psychology, vol. 12, pp. 681945, 2021.

@article{Vanroy2021,
title = {Comparing the effect of product-based metrics on the translation process},
author = {Bram Vanroy and Moritz Schaeffer and Lieve Macken},
doi = {10.3389/fpsyg.2021.681945},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Psychology},
volume = {12},
pages = {681945},
abstract = {Characteristics of the translation product are often used in translation process research as predictors for cognitive load, and by extension translation difficulty. In the last decade, user-activity information such as eye-tracking data has been increasingly employed as an experimental tool for that purpose. In this paper, we take a similar approach. We look for significant effects that different predictors may have on three different eye-tracking measures: First Fixation Duration (duration of first fixation on a token), Eye-Key Span (duration between first fixation on a token and the first keystroke contributing to its translation), and Total Reading Time on source tokens (sum of fixations on a token). As predictors we make use of a set of established metrics involving (lexico)semantics and word order, while also investigating the effect of more recent ones concerning syntax, semantics or both. Our results show a, particularly late, positive effect of many of the proposed predictors, suggesting that both fine-grained metrics of syntactic phenomena (such as word reordering) as well as coarse-grained ones (encapsulating both syntactic and semantic information) contribute to translation difficulties. The effect on especially late measures may indicate that the linguistic phenomena that our metrics capture (e.g., word reordering) are resolved in later stages during cognitive processing such as problem-solving and revision.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Wieske Van Zoest; Christoph Huber-Huber; Matthew D. Weaver; Clayton Hickey

Strategic distractor suppression improves selective control in human vision Journal Article

In: Journal of Neuroscience, vol. 41, no. 33, pp. 7120–7135, 2021.

@article{VanZoest2021,
title = {Strategic distractor suppression improves selective control in human vision},
author = {Wieske Van Zoest and Christoph Huber-Huber and Matthew D. Weaver and Clayton Hickey},
doi = {10.1523/JNEUROSCI.0553-21.2021},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neuroscience},
volume = {41},
number = {33},
pages = {7120--7135},
abstract = {Our visual environment is complicated, and our cognitive capacity is limited. As a result, we must strategically ignore some stimuli to prioritize others. Common sense suggests that foreknowledge of distractor characteristics, like location or color, might help us ignore these objects. But empirical studies have provided mixed evidence, often showing that knowing about a distractor before it appears counterintuitively leads to its attentional selection. What has looked like strategic distractor suppression in the past is now commonly explained as a product of prior experience and implicit statistical learning, and the long-standing notion that distractor suppression is reflected in alpha-band oscillatory brain activity has been challenged by results appearing to link alpha to target resolution. Can we strategically, proactively suppress distractors? And, if so, does this involve alpha? Here, we use the concurrent recording of human EEG and eye movements in optimized experimental designs to identify behavior and brain activity associated with proactive distractor suppression. Results from three experiments show that knowing about distractors before they appear causes a reduction in electrophysiological indices of covert attentional selection of these objects and a reduction in the overt deployment of the eyes to the location of the objects. This control is established before the distractor appears and is predicted by the power of cue-elicited alpha activity over the visual cortex. Foreknowledge of distractor characteristics therefore leads to improved selective control, and alpha oscillations in visual cortex reflect the implementation of this strategic, proactive mechanism.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Sietske Viersen; Athanassios Protopapas; George K. Georgiou; Rauno Parrila; Laoura Ziaka; Peter F. Jong

Lexicality effects on orthographic learning in beginning and advanced readers of Dutch: An eye-tracking study Journal Article

In: Quarterly Journal of Experimental Psychology, pp. 1–20, 2021.

@article{Viersen2021,
title = {Lexicality effects on orthographic learning in beginning and advanced readers of Dutch: An eye-tracking study},
author = {Sietske Viersen and Athanassios Protopapas and George K. Georgiou and Rauno Parrila and Laoura Ziaka and Peter F. Jong},
doi = {10.1177/17470218211047420},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
pages = {1--20},
abstract = {Orthographic learning is the topic of many recent studies about reading, but much is still unknown about conditions that affect orthographic learning and their influence on reading fluency development over time. This study investigated lexicality effects on orthographic learning in beginning and relatively advanced readers of Dutch. Eye movements of 131 children in Grades 2 and 5 were monitored during an orthographic learning task. Children read sentences containing pseudowords or low-frequency real words that varied in number of exposures. We examined both offline learning outcomes (i.e., orthographic choice and spelling dictation) of target items and online gaze durations on target words. The results showed general effects of exposure, lexicality, and reading-skill level. Also, a two-way interaction was found between the number of exposures and lexicality when detailed orthographic representations were required, consistent with a larger overall effect of exposure on learning the spellings of pseudowords. Moreover, lexicality and reading-skill level were found to affect the learning rate across exposures based on a decrease in gaze durations, indicating a larger learning effect for pseudowords in Grade 5 children. Yet, further interactions between exposure and reading-skill level were not present, indicating largely similar learning curves for beginning and advanced readers. We concluded that the reading system of more advanced readers may cope somewhat better with words varying in lexicality, but is not more efficient than that of beginning readers in building up orthographic knowledge of specific words across repeated exposures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Marloes L. Moort; Arnout Koornneef; Paul W. Broek

Differentiating text-based and knowledge-based validation processes during reading: Evidence from eye movements Journal Article

In: Discourse Processes, vol. 58, no. 1, pp. 22–41, 2021.

@article{Moort2021,
title = {Differentiating text-based and knowledge-based validation processes during reading: Evidence from eye movements},
author = {Marloes L. Moort and Arnout Koornneef and Paul W. Broek},
doi = {10.1080/0163853X.2020.1727683},
year = {2021},
date = {2021-01-01},
journal = {Discourse Processes},
volume = {58},
number = {1},
pages = {22--41},
publisher = {Routledge},
abstract = {To build a coherent accurate mental representation of a text, readers routinely validate information they read against the preceding text and their background knowledge. It is clear that both sources affect processing, but when and how they exert their influence remains unclear. To examine the time course and cognitive architecture of text-based and knowledge-based validation processes, we used eye-tracking methodology. Participants read versions of texts that varied systematically in (in)coherence with prior text or background knowledge. Contradictions with respect to prior text and background knowledge both were found to disrupt reading but in different ways: The two types of contradiction led to distinct patterns of processes, and, importantly, these differences were evident already in early processing stages. Moreover, knowledge-based incoherence triggered more pervasive and longer (repair) processes than did text-based incoherence. Finally, processing of text-based and knowledge-based incoherence was not influenced by readers' working memory capacity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Dirk van Moorselaar; Nasim Daneshtalab; Heleen A. Slagter

Neural mechanisms underlying distractor inhibition on the basis of feature and/or spatial expectations Journal Article

In: Cortex, vol. 137, pp. 232–250, 2021.

@article{Moorselaar2021,
title = {Neural mechanisms underlying distractor inhibition on the basis of feature and/or spatial expectations},
author = {Dirk van Moorselaar and Nasim Daneshtalab and Heleen A. Slagter},
doi = {10.1016/j.cortex.2021.01.010},
year = {2021},
date = {2021-01-01},
journal = {Cortex},
volume = {137},
pages = {232--250},
publisher = {Elsevier Ltd},
abstract = {A rapidly growing body of research indicates that inhibition of distracting information may not be under flexible, top-down control, but instead heavily relies on expectations derived from past experience about the likelihood of events. Yet, how expectations about distracting information influence distractor inhibition at the neural level remains unclear. To determine how expectations induced by distractor features and/or location regularities modulate distractor processing, we measured EEG while participants performed two variants of the additional singleton paradigm. Critically, in these different variants, target and distractor features either randomly swapped across trials, or were fixed, allowing for the development of distractor feature-based expectations. Moreover, the task was initially performed without any spatial regularity, after which a high probability distractor location was introduced. Our results show that both distractor feature- and location regularities contributed to distractor inhibition, as indicated by corresponding reductions in distractor costs during visual search and an earlier distractor-evoked Pd component. Yet, control analyses showed that while observers were sensitive to regularities across longer time scales, the observed effects to a large extent reflected intertrial repetition. Large individual differences further suggest a functional dissociation between early and late Pd components, with the former reflecting early sensory suppression related to intertrial priming and the latter reflecting suppression sensitive to expectations derived over a longer time scale. Also, counter to some previous findings, no increase in anticipatory alpha-band activity was observed over visual regions representing the expected distractor location, although this effect should be interpreted with caution as the effect of spatial statistical learning was also less pronounced than in other studies. Together, these findings suggest that intertrial priming and statistical learning may both contribute to distractor suppression and reveal the underlying neural mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jonathan van Leeuwen; Artem V. Belopolsky

Rapid spatial oculomotor updating across saccades is malleable Journal Article

In: Vision Research, vol. 178, pp. 60–69, 2021.

@article{Leeuwen2021,
title = {Rapid spatial oculomotor updating across saccades is malleable},
author = {Jonathan van Leeuwen and Artem V. Belopolsky},
doi = {10.1016/j.visres.2020.09.006},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {178},
pages = {60--69},
publisher = {Elsevier Ltd},
abstract = {The oculomotor system uses a sophisticated updating mechanism to adjust for large retinal displacements which occur with every saccade. Previous studies have shown that updating operates rapidly and starts before saccade is initiated. Here we used saccade adaptation to alter life-long expectations about how a saccade changes the location of an object on the retina. Participants made a sequence of one horizontal and one vertical saccade and ignored an irrelevant distractor. The time-course of oculomotor updating was estimated using saccade curvature of the vertical saccade, relative to the distractor. During the first saccade both saccade targets were shifted on 80% of trials, which induced saccade adaptation (Experiment 1). Critically, since the distractor was left stationary, successful saccade adaptation (e.g., saccade becoming shorter) meant that after the first saccade the distractor appeared in a different hemifield than without adaptation. After adaptation, second saccades curved away only from the newly learned distractor location starting at 80 ms after the first saccade. When on the minority of trials (20%) the targets were not shifted, saccades again first curved away from the newly learned (now empty) location, but then quickly switched to curving away from the life-long learned, visible location. When on some trials the distractor was removed during the first saccade, saccades curved away only from the newly learned (but empty) location (Experiment 2). The results show that updating of locations across saccades is not only fast, but is highly malleable, relying on recently learned sensorimotor contingencies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Elle van Heusden; Mieke Donk; Christian N. L. Olivers

The dynamics of saliency-driven and goal-driven visual selection as a function of eccentricity Journal Article

In: Journal of Vision, vol. 21, no. 3, pp. 1–24, 2021.

@article{Heusden2021,
title = {The dynamics of saliency-driven and goal-driven visual selection as a function of eccentricity},
author = {Elle van Heusden and Mieke Donk and Christian N. L. Olivers},
doi = {10.1167/jov.21.3.2},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {3},
pages = {1--24},
abstract = {Both saliency and goal information are important factors in driving visual selection. Saliency-driven selection occurs primarily in early responses, whereas goal-driven selection happens predominantly in later responses. Here, we investigated how eccentricity affects the time courses of saliency-driven and goal-driven visual selection. In three experiments, we asked people to make a speeded eye movement toward a predefined target singleton which was simultaneously presented with a non-target singleton in a background of multiple homogeneously oriented other items. The target singleton could be either more or less salient than the non-target singleton. Both singletons were presented at one of three eccentricities (i.e., near, middle, or far). The results showed that, even though eccentricity had only little effect on overall selection performance, the underlying time courses of saliency-driven and goal-driven selection altered such that saliency effects became protracted and relevance effects became delayed for far eccentricity conditions. The protracted saliency effect was shown to be modulated by expectations as induced by the preceding trial. The results demonstrate the importance of incorporating both time and eccentricity as factors in models of visual selection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mats W. J. van Es; Tom R. Marshall; Eelke Spaak; Ole Jensen; Jan-Mathijs Schoffelen

Phasic modulation of visual representations during sustained attention Journal Article

In: European Journal of Neuroscience, pp. 1–18, 2021.

@article{Es2021,
title = {Phasic modulation of visual representations during sustained attention},
author = {Mats W. J. van Es and Tom R. Marshall and Eelke Spaak and Ole Jensen and Jan-Mathijs Schoffelen},
doi = {10.1111/ejn.15084},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Neuroscience},
pages = {1--18},
abstract = {Sustained attention has long been thought to benefit perception in a continuous fashion, but recent evidence suggests that it affects perception in a discrete, rhythmic way. Periodic fluctuations in behavioral performance over time, and modulations of behavioral performance by the phase of spontaneous oscillatory brain activity point to an attentional sampling rate in the theta or alpha frequency range. We investigated whether such discrete sampling by attention is reflected in periodic fluctuations in the decodability of visual stimulus orientation from magnetoencephalographic (MEG) brain signals. In this exploratory study, human subjects attended one of two grating stimuli while MEG was being recorded. We assessed the strength of the visual representation of the attended stimulus using a support vector machine (SVM) to decode the orientation of the grating (clockwise vs. counterclockwise) from the MEG signal. We tested whether decoder performance depended on the theta/alpha phase of local brain activity. While the phase of ongoing activity in visual cortex did not modulate decoding performance, theta/alpha phase of activity in the FEF and parietal cortex, contralateral to the attended stimulus did modulate decoding performance. These findings suggest that phasic modulations of visual stimulus representations in the brain are caused by frequency-specific top-down activity in the fronto-parietal attention network.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Leonard Elia van Dyck; Roland Kwitt; Sebastian Jochen Denzler; Walter Roland Gruber

Comparing object recognition in humans and deep convolutional neural networks - An eye tracking study Journal Article

In: Frontiers in Neuroscience, vol. 15, pp. 750639, 2021.

@article{Dyck2021,
title = {Comparing object recognition in humans and deep convolutional neural networks - An eye tracking study},
author = {Leonard Elia van Dyck and Roland Kwitt and Sebastian Jochen Denzler and Walter Roland Gruber},
doi = {10.3389/fnins.2021.750639},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Neuroscience},
volume = {15},
pages = {750639},
abstract = {Deep convolutional neural networks (DCNNs) and the ventral visual pathway share vast architectural and functional similarities in visual challenges such as object recognition. Recent insights have demonstrated that both hierarchical cascades can be compared in terms of both exerted behavior and underlying activation. However, these approaches ignore key differences in spatial priorities of information processing. In this proof-of-concept study, we demonstrate a comparison of human observers (N = 45) and three feedforward DCNNs through eye tracking and saliency maps. The results reveal fundamentally different resolutions in both visualization methods that need to be considered for an insightful comparison. Moreover, we provide evidence that a DCNN with biologically plausible receptive field sizes called vNet reveals higher agreement with human viewing behavior as contrasted with a standard ResNet architecture. We find that image-specific factors such as category, animacy, arousal, and valence have a direct link to the agreement of spatial object recognition priorities in humans and DCNNs, while other measures such as difficulty and general image properties do not. With this approach, we try to open up new perspectives at the intersection of biological and computer vision research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Olof J. van der Werf; Sanne Ten Oever; Teresa Schuhmann; Alexander T. Sack

No evidence of rhythmic visuospatial attention at cued locations in a spatial cuing paradigm, regardless of their behavioural relevance Journal Article

In: European Journal of Neuroscience, pp. 1–17, 2021.

@article{Werf2021,
title = {No evidence of rhythmic visuospatial attention at cued locations in a spatial cuing paradigm, regardless of their behavioural relevance},
author = {Olof J. van der Werf and Sanne Ten Oever and Teresa Schuhmann and Alexander T. Sack},
doi = {10.1111/ejn.15353},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Neuroscience},
pages = {1--17},
abstract = {Recent evidence suggests that visuospatial attentional performance is not stable over time but fluctuates in a rhythmic fashion. These attentional rhythms allow for sampling of different visuospatial locations in each cycle of this rhythm. However, it is still unclear in which paradigmatic circumstances rhythmic attention becomes evident. First, it is unclear at what spatial locations rhythmic attention occurs. Second, it is unclear how the behavioural relevance of each spatial location determines the rhythmic sampling patterns. Here, we aim to elucidate these two issues. Firstly, we aim to find evidence of rhythmic attention at the predicted (i.e. cued) location under moderately informative predictor value, replicating earlier studies. Secondly, we hypothesise that rhythmic attentional sampling behaviour will be affected by the behavioural relevance of the sampled location, ranging from non-informative to fully informative. To these aims, we used a modified Egly-Driver task with three conditions: a fully informative cue, a moderately informative cue (replication condition), and a non-informative cue. We did not find evidence of rhythmic sampling at cued locations, failing to replicate earlier studies. Nor did we find differences in rhythmic sampling under different predictive values of the cue. The current data does not allow for robust conclusions regarding the non-cued locations due to the absence of a priori hypotheses. Post-hoc explorative data analyses, however, clearly indicate that attention samples non-cued locations in a theta-rhythmic manner, specifically when the cued location bears higher behavioural relevance than the non-cued locations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nathan Van der Stoep; M. J. Van der Smagt; C. Notaro; Z. Spock; M. Naber

The additive nature of the human multisensory evoked pupil response Journal Article

In: Scientific Reports, vol. 11, pp. 707, 2021.

@article{VanderStoep2021,
title = {The additive nature of the human multisensory evoked pupil response},
author = {Nathan Van der Stoep and M. J. Van der Smagt and C. Notaro and Z. Spock and M. Naber},
doi = {10.1038/s41598-020-80286-1},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {707},
publisher = {Nature Publishing Group UK},
abstract = {Pupillometry has received increased interest for its usefulness in measuring various sensory processes as an alternative to behavioural assessments. This is also apparent for multisensory investigations. Studies of the multisensory pupil response, however, have produced conflicting results. Some studies observed super-additive multisensory pupil responses, indicative of multisensory integration (MSI). Others observed additive multisensory pupil responses even though reaction time (RT) measures were indicative of MSI. Therefore, in the present study, we investigated the nature of the multisensory pupil response by combining methodological approaches of previous studies while using supra-threshold stimuli only. In two experiments we presented auditory and visual stimuli to observers that evoked a(n) (onset) response (be it constriction or dilation) in a simple detection task and a change detection task. In both experiments, the RT data indicated MSI as shown by race model inequality violation. Still, the multisensory pupil response in both experiments could best be explained by linear summation of the unisensory pupil responses. We conclude that the multisensory pupil response for supra-threshold stimuli is additive in nature and cannot be used as a measure of MSI, as only a departure from additivity can unequivocally demonstrate an interaction between the senses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Michael A. Urbin; Charles W. Lafe; Tyler W. Simpson; George F. Wittenberg; Bharath Chandrasekaran; Douglas J. Weber

Electrical stimulation of the external ear acutely activates noradrenergic mechanisms in humans Journal Article

In: Brain Stimulation, vol. 14, no. 4, pp. 990–1001, 2021.

@article{Urbin2021,
title = {Electrical stimulation of the external ear acutely activates noradrenergic mechanisms in humans},
author = {Michael A. Urbin and Charles W. Lafe and Tyler W. Simpson and George F. Wittenberg and Bharath Chandrasekaran and Douglas J. Weber},
doi = {10.1016/j.brs.2021.06.002},
year = {2021},
date = {2021-01-01},
journal = {Brain Stimulation},
volume = {14},
number = {4},
pages = {990--1001},
publisher = {Elsevier Ltd},
abstract = {Background: Transcutaneous stimulation of the external ear is thought to recruit afferents of the auricular vagus nerve, providing a means to activate noradrenergic pathways in the central nervous system. Findings from human studies examining the effects of auricular stimulation on noradrenergic biomarkers have been mixed, possibly relating to the limited and variable parameter space explored to date. Objective: We tested the extent to which brief pulse trains applied to locations of auricular innervation (canal and concha) elicit acute pupillary responses (PRs) compared to a sham location (lobe). Pulse amplitude and frequency were varied systematically to examine effects on PR features. Methods: Participants (n = 19) underwent testing in three separate experiments, each with stimulation applied to a different external ear location. Perceptual threshold (PT) was measured at the beginning of each experiment. Pulse trains (∼600 ms) consisting of different amplitude (0.0xPT, 0.8xPT, 1.0xPT, 1.5xPT, 2.0xPT) and frequency (25 Hz, 300 Hz) combinations were administered during eye tracking procedures. Results: Stimulation to all locations elicited PRs which began approximately halfway through the pulse train and peaked shortly after the final pulse (≤1 s). PR size and incidence increased with pulse amplitude and tended to be greatest with canal stimulation. Higher pulse frequency shortened the latency of PR onset and peak dilation. Changes in pupil diameter elicited by pulse trains were weakly associated with baseline pupil diameter. Conclusion: (s): Auricular stimulation elicits acute PRs, providing a basis to synchronize neuromodulator release with task-related neural spiking which preclinical studies show is a critical determinant of therapeutic effects. Further work is needed to dissociate contributions from vagal and non-vagal afferents mediating activation of the biomarker.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Kathryn E. Unruh; Walker S. McKinney; Erin K. Bojanek; Kandace K. Fleming; John A. Sweeney; Matthew W. Mosconi

Initial action output and feedback-guided motor behaviors in autism spectrum disorder Journal Article

In: Molecular Autism, vol. 12, no. 1, pp. 1–25, 2021.

@article{Unruh2021,
title = {Initial action output and feedback-guided motor behaviors in autism spectrum disorder},
author = {Kathryn E. Unruh and Walker S. McKinney and Erin K. Bojanek and Kandace K. Fleming and John A. Sweeney and Matthew W. Mosconi},
doi = {10.1186/s13229-021-00452-8},
year = {2021},
date = {2021-01-01},
journal = {Molecular Autism},
volume = {12},
number = {1},
pages = {1--25},
publisher = {BioMed Central},
abstract = {Background: Sensorimotor issues are common in autism spectrum disorder (ASD), related to core symptoms, and predictive of worse functional outcomes. Deficits in rapid behaviors supported primarily by feedforward mechanisms, and continuous, feedback-guided motor behaviors each have been reported, but the degrees to which they are distinct or co-segregate within individuals and across development are not well understood. Methods: We characterized behaviors that varied in their involvement of feedforward control relative to feedback control across skeletomotor (precision grip force) and oculomotor (saccades) control systems in 109 individuals with ASD and 101 age-matched typically developing controls (range: 5–29 years) including 58 individuals with ASD and 57 controls who completed both grip and saccade tests. Grip force was examined across multiple force (15, 45, and 85% MVC) and visual gain levels (low, medium, high). Maximum grip force also was examined. During grip force tests, reaction time, initial force output accuracy, variability, and entropy were examined. For the saccade test, latency, accuracy, and trial-wise variability of latency and accuracy were examined. Results: Relative to controls, individuals with ASD showed similar accuracy of initial grip force but reduced accuracy of saccadic eye movements specific to older ages of our sample. Force variability was greater in ASD relative to controls, but saccade gain variability (across trials) was not different between groups. Force entropy was reduced in ASD, especially at older ages. We also find reduced grip strength in ASD that was more severe in dominant compared to non-dominant hands. Limitations: Our age-related findings rely on cross-sectional data. Longitudinal studies of sensorimotor behaviors and their associations with ASD symptoms are needed. 
Conclusions: We identify reduced accuracy of initial motor output in ASD that was specific to the oculomotor system implicating deficient feedforward control that may be mitigated during slower occurring behaviors executed in the periphery. Individuals with ASD showed increased continuous force variability but similar levels of trial-to-trial saccade accuracy variability suggesting that feedback-guided refinement of motor commands is deficient specifically when adjustments occur rapidly during continuous behavior. We also document reduced lateralization of grip strength in ASD implicating atypical hemispheric specialization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Akash Umakantha; Rudina Morina; Benjamin R. Cowley; Adam C. Snyder; Matthew A. Smith; Byron M. Yu

Bridging neuronal correlations and dimensionality reduction Journal Article

In: Neuron, vol. 109, no. 17, pp. 2740–2754.e12, 2021.


@article{Umakantha2021,
title = {Bridging neuronal correlations and dimensionality reduction},
author = {Akash Umakantha and Rudina Morina and Benjamin R. Cowley and Adam C. Snyder and Matthew A. Smith and Byron M. Yu},
doi = {10.1016/j.neuron.2021.06.028},
year = {2021},
date = {2021-01-01},
journal = {Neuron},
volume = {109},
number = {17},
pages = {2740--2754.e12},
publisher = {Elsevier Inc.},
abstract = {Two commonly used approaches to study interactions among neurons are spike count correlation, which describes pairs of neurons, and dimensionality reduction, applied to a population of neurons. Although both approaches have been used to study trial-to-trial neuronal variability correlated among neurons, they are often used in isolation and have not been directly related. We first established concrete mathematical and empirical relationships between pairwise correlation and metrics of population-wide covariability based on dimensionality reduction. Applying these insights to macaque V4 population recordings, we found that the previously reported decrease in mean pairwise correlation associated with attention stemmed from three distinct changes in population-wide covariability. Overall, our work builds the intuition and formalism to bridge between pairwise correlation and population-wide covariability and presents a cautionary tale about the inferences one can make about population activity by using a single statistic, whether it be mean pairwise correlation or dimensionality.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
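The pairwise statistic this paper bridges to dimensionality reduction, mean spike count correlation (r_sc), can be sketched in a few lines. The simulated counts, one-dimensional latent, and loadings below are illustrative assumptions for the sketch, not data or code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts: trials x neurons, built from a shared
# trial-to-trial latent fluctuation plus independent noise.
n_trials, n_neurons = 200, 30
latent = rng.normal(0, 1, size=(n_trials, 1))          # shared signal per trial
loadings = rng.uniform(0.5, 1.5, size=(1, n_neurons))  # each neuron's coupling to it
counts = 10 + latent @ loadings + rng.normal(0, 1, size=(n_trials, n_neurons))

# Mean pairwise spike count correlation (r_sc): average of the
# off-diagonal entries of the trial-wise correlation matrix.
corr = np.corrcoef(counts, rowvar=False)
mean_rsc = corr[np.triu_indices(n_neurons, k=1)].mean()
print(f"mean pairwise correlation: {mean_rsc:.2f}")
```

Because the noise here is driven by a single shared latent, mean r_sc comes out well above zero; the paper's point is that this one number conflates several population-level quantities (loading similarity, shared variance, dimensionality) that dimensionality reduction separates.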


Hamid B. Turker; Elizabeth Riley; Wen Ming Luh; Stan J. Colcombe; Khena M. Swallow

Estimates of locus coeruleus function with functional magnetic resonance imaging are influenced by localization approaches and the use of multi-echo data Journal Article

In: NeuroImage, vol. 236, pp. 118047, 2021.


@article{Turker2021,
title = {Estimates of locus coeruleus function with functional magnetic resonance imaging are influenced by localization approaches and the use of multi-echo data},
author = {Hamid B. Turker and Elizabeth Riley and Wen Ming Luh and Stan J. Colcombe and Khena M. Swallow},
doi = {10.1016/j.neuroimage.2021.118047},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {236},
pages = {118047},
publisher = {Elsevier Inc.},
abstract = {The locus coeruleus (LC) plays a central role in regulating human cognition, arousal, and autonomic states. Efforts to characterize the LC's function in humans using functional magnetic resonance imaging have been hampered by its small size and location near a large source of noise, the fourth ventricle. We tested whether the ability to characterize LC function is improved by employing neuromelanin-T1 weighted images (nmT1) for LC localization and multi-echo functional magnetic resonance imaging (ME-fMRI) for estimating intrinsic functional connectivity (iFC). Analyses indicated that, relative to a probabilistic atlas, utilizing nmT1 images to individually localize the LC increases the specificity of seed time series and clusters in the iFC maps. When combined with independent components analysis (ME-ICA), ME-fMRI data provided significant improvements in the temporal signal to noise ratio and DVARS relative to denoised single echo data (1E-fMRI). The effects of acquiring nmT1 images and ME-fMRI data did not appear to only reflect increases in power: iFC maps for each approach overlapped only moderately. This is consistent with findings that ME-fMRI offers substantial advantages over 1E-fMRI acquisition and denoising. It also suggests that individually identifying LC with nmT1 scans is likely to reduce the influence of other nearby brainstem regions on estimates of LC function.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
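One of the quality metrics compared across the pipelines here, temporal signal-to-noise ratio, is simply a voxel's temporal mean divided by its temporal standard deviation. A minimal sketch with simulated values (the array shape and noise level are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy BOLD time series: voxels x timepoints, mean signal 1000 with
# additive noise of standard deviation 5 (illustrative values).
signal = rng.normal(1000, 5, size=(500, 200))

# Temporal SNR per voxel: mean over time / standard deviation over time.
# Higher tSNR indicates a more stable signal after denoising.
tsnr = signal.mean(axis=1) / signal.std(axis=1)
print(f"median tSNR: {np.median(tsnr):.1f}")
```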


Sho Tsuji; Anne Caroline Fiévét; Alejandrina Cristia

Toddler word learning from contingent screens with and without human presence Journal Article

In: Infant Behavior and Development, vol. 63, pp. 1–12, 2021.


@article{Tsuji2021,
title = {Toddler word learning from contingent screens with and without human presence},
author = {Sho Tsuji and Anne Caroline Fiévét and Alejandrina Cristia},
doi = {10.1016/j.infbeh.2021.101553},
year = {2021},
date = {2021-01-01},
journal = {Infant Behavior and Development},
volume = {63},
pages = {1--12},
abstract = {While previous studies have documented that toddlers learn less well from passive screens than from live interaction, the rise of interactive, digital screen media opens new perspectives, since some work has shown that toddlers can learn similarly well from a human present via video chat as from live exposure. The present study aimed to disentangle the role of human presence from other aspects of social interactions on learning advantages in contingent screen settings. We assessed 16-month-old toddlers' fast mapping of novel words from screen in three conditions: in-person, video chat, and virtual agent. All conditions built on the same controlled and scripted interaction. In the in-person condition, toddlers learned two novel word-object associations from an experimenter present in the same room and reacting contingently to infants' gaze direction. In the video chat condition, the toddler saw the experimenter in real time on screen, while the experimenter only had access to the toddler's real-time gaze position as captured by an eyetracker. This setup allowed contingent reactivity to the toddler's gaze while controlling for any cues beyond these instructions. The virtual agent condition was programmed to follow the infant's gaze, to smile, and to name the object with the same parameters as the experimenter in the other conditions. After the learning phase, all toddlers were tested on their word recognition in a looking-while-listening paradigm. Comparisons against chance revealed that toddlers showed above-chance word learning in the in-person group only. Toddlers in the virtual agent group showed significantly worse performance than those in the in-person group, while performance in the video chat group overlapped with the other two groups. These results confirm that in-person interaction leads to best learning outcomes even in the absence of rich social cues. 
They also elucidate that contingency is not sufficient either, and that in order for toddlers to learn from interactive digital media, more cues to social agency are required.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chiao I. Tseng; Jochen Laubrock; John A. Bateman

The impact of multimodal cohesion on attention and interpretation in film Journal Article

In: Discourse, Context and Media, vol. 44, pp. 100544, 2021.


@article{Tseng2021,
title = {The impact of multimodal cohesion on attention and interpretation in film},
author = {Chiao I. Tseng and Jochen Laubrock and John A. Bateman},
doi = {10.1016/j.dcm.2021.100544},
year = {2021},
date = {2021-01-01},
journal = {Discourse, Context and Media},
volume = {44},
pages = {100544},
publisher = {Elsevier Ltd},
abstract = {This article presents results of an exploratory investigation combining multimodal cohesion analysis and eye-tracking studies. Multimodal cohesion, as a tool of multimodal discourse analysis, goes beyond linguistic cohesive mechanisms to enable the construction of cross-modal discourse structures that systematically relate technical details of audio, visual and verbal modalities. Patterns of multimodal cohesion from these discourse structures were used to design eye-tracking experiments and questionnaires in order to empirically investigate how auditory and visual cohesive cues affect attention and comprehension. We argue that the cross-modal structures of cohesion revealed by our method offer a strong methodology for addressing empirical questions concerning viewers' comprehension of narrative settings and the comparative salience of visual, verbal and audio cues. Analyses are presented of the beginning of Hitchcock's The Birds (1963) and a sketch from Monty Python filmed in 1971. Our approach balances the narrative-based issue of how narrative elements in film guide meaning interpretation and the recipient-based question of where a film viewer's attention is directed during viewing and how this affects comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kathryn A. Tremblay; Katherine S. Binder; Scott P. Ardoin; Amani Talwar; Elizabeth L. Tighe

Third graders' strategy use and accuracy on an expository text: an exploratory study using eye movements Journal Article

In: Journal of Research in Reading, vol. 44, no. 4, pp. 737–756, 2021.


@article{Tremblay2021a,
title = {Third graders' strategy use and accuracy on an expository text: an exploratory study using eye movements},
author = {Kathryn A. Tremblay and Katherine S. Binder and Scott P. Ardoin and Amani Talwar and Elizabeth L. Tighe},
doi = {10.1111/1467-9817.12369},
year = {2021},
date = {2021-01-01},
journal = {Journal of Research in Reading},
volume = {44},
number = {4},
pages = {737--756},
abstract = {Background: Of the myriad of reading comprehension (RC) assessments used in schools, multiple-choice (MC) questions continue to be one of the most prevalent formats used by educators and researchers. Outcomes from RC assessments dictate many critical factors encountered during a student's academic career, and it is crucial that we gain a deeper understanding of the nuances of these assessments and the types of skills needed for their successful completion. The purpose of this exploratory study was to examine how different component skills (i.e., decoding, word recognition, reading fluency, RC and working memory) were related to students' response accuracy as they read a text and responded to MC questions. Methods: We monitored the eye movements of 73 third graders as they read an expository text and answered MC questions. We investigated whether the component skills differentially predicted accuracy across different question types and difficulty levels. Results: Results indicated that readers who answered MC questions correctly were able to identify when they needed to reread the text to find the answer and were better able to find the relevant area in the text compared with incorrect responders. Incorrect responders were less likely to reread the text to find the answer and generally had poorer precision when attempting to locate the answer in the text. Finally, the component skills relied upon by readers to answer RC questions were related to the type and difficulty of the questions. Conclusions: Results of the present study suggest that comprehension difficulties can arise from a myriad of sources and that reading abilities together with test-taking strategies impact RC test outcomes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Annie Tremblay; Sahyang Kim; Seulgi Shin; Taehong Cho

Re-examining the effect of phonological similarity between the native- and second-language intonational systems in second-language speech segmentation Journal Article

In: Bilingualism: Language and Cognition, vol. 24, no. 2, pp. 1–13, 2021.


@article{Tremblay2021,
title = {Re-examining the effect of phonological similarity between the native- and second-language intonational systems in second-language speech segmentation},
author = {Annie Tremblay and Sahyang Kim and Seulgi Shin and Taehong Cho},
doi = {10.1017/S136672892000053X},
year = {2021},
date = {2021-01-01},
journal = {Bilingualism: Language and Cognition},
volume = {24},
number = {2},
pages = {1--13},
abstract = {This study investigates how phonological and phonetic aspects of the native-language (L1) intonation modulate the use of tonal cues in second-language (L2) speech segmentation. Previous research suggested that prosodic learning is more difficult if the L1 and L2 intonations are phonologically similar but phonetically different (French-Korean) than if they are phonologically different (English-French/Korean) (Prosodic-Learning Interference Hypothesis; Tremblay, Broersma, Coughlin & Choi, 2016). This study provides another test of this hypothesis. Korean listeners and French-speaking and English-speaking L2 learners of Korean in Korea completed an eye-tracking experiment investigating the effects of phrase tones in Korean. All groups patterned similarly with the phrase-final tone, but, unlike Korean and French listeners, English listeners showed early benefits from the phrase-initial tone (signaling word-initial boundaries in English). Importantly, French listeners patterned like Korean listeners with both tones. The Prosodic-Learning Interference Hypothesis is refined to suggest that prosodic learning difficulties may not be persistent for immersed L2 learners.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Matthew J. Traxler; Timothy Banh; Madeline M. Craft; Kurt Winsler; Trevor A. Brothers; Liv J. Hoversten; Pilar Piñar; David P. Corina

Word skipping in deaf and hearing bilinguals: Cognitive control over eye movements remains with increased perceptual span Journal Article

In: Applied Psycholinguistics, vol. 42, no. 3, pp. 601–630, 2021.


@article{Traxler2021,
title = {Word skipping in deaf and hearing bilinguals: Cognitive control over eye movements remains with increased perceptual span},
author = {Matthew J. Traxler and Timothy Banh and Madeline M. Craft and Kurt Winsler and Trevor A. Brothers and Liv J. Hoversten and Pilar Piñar and David P. Corina},
doi = {10.1017/S0142716420000740},
year = {2021},
date = {2021-01-01},
journal = {Applied Psycholinguistics},
volume = {42},
number = {3},
pages = {601--630},
abstract = {Deaf readers may have larger perceptual spans than ability-matched hearing native English readers, allowing them to read more efficiently (Belanger & Rayner, 2015). To further test the hypothesis that deaf and hearing readers have different perceptual spans, the current study uses eye-movement data from two experiments in which deaf American Sign Language-English bilinguals, hearing native English speakers, and hearing Chinese-English bilinguals read semantically unrelated sentences and answered comprehension questions after a proportion of them. We analyzed skip rates, fixation times, and accuracy on comprehension questions. In addition, we analyzed how lexical properties of words affected skipping behavior and fixation durations. Deaf readers skipped words more often than native English speakers, who skipped words more often than Chinese-English bilinguals. Deaf readers had shorter first-pass fixation times than the other two groups. All groups' skipping behaviors were affected by lexical frequency. Deaf readers' comprehension did not differ from hearing Chinese-English bilinguals, despite greater skipping and shorter fixation times. Overall, the eye-tracking findings align with Belanger's word processing efficiency hypothesis. Effects of lexical frequency on skipping behavior indicated further that eye movements during reading remain under cognitive control in deaf readers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tobiasz Trawiński; Chuanli Zang; Simon P. Liversedge; Yao Ge; Ying Fu; Nick Donnelly

The influence of culture on the viewing of Western and East Asian paintings. Journal Article

In: Psychology of Aesthetics, Creativity, and the Arts, pp. 1–22, 2021.


@article{Trawinski2021a,
title = {The influence of culture on the viewing of Western and East Asian paintings.},
author = {Tobiasz Trawiński and Chuanli Zang and Simon P. Liversedge and Yao Ge and Ying Fu and Nick Donnelly},
doi = {10.1037/aca0000411},
year = {2021},
date = {2021-01-01},
journal = {Psychology of Aesthetics, Creativity, and the Arts},
pages = {1--22},
abstract = {The influence of British and Chinese culture on the viewing of paintings from Western and East Asian traditions was explored in an old/new discrimination task. Accuracy data were considered alongside signal detection measures of sensitivity and bias. The results showed participant culture and painting tradition interacted but only with respect to response bias and not sensitivity. Eye movements were also recorded during encoding and discrimination. Paintings were split into regions of interest defined by faces, or the theme and context to analyze the eye movement data. With respect to the eye movement data, the results showed that a match between participant culture and painting tradition increased the viewing of faces in paintings at the expense of the viewing of other locations, an effect interpreted as a manifestation of the Other Race Effect on the viewing of paintings. There was, however, no evidence of broader influence of culture on the eye movements made to paintings as might be expected if culture influenced the allocation of attention more generally. Taken together, these findings suggest culture influences the viewing of paintings but only in response to challenges to the encoding of faces. (PsycInfo Database Record (c) 2021 APA, all rights reserved)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

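The signal-detection measures this abstract mentions (sensitivity and response bias) can be sketched in a few lines — a generic illustration of d′ and criterion c, not the authors' analysis code; the function name and the log-linear correction are assumptions:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and response bias (criterion c)
    from old/new recognition counts. A log-linear correction keeps
    hit/false-alarm rates away from 0 and 1, where z is undefined."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

For example, `dprime_criterion(40, 10, 10, 40)` yields a positive d′ and a criterion near zero: good discrimination with no response bias.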
Tobiasz Trawiński; Araz Aslanian; Olivia S. Cheung

The effect of implicit racial bias on recognition of other-race faces Journal Article

In: Cognitive Research: Principles and Implications, vol. 6, no. 67, pp. 1–16, 2021.

@article{Trawinski2021,
title = {The effect of implicit racial bias on recognition of other-race faces},
author = {Tobiasz Trawiński and Araz Aslanian and Olivia S. Cheung},
doi = {10.1186/s41235-021-00337-7},
year = {2021},
date = {2021-01-01},
journal = {Cognitive Research: Principles and Implications},
volume = {6},
number = {67},
pages = {1--16},
publisher = {Springer International Publishing},
abstract = {Previous research has established a possible link between recognition performance, individuation experience, and implicit racial bias of other-race faces. However, it remains unclear how implicit racial bias might influence other-race face processing in observers with relatively extensive experience with the other race. Here we examined how recognition of other-race faces might be modulated by observers' implicit racial bias, in addition to the effects of experience and face recognition ability. Caucasian participants in a culturally diverse city completed a memory task for Asian and Caucasian faces, an implicit association test, a questionnaire assessing experience with Asians and Caucasians, and a face recognition ability test. As expected, recognition performance for Asian faces was positively predicted by increased face recognition ability, and experience with Asians. More importantly, it was also negatively predicted by increased positive bias towards Asians, which was modulated by an interaction between face recognition ability and implicit bias, with the effect of implicit bias observed predominantly in observers with high face recognition ability. Moreover, the positions of the first two fixations when participants learned the other-race faces were affected by different factors, with the first fixation modulated by the effect of experience and the second fixation modulated by the interaction between implicit bias and face recognition ability. Taken together, these findings suggest the complexity in understanding the perceptual and socio-cognitive influences on the other-race effect, and that observers with high face recognition ability may more likely evaluate racial features involuntarily when recognizing other-race faces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Michael R. Traner; Ethan S. Bromberg-Martin; Ilya E. Monosov

How the value of the environment controls persistence in visual search Journal Article

In: PLoS Computational Biology, vol. 17, no. 12, pp. e1009662, 2021.

@article{Traner2021,
title = {How the value of the environment controls persistence in visual search},
author = {Michael R. Traner and Ethan S. Bromberg-Martin and Ilya E. Monosov},
doi = {10.1371/journal.pcbi.1009662},
year = {2021},
date = {2021-01-01},
journal = {PLoS Computational Biology},
volume = {17},
number = {12},
pages = {e1009662},
abstract = {Classic foraging theory predicts that humans and animals aim to gain maximum reward per unit time. However, in standard instrumental conditioning tasks individuals adopt an apparently suboptimal strategy: they respond slowly when the expected value is low. This reward-related bias is often explained as reduced motivation in response to low rewards. Here we present evidence this behavior is associated with a complementary increased motivation to search the environment for alternatives. We trained monkeys to search for reward-related visual targets in environments with different values. We found that the reward-related bias scaled with environment value, was consistent with persistent searching after the target was already found, and was associated with increased exploratory gaze to objects in the environment. A novel computational model of foraging suggests that this search strategy could be adaptive in naturalistic settings where both environments and the objects within them provide partial information about hidden, uncertain rewards.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Matteo Toscani; Pascal Mamassian; Matteo Valsecchi

Underconfidence in Peripheral Vision Journal Article

In: Journal of Vision, vol. 21, no. 6, pp. 1–14, 2021.

@article{Toscani2021,
title = {Underconfidence in Peripheral Vision},
author = {Matteo Toscani and Pascal Mamassian and Matteo Valsecchi},
doi = {10.1167/jov.21.6.2},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {6},
pages = {1--14},
abstract = {Our visual experience appears uniform across the visual field, despite the poor resolution of peripheral vision. This may be because we do not notice that we are missing details in the periphery of our visual field and believe that peripheral vision is just as rich as central vision. In other words, the uniformity of the visual scene could be explained by a metacognitive bias. We deployed a confidence forced-choice method to measure metacognitive performance in peripheral as compared to central vision. Participants judged the orientation of gratings presented in central and peripheral vision, and reported whether they thought they were more likely to be correct in the perceptual decision for the central or for the peripheral stimulus. Observers were underconfident in the periphery: higher sensory evidence in the periphery was needed to equate confidence choices between central and peripheral perceptual decisions. When performance on the central and peripheral tasks was matched, observers were still more confident in their ability to report the orientation of the central gratings over the one of the peripheral gratings. In a second experiment, we measured metacognitive sensitivity, as the difference in perceptual sensitivity between perceptual decisions that are chosen with high confidence and decisions that are chosen with low confidence. Results showed that metacognitive sensitivity is lower when participants compare central to peripheral perceptual decisions compared to when they compare peripheral to peripheral or central to central perceptual decisions. In a third experiment, we showed that peripheral underconfidence does not arise because observers based confidence judgments on stimulus size or contrast range rather than on perceptual performance. 
Taken together, results indicate that humans are impaired in comparing central with peripheral perceptual performance, but metacognitive biases cannot explain our impression of uniformity, as this would require peripheral overconfidence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

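The metacognitive-sensitivity measure defined in this abstract (the difference in perceptual accuracy between decisions chosen with high versus low confidence) can be illustrated with a simplified difference score — a hypothetical helper, not the authors' exact computation:

```python
def metacognitive_sensitivity(correct, high_confidence):
    """Difference score: proportion correct on trials endorsed with high
    confidence minus proportion correct on low-confidence trials.
    `correct` and `high_confidence` are parallel sequences of booleans."""
    high = [c for c, h in zip(correct, high_confidence) if h]
    low = [c for c, h in zip(correct, high_confidence) if not h]
    if not high or not low:
        raise ValueError("need both high- and low-confidence trials")
    return sum(high) / len(high) - sum(low) / len(low)
```

A positive score means confidence tracks accuracy; a score near zero means confidence choices carry little information about performance.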
Chiara Tortelli; Marco Turi; David C. Burr; Paola Binda

Objective pupillometry shows that perceptual styles covary with autistic-like personality traits Journal Article

In: eLife, vol. 10, pp. 1–13, 2021.

@article{Tortelli2021,
title = {Objective pupillometry shows that perceptual styles covary with autistic-like personality traits},
author = {Chiara Tortelli and Marco Turi and David C. Burr and Paola Binda},
doi = {10.7554/eLife.67185},
year = {2021},
date = {2021-01-01},
journal = {eLife},
volume = {10},
pages = {1--13},
abstract = {We measured the modulation of pupil-size (in constant lighting) elicited by observing transparent surfaces of black and white moving dots, perceived as a cylinder rotating about its vertical axis. The direction of rotation was swapped periodically by flipping stereo-depth of the two surfaces. Pupil size modulated in synchrony with the changes in front-surface color (dilating when black). The magnitude of pupillary modulation was larger for human participants with higher Autism-Spectrum Quotient (AQ), consistent with a local perceptual style, with attention focused on the front surface. The modulation with surface color, and its correlation with AQ, was equally strong when participants passively viewed the stimulus. No other indicator, including involuntary pursuit eye-movements, covaried with AQ. These results reinforce our previous report with a similar bistable stimulus (Turi, Burr, & Binda, 2018), and go on to show that bistable illusory motion is not necessary for the effect, or its dependence on AQ.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Chiara Tortelli; Marco Turi; David C. Burr; Paola Binda

Pupillary responses obey Emmert's law and co-vary with autistic traits Journal Article

In: Journal of Autism and Developmental Disorders, vol. 51, no. 8, pp. 2908–2919, 2021.

@article{Tortelli2021a,
title = {Pupillary responses obey Emmert's law and co-vary with autistic traits},
author = {Chiara Tortelli and Marco Turi and David C. Burr and Paola Binda},
doi = {10.1007/s10803-020-04718-7},
year = {2021},
date = {2021-01-01},
journal = {Journal of Autism and Developmental Disorders},
volume = {51},
number = {8},
pages = {2908--2919},
publisher = {Springer US},
abstract = {We measured the pupil response to a light stimulus subject to a size illusion and found that stimuli perceived as larger evoke a stronger pupillary response. The size illusion depends on combining retinal signals with contextual 3D information; contextual processing is thought to vary across individuals, being weaker in individuals with stronger autistic traits. Consistent with this theory, autistic traits correlated negatively with the magnitude of pupil modulations in our sample of neurotypical adults; however, psychophysical measurements of the illusion did not correlate with autistic traits, or with the pupil modulations. This shows that pupillometry provides an accurate objective index of complex perceptual processes, particularly useful for quantifying interindividual differences, and potentially more informative than standard psychophysical measures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Débora Torres; Wagner R. Sena; Humberto A. Carmona; André A. Moreira; Hernán A. Makse; José S. Andrade

Eye-tracking as a proxy for coherence and complexity of texts Journal Article

In: PLoS ONE, vol. 16, no. 12, pp. e0260236, 2021.

@article{Torres2021,
title = {Eye-tracking as a proxy for coherence and complexity of texts},
author = {Débora Torres and Wagner R. Sena and Humberto A. Carmona and André A. Moreira and Hernán A. Makse and José S. Andrade},
doi = {10.1371/journal.pone.0260236},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {12},
pages = {e0260236},
abstract = {Reading is a complex cognitive process that involves primary oculomotor function and high-level activities like attention focus and language processing. When we read, our eyes move by primary physiological functions while responding to language-processing demands. In fact, the eyes perform discontinuous twofold movements, namely, successive long jumps (saccades) interposed by small steps (fixations) in which the gaze “scans” confined locations. It is only through the fixations that information is effectively captured for brain processing. Since individuals can express similar as well as entirely different opinions about a given text, it is therefore expected that the form, content and style of a text could induce different eye-movement patterns among people. A question that naturally arises is whether these individuals' behaviours are correlated, so that eye-tracking while reading can be used as a proxy for text subjective properties. Here we perform a set of eye-tracking experiments with a group of individuals reading different types of texts, including children stories, random word generated texts and excerpts from literature work. In parallel, an extensive Internet survey was conducted for categorizing these texts in terms of their complexity and coherence, considering a large number of individuals selected according to different ages, gender and levels of education. The computational analysis of the fixation maps obtained from the gaze trajectories of the subjects for a given text reveals that the average “magnetization” of the fixation configurations correlates strongly with their complexity observed in the survey. Moreover, we perform a thermodynamic analysis using the Maximum-Entropy Model and find that coherent texts were closer to their corresponding “critical points” than non-coherent ones, as computed from the Pairwise Maximum-Entropy method, suggesting that different texts may induce distinct cohesive reading activities.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

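The saccade/fixation decomposition this abstract describes is typically recovered from raw gaze samples by an event-detection algorithm. Below is a minimal dispersion-threshold (I-DT) sketch — a generic textbook method, not the pipeline Torres et al. used; the thresholds and the `gaze` format are assumptions:

```python
def _dispersion(window):
    """Bounding-box dispersion of (x, y) samples: width + height."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_dispersion=35.0, min_samples=50):
    """Dispersion-threshold (I-DT) classification: a fixation is a run of
    at least `min_samples` gaze samples whose bounding-box dispersion stays
    under `max_dispersion` (pixels). `gaze` is a list of (x, y) coordinates
    sampled at a fixed rate. Returns (start, end) index pairs (inclusive);
    samples outside these runs belong to saccades."""
    fixations = []
    i = 0
    while i + min_samples <= len(gaze):
        if _dispersion(gaze[i:i + min_samples]) <= max_dispersion:
            j = i + min_samples
            # grow the window while dispersion stays under threshold
            while j < len(gaze) and _dispersion(gaze[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations
```

Sensible values for `max_dispersion` and `min_samples` depend on the tracker's sampling rate and the screen geometry, so both defaults here are placeholders.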
David Torrents-Rodas; Stephan Koenig; Metin Uengoer; Harald Lachnit

A rise in prediction error increases attention to irrelevant cues Journal Article

In: Biological Psychology, vol. 159, pp. 1–11, 2021.

@article{TorrentsRodas2021a,
title = {A rise in prediction error increases attention to irrelevant cues},
author = {David Torrents-Rodas and Stephan Koenig and Metin Uengoer and Harald Lachnit},
doi = {10.1016/j.biopsycho.2020.108007},
year = {2021},
date = {2021-01-01},
journal = {Biological Psychology},
volume = {159},
pages = {1--11},
publisher = {Elsevier B.V.},
abstract = {We investigated whether a sudden rise in prediction error widens an individual's focus of attention by increasing ocular fixations on cues that otherwise tend to be ignored. To this end, we used a discrimination learning task including cues that were either relevant or irrelevant for predicting the outcomes. Half of participants experienced contingency reversal once they had learned to predict the outcomes (reversal group},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Copyright © 2023 · SR Research Ltd. All Rights Reserved. EyeLink is a registered trademark of SR Research Ltd.