Fast, Accurate, Reliable Eye Tracking



EyeLink Eye-Tracking Publications Library

All EyeLink Publications

All 10,000+ peer-reviewed EyeLink research publications up to 2021 (with some from early 2022) are listed below by year. You can search the publication library using keywords such as visual search, smooth pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking research grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!

10162 entries (page 1 of 102)

2022

Floor van den Berg; Jelle Brouwer; Thomas B. Tienkamp; Josje Verhagen; Merel Keijzer

Language entropy relates to behavioral and pupil indices of executive control in young adult bilinguals Journal Article

In: Frontiers in Psychology, vol. 13, pp. 1-17, 2022.


@article{Berg2022,
title = {Language entropy relates to behavioral and pupil indices of executive control in young adult bilinguals},
author = {Floor van den Berg and Jelle Brouwer and Thomas B. Tienkamp and Josje Verhagen and Merel Keijzer},
year = {2022},
date = {2022-05-04},
journal = {Frontiers in Psychology},
volume = {13},
pages = {1-17},
abstract = {Introduction: It has been proposed that bilinguals’ language use patterns are differentially associated with executive control. To further examine this, the present study relates the social diversity of bilingual language use to performance on a color-shape switching task (CSST) in a group of bilingual university students with diverse linguistic backgrounds. Crucially, this study used language entropy as a measure of bilinguals’ language use patterns. This continuous measure reflects a spectrum of language use in a variety of social contexts, ranging from compartmentalized use to fully integrated use. Methods: Language entropy for university and non-university contexts was calculated from questionnaire data on language use. Reaction times (RTs) were measured to calculate global RT and switching and mixing costs on the CSST, representing conflict monitoring, mental set shifting, and goal maintenance, respectively. In addition, this study innovatively recorded a potentially more sensitive measure of set shifting abilities, namely, pupil size during task performance. Results: Higher university entropy was related to slower global RT. Neither university entropy nor non-university entropy were associated with switching costs as manifested in RTs. However, bilinguals with more compartmentalized language use in non-university contexts showed a larger difference in pupil dilation for switch trials in comparison with non-switch trials. Mixing costs in RTs were reduced for bilinguals with higher diversity of language use in non-university contexts. No such effects were found for university entropy. Discussion: These results point to the social diversity of bilinguals’ language use as being associated with executive control, but the direction of the effects may depend on social context (university vs. non-university). Importantly, the results also suggest that some of these effects may only be detected by using more sensitive measures, such as pupil dilation. The paper discusses theoretical and practical implications regarding the language entropy measure and the cognitive effects of bilingual experiences more generally, as well as how methodological choices can advance our understanding of these effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
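The language entropy measure referenced in this entry is conventionally computed as the Shannon entropy of a speaker's self-reported proportions of language use within a social context, ranging from 0 (fully compartmentalized, one language only) upward as use becomes more integrated. A minimal sketch of that computation, assuming questionnaire responses have already been converted to proportions (the function and variable names here are illustrative, not from the paper):

```python
import math

def language_entropy(proportions):
    """Shannon entropy (in bits) of language-use proportions.

    proportions: per-language usage proportions summing to 1.
    0.0 = fully compartmentalized (one language only);
    higher values = more integrated use of several languages.
    """
    total = sum(p * math.log2(p) for p in proportions if p > 0)
    return -total if total else 0.0

# Balanced use of two languages gives the two-language maximum:
print(language_entropy([0.5, 0.5]))  # 1.0
# Exclusive use of one language gives zero entropy:
print(language_entropy([1.0, 0.0]))  # 0.0
```

A continuous score like this can then be entered as a predictor of, e.g., global RT or pupil-dilation switch costs, which is what makes it preferable to a binary "balanced vs. unbalanced" bilingual classification.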


Yueyuan Zheng; Xinchen Ye; Janet H. Hsiao

Does adding video and subtitles to an audio lesson facilitate its comprehension? Journal Article

In: Learning and Instruction, vol. 77, pp. 101542, 2022.


@article{Zheng2022,
title = {Does adding video and subtitles to an audio lesson facilitate its comprehension?},
author = {Yueyuan Zheng and Xinchen Ye and Janet H. Hsiao},
doi = {10.1016/j.learninstruc.2021.101542},
year = {2022},
date = {2022-01-01},
journal = {Learning and Instruction},
volume = {77},
pages = {101542},
publisher = {Elsevier Ltd},
abstract = {We examined whether adding video and subtitles to an audio lesson facilitates its comprehension and whether the comprehension depends on participants' cognitive abilities, including working memory and executive functions, and where they looked during video viewing. Participants received lessons consisting of statements of facts under four conditions: audio-only, audio with verbatim subtitles, audio with relevant video, and audio with both subtitles and video. Comprehension was assessed as the accuracy in answering multiple-choice questions for content memory. We found that subtitles facilitated comprehension whereas video did not. In addition, comprehension of audio lessons with video depended on participants' cognitive abilities and eye movement pattern: a more centralized (looking mainly at the screen center) eye movement pattern predicted better comprehension as opposed to a distributed pattern (with distributed regions of interest). Thus, whether video facilitates comprehension of audio lessons depends on both learners' cognitive abilities and where they look during video viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Aspen H. Yoo; Alfredo Bolaños; Grace E. Hallenbeck; Masih Rahmati; Thomas C. Sprague; Clayton E. Curtis

Behavioral prioritization enhances working memory precision and neural population gain Journal Article

In: Journal of Cognitive Neuroscience, vol. 34, no. 2, pp. 365–379, 2022.


@article{Yoo2022,
title = {Behavioral prioritization enhances working memory precision and neural population gain},
author = {Aspen H. Yoo and Alfredo Bolaños and Grace E. Hallenbeck and Masih Rahmati and Thomas C. Sprague and Clayton E. Curtis},
doi = {10.1162/jocn_a_01804},
year = {2022},
date = {2022-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {34},
number = {2},
pages = {365--379},
abstract = {Humans allocate visual working memory (WM) resource according to behavioral relevance, resulting in more precise memories for more important items. Theoretically, items may be maintained by feature-tuned neural populations, where the relative gain of the populations encoding each item determines precision. To test this hypothesis, we compared the amplitudes of delay period activity in the different parts of retinotopic maps representing each of several WM items, predicting the amplitudes would track behavioral priority. Using fMRI, we scanned participants while they remembered the location of multiple items over a WM delay and then reported the location of one probed item using a memory-guided saccade. Importantly, items were not equally probable to be probed (0.6, 0.3, 0.1, 0.0), which was indicated with a precue. We analyzed fMRI activity in 10 visual field maps in occipital, parietal, and frontal cortex known to be important for visual WM. In early visual cortex, but not association cortex, the amplitude of BOLD activation within voxels corresponding to the retinotopic location of visual WM items increased with the priority of the item. Interestingly, these results were contrasted with a common finding that higher-level brain regions had greater delay period activity, demonstrating a dissociation between the absolute amount of activity in a brain area and the activity of different spatially selective populations within it. These results suggest that the distribution of WM resources according to priority sculpts the relative gains of neural populations that encode items, offering a neural mechanism for how prioritization impacts memory precision.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jiahui Wang; Abigail Stebbins; Richard E. Ferdig

Examining the effects of students' self-efficacy and prior knowledge on learning and visual behavior in a physics game Journal Article

In: Computers and Education, vol. 178, pp. 104405, 2022.


@article{Wang2022,
title = {Examining the effects of students' self-efficacy and prior knowledge on learning and visual behavior in a physics game},
author = {Jiahui Wang and Abigail Stebbins and Richard E. Ferdig},
doi = {10.1016/j.compedu.2021.104405},
year = {2022},
date = {2022-01-01},
journal = {Computers and Education},
volume = {178},
pages = {104405},
publisher = {Elsevier Ltd},
abstract = {Research has provided evidence of the significant promise of using educational games for learning. However, there is limited understanding of how individual differences (e.g., self-efficacy and prior knowledge) affect visual processing of game elements and learning from an educational game. This study aimed to address these gaps by: a) examining the effects of students' self-efficacy and prior knowledge on learning from a physics game; and b) exploring how learners with distinct levels of self-efficacy and prior knowledge differ in their visual behavior with respect to the game elements. The visual behavior of 69 undergraduate students was recorded as they played an educational game focusing on Newtonian mechanics. Individual differences in self-efficacy in learning physics and prior knowledge were assessed prior to the game, while a comprehension test was administered immediately after gameplay. Wilcoxon signed-rank tests showed that all participants significantly improved in their understanding of Newtonian mechanics. Mann-Whitney U tests indicated learning gains were not significantly different between the groups with varying levels of prior knowledge or self-efficacy. Additionally, a series of Mann-Whitney U tests of the eye tracking data suggested the learners with high self-efficacy tended to pay more attention to the motion map - a critical navigation component of the game. Further, the high prior knowledge individuals excelled in attentional control abilities and exhibited effective visual processing strategies. The study concludes with important implications for the future design of educational games and developing individualized instructional support in game-based learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jérôme Tagu; Árni Kristjánsson

Dynamics of attentional and oculomotor orienting in visual foraging tasks Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 75, no. 2, pp. 260–276, 2022.


@article{Tagu2022a,
title = {Dynamics of attentional and oculomotor orienting in visual foraging tasks},
author = {Jérôme Tagu and Árni Kristjánsson},
doi = {10.1177/1747021820919351},
year = {2022},
date = {2022-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {75},
number = {2},
pages = {260--276},
abstract = {A vast amount of research has been carried out to understand how humans visually search for targets in their environment. However, this research has typically involved search for one unique target among several distractors. Although this line of research has yielded important insights into the basic characteristics of how humans explore their visual environment, this may not be a very realistic model for everyday visual orientation. Recently, researchers have used multi-target displays to assess orienting in the visual field. Eye movements in such tasks are, however, less well understood. Here, we investigated oculomotor dynamics during four visual foraging tasks differing in target crypticity (feature-based foraging vs. conjunction-based foraging) and the effector type being used for target selection (mouse foraging vs. gaze foraging). Our results show that both target crypticity and effector type affect foraging strategies. These changes are reflected in oculomotor dynamics, feature foraging being associated with focal exploration (long fixations and short-amplitude saccades), and conjunction foraging with ambient exploration (short fixations and high-amplitude saccades). These results provide important new information for existing accounts of visual attention and oculomotor control and emphasise the usefulness of foraging tasks for a better understanding of how humans orient in the visual environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jérôme Tagu; Árni Kristjánsson

The selection balance: Contrasting value, proximity and priming in a multitarget foraging task Journal Article

In: Cognition, vol. 218, pp. 1–12, 2022.


@article{Tagu2022,
title = {The selection balance: Contrasting value, proximity and priming in a multitarget foraging task},
author = {Jérôme Tagu and Árni Kristjánsson},
doi = {10.1016/j.cognition.2021.104935},
year = {2022},
date = {2022-01-01},
journal = {Cognition},
volume = {218},
pages = {1--12},
abstract = {A critical question in visual foraging concerns the mechanisms driving the next target selection. Observers first identify a set of candidate targets, and then select the best option among these candidates. Recent evidence suggests that target selection relies on internal biases towards proximity (nearest target from the last selection), priming (target from the same category as the last selection) and value (target associated with high value). Here, we tested the role of eye movements in target selection, and notably whether disabling eye movements during target selection could affect search strategy. We asked observers to perform four foraging tasks differing by selection modality and target value. During gaze foraging, participants had to accurately fixate the targets to select them and could not anticipate the next selection with their eyes, while during mouse foraging they selected the targets with mouse clicks and were free to move their eyes. We moreover manipulated both target value and proximity. Our results revealed notable individual differences in search strategy, confirming the existence of internal biases towards value, proximity and priming. Critically, there were no differences in search strategy between mouse and gaze foraging, suggesting that disabling eye movements during target selection did not affect foraging behaviour. These results importantly suggest that overt orienting is not necessary for target selection. This study provides fundamental information for theoretical conceptions of attentional selection, and emphasizes the importance of covert attention for target selection during visual foraging.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Carlos Sillero-Rejon; Osama Mahmoud; Ricardo M. Tamayo; Alvaro Arturo Clavijo-Alvarez; Sally Adams; Olivia M. Maynard

Standardised packs and larger health warnings: Visual attention and perceptions among Colombian smokers and non-smokers Journal Article

In: Addiction, pp. 1–11, 2022.


@article{Sillero-Rejon2022,
title = {Standardised packs and larger health warnings: Visual attention and perceptions among Colombian smokers and non-smokers},
author = {Carlos Sillero-Rejon and Osama Mahmoud and Ricardo M. Tamayo and Alvaro Arturo Clavijo-Alvarez and Sally Adams and Olivia M. Maynard},
doi = {10.1111/add.15779},
year = {2022},
date = {2022-01-01},
journal = {Addiction},
pages = {1--11},
abstract = {Aims: To measure how cigarette packaging (standardised packaging and branded packaging) and health warning size affect visual attention and pack preferences among Colombian smokers and non-smokers. Design: To explore visual attention, we used an eye-tracking experiment where non-smokers, weekly smokers and daily smokers were shown cigarette packs varying in warning size (30%-pictorial on top of the text, 30%-pictorial and text side-by-side, 50%, 70%) and packaging (standardised packaging, branded packaging). We used a discrete choice experiment (DCE) to examine the impact of warning size, packaging and brand name on preferences to try, taste perceptions and perceptions of harm. Setting: Eye-tracking laboratory, Universidad Nacional de Colombia, Bogotá, Colombia. Participants: Participants (n = 175) were 18 to 40 years old. Measurements: For the eye-tracking experiment, our primary outcome measure was the number of fixations toward the health warning compared with the branding. For the DCE, outcome measures were preferences to try, taste perceptions and harm perceptions. Findings: We observed greater visual attention to warning labels on standardised versus branded packages (F[3,167] = 22.87, P < 0.001) and when warnings were larger (F[9,161] = 147.17, P < 0.001); as warning size increased, the difference in visual attention to warnings between standardised and branded packaging decreased (F[9,161] = 4.44, P < 0.001). Non-smokers visually attended toward the warnings more than smokers, but as warning size increased these differences decreased (F[6,334] = 2.92},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Weikang Shi; Sébastien Ballesta; Camillo Padoa-Schioppa

Economic choices under simultaneous or sequential offers rely on the same neural circuit Journal Article

In: Journal of Neuroscience, vol. 42, no. 1, pp. 33–43, 2022.


@article{Shi2022,
title = {Economic choices under simultaneous or sequential offers rely on the same neural circuit},
author = {Weikang Shi and Sébastien Ballesta and Camillo Padoa-Schioppa},
doi = {10.1523/jneurosci.1265-21.2021},
year = {2022},
date = {2022-01-01},
journal = {Journal of Neuroscience},
volume = {42},
number = {1},
pages = {33--43},
abstract = {A series of studies in which monkeys chose between two juices offered in variable amounts identified in the orbitofrontal cortex (OFC) different groups of neurons encoding the value of individual options (offer value), the binary choice outcome (chosen juice) and the chosen value. These variables capture both the input and the output of the choice process, suggesting that the cell groups identified in OFC constitute the building blocks of a decision circuit. Several lines of evidence support this hypothesis. However, in previous experiments offers were presented simultaneously, raising the question of whether current notions generalize to when goods are presented or are examined in sequence. Recently, Ballesta and Padoa-Schioppa (2019) examined OFC activity under sequential offers. An analysis of neuronal responses across time windows revealed that a small number of cell groups encoded specific sequences of variables. These sequences appeared analogous to the variables identified under simultaneous offers, but the correspondence remained tentative. Thus in the present study we examined the relation between cell groups found under sequential versus simultaneous offers. We recorded from the OFC while monkeys chose between different juices. Trials with simultaneous and sequential offers were randomly interleaved in each session. We classified cells in each choice modality and we examined the relation between the two classifications. We found a strong correspondence – in other words, the cell groups measured under simultaneous offers and under sequential offers were one and the same. This result indicates that economic choices under simultaneous or sequential offers rely on the same neural circuit. Significance Statement: Research in the past 20 years has shed light on the neuronal underpinnings of economic choices. A large number of results indicates that decisions between goods are formed in a neural circuit within the orbitofrontal cortex (OFC). In most previous studies, subjects chose between two goods offered simultaneously. Yet, in daily situations, goods available for choice are often presented or examined in sequence. Here we recorded neuronal activity in the primate OFC alternating trials under simultaneous and under sequential offers. Our analyses demonstrate that the same neural circuit supports choices in the two modalities. Hence current notions on the neuronal mechanisms underlying economic decisions generalize to choices under sequential offers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1523/jneurosci.1265-21.2021

Arunava Samaddar; Brooke S. Jackson; Christopher J. Helms; Nicole A. Lazar; Jennifer E. McDowell; Cheolwoo Park

A group comparison in fMRI data using a semiparametric model under shape invariance Journal Article

In: Computational Statistics and Data Analysis, vol. 167, pp. 107361, 2022.

Abstract | Links | BibTeX

@article{Samaddar2022,
title = {A group comparison in fMRI data using a semiparametric model under shape invariance},
author = {Arunava Samaddar and Brooke S. Jackson and Christopher J. Helms and Nicole A. Lazar and Jennifer E. McDowell and Cheolwoo Park},
doi = {10.1016/j.csda.2021.107361},
year = {2022},
date = {2022-01-01},
journal = {Computational Statistics and Data Analysis},
volume = {167},
pages = {107361},
publisher = {Elsevier B.V.},
abstract = {In the analysis of functional magnetic resonance imaging (fMRI) data, a common type of analysis is to compare differences across scanning sessions. A challenge to direct comparisons of this type is the low signal-to-noise ratio in fMRI data. By using the property that brain signals from a task-related experiment may exhibit a similar pattern in regions of interest across participants, a semiparametric approach under shape invariance to quantify and test the differences in sessions and groups is developed. The common function is estimated with local polynomial regression and the shape invariance model parameters are estimated using evolutionary optimization methods. The efficacy of the semi-parametric approach is demonstrated on a study of brain activation changes across two sessions associated with practice-related cognitive control. The objective of the study is to evaluate neural circuitry supporting a cognitive control task, and associated practice-related changes via acquisition of blood oxygenation level dependent (BOLD) signal collected using fMRI. By using the proposed approach, BOLD signals in multiple regions of interest for control participants and participants with schizophrenia are compared as they perform a cognitive control task (known as the antisaccade task) at two sessions, and the effects of task practice in these groups are quantified.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nuria Sagarra; Nicole Rodriguez

Subject-verb number agreement in bilingual processing: (Lack of) age of acquisition and proficiency effects Journal Article

In: Languages, vol. 7, pp. 15, 2022.

Abstract | BibTeX

@article{Sagarra2022,
title = {Subject-verb number agreement in bilingual processing: (Lack of) age of acquisition and proficiency effects},
author = {Nuria Sagarra and Nicole Rodriguez},
year = {2022},
date = {2022-01-01},
journal = {Languages},
volume = {7},
pages = {15},
abstract = {Children acquire language more easily than adults, though it is controversial whether this faculty declines as a result of a critical period or something else. To address this question, we investigate the role of age of acquisition and proficiency on morphosyntactic processing in adult monolinguals and bilinguals. Spanish monolinguals and intermediate and advanced early and late bilinguals of Spanish read sentences with adjacent subject–verb number agreements and violations and chose one of four pictures. Eye-tracking data revealed that all groups were sensitive to the violations and attended more to more salient plural and preterit verbs than less obvious singular and present verbs, regardless of AoA and proficiency level. We conclude that the processing of adjacent SV agreement depends on perceptual salience and language use, rather than AoA or proficiency. These findings support usage-based theories of language acquisition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Johannes Rennig; Michael S Beauchamp

Intelligibility of audiovisual sentences drives multivoxel response patterns in human superior temporal cortex Journal Article

In: NeuroImage, vol. 247, pp. 118796, 2022.

Abstract | Links | BibTeX

@article{Rennig2022,
title = {Intelligibility of audiovisual sentences drives multivoxel response patterns in human superior temporal cortex},
author = {Johannes Rennig and Michael S Beauchamp},
doi = {10.1016/j.neuroimage.2021.118796},
year = {2022},
date = {2022-01-01},
journal = {NeuroImage},
volume = {247},
pages = {118796},
publisher = {Elsevier Inc.},
abstract = {Regions of the human posterior superior temporal gyrus and sulcus (pSTG/S) respond to the visual mouth movements that constitute visual speech and the auditory vocalizations that constitute auditory speech, and neural responses in pSTG/S may underlie the perceptual benefit of visual speech for the comprehension of noisy auditory speech. We examined this possibility through the lens of multivoxel pattern responses in pSTG/S. BOLD fMRI data was collected from 22 participants presented with speech consisting of English sentences presented in five different formats: visual-only; auditory with and without added auditory noise; and audiovisual with and without auditory noise. Participants reported the intelligibility of each sentence with a button press and trials were sorted post-hoc into those that were more or less intelligible. Response patterns were measured in regions of the pSTG/S identified with an independent localizer. Noisy audiovisual sentences with very similar physical properties evoked very different response patterns depending on their intelligibility. When a noisy audiovisual sentence was reported as intelligible, the pattern was nearly identical to that elicited by clear audiovisual sentences. In contrast, an unintelligible noisy audiovisual sentence evoked a pattern like that of visual-only sentences. This effect was less pronounced for noisy auditory-only sentences, which evoked similar response patterns regardless of intelligibility. The successful integration of visual and auditory speech produces a characteristic neural signature in pSTG/S, highlighting the importance of this region in generating the perceptual benefit of visual speech.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Megan J. Raden; Andrew F. Jarosz

Strategy transfer on fluid reasoning tasks Journal Article

In: Intelligence, vol. 91, pp. 101618, 2022.

Abstract | Links | BibTeX

@article{Raden2022,
title = {Strategy transfer on fluid reasoning tasks},
author = {Megan J. Raden and Andrew F. Jarosz},
doi = {10.1016/j.intell.2021.101618},
year = {2022},
date = {2022-01-01},
journal = {Intelligence},
volume = {91},
pages = {101618},
publisher = {Elsevier Inc.},
abstract = {Strategy use on reasoning tasks has consistently been shown to correlate with working memory capacity and accuracy, but it is still unclear to what degree individual preferences, working memory capacity, and features of the task itself contribute to strategy use. The present studies used eye tracking to explore the potential for strategy transfer between reasoning tasks. Study 1 demonstrated that participants are consistent in what strategy they use across reasoning tasks and that strategy transfer between tasks is possible. Additionally, post-hoc analyses identified certain ambiguous items in the figural analogies task that required participants to assess the response bank to reach a solution, which appeared to push participants towards a more response-based strategy. Study 2 utilized a between-subjects design to manipulate this “ambiguity” in figural analogies problems prior to completing the RAPM. Once again, participants transferred strategies between tasks when primed with different strategies, although this did not affect their ability to accurately solve the problem. Importantly, strategy use changed considerably depending on the ambiguity of the initial reasoning task. The results provided across the two studies suggest that participants are consistent in what strategies they employ across reasoning tasks, and that if features of the task push participants towards a different strategy, they will transfer that strategy to another reasoning task. Furthermore, to understand the role of strategy use on reasoning tasks, future work will require a diverse sample of both reasoning tasks and strategy use measures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alessandro Piras; Aurelio Trofè; Andrea Meoni; Milena Raffi

Influence of radial optic flow stimulation on static postural balance in Parkinson's disease: A preliminary study Journal Article

In: Human Movement Science, vol. 81, pp. 102905, 2022.

Abstract | Links | BibTeX

@article{Piras2022,
title = {Influence of radial optic flow stimulation on static postural balance in Parkinson's disease: A preliminary study},
author = {Alessandro Piras and Aurelio Trofè and Andrea Meoni and Milena Raffi},
doi = {10.1016/j.humov.2021.102905},
year = {2022},
date = {2022-01-01},
journal = {Human Movement Science},
volume = {81},
pages = {102905},
abstract = {The role of optic flow in the control of balance in persons with Parkinson's disease (PD) has yet to be studied. Since basal ganglia are understood to have a role in controlling ocular fixation, we have hypothesized that persons with PD would exhibit impaired performance in fixation tasks, i.e., altered postural balance due to the possible relationships between postural disorders and visual perception. The aim of this preliminary study was to investigate how people affected by PD respond to optic flow stimuli presented with radial expanding motion, with the intention to see how the stimulation of different retinal portions may alter the static postural sway. We measured the body sway using center of pressure parameters recorded from two force platforms during the presentation of the foveal, peripheral and full field radial optic flow stimuli. Persons with PD had different visual responses in terms of fixational eye movement characteristics, with greater postural alteration in the sway area and in the medio-lateral direction than the age-matched control group. Balance impairment in the medio-lateral oscillation is often observed in persons with atypical Parkinsonism, but not in Parkinson's disease. Persons with PD are more dependent on visual feedback with respect to age-matched control subjects, and this could be due to their impaired peripheral kinesthetic feedback. Visual stimulation of standing posture would provide reliable signs in the differential diagnosis of Parkinsonism.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Pablo Oyarzo; David Preiss; Diego Cosmelli

Attentional and meta‐cognitive processes underlying mind wandering episodes during continuous naturalistic reading are associated with specific changes in eye behavior Journal Article

In: Psychophysiology, pp. e13994, 2022.

Abstract | Links | BibTeX

@article{Oyarzo2022,
title = {Attentional and meta‐cognitive processes underlying mind wandering episodes during continuous naturalistic reading are associated with specific changes in eye behavior},
author = {Pablo Oyarzo and David Preiss and Diego Cosmelli},
doi = {10.1111/psyp.13994},
year = {2022},
date = {2022-01-01},
journal = {Psychophysiology},
pages = {e13994},
abstract = {Although eye movements during reading have been studied extensively, their variation due to attentional fluctuations such as spontaneous distractions is not well understood. Here we used a naturalistic reading task combined with an attentional sampling method to examine the effects of mind wandering, and the subsequent metacognitive awareness of its occurrence, on eye movements and pupillary dynamics. Our goal was to better understand the attentional and metacognitive processes involved in the initiation and termination of mind wandering episodes. Our results show that changes in eye behavior are consistent with underlying independent cognitive mechanisms working in tandem to sustain the attentional resources required for focused reading. In addition to changes in blink frequency, blink duration, and the number of saccades, variations in eye movements during unaware distractions point to a loss of the perceptual asymmetry that is usually observed in attentive, left-to-right reading. Also, before self-detected distractions, we observed a specific increase in pupillary diameter, indicating the likely presence of an anticipatory autonomic process that could contribute to becoming aware of the current attentional state. These findings stress the need for further research tackling the temporal structure of attentional dynamics during tasks that have a significant real-world impact.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Joel T. Martin; Annalise H. Whittaker; Stephen J. Johnston

Pupillometry and the vigilance decrement: Task‐evoked but not baseline pupil measures reflect declining performance in visual vigilance tasks Journal Article

In: European Journal of Neuroscience, vol. 44, pp. 1–22, 2022.

Abstract | Links | BibTeX

@article{Martin2022,
title = {Pupillometry and the vigilance decrement: Task‐evoked but not baseline pupil measures reflect declining performance in visual vigilance tasks},
author = {Joel T. Martin and Annalise H. Whittaker and Stephen J. Johnston},
doi = {10.1111/ejn.15585},
year = {2022},
date = {2022-01-01},
journal = {European Journal of Neuroscience},
volume = {44},
pages = {1--22},
abstract = {Baseline and task-evoked pupil measures are known to reflect the activity of the nervous system's central arousal mechanisms. With the increasing availability, affordability and flexibility of video-based eye tracking hardware, these measures may one day find practical application in real-time biobehavioral monitoring systems to assess performance or fitness for duty in tasks requiring vigilant attention. But real-world vigilance tasks are predominantly visual in nature and most research in this area has taken place in the auditory domain. Here we explore the relationship between pupil size—both baseline and task-evoked—and behavioral performance measures in two novel vigilance tasks requiring visual target detection: 1) a traditional vigilance task involving prolonged, continuous, and uninterrupted performance (n = 28), and 2) a psychomotor vigilance task (n = 25). In both tasks, behavioral performance and task-evoked pupil responses declined as time spent on task increased, corroborating previous reports in the literature of a vigilance decrement with a corresponding reduction in task-evoked pupil measures. Also in line with previous findings, baseline pupil size did not show a consistent relationship with performance measures. We discuss our findings considering the adaptive gain theory of locus coeruleus function and question the validity of the assumption that baseline (prestimulus) pupil size and task-evoked (poststimulus) pupil measures correspond to the tonic and phasic firing modes of the LC.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ana Marcet; Manuel Perea

Does omitting the accent mark in a word affect sentence reading? Evidence from Spanish Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 75, no. 1, pp. 148–155, 2022.

Abstract | Links | BibTeX

@article{Marcet2022,
title = {Does omitting the accent mark in a word affect sentence reading? Evidence from Spanish},
author = {Ana Marcet and Manuel Perea},
doi = {10.1177/17470218211044694},
year = {2022},
date = {2022-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {75},
number = {1},
pages = {148--155},
abstract = {Lexical stress in multisyllabic words is consistent in some languages (e.g., first syllable in Finnish), but it is variable in others (e.g., Spanish, English). To help lexical processing in a transparent language like Spanish, scholars have proposed a set of rules specifying which words require an accent mark indicating lexical stress in writing. However, recent word recognition research using the lexical decision task showed that word identification times were not affected by the omission of a word's accent mark in Spanish. To examine this question in a paradigm with greater ecological validity, we tested whether omitting the accent mark in a Spanish word had a deleterious effect during silent sentence reading. A target word was embedded in a sentence with its accent mark or not. Results showed no reading cost of omitting the word's accent mark in first-pass eye fixation durations, but we found a cost in the total reading time spent on the target word (i.e., including re-reading). Thus, the omission of an accent mark delays late, but not early, lexical processing in Spanish. These findings help constrain the locus of accent mark information in models of visual word recognition and reading. Furthermore, these findings offer some clues on how to simplify the Spanish rules of accentuation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sixin Liao; Lili Yu; Jan-Louis Kruger; Erik D. Reichle

The impact of audio on the reading of intralingual versus interlingual subtitles: Evidence from eye movements Journal Article

In: Applied Psycholinguistics, vol. 43, no. 1, pp. 237–269, 2022.

Abstract | Links | BibTeX

@article{Liao2022,
title = {The impact of audio on the reading of intralingual versus interlingual subtitles: Evidence from eye movements},
author = {Sixin Liao and Lili Yu and Jan-Louis Kruger and Erik D. Reichle},
doi = {10.1017/s0142716421000527},
year = {2022},
date = {2022-01-01},
journal = {Applied Psycholinguistics},
volume = {43},
number = {1},
pages = {237--269},
abstract = {This study investigated how semantically relevant auditory information might affect the reading of subtitles, and if such effects might be modulated by the concurrent video content. Thirty-four native Chinese speakers with English as their second language watched video with English subtitles in six conditions defined by manipulating the nature of the audio (Chinese/L1 audio vs. English/L2 audio vs. no audio) and the presence versus absence of video content. Global eye-movement analyses showed that participants tended to rely less on subtitles with Chinese or English audio than without audio, and the effects of audio were more pronounced in the presence of video presentation. Lexical processing of subtitles was not modulated by the audio. However, Chinese audio, which presumably obviated the need to read the subtitles, resulted in more superficial post-lexical processing of the subtitles relative to either the English or no audio. On the contrary, English audio accentuated post-lexical processing of the subtitles compared with Chinese audio or no audio, indicating that participants might use English audio to support subtitle reading (or vice versa) and thus engaged in deeper processing of the subtitles. These findings suggest that, in multimodal reading situations, eye movements are not only controlled by processing difficulties associated with properties of words (e.g., their frequency and length) but also guided by metacognitive strategies involved in monitoring comprehension and its online modulation by different information sources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1017/s0142716421000527

Astar Lev; Yoram Braw; Tomer Elbaum; Michael Wagner; Yuri Rassovsky

Eye tracking during a continuous performance test: Utility for assessing ADHD patients Journal Article

In: Journal of Attention Disorders, vol. 26, no. 2, pp. 245–255, 2022.

Abstract | Links | BibTeX

@article{Lev2022,
title = {Eye tracking during a continuous performance test: Utility for assessing ADHD patients},
author = {Astar Lev and Yoram Braw and Tomer Elbaum and Michael Wagner and Yuri Rassovsky},
doi = {10.1177/1087054720972786},
year = {2022},
date = {2022-01-01},
journal = {Journal of Attention Disorders},
volume = {26},
number = {2},
pages = {245--255},
abstract = {Objective: The use of continuous performance tests (CPTs) for assessing ADHD related cognitive impairment is ubiquitous. Novel psychophysiological measures may enhance the data that is derived from CPTs and thereby improve clinical decision-making regarding diagnosis and treatment. As part of the current study, we integrated an eye tracker with the MOXO-dCPT and assessed the utility of eye movement measures to differentiate ADHD patients and healthy controls. Method: Adult ADHD patients and gender/age-matched healthy controls performed the MOXO-dCPT while their eye movements were monitored (n = 33 per group). Results: ADHD patients spent significantly more time gazing at irrelevant regions, both on the screen and outside of it, than healthy controls. The eye movement measures showed adequate ability to classify ADHD patients. Moreover, a scale that combined eye movement measures enhanced group prediction, compared to the sole use of conventional MOXO-dCPT indices. Conclusions: Integrating an eye tracker with CPTs is a feasible way of enhancing diagnostic precision and shows initial promise for clarifying the cognitive profile of ADHD patients. Pending replication, these findings point toward a promising path for the evolution of existing CPTs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/1087054720972786

Timo L. Kvamme; Mesud Sarmanlu; Christopher Bailey; Morten Overgaard

Neurofeedback modulation of the sound-induced flash illusion using parietal cortex alpha oscillations reveals dependency on prior multisensory congruency Journal Article

In: Neuroscience, vol. 482, pp. 1–17, 2022.

Abstract | Links | BibTeX

@article{Kvamme2022,
title = {Neurofeedback modulation of the sound-induced flash illusion using parietal cortex alpha oscillations reveals dependency on prior multisensory congruency},
author = {Timo L. Kvamme and Mesud Sarmanlu and Christopher Bailey and Morten Overgaard},
doi = {10.1016/j.neuroscience.2021.11.028},
year = {2022},
date = {2022-01-01},
journal = {Neuroscience},
volume = {482},
pages = {1--17},
publisher = {The Authors},
abstract = {Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study we used MEG-neurofeedback to train one group of subjects to increase left/right and another to increase right/left alpha power ratios in the parietal cortex. We tested for changes in audiovisual binding in a SIFI paradigm where flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group, not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash trials) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the relation between parietal alpha power (an index of attention) and its effect on audiovisual binding is dependent on the learned causal structure in the previous stimulus. The present results suggest that low alpha power biases observers towards audiovisual binding when they have learned that audiovisual signals originate from a common origin, consistent with a Bayesian causal inference account of multisensory perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuroscience.2021.11.028

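The "Bayesian causal inference account" invoked in the abstract's final sentence can be made concrete with a toy computation: the observer weighs the likelihood that the beeps and flashes arose from one common event count against the likelihood of two independent counts. The sketch below is a simplified Körding-style model; the Gaussian noise values, prior, and count range are illustrative assumptions, not parameters from the paper.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian density, used here as a sensory-noise likelihood."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def p_common(a_count, v_count, sigma_a=0.4, sigma_v=1.0, prior=0.5, n_max=4):
    """Posterior probability that the beeps and flashes share one cause.

    a_count/v_count: perceived numbers of beeps and flashes.
    sigma_a < sigma_v encodes that audition counts events more reliably
    than vision in the SIFI (all values are hypothetical).
    """
    # Likelihood of the pair under a single shared event count n (uniform prior over n)
    like_c1 = sum(gauss(a_count, n, sigma_a) * gauss(v_count, n, sigma_v)
                  for n in range(1, n_max + 1)) / n_max
    # Likelihood under two independent event counts
    like_a = sum(gauss(a_count, n, sigma_a) for n in range(1, n_max + 1)) / n_max
    like_v = sum(gauss(v_count, n, sigma_v) for n in range(1, n_max + 1)) / n_max
    like_c2 = like_a * like_v
    # Bayes' rule over the binary common-cause variable
    return like_c1 * prior / (like_c1 * prior + like_c2 * (1 - prior))
```

With low auditory noise and higher visual noise, a low-disparity trial (2 beeps, 1 flash) still yields a sizeable common-cause posterior, which is the regime where audition captures the perceived number of flashes (the illusion); larger disparities (e.g., 4 beeps, 1 flash) drive the posterior down, consistent with the abstract's disparity-dependent effects.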
Koji Kuraoka; Kae Nakamura

Facial temperature and pupil size as indicators of internal state in primates Journal Article

In: Neuroscience Research, 2022.

Abstract | Links | BibTeX

@article{Kuraoka2022,
title = {Facial temperature and pupil size as indicators of internal state in primates},
author = {Koji Kuraoka and Kae Nakamura},
doi = {10.1016/j.neures.2022.01.002},
year = {2022},
date = {2022-01-01},
journal = {Neuroscience Research},
publisher = {Elsevier Ireland Ltd and Japan Neuroscience Society},
abstract = {Studies in human subjects have revealed that autonomic responses provide objective and biologically relevant information about cognitive and affective states. Measures of autonomic responses can also be applied to studies of non-human primates, which are neuro-anatomically and physically similar to humans. Facial temperature and pupil size are measured remotely and can be applied to physiological experiments in primates, preferably in a head-fixed condition. However, detailed guidelines for the use of these measures in non-human primates are lacking. Here, we review the neuronal circuits and methodological considerations necessary for measuring and analyzing facial temperature and pupil size in non-human primates. Previous studies have shown that the modulation of these measures primarily reflects sympathetic reactions to cognitive and emotional processes, including alertness, attention, and mental effort, over different time scales. Integrated analyses of autonomic, behavioral, and neurophysiological data in primates are promising methods that reflect multiple dimensions of emotion and could potentially provide tools for understanding the mechanisms underlying neuropsychiatric disorders and vulnerabilities characterized by cognitive and affective disturbances.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neures.2022.01.002

Jan-Louis Kruger; Natalia Wisniewska; Sixin Liao

Why subtitle speed matters: Evidence from word skipping and rereading Journal Article

In: Applied Psycholinguistics, vol. 43, no. 1, pp. 211–236, 2022.

Abstract | Links | BibTeX

@article{Kruger2022,
title = {Why subtitle speed matters: Evidence from word skipping and rereading},
author = {Jan-Louis Kruger and Natalia Wisniewska and Sixin Liao},
doi = {10.1017/s0142716421000503},
year = {2022},
date = {2022-01-01},
journal = {Applied Psycholinguistics},
volume = {43},
number = {1},
pages = {211--236},
abstract = {High subtitle speed undoubtedly impacts the viewer experience. However, little is known about how fast subtitles might impact the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers' reading behavior using word-based eye-tracking measures with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing or integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in the subtitle and to read subtitles to completion, is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. It was found that comprehension declined as speed increased. Eye movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed also caused fewer words to be reread following both horizontal eye movements (likely resulting in reduced lexical processing) and vertical eye movements (which would likely reduce higher-level comprehension and integration).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1017/s0142716421000503

Nadezhda Kerimova; Pavel Sivokhin; Diana Kodzokova; Karine Nikogosyan; Vasily Klucharev

Visual processing of green zones in shared courtyards during renting decisions: An eye-tracking study Journal Article

In: Urban Forestry and Urban Greening, vol. 68, pp. 127460, 2022.

Abstract | Links | BibTeX

@article{Kerimova2022,
title = {Visual processing of green zones in shared courtyards during renting decisions: An eye-tracking study},
author = {Nadezhda Kerimova and Pavel Sivokhin and Diana Kodzokova and Karine Nikogosyan and Vasily Klucharev},
doi = {10.1016/j.ufug.2022.127460},
year = {2022},
date = {2022-01-01},
journal = {Urban Forestry and Urban Greening},
volume = {68},
pages = {127460},
publisher = {Elsevier GmbH},
abstract = {We used an eye-tracking technique to investigate the effect of green zones and car ownership on the attractiveness of the courtyards of multistorey apartment buildings. Two interest groups—20 people who owned a car and 20 people who did not own a car—observed 36 images of courtyards. Images were digitally modified to manipulate the spatial arrangement of key courtyard elements: green zones, parking lots, and children's playgrounds. The participants were asked to rate the attractiveness of courtyards during hypothetical renting decisions. Overall, we investigated whether visual exploration and appraisal of courtyards differed between people who owned a car and those who did not. The participants in both interest groups gazed longer at perceptually salient playgrounds and parking lots than at greenery. We also observed that participants gazed significantly longer at the greenery in courtyards rated as most attractive than those rated as least attractive. They gazed significantly longer at parking lots in courtyards rated as least attractive than those rated as most attractive. Using regression analysis, we further investigated the relationship between gaze fixations on courtyard elements and the attractiveness ratings of courtyards. The model confirmed a significant positive relationship between the number and duration of fixations on greenery and the attractiveness estimates of courtyards, while the model showed an opposite relationship for the duration of fixations on parking lots. Interestingly, the positive association between fixations on greenery and the attractiveness of courtyards was significantly stronger for participants who owned cars than for those who did not. These findings confirmed that the more people pay attention to green areas, the more positively they evaluate urban areas. The results also indicate that urban greenery may differentially affect the preferences of interest groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ufug.2022.127460

Ignace T C Hooge; Diederick C Niehorster; Marcus Nystrom; Richard Andersson; Roy S Hessels

Fixation classification: How to merge and select fixation candidates Journal Article

In: Behavior Research Methods, pp. 1–12, 2022.

Abstract | BibTeX

@article{Hooge2022,
title = {Fixation classification: How to merge and select fixation candidates},
author = {Ignace T C Hooge and Diederick C Niehorster and Marcus Nystrom and Richard Andersson and Roy S Hessels},
year = {2022},
date = {2022-01-01},
journal = {Behavior Research Methods},
pages = {1--12},
abstract = {Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with duration longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

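The selection rules the abstract recommends (select saccades with amplitudes larger than 1.0° and fixations longer than 60 ms) amount to a merge-and-select pass over fixation candidates. The sketch below is an illustrative simplification, not the authors' implementation; the candidate tuple format and the position averaging on merge are assumptions.

```python
import math

def merge_and_select(candidates, min_sacc_amp=1.0, min_fix_dur=60.0):
    """Merge fixation candidates separated by sub-threshold 'saccades',
    then drop fixations shorter than min_fix_dur.

    candidates: list of (t_start_ms, t_end_ms, x_deg, y_deg) tuples in time order.
    Thresholds default to the values reported in the abstract (1.0°, 60 ms).
    """
    if not candidates:
        return []
    merged = [list(candidates[0])]
    for t0, t1, x, y in candidates[1:]:
        px, py = merged[-1][2], merged[-1][3]
        amp = math.hypot(x - px, y - py)  # amplitude of the intervening movement
        if amp < min_sacc_amp:
            # Movement too small to count as a saccade: merge into previous candidate
            merged[-1][1] = t1
            merged[-1][2] = (px + x) / 2  # simple mean; a duration-weighted mean is finer
            merged[-1][3] = (py + y) / 2
        else:
            merged.append([t0, t1, x, y])
    # Select only candidates that survive the minimal-duration rule
    return [tuple(m) for m in merged if m[1] - m[0] >= min_fix_dur]
```

For example, two candidates 0.36° apart are merged into one long fixation, while brief (< 60 ms) candidates after genuine saccades are discarded, shifting the resulting fixation-duration distribution exactly in the way the selection-rule parameters dictate.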
Christoph Helmchen; Björn Machner; Andreas Sprenger; David S. Zee

Monocular patching attenuates vertical nystagmus in Wernicke's encephalopathy via release of activity in subcortical visual pathways Journal Article

In: Movement Disorders Clinical Practice, vol. 9, no. 1, pp. 107–109, 2022.

Links | BibTeX

@article{Helmchen2022,
title = {Monocular patching attenuates vertical nystagmus in Wernicke's encephalopathy via release of activity in subcortical visual pathways},
author = {Christoph Helmchen and Björn Machner and Andreas Sprenger and David S. Zee},
doi = {10.1002/mdc3.13380},
year = {2022},
date = {2022-01-01},
journal = {Movement Disorders Clinical Practice},
volume = {9},
number = {1},
pages = {107--109},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/mdc3.13380

Frauke Heins; Markus Lappe

Flexible use of post-saccadic visual feedback in oculomotor learning Journal Article

In: Journal of Vision, vol. 22, no. 1, pp. 1–16, 2022.

Abstract | Links | BibTeX

@article{Heins2022,
title = {Flexible use of post-saccadic visual feedback in oculomotor learning},
author = {Frauke Heins and Markus Lappe},
doi = {10.1167/jov.22.1.3},
year = {2022},
date = {2022-01-01},
journal = {Journal of Vision},
volume = {22},
number = {1},
pages = {1--16},
abstract = {Saccadic eye movements bring objects of interest onto our fovea. These gaze shifts are essential for visual perception of our environment and the interaction with the objects within it. They precede our actions and are thus modulated by current goals. It is assumed that saccadic adaptation, a recalibration process that restores saccade accuracy in case of error, is mainly based on an implicit comparison of expected and actual post-saccadic position of the target on the retina. However, there is increasing evidence that task demands modulate saccade adaptation and that errors in task performance may be sufficient to induce changes to saccade amplitude. We investigated if human participants are able to flexibly use different information sources within the post-saccadic visual feedback in task-dependent fashion. Using intra-saccadic manipulation of the visual input, participants were either presented with congruent post-saccadic information, indicating the saccade target unambiguously, or incongruent post-saccadic information, creating conflict between two possible target objects. Using different task instructions, we found that participants were able to modify their saccade behavior such that they achieved the goal of the task. They succeeded in decreasing saccade gain or maintaining it, depending on what was necessary for the task, irrespective of whether the post-saccadic feedback was congruent or incongruent. It appears that action intentions prime task-relevant feature dimensions and thereby facilitated the selection of the relevant information within the post-saccadic image. Thus, participants use post-saccadic feedback flexibly, depending on their intentions and pending actions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/jov.22.1.3

Erin Goddard; Thomas A. Carlson; Alexandra Woolgar

Spatial and feature-selective attention have distinct, interacting effects on population-level tuning Journal Article

In: Journal of Cognitive Neuroscience, vol. 34, no. 2, pp. 290–312, 2022.

Abstract | Links | BibTeX

@article{Goddard2022,
title = {Spatial and feature-selective attention have distinct, interacting effects on population-level tuning},
author = {Erin Goddard and Thomas A. Carlson and Alexandra Woolgar},
doi = {10.1162/jocn_a_01796},
year = {2022},
date = {2022-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {34},
number = {2},
pages = {290--312},
abstract = {Attention can be deployed in different ways: When searching for a taxi in New York City, we can decide where to attend (e.g., to the street) and what to attend to (e.g., yellow cars). Although we use the same word to describe both processes, nonhuman primate data suggest that these produce distinct effects on neural tuning. This has been challenging to assess in humans, but here we used an opportunity afforded by multivariate decoding of MEG data. We found that attending to an object at a particular location and attending to a particular object feature produced effects that interacted multiplicatively. The two types of attention induced distinct patterns of enhancement in occipital cortex, with feature-selective attention producing relatively more enhancement of small feature differences and spatial attention producing relatively larger effects for larger feature differences. An information flow analysis further showed that stimulus representations in occipital cortex were Granger-caused by coding in frontal cortices earlier in time and that the timing of this feedback matched the onset of attention effects. The data suggest that spatial and feature-selective attention rely on distinct neural mechanisms that arise from frontal-occipital information exchange, interacting multiplicatively to selectively enhance task-relevant information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Marco Esposito; Clarissa Ferrari; Claudia Fracassi; Carlo Miniussi; Debora Brignani

Responsiveness to left‐prefrontal tDCS varies according to arousal levels Journal Article

In: European Journal of Neuroscience, pp. 1–45, 2022.

@article{Esposito2022,
title = {Responsiveness to left‐prefrontal tDCS varies according to arousal levels},
author = {Marco Esposito and Clarissa Ferrari and Claudia Fracassi and Carlo Miniussi and Debora Brignani},
doi = {10.1111/ejn.15584},
year = {2022},
date = {2022-01-01},
journal = {European Journal of Neuroscience},
pages = {1--45},
abstract = {Over the past two decades, the postulated modulatory effects of transcranial direct current stimulation (tDCS) on the human brain have been extensively investigated. However, recent concerns on reliability of tDCS effects have been raised, principally due to reduced replicability and to interindividual variability in response to tDCS. These inconsistencies are likely due to the interplay between the level of induced cortical excitability and unaccounted structural and state-dependent functional factors. On these grounds, we aimed at verifying whether the behavioural effects induced by a common tDCS montage (F3-rSOA) were influenced by the participants' arousal levels, as part of a broader mechanism of state-dependency. Pupillary dynamics were recorded during an auditory oddball task while applying either a sham or real tDCS. The tDCS effects were evaluated as a function of subjective and physiological arousal predictors (STAI-Y State scores and pre-stimulus pupil size, respectively). We showed that prefrontal tDCS hindered task learning effects on response speed such that performance improvement occurred during sham, but not real stimulation. Moreover, both subjective and physiological arousal predictors significantly explained performance during real tDCS, with interaction effects showing performance improvement only with moderate arousal levels; likewise, pupil response was affected by real tDCS according to the ongoing levels of arousal, with reduced dilation during higher arousal trials. These findings highlight the potential role of arousal in shaping the neuromodulatory outcome, thus emphasizing a more careful interpretation of null or negative results while also encouraging more individually tailored tDCS applications based on arousal levels, especially in clinical populations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mina Elhamiasl; Gabriella Silva; Andrea M. Cataldo; Hillary Hadley; Erik Arnold; James W. Tanaka; Tim Curran; Lisa S. Scott

Dissociations between performance and visual fixations after subordinate- and basic-level training with novel objects Journal Article

In: Vision Research, vol. 191, pp. 107971, 2022.

@article{Elhamiasl2022,
title = {Dissociations between performance and visual fixations after subordinate- and basic-level training with novel objects},
author = {Mina Elhamiasl and Gabriella Silva and Andrea M. Cataldo and Hillary Hadley and Erik Arnold and James W. Tanaka and Tim Curran and Lisa S. Scott},
doi = {10.1016/j.visres.2021.107971},
year = {2022},
date = {2022-01-01},
journal = {Vision Research},
volume = {191},
pages = {107971},
publisher = {Elsevier Ltd},
abstract = {Previous work suggests that subordinate-level object training improves exemplar-level perceptual discrimination over basic-level training. However, the extent to which visual fixation strategies and the use of visual features, such as color and spatial frequency (SF), change with improved discrimination was not previously known. In the current study, adults (n = 24) completed 6 days of training with 2 families of computer-generated novel objects. Participants were trained to identify one object family at the subordinate level and the other object family at the basic level. Before and after training, discrimination accuracy and visual fixations were measured for trained and untrained exemplars. To examine the impact of training on visual feature use, image color and SF were manipulated and tested before and after training. Discrimination accuracy increased for the object family trained at the subordinate-level, but not for the family trained at the basic level. This increase was seen for all image manipulations (color, SF) and generalized to untrained exemplars within the trained family. Both subordinate- and basic-level training increased average fixation duration and saccadic amplitude and decreased the number of total fixations. Collectively, these results suggest a dissociation between discrimination accuracy, indicative of recognition, and the associated pattern of changes present for visual fixations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lorenzo Diana; Giulia Scotti; Edoardo N. Aiello; Patrick Pilastro; Aleksandra K. Eberhard-Moscicka; René M. Müri; Nadia Bolognini

Conventional and HD-tDCS may (or may not) modulate overt attentional orienting: An integrated spatio-temporal approach and methodological reflection Journal Article

In: Brain Sciences, vol. 12, no. 71, pp. 1–20, 2022.

@article{Diana2022,
title = {Conventional and HD-tDCS may (or may not) modulate overt attentional orienting: An integrated spatio-temporal approach and methodological reflection},
author = {Lorenzo Diana and Giulia Scotti and Edoardo N. Aiello and Patrick Pilastro and Aleksandra K. Eberhard-Moscicka and René M. Müri and Nadia Bolognini},
year = {2022},
date = {2022-01-01},
journal = {Brain Sciences},
volume = {12},
number = {71},
pages = {1--20},
abstract = {Transcranial Direct Current Stimulation (tDCS) has been employed to modulate visuospatial attentional asymmetries, however, further investigation is needed to characterize tDCS-associated variability in more ecological settings. In the present research, we tested the effects of offline, anodal conventional tDCS (Experiment 1) and HD-tDCS (Experiment 2) delivered over the posterior parietal cortex (PPC) and Frontal Eye Field (FEF) of the right hemisphere in healthy participants. Attentional asymmetries were measured by means of an eye tracking-based, ecological paradigm, that is, a Free Visual Exploration task of naturalistic pictures. Data were analyzed from a spatiotemporal perspective. In Experiment 1, a pre-post linear mixed model (LMM) indicated a leftward attentional shift after PPC tDCS; this effect was not confirmed when the individual baseline performance was considered. In Experiment 2, FEF HD-tDCS was shown to induce a significant leftward shift of gaze position, which emerged after 6 s of picture exploration and lasted for 200 ms. The present results do not allow us to conclude on a clear efficacy of offline conventional tDCS and HD-tDCS in modulating overt visuospatial attention in an ecological setting. Nonetheless, our findings highlight a complex relationship among stimulated area, focality of stimulation, spatiotemporal aspects of deployment of attention, and the role of individual baseline performance in shaping the effects of tDCS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lei Cui; Chuanli Zang; Xiaochen Xu; Wenxin Zhang; Yuhan Su; Simon P. Liversedge

Predictability effects and parafoveal processing of compound words in natural Chinese reading Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 75, no. 1, pp. 18–29, 2022.

@article{Cui2022,
title = {Predictability effects and parafoveal processing of compound words in natural Chinese reading},
author = {Lei Cui and Chuanli Zang and Xiaochen Xu and Wenxin Zhang and Yuhan Su and Simon P. Liversedge},
doi = {10.1177/17470218211048193},
year = {2022},
date = {2022-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {75},
number = {1},
pages = {18--29},
abstract = {We report a boundary paradigm eye movement experiment to investigate whether the predictability of the second character of a two-character compound word affects how it is processed prior to direct fixation during reading. The boundary was positioned immediately prior to the second character of the target word, which itself was either predictable or unpredictable. The preview was either a pseudocharacter (nonsense preview) or an identity preview. We obtained clear preview effects in all conditions, but more importantly, skipping probability for the second character of the target word and the whole target word from pretarget was greater when it was predictable than when it was not predictable from the preceding context. Interactive effects for later measures on the whole target word (gaze duration and go-past time) were also obtained. These results demonstrate that predictability information from preceding sentential context and information regarding the likely identity of upcoming characters are used concurrently to constrain the nature of lexical processing during natural Chinese reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ruth E. Corps; Charlotte Brooke; Martin J. Pickering

Prediction involves two stages: Evidence from visual-world eye-tracking Journal Article

In: Journal of Memory and Language, vol. 122, pp. 104298, 2022.

@article{Corps2022,
title = {Prediction involves two stages: Evidence from visual-world eye-tracking},
author = {Ruth E. Corps and Charlotte Brooke and Martin J. Pickering},
doi = {10.1016/j.jml.2021.104298},
year = {2022},
date = {2022-01-01},
journal = {Journal of Memory and Language},
volume = {122},
pages = {104298},
publisher = {Elsevier Inc.},
abstract = {Comprehenders often predict what they are going to hear. But do they make the best predictions possible? We addressed this question in three visual-world eye-tracking experiments by asking when comprehenders consider perspective. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress, distractor: hairdryer) objects. In all three experiments, participants rapidly predicted semantic associates of the verb. But participants also predicted consistently; that is, consistent with their beliefs about what the speaker would ultimately say. They predicted consistently from the speaker's perspective in Experiment 1, their own perspective in Experiment 2, and the character's perspective in Experiment 3. This consistent effect occurred later than the associative effect. We conclude that comprehenders consider perspective when predicting, but not from the earliest moments of prediction, consistent with a two-stage account.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alasdair D. F. Clarke; Jessica L. Irons; Warren James; Andrew B. Leber; Amelia R. Hunt

Stable individual differences in strategies within, but not between, visual search tasks Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 75, no. 2, pp. 289–296, 2022.

@article{Clarke2022,
title = {Stable individual differences in strategies within, but not between, visual search tasks},
author = {Alasdair D. F. Clarke and Jessica L. Irons and Warren James and Andrew B. Leber and Amelia R. Hunt},
doi = {10.1177/1747021820929190},
year = {2022},
date = {2022-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {75},
number = {2},
pages = {289--296},
abstract = {A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here, we ask whether an individual's strategy and performance in one search task is correlated with how they perform in the other two. We tested 64 observers and found that even though the test–retest reliability of the tasks was high, an observer's performance and strategy in one task was not predictive of their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search, we therefore need to account not only for differences between individuals but also how individuals interact with the search task and context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alexis Cheviet; Jana Masselink; Eric Koun; Roméo Salemme; Markus Lappe; Caroline Froment-Tilikete; Denis Pélisson

Cerebellar signals drive motor adjustments and visual perceptual changes during forward and backward adaptation of reactive saccades Journal Article

In: Cerebral Cortex, pp. 1–21, 2022.

@article{Cheviet2022,
title = {Cerebellar signals drive motor adjustments and visual perceptual changes during forward and backward adaptation of reactive saccades},
author = {Alexis Cheviet and Jana Masselink and Eric Koun and Roméo Salemme and Markus Lappe and Caroline Froment-Tilikete and Denis Pélisson},
doi = {10.1093/cercor/bhab455},
year = {2022},
date = {2022-01-01},
journal = {Cerebral Cortex},
pages = {1--21},
abstract = {Saccadic adaptation (SA) is a cerebellar-dependent learning of motor commands (MC), which aims at preserving saccade accuracy. Since SA alters visual localization during fixation and even more so across saccades, it could also involve changes of target and/or saccade visuospatial representations, the latter (CDv) resulting from a motor-to-visual transformation (forward dynamics model) of the corollary discharge of the MC. In the present study, we investigated if, in addition to its established role in adaptive adjustment of MC, the cerebellum could contribute to the adaptation-associated perceptual changes. Transfer of backward and forward adaptation to spatial perceptual performance (during ocular fixation and trans-saccadically) was assessed in eight cerebellar patients and eight healthy volunteers. In healthy participants, both types of SA altered MC as well as internal representations of the saccade target and of the saccadic eye displacement. In patients, adaptation-related adjustments of MC and adaptation transfer to localization were strongly reduced relative to healthy participants, unraveling abnormal adaptation-related changes of target and CDv. Importantly, the estimated changes of CDv were totally abolished following forward session but mainly preserved in backward session, suggesting that an internal model ensuring trans-saccadic localization could be located in the adaptation-related cerebellar networks or in downstream networks, respectively.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yi-Ting Chen; Ming-Chou Ho

Eye movement patterns differ while watching captioned videos of second language vs. mathematics lessons Journal Article

In: Learning and Individual Differences, vol. 93, pp. 102106, 2022.

@article{Chen2022,
title = {Eye movement patterns differ while watching captioned videos of second language vs. mathematics lessons},
author = {Yi-Ting Chen and Ming-Chou Ho},
doi = {10.1016/j.lindif.2021.102106},
year = {2022},
date = {2022-01-01},
journal = {Learning and Individual Differences},
volume = {93},
pages = {102106},
publisher = {Elsevier Inc.},
abstract = {Background: Extant eye-tracking studies suggest that foreign-language learners tend to read the native language captions while watching foreign-language videos. However, it remains unclear how the captions affect the learners' eye movements when watching Math videos. Purpose: While watching teaching videos, we seek to determine how the lesson type (English or Math), cognitive load (high or low), and caption type (meaningful, no captions, or meaningless) affect the dwell times and fixation counts on the captions. Methods: One hundred and eighty undergraduate students were randomly and equally assigned to six (2 lesson type × 3 caption type) conditions. Each participant watched two short teaching videos (one low load and one high load). After watching each video, a comprehension test and three self-reported items (fatigue, effort, and difficulty) regarding this particular video were given. Results: We reported more dwell times and fixation counts on the meaningful captions, compared to the meaningless captions and no captions. In the high-load condition, viewers watching an English lesson relied more on the meaningful captions than they did when watching a Math lesson. In the low-load condition, the dwell times and fixation counts on the captions were similar between the English and Math lessons. Finally, the captions did not affect the comprehension test performances after ruling out individual differences in the prior performances of English and Math. Conclusions: English language learning may rely more on the captions than is the case in learning Math. This study provides the direction for designing multimedia teaching materials in the current trend of multimedia teaching.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Frederick H. F. Chan; Hin Suen; Antoni B. Chan; Janet H. Hsiao; Tom J. Barry

The effects of attentional and interpretation biases on later pain outcomes among younger and older adults: A prospective study Journal Article

In: European Journal of Pain, vol. 26, no. 1, pp. 181–196, 2022.

@article{Chan2022,
title = {The effects of attentional and interpretation biases on later pain outcomes among younger and older adults: A prospective study},
author = {Frederick H. F. Chan and Hin Suen and Antoni B. Chan and Janet H. Hsiao and Tom J. Barry},
doi = {10.1002/ejp.1853},
year = {2022},
date = {2022-01-01},
journal = {European Journal of Pain},
volume = {26},
number = {1},
pages = {181--196},
abstract = {Background: Studies examining the effect of biased cognitions on later pain outcomes have primarily focused on attentional biases, leaving the role of interpretation biases largely unexplored. Also, few studies have examined pain-related cognitive biases in elderly persons. The current study aims to fill these research gaps. Methods: Younger and older adults with and without chronic pain (N = 126) completed an interpretation bias task and a free-viewing task of injury and neutral scenes at baseline. Participants' pain intensity and disability were assessed at baseline and at a 6-month follow-up. A machine-learning data-driven approach to analysing eye movement data was adopted. Results: Eye movement analyses revealed two common attentional pattern subgroups for scene-viewing: an “explorative” group and a “focused” group. At baseline, participants with chronic pain endorsed more injury-/illness-related interpretations compared to pain-free controls, but they did not differ in eye movements on scene images. Older adults interpreted illness-related scenarios more negatively compared to younger adults, but there was also no difference in eye movements between age groups. Moreover, negative interpretation biases were associated with baseline but not follow-up pain disability, whereas a focused gaze tendency for injury scenes was associated with follow-up but not baseline pain disability. Additionally, there was an indirect effect of interpretation biases on pain disability 6 months later through attentional bias for pain-related images. Conclusions: The present study provided evidence for pain status and age group differences in injury-/illness-related interpretation biases. Results also revealed distinct roles of interpretation and attentional biases in pain chronicity. Significance: Adults with chronic pain endorsed more injury-/illness-related interpretations than pain-free controls. Older adults endorsed more illness interpretations than younger adults. A more negative interpretation bias indirectly predicted pain disability 6 months later through hypervigilance towards pain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/ejp.1853

Olivia G. Calancie; Donald C. Brien; Jeff Huang; Brian C. Coe; Linda Booij; Sarosh Khalid-Khan; Douglas P. Munoz

Maturation of temporal saccade prediction from childhood to adulthood: Predictive saccades, reduced pupil size, and blink synchronization Journal Article

In: Journal of Neuroscience, vol. 42, no. 1, pp. 69–80, 2022.

@article{Calancie2022,
title = {Maturation of temporal saccade prediction from childhood to adulthood: Predictive saccades, reduced pupil size, and blink synchronization},
author = {Olivia G. Calancie and Donald C. Brien and Jeff Huang and Brian C. Coe and Linda Booij and Sarosh Khalid-Khan and Douglas P. Munoz},
doi = {10.1523/jneurosci.0837-21.2021},
year = {2022},
date = {2022-01-01},
journal = {Journal of Neuroscience},
volume = {42},
number = {1},
pages = {69--80},
abstract = {When presented with a periodic stimulus, humans spontaneously adjust their movements from reacting to predicting the timing of its arrival, but little is known about how this sensorimotor adaptation changes across development. To investigate this, we analyzed saccade behavior in 114 healthy humans (ages 6–24 years) performing the visual metronome task, who were instructed to move their eyes in time with a visual target that alternated between two known locations at a fixed rate, and we compared their behavior to performance in a random task, where target onsets were randomized across five interstimulus intervals (ISIs) and thus the timing of appearance was unknown. Saccades initiated before registration of the visual target, thus in anticipation of its appearance, were labeled predictive [saccade reaction time (SRT) < 90 ms] and saccades that were made in reaction to its appearance were labeled reactive (SRT ≥ 90 ms). Eye-tracking behavior including saccadic metrics (e.g., peak velocity, amplitude), pupil size following saccade to target, and blink behavior all varied as a function of predicting or reacting to periodic targets. Compared with reactive saccades, predictive saccades had a lower peak velocity, a hypometric amplitude, smaller pupil size, and a reduced probability of blink occurrence before target appearance. The percentage of predictive and reactive saccades changed inversely from ages 8–16, at which they reached adult levels of behavior. Differences in predictive saccades for fast and slow target rates are interpreted by differential maturation of cerebellar-thalamic-striatal pathways.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1523/jneurosci.0837-21.2021

Philippa Broadbent; Daniel E. Schoth; Christina Liossi

Association between attentional bias to experimentally induced pain and to pain-related words in healthy individuals: The moderating role of interpretation bias Journal Article

In: Pain, vol. 163, no. 2, pp. 319–333, 2022.

@article{Broadbent2022,
title = {Association between attentional bias to experimentally induced pain and to pain-related words in healthy individuals: The moderating role of interpretation bias},
author = {Philippa Broadbent and Daniel E. Schoth and Christina Liossi},
doi = {10.1097/j.pain.0000000000002318},
year = {2022},
date = {2022-01-01},
journal = {Pain},
volume = {163},
number = {2},
pages = {319--333},
abstract = {Attentional bias to pain-related information may contribute to chronic pain maintenance. It is theoretically predicted that attentional bias to pain-related language derives from attentional bias to painful sensations; however, the complex interconnection between these types of attentional bias has not yet been tested. This study aimed to investigate the association between attentional bias to pain words and attentional bias to the location of pain, as well as the moderating role of pain-related interpretation bias in this association. Fifty-four healthy individuals performed a visual probe task with pain-related and neutral words, during which eye movements were tracked. In a subset of trials, participants were presented with a cold pain stimulus on one hand. Pain-related interpretation and memory biases were also assessed. Attentional bias to pain words and attentional bias to the pain location were not significantly correlated, although the association was significantly moderated by interpretation bias. A combination of pain-related interpretation bias and attentional bias to painful sensations was associated with avoidance of pain words. In addition, first fixation durations on pain words were longer when the pain word and cold pain stimulus were presented on the same side of the body, as compared to on opposite sides. This indicates that congruency between the locations of pain and pain-related information may strengthen attentional bias. Overall, these findings indicate that cognitive biases to pain-related information interact with cognitive biases to somatosensory information. The implications of these findings for attentional bias modification interventions are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1097/j.pain.0000000000002318

Rhona M. Amos; Kilian G. Seeber; Martin J. Pickering

Prediction during simultaneous interpreting: Evidence from the visual-world paradigm Journal Article

In: Cognition, vol. 220, pp. 104987, 2022.

@article{Amos2022,
title = {Prediction during simultaneous interpreting: Evidence from the visual-world paradigm},
author = {Rhona M. Amos and Kilian G. Seeber and Martin J. Pickering},
doi = {10.1016/j.cognition.2021.104987},
year = {2022},
date = {2022-01-01},
journal = {Cognition},
volume = {220},
pages = {104987},
publisher = {Elsevier B.V.},
abstract = {We report the results of an eye-tracking study which used the Visual World Paradigm (VWP) to investigate the time-course of prediction during a simultaneous interpreting task. Twenty-four L1 French professional conference interpreters and twenty-four L1 French professional translators untrained in simultaneous interpretation listened to sentences in English and interpreted them simultaneously into French while looking at a visual scene. Sentences contained a highly predictable word (e.g., The dentist asked the man to open his mouth a little wider). The visual scene comprised four objects, one of which depicted either the target object (mouth; bouche), an English phonological competitor (mouse; souris), a French phonological competitor (cork; bouchon), or an unrelated word (bone; os). We considered 1) whether interpreters and translators predict upcoming nouns during a simultaneous interpreting task, 2) whether interpreters and translators predict the form of these nouns in English and in French and 3) whether interpreters and translators manifest different predictive behaviour. Our results suggest that both interpreters and translators predict upcoming nouns, but neither group predicts the word-form of these nouns. In addition, we did not find significant differences between patterns of prediction in interpreters and translators. Thus, evidence from the visual-world paradigm shows that prediction takes place in simultaneous interpreting, regardless of training and experience. However, we were unable to establish whether word-form was predicted.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cognition.2021.104987

Carlos Alós-Ferrer; Alexander Ritschel

Attention and salience in preference reversals Journal Article

In: Experimental Economics, pp. 1–28, 2022.

@article{AlosFerrer2022,
title = {Attention and salience in preference reversals},
author = {Carlos Alós-Ferrer and Alexander Ritschel},
doi = {10.1007/s10683-021-09740-9},
year = {2022},
date = {2022-01-01},
journal = {Experimental Economics},
pages = {1--28},
publisher = {Springer US},
abstract = {We investigate the implications of Salience Theory for the classical preference reversal phenomenon, where monetary valuations contradict risky choices. It has been stated that one factor behind reversals is that monetary valuations of lotteries are inflated when elicited in isolation, and that they should be reduced if an alternative lottery is present and draws attention. We conducted two preregistered experiments, an online choice study (N = 256) and an eye-tracking study (N = 64), in which we investigated salience and attention in preference reversals, manipulating salience through the presence or absence of an alternative lottery during evaluations. We find that the alternative lottery draws attention, and that fixations on that lottery influence the evaluation of the target lottery as predicted by Salience Theory. The effect, however, is of a modest magnitude and fails to translate into an effect on preference reversal rates in either experiment. We also use transitions (eye movements) across outcomes of different lotteries to study attention on the states of the world underlying Salience Theory, but we find no evidence that larger salience results in more transitions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s10683-021-09740-9

Emily J. Allen; Ghislain St-Yves; Yihan Wu; Jesse L. Breedlove; Jacob S. Prince; Logan T. Dowdle; Matthias Nau; Brad Caron; Franco Pestilli; Ian Charest; J. Benjamin Hutchinson; Thomas Naselaris; Kendrick Kay

A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence Journal Article

In: Nature Neuroscience, vol. 25, no. 1, pp. 116–126, 2022.

@article{Allen2022,
title = {A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence},
author = {Emily J. Allen and Ghislain St-Yves and Yihan Wu and Jesse L. Breedlove and Jacob S. Prince and Logan T. Dowdle and Matthias Nau and Brad Caron and Franco Pestilli and Ian Charest and J. Benjamin Hutchinson and Thomas Naselaris and Kendrick Kay},
doi = {10.1038/s41593-021-00962-x},
year = {2022},
date = {2022-01-01},
journal = {Nature Neuroscience},
volume = {25},
number = {1},
pages = {116--126},
publisher = {Springer US},
abstract = {Extensive sampling of neural activity during rich cognitive phenomena is critical for robust understanding of brain function. Here we present the Natural Scenes Dataset (NSD), in which high-resolution functional magnetic resonance imaging responses to tens of thousands of richly annotated natural scenes were measured while participants performed a continuous recognition task. To optimize data quality, we developed and applied novel estimation and denoising techniques. Simple visual inspections of the NSD data reveal clear representational transformations along the ventral visual pathway. Further exemplifying the inferential power of the dataset, we used NSD to build and train deep neural network models that predict brain activity more accurately than state-of-the-art models from computer vision. NSD also includes substantial resting-state and diffusion data, enabling network neuroscience perspectives to constrain and enhance models of perception and memory. Given its unprecedented scale, quality and breadth, NSD opens new avenues of inquiry in cognitive neuroscience and artificial intelligence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41593-021-00962-x

2021

Delia A. Gheorghe; Muriel T. N. Panouillères; Nicholas D. Walsh

Investigating the effects of cerebellar transcranial direct current stimulation on saccadic adaptation and cortisol response Journal Article

In: Cerebellum and Ataxias, vol. 8, no. 1, pp. 1–11, 2021.

@article{Gheorghe2021,
title = {Investigating the effects of cerebellar transcranial direct current stimulation on saccadic adaptation and cortisol response},
author = {Delia A. Gheorghe and Muriel T. N. Panouillères and Nicholas D. Walsh},
doi = {10.1186/s40673-020-00124-y},
year = {2021},
date = {2021-12-01},
journal = {Cerebellum and Ataxias},
volume = {8},
number = {1},
pages = {1--11},
publisher = {BioMed Central Ltd},
abstract = {Background: Transcranial Direct Current Stimulation (tDCS) over the prefrontal cortex has been shown to modulate subjective, neuronal and neuroendocrine responses, particularly in the context of stress processing. However, it is currently unknown whether tDCS stimulation over other brain regions, such as the cerebellum, can similarly affect the stress response. Despite increasing evidence linking the cerebellum to stress-related processing, no studies have investigated the hormonal and behavioural effects of cerebellar tDCS. Methods: This study tested the hypothesis of a cerebellar tDCS effect on mood, behaviour and cortisol. To do this we employed a single-blind, sham-controlled design to measure performance on a cerebellar-dependent saccadic adaptation task, together with changes in cortisol output and mood, during online anodal and cathodal stimulation. Forty-five participants were included in the analysis. Stimulation groups were matched on demographic variables, potential confounding factors known to affect cortisol levels, mood and a number of personality characteristics. Results: Results showed that tDCS polarity did not affect cortisol levels or subjective mood, but did affect behaviour. Participants receiving anodal stimulation showed an 8.4% increase in saccadic adaptation, which was significantly larger compared to the cathodal group (1.6%). Conclusion: The stimulation effect on saccadic adaptation contributes to the current body of literature examining the mechanisms of cerebellar stimulation on associated function. We conclude that further studies are needed to understand whether and how cerebellar tDCS may modulate stress reactivity under challenge conditions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1186/s40673-020-00124-y

Sarah Chabal; Sayuri Hayakawa; Viorica Marian

How a picture becomes a word: Individual differences in the development of language-mediated visual search Journal Article

In: Cognitive Research: Principles and Implications, vol. 6, no. 2, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Chabal2021,
title = {How a picture becomes a word: Individual differences in the development of language-mediated visual search},
author = {Sarah Chabal and Sayuri Hayakawa and Viorica Marian},
doi = {10.1186/s41235-020-00268-9},
year = {2021},
date = {2021-12-01},
journal = {Cognitive Research: Principles and Implications},
volume = {6},
number = {2},
pages = {1--10},
publisher = {Springer International Publishing},
abstract = {Over the course of our lifetimes, we accumulate extensive experience associating the things that we see with the words we have learned to describe them. As a result, adults engaged in a visual search task will often look at items with labels that share phonological features with the target object, demonstrating that language can become activated even in non-linguistic contexts. This highly interactive cognitive system is the culmination of our linguistic and visual experiences—and yet, our understanding of how the relationship between language and vision develops remains limited. The present study explores the developmental trajectory of language-mediated visual search by examining whether children can be distracted by linguistic competitors during a non-linguistic visual search task. Though less robust compared to what has been previously observed with adults, we find evidence of phonological competition in children as young as 8 years old. Furthermore, the extent of language activation is predicted by individual differences in linguistic, visual, and domain-general cognitive abilities, with the greatest phonological competition observed among children with strong language abilities combined with weaker visual memory and inhibitory control. We propose that linguistic expertise is fundamental to the development of language-mediated visual search, but that the rate and degree of automatic language activation depends on interactions among a broader network of cognitive abilities.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1186/s41235-020-00268-9

Jasmine R. Aziz; Samantha R. Good; Raymond M. Klein; Gail A. Eskes

Role of aging and working memory in performance on a naturalistic visual search task Journal Article

In: Cortex, vol. 136, pp. 28–40, 2021.

@article{Aziz2021,
title = {Role of aging and working memory in performance on a naturalistic visual search task},
author = {Jasmine R. Aziz and Samantha R. Good and Raymond M. Klein and Gail A. Eskes},
doi = {10.1016/j.cortex.2020.12.003},
year = {2021},
date = {2021-12-01},
journal = {Cortex},
volume = {136},
pages = {28--40},
publisher = {Elsevier Ltd},
abstract = {Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18–35 yrs) and older (n = 48; aged 55–78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cortex.2020.12.003

Aaron Veldre; Roslyn Wong; Sally Andrews

Reading proficiency predicts the extent of the right, but not left, perceptual span in older readers Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 1, pp. 18–26, 2021.

Abstract | Links | BibTeX

@article{Veldre2021a,
title = {Reading proficiency predicts the extent of the right, but not left, perceptual span in older readers},
author = {Aaron Veldre and Roslyn Wong and Sally Andrews},
doi = {10.3758/s13414-020-02185-x},
year = {2021},
date = {2021-11-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {1},
pages = {18--26},
publisher = {Springer},
abstract = {The gaze-contingent moving-window paradigm was used to assess the size and symmetry of the perceptual span in older readers. The eye movements of 49 cognitively intact older adults (60–88 years of age) were recorded as they read sentences varying in difficulty, and the availability of letter information to the right and left of fixation was manipulated. To reconcile discrepancies in previous estimates of the perceptual span in older readers, individual differences in written language proficiency were assessed with tests of vocabulary, reading comprehension, reading speed, spelling ability, and print exposure. The results revealed that higher proficiency older adults extracted information up to 15 letter spaces to the right of fixation, while lower proficiency readers showed no additional benefit beyond 9 letters to the right. However, all readers showed improvements to reading with the availability of up to 9 letters to the left—confirming previous evidence of reduced perceptual span asymmetry in older readers. The findings raise questions about whether the source of age-related changes in parafoveal processing lies in the adoption of a risky reading strategy involving an increased propensity to both guess upcoming words and make corrective regressions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
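The gaze-contingent moving-window manipulation described in this abstract can be sketched in a few lines. The function below is a hypothetical illustration (the name, mask character, and span parameters are invented here), not the authors' display code, which would redraw the mask on every gaze sample from the eye tracker:

```python
def apply_moving_window(text, fixation_index, left_span, right_span, mask_char="x"):
    """Mask letters outside a window of left_span/right_span characters
    around the currently fixated character. Spaces are preserved so that
    word boundaries stay visible, as in typical moving-window displays."""
    out = []
    for i, ch in enumerate(text):
        inside = (fixation_index - left_span) <= i <= (fixation_index + right_span)
        out.append(ch if (inside or ch == " ") else mask_char)
    return "".join(out)

# A 2-left / 2-right window around character index 4 ("q"):
print(apply_moving_window("the quick brown fox", 4, 2, 2))
# → "xxe quixx xxxxx xxx"
```

In the study itself, asymmetric windows (e.g., 9 letters left vs. 15 letters right) would correspond to different `left_span`/`right_span` values.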

Mikael Rubin; Michael J. Telch

Pupillary response to affective voices: Physiological responsivity and posttraumatic stress disorder Journal Article

In: Journal of Traumatic Stress, vol. 34, no. 1, pp. 182–189, 2021.

Abstract | Links | BibTeX

@article{Rubin2021a,
title = {Pupillary response to affective voices: Physiological responsivity and posttraumatic stress disorder},
author = {Mikael Rubin and Michael J. Telch},
doi = {10.1002/jts.22574},
year = {2021},
date = {2021-02-01},
journal = {Journal of Traumatic Stress},
volume = {34},
number = {1},
pages = {182--189},
abstract = {Posttraumatic stress disorder (PTSD) is related to dysfunctional emotional processing, thus motivating the search for physiological indices that can elucidate this process. Toward this aim, we compared pupillary response patterns in response to angry and fearful auditory stimuli among 99 adults, some with PTSD (n = 14), some trauma-exposed without PTSD (TE; n = 53), and some with no history of trauma exposure (CON; n = 32). We hypothesized that individuals with PTSD would show more pupillary response to angry and fearful auditory stimuli compared to those in the TE and CON groups. Among participants who had experienced a traumatic event, we explored the association between PTSD symptoms and pupillary response; contrary to our prediction, individuals with PTSD displayed the least pupillary response to fearful auditory stimuli compared to those in the TE},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
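Trial-level pupillary responses of the kind compared in this study are commonly quantified with subtractive baseline correction (mean pre-stimulus pupil size subtracted from the post-onset trace). The snippet below is a generic sketch of that convention, not necessarily the preprocessing used in this paper:

```python
import numpy as np

def baseline_corrected_dilation(trace, n_baseline_samples):
    # Subtractive baseline correction: subtract the mean of the
    # pre-stimulus samples from the whole pupil-size trace.
    baseline = np.mean(trace[:n_baseline_samples])
    return np.asarray(trace, dtype=float) - baseline

# Peak dilation relative to a 2-sample pre-stimulus baseline:
trace = np.array([2.0, 2.0, 2.4, 3.0, 2.6])
corrected = baseline_corrected_dilation(trace, 2)
peak = corrected.max()  # 1.0
```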

Ariel Zylberberg

Decision prioritization and causal reasoning in decision hierarchies Journal Article

In: PLoS Computational Biology, vol. 17, no. 12, pp. 1–39, 2021.

Abstract | Links | BibTeX

@article{Zylberberg2021,
title = {Decision prioritization and causal reasoning in decision hierarchies},
author = {Ariel Zylberberg},
doi = {10.1371/journal.pcbi.1009688},
year = {2021},
date = {2021-01-01},
journal = {PLoS Computational Biology},
volume = {17},
number = {12},
pages = {1--39},
abstract = {From cooking a meal to finding a route to a destination, many real-life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with 10^7 latent states), participants were able to plan efficiently in the task. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying human ability to reason over decision hierarchies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Inbal Ziv; Yoram S. Bonneh

Oculomotor inhibition during smooth pursuit and its dependence on contrast sensitivity Journal Article

In: Journal of Vision, vol. 21, no. 2, pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Ziv2021,
title = {Oculomotor inhibition during smooth pursuit and its dependence on contrast sensitivity},
author = {Inbal Ziv and Yoram S. Bonneh},
doi = {10.1167/jov.21.2.12},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {2},
pages = {1--20},
abstract = {Our eyes are never still, but tend to "freeze" in response to stimulus onset. This effect is termed "oculomotor inhibition" (OMI); its magnitude and time course depend on the stimulus parameters, attention, and expectation. We previously showed that the time course and duration of microsaccade and spontaneous eye-blink inhibition provide an involuntary measure of low-level visual properties such as contrast sensitivity during fixation. We investigated whether this stimulus-dependent inhibition also occurs during smooth pursuit, for both the catch-up saccades and the pursuit itself. Observers followed a target with continuous back-and-forth horizontal motion while a Gabor patch was briefly flashed centrally with varied spatial frequency and contrast. Catch-up saccades of the size of microsaccades had a similar pattern of inhibition as microsaccades during fixation, with stronger inhibition onset and faster inhibition release for more salient stimuli. Moreover, a similar stimulus dependency of inhibition was shown for pursuit latencies and peak velocity. Additionally, microsaccade latencies at inhibition release, peak pursuit velocities, and latencies at minimum pursuit velocity were correlated with contrast sensitivity. We demonstrated the generality of OMI to smooth pursuit for both microsaccades and the pursuit itself and its close relation to the low-level processes that define saliency, such as contrast sensitivity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Kristin Marie Zimmermann; Kirsten Daniela Schmidt; Franziska Gronow; Jens Sommer; Frank Leweke; Andreas Jansen

Seeing things differently: Gaze shapes neural signal during mentalizing according to emotional awareness Journal Article

In: NeuroImage, vol. 238, pp. 1–14, 2021.

Abstract | Links | BibTeX

@article{Zimmermann2021,
title = {Seeing things differently: Gaze shapes neural signal during mentalizing according to emotional awareness},
author = {Kristin Marie Zimmermann and Kirsten Daniela Schmidt and Franziska Gronow and Jens Sommer and Frank Leweke and Andreas Jansen},
doi = {10.1016/j.neuroimage.2021.118223},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {238},
pages = {1--14},
publisher = {Elsevier Inc.},
abstract = {Studies on social cognition often use complex visual stimuli to assess neural processes attributed to abilities like “mentalizing” or “Theory of Mind” (ToM). During the processing of these stimuli, eye gaze, however, shapes neural signal patterns. Individual differences in neural operations on social cognition may therefore be obscured if individuals' gaze behavior differs systematically. These obstacles can be overcome by the combined analysis of neural signal and natural viewing behavior. Here, we combined functional magnetic resonance imaging (fMRI) with eye-tracking to examine effects of unconstrained gaze on neural ToM processes in healthy individuals with differing levels of emotional awareness, i.e. alexithymia. First, as previously described for emotional tasks, people with higher alexithymia levels look less at eyes in both ToM and task-free viewing contexts. Further, we find that neural ToM processes are not affected by individual differences in alexithymia per se. Instead, depending on alexithymia levels, gaze on critical stimulus aspects reversely shapes the signal in medial prefrontal cortex (MPFC) and anterior temporoparietal junction (TPJ) as distinct nodes of the ToM system. These results emphasize that natural selective attention affects fMRI patterns well beyond the visual system. Our study implies that, whenever using a task with multiple degrees of freedom in scan paths, ignoring the latter might obscure important conclusions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yijing Zhuang; Li Gu; Jingchang Chen; Zixuan Xu; Lily Y. L. Chan; Lei Feng; Qingqing Ye; Shenglan Zhang; Jin Yuan; Jinrong Li

The integration of eye tracking responses for the measurement of contrast sensitivity: A proof of concept study Journal Article

In: Frontiers in Neuroscience, vol. 15, pp. 710578, 2021.

Abstract | Links | BibTeX

@article{Zhuang2021b,
title = {The integration of eye tracking responses for the measurement of contrast sensitivity: A proof of concept study},
author = {Yijing Zhuang and Li Gu and Jingchang Chen and Zixuan Xu and Lily Y. L. Chan and Lei Feng and Qingqing Ye and Shenglan Zhang and Jin Yuan and Jinrong Li},
doi = {10.3389/fnins.2021.710578},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Neuroscience},
volume = {15},
pages = {710578},
abstract = {Contrast sensitivity (CS) is important when assessing functional vision. However, current techniques for assessing CS are not suitable for young children or non-verbal individuals because they require reliable, subjective perceptual reports. This study explored the feasibility of applying eye tracking technology to quantify CS as a first step toward developing a testing paradigm that will not rely on observers' behavioral or language abilities. Using a within-subject design, 27 healthy young adults completed CS measures for three spatial frequencies with best-corrected vision and lens-induced optical blur. Monocular CS was estimated using a five-alternative, forced-choice grating detection task. Thresholds were measured using eye movement responses and conventional key-press responses. CS measured using eye movements compared well with results obtained using key-press responses [Pearson's r (best-corrected) = 0.966, P < 0.001]. Good test–retest variability was evident for the eye-movement-based measures (Pearson's r = 0.916, P < 0.001) with a coefficient of repeatability of 0.377 log CS across different days. This study provides a proof of concept that eye tracking can be used to automatically record eye gaze positions and accurately quantify human spatial vision. Future work will update this paradigm by incorporating the preferential looking technique into the eye tracking methods, optimizing the CS sampling algorithm and adapting the methodology to broaden its use on infants and non-verbal individuals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
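The test–retest statistics reported in this abstract (a Pearson correlation between sessions and a coefficient of repeatability) can be computed as below. The data are made up for illustration, and the coefficient-of-repeatability formula (1.96 × SD of the between-session differences) is the standard Bland–Altman definition, which may differ in detail from the authors' exact computation:

```python
import numpy as np

# Hypothetical log CS thresholds from two sessions (illustration only)
session1 = np.array([1.50, 1.62, 1.41, 1.75, 1.58, 1.66])
session2 = np.array([1.55, 1.58, 1.48, 1.70, 1.62, 1.60])

# Test-retest reliability: Pearson correlation between sessions
r = np.corrcoef(session1, session2)[0, 1]

# Coefficient of repeatability (Bland-Altman): 1.96 x SD of differences
diff = session1 - session2
cor = 1.96 * np.std(diff, ddof=1)
```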

Ran Zhuang; Yanyan Tu; Xiangzhen Wang; Yanju Ren; Richard A. Abrams

Contributions of gains and losses to attentional capture and disengagement: evidence from the gap paradigm Journal Article

In: Experimental Brain Research, vol. 239, no. 11, pp. 3381–3395, 2021.

Abstract | Links | BibTeX

@article{Zhuang2021a,
title = {Contributions of gains and losses to attentional capture and disengagement: evidence from the gap paradigm},
author = {Ran Zhuang and Yanyan Tu and Xiangzhen Wang and Yanju Ren and Richard A. Abrams},
doi = {10.1007/s00221-021-06210-9},
year = {2021},
date = {2021-01-01},
journal = {Experimental Brain Research},
volume = {239},
number = {11},
pages = {3381--3395},
publisher = {Springer Berlin Heidelberg},
abstract = {It is known that movements of visual attention are influenced by features in a scene, such as colors, that are associated with value or with loss. The present study examined the detailed nature of these attentional effects by employing the gap paradigm—a technique that has been used to separately reveal changes in attentional capture and shifting, and changes in attentional disengagement. In four experiments, participants either looked toward or away from stimuli with colors that had been associated either with gains or with losses. We found that participants were faster to look to colors associated with gains and slower to look away from them, revealing effects of gains on both attentional capture and attentional disengagement. On the other hand, participants were both slower to look to features associated with loss, and faster to look away from such features. The pattern of results suggested, however, that the latter finding was not due to more rapid disengagement from loss-associated colors, but instead to more rapid shifting of attention away from such colors. Taken together, the results reveal a complex pattern of effects of gains and losses on the disengagement, capture, and shifting of visual attention, revealing a remarkable flexibility of the attention system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Qian Zhuang; Xiaoxiao Zheng; Benjamin Becker; Wei Lei; Xiaolei Xu; Keith M. Kendrick

Intranasal vasopressin like oxytocin increases social attention by influencing top-down control, but additionally enhances bottom-up control Journal Article

In: Psychoneuroendocrinology, vol. 133, pp. 105412, 2021.

Abstract | Links | BibTeX

@article{Zhuang2021,
title = {Intranasal vasopressin like oxytocin increases social attention by influencing top-down control, but additionally enhances bottom-up control},
author = {Qian Zhuang and Xiaoxiao Zheng and Benjamin Becker and Wei Lei and Xiaolei Xu and Keith M. Kendrick},
doi = {10.1016/j.psyneuen.2021.105412},
year = {2021},
date = {2021-01-01},
journal = {Psychoneuroendocrinology},
volume = {133},
pages = {105412},
publisher = {Elsevier Ltd},
abstract = {The respective roles of the neuropeptides arginine vasopressin (AVP) and oxytocin (OXT) in modulating social cognition and for therapeutic intervention in autism spectrum disorder have not been fully established. In particular, while numerous studies have demonstrated effects of oxytocin in promoting social attention the role of AVP has not been examined. The present study employed a randomized, double-blind, placebo (PLC)-controlled between-subject design to explore the social- and emotion-specific effects of AVP on both bottom-up and top-down attention processing with a validated emotional anti-saccade eye-tracking paradigm in 80 healthy male subjects (PLC = 40},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yikang Zhu; Lihua Xu; Wenzheng Wang; Qian Guo; Shan Chen; Caidi Zhang; Tianhong Zhang; Xiaochen Hu; Paul Enck; Chunbo Li; Jianhua Sheng; Jijun Wang

Gender differences in attentive bias during social information processing in schizophrenia: An eye-tracking study Journal Article

In: Asian Journal of Psychiatry, vol. 66, pp. 1–6, 2021.

Abstract | Links | BibTeX

@article{Zhu2021b,
title = {Gender differences in attentive bias during social information processing in schizophrenia: An eye-tracking study},
author = {Yikang Zhu and Lihua Xu and Wenzheng Wang and Qian Guo and Shan Chen and Caidi Zhang and Tianhong Zhang and Xiaochen Hu and Paul Enck and Chunbo Li and Jianhua Sheng and Jijun Wang},
doi = {10.1016/j.ajp.2021.102871},
year = {2021},
date = {2021-01-01},
journal = {Asian Journal of Psychiatry},
volume = {66},
pages = {1--6},
publisher = {Elsevier B.V.},
abstract = {Interpersonal communication is a specific scenario in which patients with psychiatric symptoms may manifest different behavioral patterns due to psychopathology. This was a pilot study by eye-tracking technology to investigate attentive bias during social information processing in schizophrenia. We enrolled 39 patients with schizophrenia from Shanghai Mental Health Center and 42 age-, gender- and education-matched healthy controls. The experiment was a free-viewing task, in which pictures with three types of degree of interpersonal communication were shown. We used two measures: 1) initial fixation duration, 2) total gaze duration. The Positive and Negative Syndrome Scale (PANSS) was used to determine symptom severity. The ratio of first fixation duration for pictures of communicating vs. non-communicating persons was significantly lower in patients than in controls (Mann-Whitney U = 512},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
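The group comparison in this abstract reports a Mann–Whitney U statistic. The rank-sum computation behind that statistic can be sketched as follows; this is a minimal pure-Python version (average ranks for ties, no normal approximation or p-value), not the authors' analysis code:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x vs. y: rank the pooled data
    (average ranks for ties), then U1 = R1 - n1*(n1+1)/2."""
    x, y = list(x), list(y)
    pooled = x + y
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values; all get their average rank
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n1 = len(x)
    return sum(ranks[:n1]) - n1 * (n1 + 1) / 2

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # → 0.0 (x entirely below y)
```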

Shengnan Zhu; Yang Zhang; Junli Dong; Lihong Chen; Wenbo Luo

Low-spatial-frequency information facilitates threat detection in a response-specific manner Journal Article

In: Journal of Vision, vol. 21, no. 4, pp. 1–9, 2021.

Abstract | Links | BibTeX

@article{Zhu2021a,
title = {Low-spatial-frequency information facilitates threat detection in a response-specific manner},
author = {Shengnan Zhu and Yang Zhang and Junli Dong and Lihong Chen and Wenbo Luo},
doi = {10.1167/JOV.21.4.8},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {4},
pages = {1--9},
abstract = {The role of different spatial frequency bands in threat detection has been explored extensively. However, most studies use manual responses and the results are mixed. Here, we aimed to investigate the contribution of spatial frequency information to threat detection by using three response types, including manual responses, eye movements, and reaching movements, together with a priming paradigm. The results showed that both saccade and reaching responses were significantly faster to threatening stimuli than to nonthreatening stimuli when primed by low-spatial-frequency gratings rather than by high-spatial-frequency gratings. However, the manual response times to threatening stimuli were comparable to nonthreatening stimuli, irrespective of the spatial frequency content of the primes. The findings provide clear evidence that low-spatial-frequency information can facilitate threat detection in a response-specific manner, possibly through the subcortical magnocellular pathway dedicated to processing threat-related signals, which is automatically prioritized in the oculomotor system and biases behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1167/JOV.21.4.8

Ruomeng Zhu; Mateo Obregón; Hamutal Kreiner; Richard Shillcock

Small temporal asynchronies between the two eyes in binocular reading: Crosslinguistic data and the implications for ocular prevalence Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 7, pp. 3035–3045, 2021.

@article{Zhu2021,
title = {Small temporal asynchronies between the two eyes in binocular reading: Crosslinguistic data and the implications for ocular prevalence},
author = {Ruomeng Zhu and Mateo Obregón and Hamutal Kreiner and Richard Shillcock},
doi = {10.3758/s13414-021-02286-1},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {7},
pages = {3035--3045},
abstract = {We investigated small temporal nonalignments between the two eyes' fixations in the reading of English and Chinese. We define nine different patterns of asynchrony and report their spatial distribution across the screen of text. We interpret them in terms of their implications for ocular prevalence—prioritizing the input from one eye over the input from the other eye in higher perception/cognition, even when binocular fusion has occurred. The data are strikingly similar across the two very different orthographies. Asynchronies, in which one eye begins the fixation earlier and/or ends it later, occur most frequently in the hemifield corresponding to that eye. We propose that such small asynchronies cue higher processing to prioritize the input from that eye, during and after binocular fusion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mengyan Zhu; Xiangling Zhuang; Guojie Ma

Readers extract semantic information from parafoveal two-character synonyms in Chinese reading Journal Article

In: Reading and Writing, vol. 34, no. 3, pp. 773–790, 2021.

@article{Zhu2021c,
title = {Readers extract semantic information from parafoveal two-character synonyms in Chinese reading},
author = {Mengyan Zhu and Xiangling Zhuang and Guojie Ma},
doi = {10.1007/s11145-020-10092-8},
year = {2021},
date = {2021-01-01},
journal = {Reading and Writing},
volume = {34},
number = {3},
pages = {773--790},
publisher = {Springer Netherlands},
abstract = {In Chinese reading, the possibility and mechanism of semantic parafoveal processing has been debated for a long time. To advance the topic, “semantic preview benefit” in Chinese reading was reexamined, with a specific focus on how it is affected by the semantic relatedness between preview and target words at the two-character word level. Eighty critical two-character words were selected as target words. Reading tasks with gaze-contingent boundary paradigms were used to study whether different semantic-relatedness preview conditions influenced parafoveal processing. The data showed that synonyms (the most closely related preview) produced significant preview benefit compared with the semantic-related (non-synonyms) condition, even when plausibility was controlled. This result indicates that the larger extent of semantic preview benefit is mainly caused by the larger semantic relatedness between preview and target words. Moreover, plausibility is not the only cause of semantic preview benefit in Chinese reading. These findings improve the current understanding of the mechanism of parafoveal processing in Chinese reading and the implications on modeling eye movement control are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ying Joey Zhou; Luca Iemi; Jan-Mathijs Schoffelen; Floris P. Lange; Saskia Haegens

Alpha oscillations shape sensory representation and perceptual sensitivity Journal Article

In: Journal of Neuroscience, vol. 41, no. 46, pp. 1–43, 2021.

@article{Zhou2021i,
title = {Alpha oscillations shape sensory representation and perceptual sensitivity},
author = {Ying Joey Zhou and Luca Iemi and Jan-Mathijs Schoffelen and Floris P. Lange and Saskia Haegens},
doi = {10.1523/jneurosci.1114-21.2021},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neuroscience},
volume = {41},
number = {46},
pages = {1--43},
abstract = {Alpha activity (8–14 Hz) is the dominant rhythm in the awake brain and is thought to play an important role in setting the internal state of the brain. Previous work has associated states of decreased alpha power with enhanced neural excitability. However, evidence is mixed on whether and how such excitability enhancement modulates sensory signals of interest versus noise differently, and what, if any, are the consequences for subsequent perception. Here, human subjects (male and female) performed a visual detection task in which we manipulated their decision criteria in a blockwise manner. Although our manipulation led to substantial criterion shifts, these shifts were not reflected in prestimulus alpha band changes. Rather, lower prestimulus alpha power in occipital-parietal areas improved perceptual sensitivity and enhanced information content decodable from neural activity patterns. Additionally, oscillatory alpha phase immediately before stimulus presentation modulated accuracy. Together, our results suggest that alpha band dynamics modulate sensory signals of interest more strongly than noise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yang Zhou; Matthew C. Rosen; Sruthi K. Swaminathan; Nicolas Y. Masse; Ou Zhu; David J. Freedman

Distributed functions of prefrontal and parietal cortices during sequential categorical decisions Journal Article

In: eLife, vol. 10, pp. 1–30, 2021.

@article{Zhou2021h,
title = {Distributed functions of prefrontal and parietal cortices during sequential categorical decisions},
author = {Yang Zhou and Matthew C. Rosen and Sruthi K. Swaminathan and Nicolas Y. Masse and Ou Zhu and David J. Freedman},
doi = {10.7554/ELIFE.58782},
year = {2021},
date = {2021-01-01},
journal = {eLife},
volume = {10},
pages = {1--30},
abstract = {Comparing sequential stimuli is crucial for guiding complex behaviors. To understand mechanisms underlying sequential decisions, we compared neuronal responses in the prefrontal cortex (PFC), the lateral intraparietal (LIP), and medial intraparietal (MIP) areas in monkeys trained to decide whether sequentially presented stimuli were from matching (M) or nonmatching (NM) categories. We found that PFC leads M/NM decisions, whereas LIP and MIP appear more involved in stimulus evaluation and motor planning, respectively. Compared to LIP, PFC showed greater nonlinear integration of currently visible and remembered stimuli, which correlated with the monkeys' M/NM decisions. Furthermore, multi-module recurrent networks trained on the same task exhibited key features of PFC and LIP encoding, including nonlinear integration in the PFC-like module, which was causally involved in the networks' decisions. Network analysis found that nonlinear units have stronger and more widespread connections with input, output, and within-area units, indicating putative circuit-level mechanisms for sequential decisions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yan Bang Zhou; Qiang Li; Hong Zhi Liu

Visual attention and time preference reversals Journal Article

In: Judgment and Decision Making, vol. 16, no. 4, pp. 1010–1038, 2021.

@article{Zhou2021g,
title = {Visual attention and time preference reversals},
author = {Yan Bang Zhou and Qiang Li and Hong Zhi Liu},
year = {2021},
date = {2021-01-01},
journal = {Judgment and Decision Making},
volume = {16},
number = {4},
pages = {1010--1038},
abstract = {Time preference reversal refers to systematic inconsistencies between preferences and bids for intertemporal options. From the two eye-tracking studies (N1 = 60},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xiaomei Zhou; Shruti Vyas; Jinbiao Ning; Margaret C. Moulson

Naturalistic face learning in infants and adults Journal Article

In: Psychological Science, pp. 1–17, 2021.

@article{Zhou2021f,
title = {Naturalistic face learning in infants and adults},
author = {Xiaomei Zhou and Shruti Vyas and Jinbiao Ning and Margaret C. Moulson},
doi = {10.1177/09567976211030630},
year = {2021},
date = {2021-01-01},
journal = {Psychological Science},
pages = {1--17},
abstract = {Everyday face recognition presents a difficult challenge because faces vary naturally in appearance as a result of changes in lighting, expression, viewing angle, and hairstyle. We know little about how humans develop the ability to learn faces despite natural facial variability. In the current study, we provide the first examination of attentional mechanisms underlying adults' and infants' learning of naturally varying faces. Adults (n = 48) and 6- to 12-month-old infants (n = 48) viewed videos of models reading a storybook; the facial appearance of these models was either high or low in variability. Participants then viewed the learned face paired with a novel face. Infants showed adultlike prioritization of face over nonface regions; both age groups fixated the face region more in the high- than low-variability condition. Overall, however, infants showed less ability to resist contextual distractions during learning, which potentially contributed to their lack of discrimination between the learned and novel faces. Mechanisms underlying face learning across natural variability are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Wei Zhou; Aiping Wang; Ming Yan

Eye movements and the perceptual span among skilled Uighur readers Journal Article

In: Vision Research, vol. 182, pp. 20–26, 2021.

@article{Zhou2021e,
title = {Eye movements and the perceptual span among skilled Uighur readers},
author = {Wei Zhou and Aiping Wang and Ming Yan},
doi = {10.1016/j.visres.2021.01.005},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {182},
pages = {20--26},
publisher = {Elsevier Ltd},
abstract = {In the present study, we explored the perceptual span of skilled Uighur readers during their natural reading of sentences. The Uighur script is based on Arabic letters and it runs horizontally from right to left, offering a test to understand the effect of text direction. We utilized the gaze contingent moving window paradigm, in which legible text was provided only within a window that moved in synchrony with readers' eyes while all other letters were masked. The size of the window was manipulated systematically to determine the smallest size that allowed readers to show normal reading behaviors. Comparisons of window conditions with the baseline condition showed that the Uighur readers reached asymptotic performance in reading speed and gaze duration when windows revealed at least five letters to the right and twelve letters to the left of the currently fixated one. The present study is the first to document the size of the perceptual span in a horizontally leftwards running script. Cross-script comparisons with prior findings suggest that the size of the perceptual span for a certain writing system is likely influenced by its reading direction and visual complexity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Shou Han Zhou; Gerard Loughnane; Redmond O'Connell; Mark A. Bellgrove; Trevor T. J. Chong

Distractors selectively modulate electrophysiological markers of perceptual decisions Journal Article

In: Journal of Cognitive Neuroscience, vol. 33, no. 6, pp. 1020–1031, 2021.

@article{Zhou2021d,
title = {Distractors selectively modulate electrophysiological markers of perceptual decisions},
author = {Shou Han Zhou and Gerard Loughnane and Redmond O'Connell and Mark A. Bellgrove and Trevor T. J. Chong},
doi = {10.1162/jocn_a_01703},
year = {2021},
date = {2021-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {33},
number = {6},
pages = {1020--1031},
abstract = {Current models of perceptual decision-making assume that choices are made after evidence in favor of an alternative accumulates to a given threshold. This process has recently been revealed in human EEG recordings, but an unresolved issue is how these neural mechanisms are modulated by competing, yet task-irrelevant, stimuli. In this study, we tested 20 healthy participants on a motion direction discrimination task. Participants monitored two patches of random dot motion simultaneously presented on either side of fixation for periodic changes in an upward or downward motion, which could occur equiprobably in either patch. On a random 50% of trials, these periods of coherent vertical motion were accompanied by simultaneous task-irrelevant, horizontal motion in the contralateral patch. Our data showed that these distractors selectively increased the amplitude of early target selection responses over scalp sites contralateral to the distractor stimulus, without impacting on responses ipsilateral to the distractor. Importantly, this modulation mediated a decrement in the subsequent buildup rate of a neural signature of evidence accumulation and accounted for a slowing of RTs. These data offer new insights into the functional interactions between target selection and evidence accumulation signals, and their susceptibility to task-irrelevant distractors. More broadly, these data neurally inform future models of perceptual decision-making by highlighting the influence of early processing of competing stimuli on the accumulation of perceptual evidence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Peng Zhou; Jiawei Shi; Likan Zhan

Real-time comprehension of garden-path constructions by preschoolers: A Mandarin perspective Journal Article

In: Applied Psycholinguistics, vol. 42, no. 1, pp. 181–205, 2021.

@article{Zhou2021c,
title = {Real-time comprehension of garden-path constructions by preschoolers: A Mandarin perspective},
author = {Peng Zhou and Jiawei Shi and Likan Zhan},
doi = {10.1017/S0142716420000697},
year = {2021},
date = {2021-01-01},
journal = {Applied Psycholinguistics},
volume = {42},
number = {1},
pages = {181--205},
abstract = {The present study investigated whether 4- and 5-year-old Mandarin-speaking children are able to process garden-path constructions in real time when the working memory burden associated with revision and reanalysis is kept to minimum. In total, 25 4-year-olds, 25 5-year-olds, and 30 adults were tested using the visual-world paradigm of eye tracking. The obtained eye gaze patterns reflect that the 4- and 5-year-olds, like the adults, committed to an initial misinterpretation and later successfully revised their initial interpretation. The findings show that preschool children are able to revise and reanalyze their initial commitment and then arrive at the correct interpretation using the later-encountered linguistic information when processing the garden-path constructions in the current study. The findings also suggest that although the 4-year-olds successfully processed the garden-path constructions in real time, they were not as effective as the 5-year-olds and the adults in revising and reanalyzing their initial mistaken interpretation when later encountering the critical linguistic cue. Taken together, our findings call for a fine-grained model of child sentence processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Junyi Zhou

Differences on prosaccade task in skilled and less skilled female adolescent soccer players Journal Article

In: Frontiers in Psychology, vol. 12, pp. 711420, 2021.

@article{Zhou2021b,
title = {Differences on prosaccade task in skilled and less skilled female adolescent soccer players},
author = {Junyi Zhou},
doi = {10.3389/fpsyg.2021.711420},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Psychology},
volume = {12},
pages = {711420},
abstract = {Although the relationship between cognitive processes and saccadic eye movements has been outlined, the relationship between specific cognitive processes underlying saccadic eye movements and skill level of soccer players remains unclear. The present study used the prosaccade task as a tool to investigate the difference in saccadic eye movements in skilled and less skilled Chinese female adolescent soccer players. Fifty-six healthy female adolescent soccer players (range: 14–18 years, mean age: 16.5 years) from Fujian Youth Football Training Base (Fujian Province, China) took part in the experiment. In the prosaccade task, participants were instructed to fixate at the cross at the center of the screen as long as the target appeared peripherally. They were told to saccade to the target as quickly and accurately as possible once it appeared. The results indicated that skilled soccer players exhibited shorter saccade latency (p = 0.031), decreased variability of saccade latency (p = 0.013), and higher spatial accuracy of saccade (p = 0.032) than their less skilled counterparts. The shorter saccade latency and decreased variability of saccade latency may imply that the attentional system of skilled soccer players is superior, which leads to smaller attention fluctuation and less attentional lapse. Additionally, higher spatial accuracy of saccade may imply potential structural differences in the brain underlying saccadic eye movement between skilled and less skilled soccer players. More importantly, the results of the present study demonstrated that soccer players' cognitive capacities vary as a function of their skill levels. The limitations of the present study and future directions of research were discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hong Zhou; Xia Wang; Di Ma; Yanyan Jiang; Fan Li; Yunchuang Sun; Jing Chen; Wei Sun; Elmar H. Pinkhardt; Bernhard Landwehrmeyer; Albert Ludolph; Lin Zhang; Guiping Zhao; Zhaoxia Wang

The differential diagnostic value of a battery of oculomotor evaluation in Parkinson's Disease and Multiple System Atrophy Journal Article

In: Brain and Behavior, vol. 11, no. 7, pp. 1–10, 2021.

@article{Zhou2021a,
title = {The differential diagnostic value of a battery of oculomotor evaluation in Parkinson's Disease and Multiple System Atrophy},
author = {Hong Zhou and Xia Wang and Di Ma and Yanyan Jiang and Fan Li and Yunchuang Sun and Jing Chen and Wei Sun and Elmar H. Pinkhardt and Bernhard Landwehrmeyer and Albert Ludolph and Lin Zhang and Guiping Zhao and Zhaoxia Wang},
doi = {10.1002/brb3.2184},
year = {2021},
date = {2021-01-01},
journal = {Brain and Behavior},
volume = {11},
number = {7},
pages = {1--10},
abstract = {Introduction: Clinical diagnosis of Parkinsonism is still challenging, and the diagnostic biomarkers of Multiple System Atrophy (MSA) are scarce. This study aimed to investigate the diagnostic value of combined eye movement tests in patients with Parkinson's disease (PD) and those with MSA. Methods: We enrolled 96 PD patients, 33 MSA patients (18 with MSA-P and 15 with MSA-C), and 40 healthy controls who had their horizontal ocular movements measured. The multiple-step pattern of memory-guided saccade (MGS), the hypometria/hypermetria of the reflexive saccade, the abnormal saccade in smooth pursuit movement (SPM), gaze-evoked nystagmus, and square-wave jerks in the gaze-holding test were qualitatively analyzed. The reflexive saccadic parameters and gain of SPM were also quantitatively analyzed. Results: The MGS test showed that patients with either diagnosis had a significantly higher incidence of the multiple-step pattern compared with controls (68.6% and 65.2% versus 2.5%, p < .05, for PD, MSA, and controls, respectively). The reflexive saccade test showed that MSA patients had a prominently higher incidence of abnormal saccades (63.6%, both hypometria and hypermetria) than PD patients and controls (33.3% and 7.5%, respectively, hypometria) (p < .05). The SPM test showed that PD patients had mildly decreased gain, with 28.1% presenting “saccade intrusions”, and that MSA patients had significantly decreased gain, with 51.5% presenting “catch-up saccades” (p < .05). Only MSA patients showed gaze-evoked nystagmus (24.2%) and square-wave jerks (6.1%) in the gaze-holding test (p < .05). Conclusions: A panel of eye movement tests may help to differentiate PD from MSA. The combined presence of hypometria and hypermetria in saccadic eye movement, the impaired gain of smooth pursuit movement with “catch-up saccades,” gaze-evoked nystagmus, square-wave jerks in the gaze-holding test, and the multiple-step pattern in MGS may provide clues to the diagnosis of MSA.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/brb3.2184

Feng Zhou; X. Jessie Yang; Joost C. F. Winter

Using eye-tracking data to predict situation awareness in real time during takeover transitions in conditionally automated driving Journal Article

In: IEEE Transactions on Intelligent Transportation Systems, pp. 1–12, 2021.

@article{Zhou2021,
title = {Using eye-tracking data to predict situation awareness in real time during takeover transitions in conditionally automated driving},
author = {Feng Zhou and X. Jessie Yang and Joost C. F. Winter},
doi = {10.1109/TITS.2021.3069776},
year = {2021},
date = {2021-01-01},
journal = {IEEE Transactions on Intelligent Transportation Systems},
pages = {1--12},
abstract = {Situation awareness (SA) is critical to improving takeover performance during the transition period from automated driving to manual driving. Although many studies measured SA during or after the driving task, few studies have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree ensemble machine learning model, named LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, in order to understand what factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, which further improved the model performance of LightGBM through feature selection. We standardized SA between 0 and 1 by aggregating three performance measures (i.e., placement, distance, and speed estimation of vehicles with regard to the ego-vehicle) of SA in recreating simulated driving scenarios, after 33 participants viewed 32 videos with six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed other selected machine learning models, having a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a 0.719 correlation coefficient between the predicted SA and the ground truth. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our proposed model provided important implications on how to monitor and predict SA in real time in automated driving using eye-tracking data.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
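The abstract above evaluates the SA predictor with three measures: root-mean-squared error, mean absolute error, and the correlation coefficient between predicted and ground-truth SA. As a minimal stdlib-only sketch of those measures (the function name and sample data are illustrative assumptions, not the authors' code):

```python
import math

def sa_metrics(y_true, y_pred):
    """RMSE, MAE, and Pearson r between ground-truth and predicted
    situation-awareness scores (values assumed scaled to [0, 1])."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mt) ** 2 for t in y_true)
    var_p = sum((p - mp) ** 2 for p in y_pred)
    r = cov / math.sqrt(var_t * var_p)
    return rmse, mae, r
```

On held-out predictions, values in the neighborhood of the abstract's reported RMSE of 0.121, MAE of 0.096, and r of 0.719 would come from exactly these formulas.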

  • doi:10.1109/TITS.2021.3069776

Alexander Zhigalov; Katharina Duecker; Ole Jensen

The visual cortex produces gamma band echo in response to broadband visual flicker Journal Article

In: PLoS Computational Biology, vol. 17, no. 6, pp. 1–24, 2021.

@article{Zhigalov2021,
title = {The visual cortex produces gamma band echo in response to broadband visual flicker},
author = {Alexander Zhigalov and Katharina Duecker and Ole Jensen},
doi = {10.1371/journal.pcbi.1009046},
year = {2021},
date = {2021-01-01},
journal = {PLoS Computational Biology},
volume = {17},
number = {6},
pages = {1--24},
abstract = {The aim of this study is to uncover the network dynamics of the human visual cortex by driving it with a broadband random visual flicker. We here applied a broadband flicker (1–720 Hz) while measuring the MEG and then estimated the temporal response function (TRF) between the visual input and the MEG response. This TRF revealed an early response in the 40–60 Hz gamma range as well as in the 8–12 Hz alpha band. While the gamma band response is novel, the latter has been termed the alpha band perceptual echo. The gamma echo preceded the alpha perceptual echo. The dominant frequency of the gamma echo was subject-specific thereby reflecting the individual dynamical properties of the early visual cortex. To understand the neuronal mechanisms generating the gamma echo, we implemented a pyramidal-interneuron gamma (PING) model that produces gamma oscillations in the presence of constant input currents. Applying a broadband input current mimicking the visual stimulation allowed us to estimate TRF between the input current and the population response (akin to the local field potentials). The TRF revealed a gamma echo that was similar to the one we observed in the MEG data. Our results suggest that the visual gamma echo can be explained by the dynamics of the PING model even in the absence of sustained gamma oscillations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
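The TRF estimation described above relates a broadband, near-white stimulus to the recorded response. For white-noise input, the lag-k cross-correlation between stimulus and response, scaled by the stimulus variance, approximates the impulse response, which is the core of a TRF estimate. A minimal pure-Python sketch of this idea (the toy kernel and function name are illustrative assumptions, not the authors' MEG pipeline):

```python
import random

def estimate_trf(stimulus, response, n_lags):
    """Estimate a temporal response function from a (near-)white
    broadband stimulus: for white noise, the lag-k cross-correlation
    divided by the stimulus variance approximates the impulse response."""
    n = len(response)
    var = sum(s * s for s in stimulus) / n
    trf = []
    for k in range(n_lags):
        acc = sum(response[t] * stimulus[t - k] for t in range(k, n))
        trf.append(acc / (n * var))
    return trf

random.seed(0)
kernel = [0.0, 0.5, 1.0, 0.5, 0.0]  # toy impulse response (illustrative only)
stim = [random.gauss(0.0, 1.0) for _ in range(20000)]
# Response = stimulus convolved with the kernel (noise-free, for clarity).
resp = [sum(kernel[k] * stim[t - k] for k in range(len(kernel)) if t - k >= 0)
        for t in range(len(stim))]
est = estimate_trf(stim, resp, len(kernel))  # recovers kernel approximately
```

With a long enough white-noise stimulus, the estimated lags converge on the true kernel; real TRF analyses typically use regularized regression instead of plain cross-correlation to cope with non-white stimuli and noise.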

  • doi:10.1371/journal.pcbi.1009046

Junming Zheng; Muhammad Waqqas Khan Tarin; Denghui Jiang; Min Li; Jing Ye; Lingyan Chen; Tianyou He; Yushan Zheng

Which ornamental features of bamboo plants will attract the people most? Journal Article

In: Urban Forestry and Urban Greening, vol. 61, pp. 127101, 2021.

@article{Zheng2021b,
title = {Which ornamental features of bamboo plants will attract the people most?},
author = {Junming Zheng and Muhammad Waqqas Khan Tarin and Denghui Jiang and Min Li and Jing Ye and Lingyan Chen and Tianyou He and Yushan Zheng},
doi = {10.1016/j.ufug.2021.127101},
year = {2021},
date = {2021-01-01},
journal = {Urban Forestry and Urban Greening},
volume = {61},
pages = {127101},
publisher = {Elsevier GmbH},
abstract = {Plant structure and architecture have a significant influence on how people interpret them. Bamboo plants have highly ornamental attributes, but the traits that attract people the most are still unknown. Therefore, to assess people's preference for the ornamental features of bamboo plants, eye-tracking measures (fixation count, percentage of dwell time, pupil size, and saccade amplitude) and a questionnaire survey of subjective preference were conducted with ninety college students as participants. The results showed that subjective ratings of stem color, leaf stripes, and stem stripes were significantly positively correlated with fixation count. Pupil size and saccade amplitude for the different ornamental features were not correlated with subjective ratings. According to a random forest model, fixation count was the most influential factor affecting subjective ratings. Based on the integrated eye-tracking measures and subjective ratings, we conclude that people prefer ornamental features such as a green stem, a green stem with irregular yellow stripes or a yellow stem with narrow green stripes, leaves with fewer stripes, normal stem, and tree. In addition, people prefer natural traits, for instance green stem, normal stem, and tree, related to latent conscious belief and evolutionary adaptation. Abnormal traits, such as leaf stripes and stem stripes, attract people's visual attention and interest, increasing the fixation count and the percentage of dwell time. This study has significant implications for landscape experts in the design and maintenance of ornamental bamboo plantations in China as well as in other areas of the world.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ufug.2021.127101

Haiyan Zheng; Xiaoxiao Ying; Xianghang He; Jia Qu; Fang Hou

Defective temporal window of the foveal visual processing in high myopia Journal Article

In: Investigative Ophthalmology & Visual Science, vol. 62, no. 9, pp. 1–11, 2021.

@article{Zheng2021a,
title = {Defective temporal window of the foveal visual processing in high myopia},
author = {Haiyan Zheng and Xiaoxiao Ying and Xianghang He and Jia Qu and Fang Hou},
doi = {10.1167/iovs.62.9.11},
year = {2021},
date = {2021-01-01},
journal = {Investigative Ophthalmology & Visual Science},
volume = {62},
number = {9},
pages = {1--11},
abstract = {PURPOSE. To investigate the temporal characteristics of visual processing at the fovea and the periphery in high myopia. METHODS. Eighteen low (LM, ≤ −0.50 and > −6.00 D) and 18 high myopic (HM, ≤ −6.00 D) participants took part in this study. The contrast thresholds in an orientation discrimination task under various stimulus onset asynchrony (SOA) masking conditions were measured at the fovea and a more peripheral area (7°) for the two groups. An elaborated perceptual template model (ePTM) was fit to the behavioral data for each participant. RESULTS. An analysis of variance with three factors (SOA, degree of myopia and eccentricity) was performed on the threshold data. The interaction between SOA and degree of myopia in the fovea was significant (F (4, 128) = 2.66},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/iovs.62.9.11

Annie Zheng; Jessica A. Church

A developmental eye tracking investigation of cued task switching performance Journal Article

In: Child Development, vol. 92, no. 4, pp. 1652–1672, 2021.

@article{Zheng2021,
title = {A developmental eye tracking investigation of cued task switching performance},
author = {Annie Zheng and Jessica A. Church},
doi = {10.1111/cdev.13478},
year = {2021},
date = {2021-01-01},
journal = {Child Development},
volume = {92},
number = {4},
pages = {1652--1672},
abstract = {Children perform worse than adults on tests of cognitive flexibility, which is a component of executive function. To assess what aspects of a cognitive flexibility task (cued switching) children have difficulty with, investigators tested where eye gaze diverged over age. Eye-tracking was used as a proxy for attention during the preparatory period of each trial in 48 children ages 8–16 years and 51 adults ages 18–27 years. Children fixated more often and longer on the cued rule, and made more saccades between rule and response options. Behavioral performance correlated with gaze location and saccades. Mid-adolescents were similar to adults, supporting the slow maturation of cognitive flexibility. Lower preparatory control and associated lower cognitive flexibility task performance in development may particularly relate to rule processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/cdev.13478

Sainan Zhao; Lin Li; Min Chang; Jingxin Wang; Kevin B Paterson

A further look at ageing and word predictability effects in Chinese reading: Evidence from one-character words Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 74, no. 1, pp. 68–78, 2021.

@article{Zhao2021,
title = {A further look at ageing and word predictability effects in Chinese reading: Evidence from one-character words},
author = {Sainan Zhao and Lin Li and Min Chang and Jingxin Wang and Kevin B Paterson},
doi = {10.1177/1747021820951131},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {74},
number = {1},
pages = {68--78},
abstract = {Older adults are thought to compensate for slower lexical processing by making greater use of contextual knowledge, relative to young adults, to predict words in sentences. Accordingly, compared to young adults, older adults should produce larger contextual predictability effects in reading times and skipping rates for words. Empirical support for this account is nevertheless scarce. Perhaps the clearest evidence to date comes from a recent Chinese study showing larger word predictability effects for older adults in reading times but not skipping rates for two-character words. However, one possibility is that the absence of a word-skipping effect in this experiment was due to the older readers skipping words infrequently because of difficulty processing two-character words parafoveally. We therefore took a further look at this issue, using one-character target words to boost word-skipping. Young (18–30 years) and older (65+ years) adults read sentences containing a target word that was either highly predictable or less predictable from the prior sentence context. Our results replicate the finding that older adults produce larger word predictability effects in reading times but not word-skipping, despite high skipping rates. We discuss these findings in relation to ageing effects on reading in different writing systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/1747021820951131

Yi Zhang; Ke Xu; Zhongling Pi; Jiumin Yang

Instructor's position affects learning from video lectures in Chinese context: an eye-tracking study Journal Article

In: Behaviour and Information Technology, pp. 1–10, 2021.

@article{Zhang2021j,
title = {Instructor's position affects learning from video lectures in Chinese context: an eye-tracking study},
author = {Yi Zhang and Ke Xu and Zhongling Pi and Jiumin Yang},
doi = {10.1080/0144929X.2021.1910731},
year = {2021},
date = {2021-01-01},
journal = {Behaviour and Information Technology},
pages = {1--10},
publisher = {Taylor & Francis},
abstract = {Although more and more online courses use video lectures that feature an instructor and slides, there are few specific guidelines for designing these video lectures. This experiment tested whether the instructor should appear on the screen and whether her position on the screen (left, middle, right of the content on the slides) influenced students. Students were randomly assigned to watch one of four video lectures on the topic of sleep. The results showed that the video lectures with an instructor's presence (regardless of position) motivated students more than the video lecture without an instructor presence did. Learning performance and satisfaction were highest when the instructor appeared on the right side of the screen. Furthermore, eye movement data showed that compared to students in all other conditions, students in the middle condition paid more attention to the instructor and less attention to the learning content, and switched more between instructor and learning content. The findings highlight the positive effects of the instructor appearing on the right side of the screen in video lectures with slides.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/0144929X.2021.1910731

Yan-Bo Zhang; Peng-Chong Wang; Yun Ma; Xiang-Yun Yang; Fan-Qiang Meng; Simon A Broadley; Jing Sun; Zhan-Jiang Li

Using eye movements in the dot-probe paradigm to investigate attention bias in illness anxiety disorder Journal Article

In: World Journal of Psychiatry, vol. 11, no. 3, pp. 73–86, 2021.

@article{Zhang2021i,
title = {Using eye movements in the dot-probe paradigm to investigate attention bias in illness anxiety disorder},
author = {Yan-Bo Zhang and Peng-Chong Wang and Yun Ma and Xiang-Yun Yang and Fan-Qiang Meng and Simon A Broadley and Jing Sun and Zhan-Jiang Li},
doi = {10.5498/wjp.v11.i3.73},
year = {2021},
date = {2021-01-01},
journal = {World Journal of Psychiatry},
volume = {11},
number = {3},
pages = {73--86},
abstract = {BACKGROUND: Illness anxiety disorder (IAD) is a common, distressing, and debilitating condition with the key feature being a persistent conviction of the possibility of having one or more serious or progressive physical disorders. Because eye movements are guided by visual-spatial attention, eye-tracking technology is a comparatively direct, continuous measure of attention direction and speed when stimuli are oriented. Researchers have tried to identify selective visual attention biases by tracking eye movements within dot-probe paradigms because dot-probe paradigm can distinguish these attentional biases more clearly. AIM: To examine the association between IAD and biased processing of illness-related information. METHODS: A case-control study design was used to record eye movements of individuals with IAD and healthy controls while participants viewed a set of pictures from four categories (illness-related, socially threatening, positive, and neutral images). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze that was initially fixated on the picture per image category. RESULTS: The eye movement of the participants in the IAD group was characterized by an avoidance bias in initial orienting to illness-related pictures. There was no evidence of individuals with IAD spending significantly more time viewing illness-related images compared with other images. Patients with IAD had an attention bias at the early stage and overall attentional avoidance. In addition, this study found that patients with significant anxiety symptoms showed attention bias in the late stages of attention processing. CONCLUSION: Illness-related information processing biases appear to be a robust feature of IAD and may have an important role in explaining the etiology and maintenance of the disorder.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.5498/wjp.v11.i3.73

Xinyuan Zhang; Mario Dalmaso; Luigi Castelli; Shimin Fu; Giovanni Galfano

Cross-cultural asymmetries in oculomotor interference elicited by gaze distractors belonging to Asian and White faces Journal Article

In: Scientific Reports, vol. 11, pp. 1–11, 2021.

@article{Zhang2021h,
title = {Cross-cultural asymmetries in oculomotor interference elicited by gaze distractors belonging to Asian and White faces},
author = {Xinyuan Zhang and Mario Dalmaso and Luigi Castelli and Shimin Fu and Giovanni Galfano},
doi = {10.1038/s41598-021-99954-x},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {1--11},
publisher = {Nature Publishing Group UK},
abstract = {The averted gaze of others triggers reflexive attentional orienting in the corresponding direction. This phenomenon can be modulated by many social factors. Here, we used an eye-tracking technique to investigate the role of ethnic membership in a cross-cultural oculomotor interference study. Chinese and Italian participants were required to perform a saccade whose direction might be either congruent or incongruent with the averted-gaze of task-irrelevant faces belonging to Asian and White individuals. The results showed that, for Chinese participants, White faces elicited a larger oculomotor interference than Asian faces. By contrast, Italian participants exhibited a similar oculomotor interference effect for both Asian and White faces. Hence, Chinese participants found it more difficult to suppress eye-gaze processing of White rather than Asian faces. The findings provide converging evidence that social attention can be modulated by social factors characterizing both the face stimulus and the participants. The data are discussed with reference to possible cross-cultural differences in perceived social status.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-021-99954-x

Close

Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu

Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction Journal Article

In: British Journal of Educational Technology, vol. 52, no. 2, pp. 606–618, 2021.

Abstract | Links | BibTeX

@article{Zhang2021k,
title = {Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction},
author = {Xinru Zhang and Zhongling Pi and Chenyu Li and Weiping Hu},
doi = {10.1111/bjet.13045},
year = {2021},
date = {2021-01-01},
journal = {British Journal of Educational Technology},
volume = {52},
number = {2},
pages = {606--618},
abstract = {Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups. Practitioner Notes What is already known about this topic The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort. 
In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/bjet.13045

Xiaoli Zhang; Julie D. Golomb

Neural representations of covert attention across saccades: Comparing pattern similarity to shifting and holding attention during fixation Journal Article

In: eNeuro, vol. 8, no. 2, pp. 1–19, 2021.

Abstract | Links | BibTeX

@article{Zhang2021g,
title = {Neural representations of covert attention across saccades: Comparing pattern similarity to shifting and holding attention during fixation},
author = {Xiaoli Zhang and Julie D. Golomb},
doi = {10.1523/ENEURO.0186-20.2021},
year = {2021},
date = {2021-01-01},
journal = {eNeuro},
volume = {8},
number = {2},
pages = {1--19},
abstract = {We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location (“hold attention”) or shifted attention to another location midway through the trial (“shift attention”). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the “retinotopic attention” condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the “spatiotopic attention” condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention “hold” and “shift” signals across different regions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1523/ENEURO.0186-20.2021

TianHong Zhang; YingYu Yang; LiHua Xu; XiaoChen Tang; YeGang Hu; Xin Xiong; YanYan Wei; HuiRu Cui; YingYing Tang; HaiChun Liu; Tao Chen; Zhi Liu; Li Hui; ChunBo Li; XiaoLi Guo; JiJun Wang

Inefficient integration during multiple facial processing in pre-morbid and early phases of psychosis Journal Article

In: The World Journal of Biological Psychiatry, pp. 1–13, 2021.

Abstract | Links | BibTeX

@article{Zhang2021f,
title = {Inefficient integration during multiple facial processing in pre-morbid and early phases of psychosis},
author = {TianHong Zhang and YingYu Yang and LiHua Xu and XiaoChen Tang and YeGang Hu and Xin Xiong and YanYan Wei and HuiRu Cui and YingYing Tang and HaiChun Liu and Tao Chen and Zhi Liu and Li Hui and ChunBo Li and XiaoLi Guo and JiJun Wang},
doi = {10.1080/15622975.2021.2011402},
year = {2021},
date = {2021-01-01},
journal = {The World Journal of Biological Psychiatry},
pages = {1--13},
publisher = {Taylor & Francis},
abstract = {Objectives: We used eye-tracking to evaluate multiple facial context processing and event-related potential (ERP) to evaluate multiple facial recognition in individuals at clinical high risk (CHR) for psychosis. Methods: In total, 173 subjects (83 CHRs and 90 healthy controls [HCs]) were included and their emotion perception performances were assessed. A total of 40 CHRs and 40 well-matched HCs completed an eye-tracking task where they viewed pictures depicting a person in the foreground, presented as context-free, context-compatible, and context-incompatible. During the two-year follow-up, 26 CHRs developed psychosis, including 17 individuals who developed first-episode schizophrenia (FES). Eighteen well-matched HCs were made to complete the face number detection ERP task with image stimuli of one, two, or three faces. Results: Compared to the HC group, the CHR group showed reduced visual attention to contextual processing when viewing multiple faces. With the increasing complexity of contextual faces, the differences in eye-tracking characteristics also increased. In the ERP task, the N170 amplitude decreased with a higher face number in FES patients, while it increased with a higher face number in HCs. Conclusions: Individuals in the very early phase of psychosis showed facial processing deficits with supporting evidence of different scan paths during context processing and disruption of N170 during multiple facial recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/15622975.2021.2011402

Luming Zhang; Xiaoqin Zhang; Mingliang Xu; Ling Shao

Massive-scale aerial photo categorization by cross-resolution visual perception enhancement Journal Article

In: IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2021.

Abstract | Links | BibTeX

@article{Zhang2021e,
title = {Massive-scale aerial photo categorization by cross-resolution visual perception enhancement},
author = {Luming Zhang and Xiaoqin Zhang and Mingliang Xu and Ling Shao},
doi = {10.1109/TNNLS.2021.3055548},
year = {2021},
date = {2021-01-01},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
pages = {1--14},
abstract = {Categorizing aerial photographs with varied weather/lighting conditions and sophisticated geomorphic factors is a key module in autonomous navigation, environmental evaluation, and so on. Previous image recognizers cannot fulfill this task due to three challenges: 1) localizing visually/semantically salient regions within each aerial photograph in a weakly annotated context due to the unaffordable human resources required for pixel-level annotation; 2) aerial photographs are generally with multiple informative attributes (e.g., clarity and reflectivity), and we have to encode them for better aerial photograph modeling; and 3) designing a cross-domain knowledge transferal module to enhance aerial photograph perception since multiresolution aerial photographs are taken asynchronistically and are mutually complementary. To handle the above problems, we propose to optimize aerial photograph's feature learning by leveraging the low-resolution spatial composition to enhance the deep learning of perceptual features with a high resolution. More specifically, we first extract many BING-based object patches (Cheng et al., 2014) from each aerial photograph. A weakly supervised ranking algorithm selects a few semantically salient ones by seamlessly incorporating multiple aerial photograph attributes. Toward an interpretable aerial photograph recognizer indicative to human visual perception, we construct a gaze shifting path (GSP) by linking the top-ranking object patches and, subsequently, derive the deep GSP feature. Finally, a cross-domain multilabel SVM is formulated to categorize each aerial photograph. It leverages the global feature from low-resolution counterparts to optimize the deep GSP feature from a high-resolution aerial photograph. Comparative results on our compiled million-scale aerial photograph set have demonstrated the competitiveness of our approach. 
Besides, the eye-tracking experiment has shown that our ranking-based GSPs are over 92% consistent with the real human gaze shifting sequences.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1109/TNNLS.2021.3055548

Luming Zhang; Zhigeng Pan; Ling Shao

Semi-supervised perception augmentation for aerial photo topologies understanding Journal Article

In: IEEE Transactions on Image Processing, vol. 30, pp. 7803–7814, 2021.

Abstract | Links | BibTeX

@article{Zhang2021d,
title = {Semi-supervised perception augmentation for aerial photo topologies understanding},
author = {Luming Zhang and Zhigeng Pan and Ling Shao},
doi = {10.1109/TIP.2021.3079820},
year = {2021},
date = {2021-01-01},
journal = {IEEE Transactions on Image Processing},
volume = {30},
pages = {7803--7814},
abstract = {Intelligently understanding the sophisticated topological structures from aerial photographs is a useful technique in aerial image analysis. Conventional methods cannot fulfill this task due to the following challenges: 1) the topology number of an aerial photo increases exponentially with the topology size, which requires a fine-grained visual descriptor to discriminatively represent each topology; 2) identifying visually/semantically salient topologies within each aerial photo in a weakly-labeled context, owing to the unaffordable human resources required for pixel-level annotation; and 3) designing a cross-domain knowledge transferal module to augment aerial photo perception, since multi-resolution aerial photos are taken asynchronistically in practice. To handle the above problems, we propose a unified framework to understand aerial photo topologies, focusing on representing each aerial photo by a set of visually/semantically salient topologies based on human visual perception and further employing them for visual categorization. Specifically, we first extract multiple atomic regions from each aerial photo, and thereby graphlets are built to capture each aerial photo topologically. Then, a weakly-supervised ranking algorithm selects a few semantically salient graphlets by seamlessly encoding multiple image-level attributes. Toward a visualizable and perception-aware framework, we construct gaze shifting path (GSP) by linking the top-ranking graphlets. Finally, we derive the deep GSP representation, and formulate a semi-supervised and cross-domain SVM to partition each aerial photo into multiple categories. The SVM utilizes the global composition from low-resolution counterparts to enhance the deep GSP features from high-resolution aerial photos which are partially-annotated. Extensive visualization results and categorization performance comparisons have demonstrated the competitiveness of our approach.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1109/TIP.2021.3079820

Li Zhang; Guoli Yan; Valerie Benson

The influence of emotional face distractors on attentional orienting in Chinese children with autism spectrum disorder Journal Article

In: PLoS ONE, vol. 16, no. 5, pp. 1–14, 2021.

Abstract | Links | BibTeX

@article{Zhang2021c,
title = {The influence of emotional face distractors on attentional orienting in Chinese children with autism spectrum disorder},
author = {Li Zhang and Guoli Yan and Valerie Benson},
doi = {10.1371/journal.pone.0250998},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {5},
pages = {1--14},
abstract = {The current study examined how emotional faces impact on attentional control at both involuntary and voluntary levels in children with and without autism spectrum disorder (ASD). A non-face single target was either presented in isolation or synchronously with emotional face distractors namely angry, happy and neutral faces. ASD and typically developing children made more erroneous saccades towards emotional distractors relative to neutral distractors in parafoveal and peripheral conditions. Remote distractor effects were observed on saccade latency in both groups regardless of distractor type, whereby time taken to initiate an eye movement to the target was longest in central distractor conditions, followed by parafoveal and peripheral distractor conditions. The remote distractor effect was greater for angry faces compared to happy faces in the ASD group. Proportions of failed disengagement trials from central distractors, for the first saccade, were higher in the angry distractor condition compared with the other two distractor conditions in ASD, and this effect was absent for the typical group. Eye movement results suggest difficulties in disengaging from fixated angry faces in ASD. Atypical disengagement from angry faces at the voluntary level could have consequences for the development of higher-level socio-communicative skills in ASD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0250998

Guangyao Zhang; Binke Yuan; Huimin Hua; Ya Lou; Nan Lin; Xingshan Li

Individual differences in first-pass fixation duration in reading are related to resting-state functional connectivity Journal Article

In: Brain and Language, vol. 213, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Zhang2021,
title = {Individual differences in first-pass fixation duration in reading are related to resting-state functional connectivity},
author = {Guangyao Zhang and Binke Yuan and Huimin Hua and Ya Lou and Nan Lin and Xingshan Li},
doi = {10.1016/j.bandl.2020.104893},
year = {2021},
date = {2021-01-01},
journal = {Brain and Language},
volume = {213},
pages = {1--10},
publisher = {Elsevier Inc.},
abstract = {Although there are considerable individual differences in eye movements during text reading, their neural correlates remain unclear. In this study, we investigated the relationship between the first-pass fixation duration (FPFD) in natural reading and resting-state functional connectivity (RSFC) in the brain. We defined the brain regions associated with early visual processing, word identification, attention shifts, and oculomotor control as seed regions. The results showed that individual FPFDs were positively correlated with individual RSFCs between the early visual network, visual word form area, and eye movement control/dorsal attention network. Our findings provide new evidence on the neural correlates of eye movements in text reading and indicate that individual differences in fixation time may shape the RSFC differences in the brain through the time-on-task effect and the mechanism of Hebbian learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.bandl.2020.104893

Fan Zhang; Zhicheng Lin; Yang Zhang; Ming Zhang

Behavioral evidence for attention selection as entrained synchronization without awareness. Journal Article

In: Journal of Experimental Psychology: General, vol. 150, no. 9, pp. 1–12, 2021.

Abstract | Links | BibTeX

@article{Zhang2021b,
title = {Behavioral evidence for attention selection as entrained synchronization without awareness.},
author = {Fan Zhang and Zhicheng Lin and Yang Zhang and Ming Zhang},
doi = {10.1037/xge0000825},
year = {2021},
date = {2021-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {150},
number = {9},
pages = {1--12},
abstract = {Animal physiological and human neuroimaging studies have established a link between attention and γ-band (30–90 Hz) oscillations and synchronizations. However, a behavioral link between entrained γ-band oscillations and attention has been fraught with technical challenges. In particular, while entrainment at mid-γ band (40–70 Hz) has been claimed to be privileged in evoking attentional modulations without awareness, the effect may be attributed to display artifacts. Here, by exploiting isoluminant chromatic flicker without luminance modulation and not subject to these artifacts, we tested attentional attraction by chromatic flicker too fast to perceive. Awareness of flicker was subjectively and objectively tested with a high-powered design and evaluated with traditional and Bayesian statistics. Across 2 experiments in human participants, we observed—and also replicated—that 30-Hz chromatic flicker outside mid-γ band attracted attention, resulting in a facilitation effect at a 50 ms interstimulus interval (ISI) and an inhibition effect at a 500 ms ISI. The attention test was confirmed to be more sensitive to the cue than the direct cue-localization task was. We further showed that these attention effects were absent for 50-Hz chromatic flicker. These results provide strong direct evidence against a privileged role of mid-γ band in unconscious attention, but are consistent with known cortical responses to chromatic flicker in early visual cortex. Taken together, our findings provide behavioral evidence that entrained synchronization may serve as a mechanism for bottom-up attention selection and that chromatic flicker},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Beizhen Zhang; Janis Ying Ying Kan; Mingpo Yang; Xiaochun Wang; Jiahao Tu; Michael Christopher Dorris

Transforming absolute value to categorical choice in primate superior colliculus during value-based decision making Journal Article

In: Nature Communications, vol. 12, no. 1, pp. 3410, 2021.

Abstract | Links | BibTeX

@article{Zhang2021a,
title = {Transforming absolute value to categorical choice in primate superior colliculus during value-based decision making},
author = {Beizhen Zhang and Janis Ying Ying Kan and Mingpo Yang and Xiaochun Wang and Jiahao Tu and Michael Christopher Dorris},
doi = {10.1038/s41467-021-23747-z},
year = {2021},
date = {2021-01-01},
journal = {Nature Communications},
volume = {12},
number = {1},
pages = {3410},
publisher = {Springer US},
abstract = {Value-based decision making involves choosing from multiple options with different values. Despite extensive studies on value representation in various brain regions, the neural mechanism for how multiple value options are converted to motor actions remains unclear. To study this, we developed a multi-value foraging task with varying menu of items in non-human primates using eye movements that dissociates value and choice, and conducted electrophysiological recording in the midbrain superior colliculus (SC). SC neurons encoded “absolute” value, independent of available options, during late fixation. In addition, SC neurons also represent value threshold, modulated by available options, different from conventional motor threshold. Electrical stimulation of SC neurons biased choices in a manner predicted by the difference between the value representation and the value threshold. These results reveal a neural mechanism directly transforming absolute values to categorical choices within SC, supporting highly efficient value-based decision making critical for real-world economic behaviors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Paul Zerr; Surya Gayet; Floris Esschert; Mitchel Kappen; Zoril Olah; Stefan Van der Stigchel

The development of retro-cue benefits with extensive practice: Implications for capacity estimation and attentional states in visual working memory Journal Article

In: Memory and Cognition, vol. 49, no. 5, pp. 1036–1049, 2021.

Abstract | Links | BibTeX

@article{Zerr2021,
title = {The development of retro-cue benefits with extensive practice: Implications for capacity estimation and attentional states in visual working memory},
author = {Paul Zerr and Surya Gayet and Floris Esschert and Mitchel Kappen and Zoril Olah and Stefan Van der Stigchel},
doi = {10.3758/s13421-021-01138-5},
year = {2021},
date = {2021-01-01},
journal = {Memory and Cognition},
volume = {49},
number = {5},
pages = {1036--1049},
publisher = {Memory \& Cognition},
abstract = {Accessing the contents of visual short-term memory (VSTM) is compromised by information bottlenecks and visual interference between memorization and recall. Retro-cues, displayed after the offset of a memory stimulus and prior to the onset of a probe stimulus, indicate the test item and improve performance in VSTM tasks. It has been proposed that retro-cues aid recall by transferring information from a high-capacity memory store into visual working memory (multiple-store hypothesis). Alternatively, retro-cues could aid recall by redistributing memory resources within the same (low-capacity) working memory store (single-store hypothesis). If retro-cues provide access to a memory store with a capacity exceeding the set size, then, given sufficient training in the use of the retro-cue, near-ceiling performance should be observed. To test this prediction, 10 observers each performed 12 hours across 8 sessions in a retro-cue change-detection task (40,000+ trials total). The results provided clear support for the single-store hypothesis: retro-cue benefits (difference between a condition with and without retro-cues) emerged after a few hundred trials and then remained constant throughout the testing sessions, consistently improving performance by two items, rather than reaching ceiling performance. Surprisingly, we also observed a general increase in performance throughout the experiment in conditions with and without retro-cues, calling into question the generalizability of change-detection tasks in assessing working memory capacity as a stable trait of an observer (data and materials are available at osf.io/9xr82 and github.com/paulzerr/retrocues). In summary, the present findings suggest that retro-cues increase capacity estimates by redistributing memory resources across memoranda within a low-capacity working memory store.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tao Zeng; Yating Mu; Taoyan Zhu

Structural priming from simple arithmetic to Chinese ambiguous structures: evidence from eye movement Journal Article

In: Cognitive Processing, vol. 22, no. 2, pp. 185–207, 2021.

Abstract | Links | BibTeX

@article{Zeng2021a,
title = {Structural priming from simple arithmetic to Chinese ambiguous structures: evidence from eye movement},
author = {Tao Zeng and Yating Mu and Taoyan Zhu},
doi = {10.1007/s10339-020-01003-4},
year = {2021},
date = {2021-01-01},
journal = {Cognitive Processing},
volume = {22},
number = {2},
pages = {185--207},
publisher = {Springer Berlin Heidelberg},
abstract = {This article explores the domain generality of hierarchical representation between linguistic and mathematical cognition by adopting the structural priming paradigm in an eye-tracking reading experiment. The experiment investigated whether simple arithmetic equations with high (e.g., (7 + 2) × 3 + 1)- or low (e.g., 7 + 2 × 3 + 1)- attachment influence language users' interpretation of Chinese ambiguous structures (NP1 + He + NP2 + De + NP3; Quantifier + NP1 + De + NP2; NP1 + Kan/WangZhe + NP2 + AP). On the one hand, behavioral results showed that high-attachment primes led to more high-attachment interpretation, while low-attachment primes led to more low-attachment interpretation. On the other hand, the eye movement data indicated that structural priming was of great help to reduce dwell time on the ambiguous structure. There were structural priming effects from simple arithmetic to three different structures in Chinese, which provided new evidence on the cross-domain priming from simple arithmetic to language. Besides attachment priming effect at global level, online sentence integration at local level was found to be structure-dependent by some differences in eye movement measures. Our results have provided some evidence for the Representational Account.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tao Zeng; Wen Mao; Yarong Gao

An eye-tracking study of structural priming from abstract arithmetic to Chinese structure NP1 + You + NP2 + Hen + AP Journal Article

In: Journal of Psycholinguistic Research, pp. 1–26, 2021.

Abstract | Links | BibTeX

@article{Zeng2021,
title = {An eye-tracking study of structural priming from abstract arithmetic to Chinese structure NP1 + You + NP2 + Hen + AP},
author = {Tao Zeng and Wen Mao and Yarong Gao},
doi = {10.1007/s10936-021-09819-7},
year = {2021},
date = {2021-01-01},
journal = {Journal of Psycholinguistic Research},
pages = {1--26},
publisher = {Springer US},
abstract = {The present study attempted to explore the abstract priming effects from mathematical equations to Mandarin Chinese structure NP1 + You + NP2 + Hen + AP in an on-line comprehension task with the aim to figure out the mechanism that underlying these effects. The results revealed that compared with baseline priming conditions, participants tended to choose more high-attachment options in high-attachment priming conditions and more low-attachment priming options in low-attachment priming conditions. Such difference had reached a significant level, which provided evidence for the shared structural representation across mathematical and linguistic domains. Additionally, the fixations sequences during arithmetic calculations reflected those equations were processed hierarchically and could be extracted in parallel instead of being scanned in a sequentially left-to-right order. Our results have provided some evidence for the Representational Account.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Alessandra Zarcone; Vera Demberg

Interaction of script knowledge and temporal discourse cues in a visual world study Journal Article

In: Discourse Processes, vol. 58, no. 9, pp. 804–819, 2021.

Abstract | Links | BibTeX

@article{Zarcone2021,
title = {Interaction of script knowledge and temporal discourse cues in a visual world study},
author = {Alessandra Zarcone and Vera Demberg},
doi = {10.1080/0163853X.2021.1930807},
year = {2021},
date = {2021-01-01},
journal = {Discourse Processes},
volume = {58},
number = {9},
pages = {804--819},
publisher = {Routledge},
abstract = {There is now a well-established literature showing that people anticipate upcoming concepts and words during language processing. Commonsense knowledge about typical event sequences and verbal selectional preferences can contribute to anticipating what will be mentioned next. We here investigate how temporal discourse connectives (before, after), which signal event ordering along a temporal dimension, modulate predictions for upcoming discourse referents. Our study analyses anticipatory gaze in the visual world and supports the idea that script knowledge, temporal connectives (before eating → menu, appetizer), and the verb's selectional preferences (order → appetizer) jointly contribute to shaping rapid prediction of event participants.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Chuanli Zang; Ying Fu; Xuejun Bai; Guoli Yan; Simon P. Liversedge

Foveal and parafoveal processing of Chinese three-character idioms in reading Journal Article

In: Journal of Memory and Language, vol. 119, pp. 1–15, 2021.

Abstract | Links | BibTeX

@article{Zang2021,
title = {Foveal and parafoveal processing of Chinese three-character idioms in reading},
author = {Chuanli Zang and Ying Fu and Xuejun Bai and Guoli Yan and Simon P. Liversedge},
doi = {10.1016/j.jml.2021.104243},
year = {2021},
date = {2021-01-01},
journal = {Journal of Memory and Language},
volume = {119},
pages = {1--15},
publisher = {Elsevier Inc.},
abstract = {Chinese idioms are likely to be represented and processed as Multi-Constituent Units (MCUs, a multi-word unit with a single lexical representation, see Zang, 2019). Chinese idioms with a 1-character verb and 2-character noun structure are processed foveally, but not parafoveally, as a single lexical unit (Yu et al., 2016), probably because the verb only loosely constrains noun identity. By contrast, Chinese idioms with modifier-noun structure are more likely MCU candidates due to significant modifier constraint over the subsequent noun. We investigated whether idioms of this type are parafoveally and foveally processed as MCUs during natural reading. In Experiment 1, we manipulated phrase type (idiom or matched phrase) and preview of the noun (identity, unrelated character or pseudocharacter) using the boundary paradigm (Rayner, 1975). A larger preview effect occurred for idioms on the modifier with shorter fixations for identical than unrelated and pseudocharacter previews. This suggests idioms are parafoveally processed to a greater extent than matched phrases. In Experiment 2, preview of the modifier and noun of idioms and phrases (identity or pseudocharacter) was orthogonally manipulated (c.f., Cutter, Drieghe & Liversedge, 2014). For identity modifiers, a greater noun preview effect occurred for idioms relative to phrases providing further evidence that modifier-noun idioms are lexicalised MCUs and processed parafoveally as single, unified representations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tania S. Zamuner; Theresa Rabideau; Margarethe Mcdonald; H. Henny Yeung

Developmental change in children's speech processing of auditory and visual cues: An eyetracking study Journal Article

In: Journal of Child Language, pp. 1–25, 2021.

Abstract | Links | BibTeX

@article{Zamuner2021,
title = {Developmental change in children's speech processing of auditory and visual cues: An eyetracking study},
author = {Tania S. Zamuner and Theresa Rabideau and Margarethe Mcdonald and H. Henny Yeung},
doi = {10.1017/s0305000921000684},
year = {2021},
date = {2021-01-01},
journal = {Journal of Child Language},
pages = {1--25},
abstract = {This study investigates how children aged two to eight years ( N = 129) and adults ( N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between apparent successes of visual speech processing in young children in visual-looking tasks, with apparent difficulties of speech processing in older children from explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only) or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier on /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mengxi Yun; Masafumi Nejime; Masayuki Matsumoto

Single-unit recording in awake behaving non-human primates Journal Article

In: Bio-protocol, vol. 11, no. 8, pp. 1–16, 2021.

Abstract | Links | BibTeX

@article{Yun2021,
title = {Single-unit recording in awake behaving non-human primates},
author = {Mengxi Yun and Masafumi Nejime and Masayuki Matsumoto},
doi = {10.21769/BioProtoc.3987},
year = {2021},
date = {2021-01-01},
journal = {Bio-protocol},
volume = {11},
number = {8},
pages = {1--16},
abstract = {Non-human primates (NHPs) have been widely used as a species model in studies to understand higher brain functions in health and disease. These studies employ specifically designed behavioral tasks in which animal behavior is well-controlled, and record neuronal activity at high spatial and temporal resolutions while animals are performing the tasks. Here, we present a detailed procedure to conduct single-unit recording, which fulfils high spatial and temporal resolutions while macaque monkeys (i.e., widely used NHPs) perform behavioral tasks in a well-controlled manner. This procedure was used in our previous study to investigate the dynamics of neuronal activity during economic decision-making by the monkeys. Monkeys' behavior was quantitated by eye position tracking and button press/release detection. By inserting a microelectrode into the brain, with a grid system in reference to magnetic resonance imaging, we precisely recorded the brain regions. Our experimental system permits rigorous investigation of the link between neuronal activity and behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Nicole H. Yuen; Fred Tam; Nathan W. Churchill; Tom A. Schweizer; Simon J. Graham

Driving with distraction: Measuring brain activity and oculomotor behavior using fMRI and eye-tracking Journal Article

In: Frontiers in Human Neuroscience, vol. 15, pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Yuen2021,
title = {Driving with distraction: Measuring brain activity and oculomotor behavior using fMRI and eye-tracking},
author = {Nicole H. Yuen and Fred Tam and Nathan W. Churchill and Tom A. Schweizer and Simon J. Graham},
doi = {10.3389/fnhum.2021.659040},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {15},
pages = {1--20},
abstract = {Introduction: Driving motor vehicles is a complex task that depends heavily on how visual stimuli are received and subsequently processed by the brain. The potential impact of distraction on driving performance is well known and poses a safety concern – especially for individuals with cognitive impairments who may be clinically unfit to drive. The present study is the first to combine functional magnetic resonance imaging (fMRI) and eye-tracking during simulated driving with distraction, providing oculomotor metrics to enhance scientific understanding of the brain activity that supports driving performance. Materials and Methods: As initial work, twelve healthy young, right-handed participants performed turns ranging in complexity, including simple right and left turns without oncoming traffic, and left turns with oncoming traffic. Distraction was introduced as an auditory task during straight driving, and during left turns with oncoming traffic. Eye-tracking data were recorded during fMRI to characterize fixations, saccades, pupil diameter and blink rate. Results: Brain activation maps for right turns, left turns without oncoming traffic, left turns with oncoming traffic, and the distraction conditions were largely consistent with previous literature reporting the neural correlates of simulated driving. When the effects of distraction were evaluated for left turns with oncoming traffic, increased activation was observed in areas involved in executive function (e.g., middle and inferior frontal gyri) as well as decreased activation in the posterior brain (e.g., middle and superior occipital gyri). Whereas driving performance remained mostly unchanged (e.g., turn speed, time to turn, collisions), the oculomotor measures showed that distraction resulted in more consistent gaze at oncoming traffic in a small area of the visual scene; less time spent gazing at off-road targets (e.g., speedometer, rear-view mirror); more time spent performing saccadic eye movements; and decreased blink rate. Conclusion: Oculomotor behavior modulated with driving task complexity and distraction in a manner consistent with the brain activation features revealed by fMRI. The results suggest that eye-tracking technology should be included in future fMRI studies of simulated driving behavior in targeted populations, such as the elderly and individuals with cognitive complaints – ultimately toward developing better technology to assess and enhance fitness to drive.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Introduction: Driving motor vehicles is a complex task that depends heavily on how visual stimuli are received and subsequently processed by the brain. The potential impact of distraction on driving performance is well known and poses a safety concern – especially for individuals with cognitive impairments who may be clinically unfit to drive. The present study is the first to combine functional magnetic resonance imaging (fMRI) and eye-tracking during simulated driving with distraction, providing oculomotor metrics to enhance scientific understanding of the brain activity that supports driving performance. Materials and Methods: As initial work, twelve healthy young, right-handed participants performed turns ranging in complexity, including simple right and left turns without oncoming traffic, and left turns with oncoming traffic. Distraction was introduced as an auditory task during straight driving, and during left turns with oncoming traffic. Eye-tracking data were recorded during fMRI to characterize fixations, saccades, pupil diameter and blink rate. Results: Brain activation maps for right turns, left turns without oncoming traffic, left turns with oncoming traffic, and the distraction conditions were largely consistent with previous literature reporting the neural correlates of simulated driving. When the effects of distraction were evaluated for left turns with oncoming traffic, increased activation was observed in areas involved in executive function (e.g., middle and inferior frontal gyri) as well as decreased activation in the posterior brain (e.g., middle and superior occipital gyri). 
Whereas driving performance remained mostly unchanged (e.g., turn speed, time to turn, collisions), the oculomotor measures showed that distraction resulted in more consistent gaze at oncoming traffic in a small area of the visual scene; less time spent gazing at off-road targets (e.g., speedometer, rear-view mirror); more time spent performing saccadic eye movements; and decreased blink rate. Conclusion: Oculomotor behavior modulated with driving task complexity and distraction in a manner consistent with the brain activation features revealed by fMRI. The results suggest that eye-tracking technology should be included in future fMRI studies of simulated driving behavior in targeted populations, such as the elderly and individuals with cognitive complaints – ultimately toward developing better technology to assess and enhance fitness to drive.

  • doi:10.3389/fnhum.2021.659040
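The abstract above names the oculomotor metrics recorded during fMRI: fixations, saccades, pupil diameter, and blink rate. As a minimal sketch of how such metrics can be derived from a raw gaze trace — not the study's pipeline (EyeLink systems provide their own event parser), with an illustrative velocity threshold and invented function names:

```python
# Minimal velocity-threshold saccade detector and blink-rate estimate for a
# 1-D gaze trace sampled at a fixed rate. Threshold values and names are
# illustrative assumptions, not the study's actual parameters.

def detect_saccades(x_deg, rate_hz, vel_thresh=30.0, min_samples=3):
    """Return (start, end) sample indices of runs where absolute gaze
    velocity exceeds vel_thresh deg/s for at least min_samples samples."""
    dt = 1.0 / rate_hz
    vel = [abs(x_deg[i + 1] - x_deg[i]) / dt for i in range(len(x_deg) - 1)]
    saccades, start = [], None
    for i, v in enumerate(vel):
        if v > vel_thresh and start is None:
            start = i                      # velocity crossed threshold
        elif v <= vel_thresh and start is not None:
            if i - start >= min_samples:   # long enough to count
                saccades.append((start, i))
            start = None
    if start is not None and len(vel) - start >= min_samples:
        saccades.append((start, len(vel)))
    return saccades

def blink_rate_per_min(pupil, rate_hz):
    """Count blinks as runs of lost pupil samples (None), per minute."""
    blinks, in_blink = 0, False
    for p in pupil:
        if p is None and not in_blink:
            blinks, in_blink = blinks + 1, True
        elif p is not None:
            in_blink = False
    return blinks / (len(pupil) / rate_hz / 60.0)
```

Intervals between detected saccades and blinks would then be candidate fixations, from which dwell times on regions such as the speedometer or rear-view mirror could be accumulated.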

Xinger Yu; Timothy D. Hanks; Joy J. Geng

Attentional guidance and match decisions rely on different template information during visual search Journal Article

In: Psychological Science, pp. 1–16, 2021.

Abstract | Links | BibTeX

@article{Yu2021,
title = {Attentional guidance and match decisions rely on different template information during visual search},
author = {Xinger Yu and Timothy D. Hanks and Joy J. Geng},
doi = {10.1177/09567976211032225},
year = {2021},
date = {2021-01-01},
journal = {Psychological Science},
pages = {1--16},
abstract = {When searching for a target object, we engage in a continuous “look-identify” cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/09567976211032225

Lili Yu; Yanping Liu; Erik D. Reichle

A corpus-based versus experimental examination of word- and character-frequency effects in Chinese reading: Theoretical implications for models of reading Journal Article

In: Journal of Experimental Psychology: General, vol. 150, no. 8, pp. 1612–1641, 2021.

Abstract | Links | BibTeX

@article{Yu2021a,
title = {A corpus-based versus experimental examination of word- and character-frequency effects in Chinese reading: Theoretical implications for models of reading},
author = {Lili Yu and Yanping Liu and Erik D. Reichle},
doi = {10.1037/xge0001014},
year = {2021},
date = {2021-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {150},
number = {8},
pages = {1612--1641},
abstract = {Chinese words consist of a variable number of characters that are normally written in continuous lines, without the blank spaces that are used to separate words in most alphabetic writing systems. These conventions raise questions about the relative roles of character versus whole-word processing in word identification, and how words are segmented from strings of characters for the purpose of their identification and saccade targeting. The present article attempts to address these questions by reporting an eye-movement experiment in which 60 participants read a corpus of sentences containing two-character target words that varied in terms of their overall frequency and the frequency of their initial characters. We examine participants' eye movements using both corpus-based statistical models and more standard analyses of our target words. In addition to documenting how key lexical variables influence eye movements and highlighting a few discrepancies between the results obtained using our two statistical approaches, our experiment shows that high-frequency initial characters can actually slow word identification. We discuss the theoretical significance of this finding and others for current models of Chinese reading, and then describe a new computational model of eye-movement control during the reading of Chinese. Finally, we report simulations showing that this model can account for our findings. (PsycInfo Database Record (c) 2020 APA, all rights reserved)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/xge0001014

Seng Bum Michael Yoo; Jiaxin Cindy Tu; Benjamin Yost Hayden

Multicentric tracking of multiple agents by anterior cingulate cortex during pursuit and evasion Journal Article

In: Nature Communications, vol. 12, pp. 1–14, 2021.

Abstract | Links | BibTeX

@article{Yoo2021a,
title = {Multicentric tracking of multiple agents by anterior cingulate cortex during pursuit and evasion},
author = {Seng Bum Michael Yoo and Jiaxin Cindy Tu and Benjamin Yost Hayden},
doi = {10.1038/s41467-021-22195-z},
year = {2021},
date = {2021-01-01},
journal = {Nature Communications},
volume = {12},
pages = {1--14},
publisher = {Springer US},
abstract = {Successful pursuit and evasion require rapid and precise coordination of navigation with adaptive motor control. We hypothesize that the dorsal anterior cingulate cortex (dACC), which communicates bidirectionally with both the hippocampal complex and premotor/motor areas, would serve a mapping role in this process. We recorded responses of dACC ensembles in two macaques performing a joystick-controlled continuous pursuit/evasion task. We find that dACC carries two sets of signals, (1) world-centric variables that together form a representation of the position and velocity of all relevant agents (self, prey, and predator) in the virtual world, and (2) avatar-centric variables, i.e. self-prey distance and angle. Both sets of variables are multiplexed within an overlapping set of neurons. Our results suggest that dACC may contribute to pursuit and evasion by computing and continuously updating a multicentric representation of the unfolding task state, and support the hypothesis that it plays a high-level abstract role in the control of behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41467-021-22195-z

Kyung Yoo; Jeongyeol Ahn; Sang-Hun Lee

The confounding effects of eye blinking on pupillometry, and their remedy Journal Article

In: PLoS ONE, vol. 16, no. 12, pp. 1–32, 2021.

Abstract | Links | BibTeX

@article{Yoo2021,
title = {The confounding effects of eye blinking on pupillometry, and their remedy},
author = {Kyung Yoo and Jeongyeol Ahn and Sang-Hun Lee},
doi = {10.1371/journal.pone.0261463},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {12},
pages = {1--32},
abstract = {Pupillometry, thanks to its strong relationship with cognitive factors and recent advancements in measuring techniques, has become popular among cognitive or neural scientists as a tool for studying the physiological processes involved in mental or neural processes. Despite this growing popularity of pupillometry, the methodological understanding of pupillometry is limited, especially regarding potential factors that may threaten pupillary measurements' validity. Eye blinking can be a factor because it frequently occurs in a manner dependent on many cognitive components and induces a pulse-like pupillary change consisting of constriction and dilation with substantive magnitude and length. We set out to characterize the basic properties of this “blink-locked pupillary response (BPR),” including the shape and magnitude of BPR and their variability across subjects and blinks, as the first step of studying the confounding nature of eye blinking. Then, we demonstrated how the dependency of eye blinking on cognitive factors could confound, via BPR, the pupillary responses that are supposed to reflect the cognitive states of interest. By building a statistical model of how the confounding effects of eye blinking occur, we proposed a probabilistic-inference algorithm of de-confounding raw pupillary measurements and showed that the proposed algorithm selectively removed BPR and enhanced the statistical power of pupillometry experiments. Our findings call for attention to the presence and confounding nature of BPR in pupillometry. The algorithm we developed here can be used as an effective remedy for the confounding effects of BPR on pupillometry.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0261463
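The remedy proposed in this entry is a probabilistic-inference algorithm for removing the blink-locked pupillary response (BPR). For contrast, a far simpler and widely used baseline — explicitly not the authors' method — detects each blink window and linearly interpolates across it, padding the window so that the constriction/dilation flanking the blink is also bridged:

```python
# Hypothetical baseline, NOT the paper's probabilistic-inference algorithm:
# linearly interpolate the pupil trace across each blink, with padding so
# the blink-locked constriction/dilation on either side is bridged too.

def deblink_linear(pupil, blink_spans, pad=2):
    """pupil: list of floats; blink_spans: (start, end) index pairs
    (inclusive). Returns a copy in which each padded blink span is replaced
    by a straight line between the nearest surrounding valid samples."""
    out = list(pupil)
    n = len(out)
    for start, end in blink_spans:
        a = max(start - pad, 0)        # left anchor, padded
        b = min(end + pad, n - 1)      # right anchor, padded
        left, right, span = out[a], out[b], b - a
        for i in range(a + 1, b):
            out[i] = left + (right - left) * (i - a) / span
    return out
```

A fixed pad cannot capture the full length or shape of the BPR, which is one motivation for the model-based de-confounding the paper develops.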

Panpan Yao; Adrian Staub; Xingshan Li

Predictability eliminates neighborhood effects during Chinese sentence reading Journal Article

In: Psychonomic Bulletin & Review, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Yao2021d,
title = {Predictability eliminates neighborhood effects during Chinese sentence reading},
author = {Panpan Yao and Adrian Staub and Xingshan Li},
doi = {10.3758/s13423-021-01966-1},
year = {2021},
date = {2021-01-01},
journal = {Psychonomic Bulletin & Review},
pages = {1--10},
publisher = {Psychonomic Bulletin & Review},
abstract = {Previous research has demonstrated effects of both orthographic neighborhood size and neighbor frequency in word recognition in Chinese. A large neighborhood—where neighborhood size is defined by the number of words that differ from a target word by a single character—appears to facilitate word recognition, while the presence of a higher-frequency neighbor has an inhibitory effect. The present study investigated modulation of these effects by a word's predictability in context. In two eye-movement experiments, the predictability of a target word in each sentence was manipulated. Target words differed in their neighborhood size (Experiment 1) and in whether they had a higher-frequency neighbor (Experiment 2). The study replicated the previously observed effects of neighborhood size and neighbor frequency when the target word was unpredictable, but in both experiments neighborhood effects were absent when the target was predictable. These results suggest that when a word is preactivated by context, the activation of its neighbors may be diminished to such an extent that these neighbors do not effectively compete for selection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13423-021-01966-1
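The neighborhood-size measure this entry relies on — the number of words differing from a target word by a single character — is straightforward to compute over a lexicon. A minimal sketch, using an invented toy lexicon; the study's counts would come from a Chinese corpus lexicon with word frequencies:

```python
# Illustrative orthographic-neighborhood count: two equal-length words are
# neighbors if they differ in exactly one character position. The toy
# lexicon below is invented for illustration only.

def is_neighbor(w1, w2):
    """True if w1 and w2 have equal length and differ in exactly one
    character position."""
    return (len(w1) == len(w2)
            and sum(a != b for a, b in zip(w1, w2)) == 1)

def neighborhood(target, lexicon):
    """All words in the lexicon that are orthographic neighbors of target."""
    return [w for w in lexicon if is_neighbor(target, w)]
```

A word's neighborhood size is then `len(neighborhood(target, lexicon))`, and the neighbor-frequency manipulation in Experiment 2 would additionally check whether any returned neighbor has a higher corpus frequency than the target.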

Panpan Yao; Timothy J. Slattery; Xingshan Li

Sentence context modulates the neighborhood frequency effect in Chinese reading: Evidence from eye movements. Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–40, 2021.

Abstract | Links | BibTeX

@article{Yao2021c,
title = {Sentence context modulates the neighborhood frequency effect in Chinese reading: Evidence from eye movements.},
author = {Panpan Yao and Timothy J. Slattery and Xingshan Li},
doi = {10.1037/xlm0001030},
year = {2021},
date = {2021-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
pages = {1--40},
abstract = {In the current study, we conducted two eye-tracking reading experiments to explore whether sentence context can influence neighbor effects in word recognition during Chinese reading. Chinese readers read sentences in which the targets' orthographic neighbors were either plausible or implausible with the pre-target context. The results revealed that the neighbor effect was influenced by context: the context in the biased condition (where only targets but not neighbors can fit in the pre-target context) evoked a significantly weaker inhibitory neighbor effect than in the neutral condition (where both targets and neighbors can fit in the pre-target context). These results indicate that contextual information can be used to modulate neighbor effects during on-line sentence reading in Chinese.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/xlm0001030

Panpan Yao; Reem Alkhammash; Xingshan Li

Plausibility and syntactic reanalysis in processing novel noun-noun combinations during Chinese reading: evidence from native and non-native speakers Journal Article

In: Scientific Studies of Reading, pp. 1–19, 2021.

Abstract | Links | BibTeX

@article{Yao2021b,
title = {Plausibility and syntactic reanalysis in processing novel noun-noun combinations during Chinese reading: evidence from native and non-native speakers},
author = {Panpan Yao and Reem Alkhammash and Xingshan Li},
doi = {10.1080/10888438.2021.2020796},
year = {2021},
date = {2021-01-01},
journal = {Scientific Studies of Reading},
pages = {1--19},
publisher = {Routledge},
abstract = {We aimed to tackle the question about the time course of plausibility effect in online processing of Chinese nouns in temporarily ambiguous structures, and whether L2ers can immediately use the plausibility information generated from classifier-noun associations in analyzing ambiguous structures. Two eye-tracking experiments were conducted to explore how native Chinese speakers (Experiment 1) and high-proficiency Dutch-Chinese learners (Experiment 2) online process 4-character novel noun-noun combinations in Chinese. In each pair of nominal phrases (Numeral+Classifier+Noun1+Noun2), the plausibility of Classifier-Noun1 varied (plausible vs. implausible) while the whole nominal phrases were always plausible. Results showed that the plausibility of Classifier-Noun1 associations had an immediate effect on Noun1, and a reversed effect on Noun2 for both groups of participants. These findings indicated that plausibility plays an immediate role in incremental semantic integration during online processing of Chinese. Similar to native Chinese speakers, high-proficiency L2ers can also use the plausibility information of classifier-noun associations in syntactic reanalysis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/10888438.2021.2020796

Bo Yao; Jason R. Taylor; Briony Banks; Sonja A. Kotz

Reading direct speech quotes increases theta phase-locking: Evidence for cortical tracking of inner speech? Journal Article

In: NeuroImage, vol. 239, pp. 118313, 2021.

Abstract | Links | BibTeX

@article{Yao2021a,
title = {Reading direct speech quotes increases theta phase-locking: Evidence for cortical tracking of inner speech?},
author = {Bo Yao and Jason R. Taylor and Briony Banks and Sonja A. Kotz},
doi = {10.1016/j.neuroimage.2021.118313},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {239},
pages = {118313},
publisher = {Elsevier Inc.},
abstract = {Growing evidence shows that theta-band (4–7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: “This dress is lovely!”) elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250–500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuroimage.2021.118313

Beier Yao; Martin Rolfs; Christopher McLaughlin; Emily L. Isenstein; Sylvia B. Guillory; Hannah Grosman; Deborah A. Kashy; Jennifer H. Foss-Feig; Katharine N. Thakkar

Oculomotor corollary discharge signaling is related to repetitive behavior in children with autism spectrum disorder Journal Article

In: Journal of Vision, vol. 21, no. 8, pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Yao2021,
title = {Oculomotor corollary discharge signaling is related to repetitive behavior in children with autism spectrum disorder},
author = {Beier Yao and Martin Rolfs and Christopher McLaughlin and Emily L. Isenstein and Sylvia B. Guillory and Hannah Grosman and Deborah A. Kashy and Jennifer H. Foss-Feig and Katharine N. Thakkar},
doi = {10.1167/jov.21.8.9},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {8},
pages = {1--20},
abstract = {Corollary discharge (CD) signals are “copies” of motor signals sent to sensory regions that allow animals to adjust sensory consequences of self-generated actions. Autism spectrum disorder (ASD) is characterized by sensory and motor deficits, which may be underpinned by altered CD signaling. We evaluated oculomotor CD using the blanking task, which measures the influence of saccades on visual perception, in 30 children with ASD and 35 typically developing (TD) children. Participants were instructed to make a saccade to a visual target. Upon saccade initiation, the presaccadic target disappeared and reappeared to the left or right of the original position. Participants indicated the direction of},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/jov.21.8.9

Jiumin Yang; Yi Zhang; Zhongling Pi; Yaohui Xie

Students' achievement motivation moderates the effects of interpolated pre-questions on attention and learning from video lectures Journal Article

In: Learning and Individual Differences, vol. 91, pp. 1–9, 2021.

Abstract | Links | BibTeX

@article{Yang2021,
title = {Students' achievement motivation moderates the effects of interpolated pre-questions on attention and learning from video lectures},
author = {Jiumin Yang and Yi Zhang and Zhongling Pi and Yaohui Xie},
doi = {10.1016/j.lindif.2021.102055},
year = {2021},
date = {2021-01-01},
journal = {Learning and Individual Differences},
volume = {91},
pages = {1--9},
publisher = {Elsevier Inc.},
abstract = {The study tested achievement motivation as a moderator of the relationship between pre-interpolated questions and learning from video lectures. Participants were 63 university students who were selected from a group of 123 volunteers, based on having high (n = 31) or low (n = 32) scores on the Achievement Motivation Scale. The students in each group were randomly assigned to view an instructional video with or without interpolated pre-questions. Visual attention was assessed by eye tracking measures of fixation duration and first time to fixation, and learning performance was assessed by tests of retention and transfer. The results of ANCOVAs showed that after controlling for prior knowledge, students with high achievement motivation benefitted more from the pre-questions than students with low achievement motivation. Among students with high achievement motivation, there was longer fixation duration to the learning materials and better transfer in the pre-questions condition than in the no-questions condition, but these differences based on video type were not apparent among students with low achievement. The findings have practical implications: interpolated pre-questions in video learning appear to be helpful for highly motivated students, and the benefit is seen in transfer rather than retention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.lindif.2021.102055


Let's Stay in Touch

  • Twitter
  • Facebook
  • Instagram
  • LinkedIn
  • YouTube
Newsletter
Newsletter Archive
Conferences

Contact

info@sr-research.com

Phone: +1-613-271-8686

Toll-free: +1-866-821-0731

Fax: +1-613-482-4866

Quick Links

Products

Solutions

Support

Legal Information

Legal Notices

Privacy Policy | Accessibility Policy

EyeLink® eye trackers are research devices and are not intended for medical diagnosis or treatment.

Featured Blog

Reading Profiles of Adults with Dyslexia


Copyright © 2023 · SR Research Ltd. All Rights Reserved. EyeLink is a registered trademark of SR Research Ltd.