EyeLink Cognitive Publications 

All EyeLink cognitive and perception research publications through 2021 (with some early 2022 papers) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!

5787 entries (page 1 of 58)

2022

Floor van den Berg; Jelle Brouwer; Thomas B. Tienkamp; Josje Verhagen; Merel Keijzer

Language entropy relates to behavioral and pupil indices of executive control in young adult bilinguals Journal Article

In: Frontiers in Psychology, vol. 13, pp. 1-17, 2022.

@article{nokey,
title = {Language entropy relates to behavioral and pupil indices of executive control in young adult bilinguals},
author = {Floor van den Berg and Jelle Brouwer and Thomas B. Tienkamp and Josje Verhagen and Merel Keijzer},
year = {2022},
date = {2022-05-04},
journal = {Frontiers in Psychology},
volume = {13},
pages = {1-17},
abstract = {Introduction: It has been proposed that bilinguals’ language use patterns are differentially associated with executive control. To further examine this, the present study relates the social diversity of bilingual language use to performance on a color-shape switching task (CSST) in a group of bilingual university students with diverse linguistic backgrounds. Crucially, this study used language entropy as a measure of bilinguals’ language use patterns. This continuous measure reflects a spectrum of language use in a variety of social contexts, ranging from compartmentalized use to fully integrated use. Methods: Language entropy for university and non-university contexts was calculated from questionnaire data on language use. Reaction times (RTs) were measured to calculate global RT and switching and mixing costs on the CSST, representing conflict monitoring, mental set shifting, and goal maintenance, respectively. In addition, this study innovatively recorded a potentially more sensitive measure of set shifting abilities, namely, pupil size during task performance. Results: Higher university entropy was related to slower global RT. Neither university entropy nor non-university entropy were associated with switching costs as manifested in RTs. However, bilinguals with more compartmentalized language use in non-university contexts showed a larger difference in pupil dilation for switch trials in comparison with non-switch trials. Mixing costs in RTs were reduced for bilinguals with higher diversity of language use in non-university contexts. No such effects were found for university entropy. Discussion: These results point to the social diversity of bilinguals’ language use as being associated with executive control, but the direction of the effects may depend on social context (university vs. non-university). Importantly, the results also suggest that some of these effects may only be detected by using more sensitive measures, such as pupil dilation. The paper discusses theoretical and practical implications regarding the language entropy measure and the cognitive effects of bilingual experiences more generally, as well as how methodological choices can advance our understanding of these effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
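
A minimal sketch of the two quantities this abstract builds on, language entropy and the RT-based switching/mixing costs: the entropy is Shannon entropy over the proportion of use of each language within one social context. The questionnaire format, proportions, and variable names below are illustrative assumptions, not the authors' materials or code.

import math

def language_entropy(proportions):
    # Shannon entropy (bits) over language-use proportions in one social context:
    # 0 = fully compartmentalized (one language only), higher = more integrated use.
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# Hypothetical questionnaire data: proportion of time each language is used
university_use = [0.7, 0.3]           # e.g., two languages at university
non_university_use = [0.5, 0.3, 0.2]  # e.g., three languages outside university
print(language_entropy(university_use))      # ~0.88 bits
print(language_entropy(non_university_use))  # ~1.49 bits

def switching_and_mixing_costs(rt_switch, rt_repeat, rt_single):
    # Mean-RT contrasts of the kind derived from color-shape switching tasks:
    # switch cost = switch minus repeat trials; mixing cost = repeat minus single-task trials.
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_switch) - mean(rt_repeat), mean(rt_repeat) - mean(rt_single)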

Aspen H. Yoo; Alfredo Bolaños; Grace E. Hallenbeck; Masih Rahmati; Thomas C. Sprague; Clayton E. Curtis

Behavioral prioritization enhances working memory precision and neural population gain Journal Article

In: Journal of Cognitive Neuroscience, vol. 34, no. 2, pp. 365–379, 2022.

@article{Yoo2022,
title = {Behavioral prioritization enhances working memory precision and neural population gain},
author = {Aspen H. Yoo and Alfredo Bolaños and Grace E. Hallenbeck and Masih Rahmati and Thomas C. Sprague and Clayton E. Curtis},
doi = {10.1162/jocn_a_01804},
year = {2022},
date = {2022-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {34},
number = {2},
pages = {365--379},
abstract = {Humans allocate visual working memory (WM) resource according to behavioral relevance, resulting in more precise memories for more important items. Theoretically, items may be maintained by feature-tuned neural populations, where the relative gain of the populations encoding each item determines precision. To test this hypothesis, we compared the amplitudes of delay period activity in the different parts of retinotopic maps representing each of several WM items, predicting the amplitudes would track behavioral priority. Using fMRI, we scanned participants while they remembered the location of multiple items over a WM delay and then reported the location of one probed item using a memory-guided saccade. Importantly, items were not equally probable to be probed (0.6, 0.3, 0.1, 0.0), which was indicated with a precue. We analyzed fMRI activity in 10 visual field maps in occipital, parietal, and frontal cortex known to be important for visual WM. In early visual cortex, but not association cortex, the amplitude of BOLD activation within voxels corresponding to the retinotopic location of visual WM items increased with the priority of the item. Interestingly, these results were contrasted with a common finding that higher-level brain regions had greater delay period activity, demonstrating a dissociation between the absolute amount of activity in a brain area and the activity of different spatially selective populations within it. These results suggest that the distribution of WM resources according to priority sculpts the relative gains of neural populations that encode items, offering a neural mechanism for how prioritization impacts memory precision.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jiahui Wang; Abigail Stebbins; Richard E. Ferdig

Examining the effects of students' self-efficacy and prior knowledge on learning and visual behavior in a physics game Journal Article

In: Computers and Education, vol. 178, pp. 104405, 2022.

@article{Wang2022,
title = {Examining the effects of students' self-efficacy and prior knowledge on learning and visual behavior in a physics game},
author = {Jiahui Wang and Abigail Stebbins and Richard E. Ferdig},
doi = {10.1016/j.compedu.2021.104405},
year = {2022},
date = {2022-01-01},
journal = {Computers and Education},
volume = {178},
pages = {104405},
publisher = {Elsevier Ltd},
abstract = {Research has provided evidence of the significant promise of using educational games for learning. However, there is limited understanding of how individual differences (e.g., self-efficacy and prior knowledge) affect visual processing of game elements and learning from an educational game. This study aimed to address these gaps by: a) examining the effects of students' self-efficacy and prior knowledge on learning from a physics game; and b) exploring how learners with distinct levels of self-efficacy and prior knowledge differ in their visual behavior with respect to the game elements. The visual behavior of 69 undergraduate students was recorded as they played an educational game focusing on Newtonian mechanics. Individual differences in self-efficacy in learning physics and prior knowledge were assessed prior to the game, while a comprehension test was administered immediately after gameplay. Wilcoxon signed-rank tests showed that all participants significantly improved in their understanding of Newtonian mechanics. Mann-Whitney U tests indicated learning gains were not significantly different between the groups with varying levels of prior knowledge or self-efficacy. Additionally, a series of Mann-Whitney U tests of the eye tracking data suggested the learners with high self-efficacy tended to pay more attention to the motion map - a critical navigation component of the game. Further, the high prior knowledge individuals excelled in attentional control abilities and exhibited effective visual processing strategies. The study concludes with important implications for the future design of educational games and developing individualized instructional support in game-based learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
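
The nonparametric tests named in this abstract (Wilcoxon signed-rank for pre/post learning gains, Mann-Whitney U for group comparisons) can be run directly with SciPy; the scores below are invented for illustration and do not reproduce the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented pre/post comprehension scores for 69 learners
pre = rng.integers(3, 8, size=69).astype(float)
post = pre + rng.integers(0, 4, size=69)

# Within-participant learning gain (paired, nonparametric)
print(stats.wilcoxon(pre, post))

# Between-group comparison of gains, e.g., high vs. low self-efficacy (unpaired)
gain = post - pre
print(stats.mannwhitneyu(gain[:35], gain[35:]))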

Jérôme Tagu; Árni Kristjánsson

Dynamics of attentional and oculomotor orienting in visual foraging tasks Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 75, no. 2, pp. 260–276, 2022.

@article{Tagu2022a,
title = {Dynamics of attentional and oculomotor orienting in visual foraging tasks},
author = {Jérôme Tagu and Árni Kristjánsson},
doi = {10.1177/1747021820919351},
year = {2022},
date = {2022-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {75},
number = {2},
pages = {260--276},
abstract = {A vast amount of research has been carried out to understand how humans visually search for targets in their environment. However, this research has typically involved search for one unique target among several distractors. Although this line of research has yielded important insights into the basic characteristics of how humans explore their visual environment, this may not be a very realistic model for everyday visual orientation. Recently, researchers have used multi-target displays to assess orienting in the visual field. Eye movements in such tasks are, however, less well understood. Here, we investigated oculomotor dynamics during four visual foraging tasks differing in target crypticity (feature-based foraging vs. conjunction-based foraging) and the effector type being used for target selection (mouse foraging vs. gaze foraging). Our results show that both target crypticity and effector type affect foraging strategies. These changes are reflected in oculomotor dynamics, feature foraging being associated with focal exploration (long fixations and short-amplitude saccades), and conjunction foraging with ambient exploration (short fixations and high-amplitude saccades). These results provide important new information for existing accounts of visual attention and oculomotor control and emphasise the usefulness of foraging tasks for a better understanding of how humans orient in the visual environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jérôme Tagu; Árni Kristjánsson

The selection balance: Contrasting value, proximity and priming in a multitarget foraging task Journal Article

In: Cognition, vol. 218, pp. 1–12, 2022.

@article{Tagu2022,
title = {The selection balance: Contrasting value, proximity and priming in a multitarget foraging task},
author = {Jérôme Tagu and Árni Kristjánsson},
doi = {10.1016/j.cognition.2021.104935},
year = {2022},
date = {2022-01-01},
journal = {Cognition},
volume = {218},
pages = {1--12},
abstract = {A critical question in visual foraging concerns the mechanisms driving the next target selection. Observers first identify a set of candidate targets, and then select the best option among these candidates. Recent evidence suggests that target selection relies on internal biases towards proximity (nearest target from the last selection), priming (target from the same category as the last selection) and value (target associated with high value). Here, we tested the role of eye movements in target selection, and notably whether disabling eye movements during target selection could affect search strategy. We asked observers to perform four foraging tasks differing by selection modality and target value. During gaze foraging, participants had to accurately fixate the targets to select them and could not anticipate the next selection with their eyes, while during mouse foraging they selected the targets with mouse clicks and were free to move their eyes. We moreover manipulated both target value and proximity. Our results revealed notable individual differences in search strategy, confirming the existence of internal biases towards value, proximity and priming. Critically, there were no differences in search strategy between mouse and gaze foraging, suggesting that disabling eye movements during target selection did not affect foraging behaviour. These results importantly suggest that overt orienting is not necessary for target selection. This study provides fundamental information for theoretical conceptions of attentional selection, and emphasizes the importance of covert attention for target selection during visual foraging.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
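
As a toy illustration of how the value, proximity, and priming biases contrasted in this abstract could trade off against each other, the sketch below scores each candidate target with a weighted sum; the weights and scoring rule are assumptions for illustration, not the authors' model of foraging behaviour.

import math

def next_target(candidates, last, w_value=1.0, w_prox=1.0, w_prime=1.0):
    # Pick the candidate maximizing a weighted mix of its value, its proximity
    # to the last selection, and priming (same category as the last selection).
    def score(c):
        dist = math.dist(c["xy"], last["xy"])
        return (w_value * c["value"]
                + w_prox * (1.0 / (1.0 + dist))
                + w_prime * (1.0 if c["category"] == last["category"] else 0.0))
    return max(candidates, key=score)

last = {"xy": (0.0, 0.0), "category": "red", "value": 1}
candidates = [
    {"xy": (1.0, 0.0), "category": "green", "value": 3},  # high value, different category
    {"xy": (0.5, 0.5), "category": "red", "value": 1},    # nearer, same category
]
print(next_target(candidates, last))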

Carlos Sillero‐Rejon; Osama Mahmoud; Ricardo M. Tamayo; Alvaro Arturo Clavijo‐Alvarez; Sally Adams; Olivia M. Maynard

Standardised packs and larger health warnings: Visual attention and perceptions among Colombian smokers and non‐smokers Journal Article

In: Addiction, pp. 1–11, 2022.

@article{Sillero‐Rejon2022,
title = {Standardised packs and larger health warnings: Visual attention and perceptions among Colombian smokers and non‐smokers},
author = {Carlos Sillero‐Rejon and Osama Mahmoud and Ricardo M. Tamayo and Alvaro Arturo Clavijo‐Alvarez and Sally Adams and Olivia M. Maynard},
doi = {10.1111/add.15779},
year = {2022},
date = {2022-01-01},
journal = {Addiction},
pages = {1--11},
abstract = {Aims: To measure how cigarette packaging (standardised packaging and branded packaging) and health warning size affect visual attention and pack preferences among Colombian smokers and non-smokers. Design: To explore visual attention, we used an eye-tracking experiment where non-smokers, weekly smokers and daily smokers were shown cigarette packs varying in warning size (30%-pictorial on top of the text, 30%-pictorial and text side-by-side, 50%, 70%) and packaging (standardised packaging, branded packaging). We used a discrete choice experiment (DCE) to examine the impact of warning size, packaging and brand name on preferences to try, taste perceptions and perceptions of harm. Setting: Eye-tracking laboratory, Universidad Nacional de Colombia, Bogotá, Colombia. Participants: Participants (n = 175) were 18 to 40 years old. Measurements: For the eye-tracking experiment, our primary outcome measure was the number of fixations toward the health warning compared with the branding. For the DCE, outcome measures were preferences to try, taste perceptions and harm perceptions. Findings: We observed greater visual attention to warning labels on standardised versus branded packages (F[3,167] = 22.87, P < 0.001) and when warnings were larger (F[9,161] = 147.17, P < 0.001); as warning size increased, the difference in visual attention to warnings between standardised and branded packaging decreased (F[9,161] = 4.44, P < 0.001). Non-smokers visually attended toward the warnings more than smokers, but as warning size increased these differences decreased (F[6,334] = 2.92},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Weikang Shi; Sébastien Ballesta; Camillo Padoa-Schioppa

Economic choices under simultaneous or sequential offers rely on the same neural circuit Journal Article

In: Journal of Neuroscience, vol. 42, no. 1, pp. 33–43, 2022.

@article{Shi2022,
title = {Economic choices under simultaneous or sequential offers rely on the same neural circuit},
author = {Weikang Shi and Sébastien Ballesta and Camillo Padoa-Schioppa},
doi = {10.1523/jneurosci.1265-21.2021},
year = {2022},
date = {2022-01-01},
journal = {Journal of Neuroscience},
volume = {42},
number = {1},
pages = {33--43},
abstract = {A series of studies in which monkeys chose between two juices offered in variable amounts identified in the orbitofrontal cortex (OFC) different groups of neurons encoding the value of individual options (offer value), the binary choice outcome (chosen juice) and the chosen value. These variables capture both the input and the output of the choice process, suggesting that the cell groups identified in OFC constitute the building blocks of a decision circuit. Several lines of evidence support this hypothesis. However, in previous experiments offers were presented simultaneously, raising the question of whether current notions generalize to when goods are presented or are examined in sequence. Recently, Ballesta and Padoa-Schioppa (2019) examined OFC activity under sequential offers. An analysis of neuronal responses across time windows revealed that a small number of cell groups encoded specific sequences of variables. These sequences appeared analogous to the variables identified under simultaneous offers, but the correspondence remained tentative. Thus in the present study we examined the relation between cell groups found under sequential versus simultaneous offers. We recorded from the OFC while monkeys chose between different juices. Trials with simultaneous and sequential offers were randomly interleaved in each session. We classified cells in each choice modality and we examined the relation between the two classifications. We found a strong correspondence – in other words, the cell groups measured under simultaneous offers and under sequential offers were one and the same. This result indicates that economic choices under simultaneous or sequential offers rely on the same neural circuit. Significance Statement: Research in the past 20 years has shed light on the neuronal underpinnings of economic choices. A large number of results indicates that decisions between goods are formed in a neural circuit within the orbitofrontal cortex (OFC). In most previous studies, subjects chose between two goods offered simultaneously. Yet, in daily situations, goods available for choice are often presented or examined in sequence. Here we recorded neuronal activity in the primate OFC alternating trials under simultaneous and under sequential offers. Our analyses demonstrate that the same neural circuit supports choices in the two modalities. Hence current notions on the neuronal mechanisms underlying economic decisions generalize to choices under sequential offers. Competing Interest Statement: The authors have declared no competing interest.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Arunava Samaddar; Brooke S. Jackson; Christopher J. Helms; Nicole A. Lazar; Jennifer E. McDowell; Cheolwoo Park

A group comparison in fMRI data using a semiparametric model under shape invariance Journal Article

In: Computational Statistics and Data Analysis, vol. 167, pp. 107361, 2022.

@article{Samaddar2022,
title = {A group comparison in fMRI data using a semiparametric model under shape invariance},
author = {Arunava Samaddar and Brooke S. Jackson and Christopher J. Helms and Nicole A. Lazar and Jennifer E. McDowell and Cheolwoo Park},
doi = {10.1016/j.csda.2021.107361},
year = {2022},
date = {2022-01-01},
journal = {Computational Statistics and Data Analysis},
volume = {167},
pages = {107361},
publisher = {Elsevier B.V.},
abstract = {In the analysis of functional magnetic resonance imaging (fMRI) data, a common type of analysis is to compare differences across scanning sessions. A challenge to direct comparisons of this type is the low signal-to-noise ratio in fMRI data. By using the property that brain signals from a task-related experiment may exhibit a similar pattern in regions of interest across participants, a semiparametric approach under shape invariance to quantify and test the differences in sessions and groups is developed. The common function is estimated with local polynomial regression and the shape invariance model parameters are estimated using evolutionary optimization methods. The efficacy of the semi-parametric approach is demonstrated on a study of brain activation changes across two sessions associated with practice-related cognitive control. The objective of the study is to evaluate neural circuitry supporting a cognitive control task, and associated practice-related changes via acquisition of blood oxygenation level dependent (BOLD) signal collected using fMRI. By using the proposed approach, BOLD signals in multiple regions of interest for control participants and participants with schizophrenia are compared as they perform a cognitive control task (known as the antisaccade task) at two sessions, and the effects of task practice in these groups are quantified.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Megan J. Raden; Andrew F. Jarosz

Strategy transfer on fluid reasoning tasks Journal Article

In: Intelligence, vol. 91, pp. 101618, 2022.

@article{Raden2022,
title = {Strategy transfer on fluid reasoning tasks},
author = {Megan J. Raden and Andrew F. Jarosz},
doi = {10.1016/j.intell.2021.101618},
year = {2022},
date = {2022-01-01},
journal = {Intelligence},
volume = {91},
pages = {101618},
publisher = {Elsevier Inc.},
abstract = {Strategy use on reasoning tasks has consistently been shown to correlate with working memory capacity and accuracy, but it is still unclear to what degree individual preferences, working memory capacity, and features of the task itself contribute to strategy use. The present studies used eye tracking to explore the potential for strategy transfer between reasoning tasks. Study 1 demonstrated that participants are consistent in what strategy they use across reasoning tasks and that strategy transfer between tasks is possible. Additionally, post-hoc analyses identified certain ambiguous items in the figural analogies task that required participants to assess the response bank to reach solution, which appeared to push participants towards a more response-based strategy. Study 2 utilized a between-subjects design to manipulate this “ambiguity” in figural analogies problems prior to completing the RAPM. Once again, participants transferred strategies between tasks when primed with different strategies, although this did not affect their ability to accurately solve the problem. Importantly, strategy use changed considerably depending on the ambiguity of the initial reasoning task. The results provided across the two studies suggest that participants are consistent in what strategies they employ across reasoning tasks, and that if features of the task push participants towards a different strategy, they will transfer that strategy to another reasoning task. Furthermore, to understand the role of strategy use on reasoning tasks, future work will require a diverse sample of both reasoning tasks and strategy use measures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
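
One common way to operationalize the matrix-based versus response-based strategies this abstract refers to is the share of fixation time spent on the problem matrix relative to the response bank, plus the number of gaze toggles between the two areas; the sketch below shows that generic index on invented fixation records, not the authors' pipeline.

def matrix_time_and_toggles(fixations):
    # fixations: list of (area, duration_ms) with area in {"matrix", "responses"}.
    # Returns the proportion of viewing time on the matrix (higher values suggest
    # constructive matching) and the number of toggles between the two areas.
    total = sum(d for _, d in fixations)
    matrix_time = sum(d for a, d in fixations if a == "matrix")
    toggles = sum(1 for (a1, _), (a2, _) in zip(fixations, fixations[1:]) if a1 != a2)
    return matrix_time / total, toggles

trial = [("matrix", 420), ("matrix", 310), ("responses", 250), ("matrix", 380)]
print(matrix_time_and_toggles(trial))  # (~0.82, 2)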

Alessandro Piras; Aurelio Trofè; Andrea Meoni; Milena Raffi

Influence of radial optic flow stimulation on static postural balance in Parkinson's disease: A preliminary study Journal Article

In: Human Movement Science, vol. 81, pp. 102905, 2022.

@article{Piras2022,
title = {Influence of radial optic flow stimulation on static postural balance in Parkinson's disease: A preliminary study},
author = {Alessandro Piras and Aurelio Trofè and Andrea Meoni and Milena Raffi},
doi = {10.1016/j.humov.2021.102905},
year = {2022},
date = {2022-01-01},
journal = {Human Movement Science},
volume = {81},
pages = {102905},
abstract = {The role of optic flow in the control of balance in persons with Parkinson's disease (PD) has yet to be studied. Since basal ganglia are understood to have a role in controlling ocular fixation, we have hypothesized that persons with PD would exhibit impaired performance in fixation tasks, i.e., altered postural balance due to the possible relationships between postural disorders and visual perception. The aim of this preliminary study was to investigate how people affected by PD respond to optic flow stimuli presented with radial expanding motion, with the intention to see how the stimulation of different retinal portions may alter the static postural sway. We measured the body sway using center of pressure parameters recorded from two force platforms during the presentation of the foveal, peripheral and full field radial optic flow stimuli. Persons with PD had different visual responses in terms of fixational eye movement characteristics, with greater postural alteration in the sway area and in the medio-lateral direction than the age-matched control group. Balance impairment in the medio-lateral oscillation is often observed in persons with atypical Parkinsonism, but not in Parkinson's disease. Persons with PD are more dependent on visual feedback with respect to age-matched control subjects, and this could be due to their impaired peripheral kinesthetic feedback. Visual stimulation of standing posture would provide reliable signs in the differential diagnosis of Parkinsonism.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Joel T. Martin; Annalise H. Whittaker; Stephen J. Johnston

Pupillometry and the vigilance decrement: Task‐evoked but not baseline pupil measures reflect declining performance in visual vigilance tasks Journal Article

In: European Journal of Neuroscience, vol. 44, pp. 1–22, 2022.

@article{Martin2022,
title = {Pupillometry and the vigilance decrement: Task‐evoked but not baseline pupil measures reflect declining performance in visual vigilance tasks},
author = {Joel T. Martin and Annalise H. Whittaker and Stephen J. Johnston},
doi = {10.1111/ejn.15585},
year = {2022},
date = {2022-01-01},
journal = {European Journal of Neuroscience},
volume = {44},
pages = {1--22},
abstract = {Baseline and task-evoked pupil measures are known to reflect the activity of the nervous system's central arousal mechanisms. With the increasing availability, affordability and flexibility of video-based eye tracking hardware, these measures may one day find practical application in real-time biobehavioral monitoring systems to assess performance or fitness for duty in tasks requiring vigilant attention. But real-world vigilance tasks are predominantly visual in their nature and most research in this area has taken place in the auditory domain. Here we explore the relationship between pupil size—both baseline and task-evoked—and behavioral performance measures in two novel vigilance tasks requiring visual target detection: 1) a traditional vigilance task involving prolonged, continuous, and uninterrupted performance (n = 28), and 2) a psychomotor vigilance task (n = 25). In both tasks, behavioral performance and task-evoked pupil responses declined as time spent on task increased, corroborating previous reports in the literature of a vigilance decrement with a corresponding reduction in task-evoked pupil measures. Also in line with previous findings, baseline pupil size did not show a consistent relationship with performance measures. We discuss our findings considering the adaptive gain theory of locus coeruleus function and question the validity of the assumption that baseline (prestimulus) pupil size and task-evoked (poststimulus) pupil measures correspond to the tonic and phasic firing modes of the LC. Competing Interest Statement: The authors have declared no competing interest.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
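
A typical way to separate the baseline and task-evoked pupil measures discussed here is subtractive baseline correction over a short pre-stimulus window; the sketch below shows that generic computation on an invented pupil trace and is not the authors' analysis code.

import numpy as np

def pupil_measures(trace, fs, stim_onset_s, baseline_s=0.5, window_s=2.0):
    # trace: 1-D array of pupil-size samples; fs: sampling rate in Hz.
    # Returns (baseline pupil size, task-evoked response), where the evoked
    # response is the mean post-stimulus size minus the pre-stimulus baseline.
    onset = int(stim_onset_s * fs)
    baseline = trace[onset - int(baseline_s * fs):onset].mean()
    evoked = trace[onset:onset + int(window_s * fs)].mean() - baseline
    return baseline, evoked

fs = 500  # Hz, e.g., a video-based eye tracker sampling rate
t = np.arange(0, 4, 1 / fs)
trace = 3.0 + 0.2 * np.exp(-((t - 1.9) ** 2) / 0.1)  # invented dilation following a 1.0 s onset
print(pupil_measures(trace, fs, stim_onset_s=1.0))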

Astar Lev; Yoram Braw; Tomer Elbaum; Michael Wagner; Yuri Rassovsky

Eye tracking during a continuous performance test: Utility for assessing ADHD patients Journal Article

In: Journal of Attention Disorders, vol. 26, no. 2, pp. 245–255, 2022.

@article{Lev2022,
title = {Eye tracking during a continuous performance test: Utility for assessing ADHD patients},
author = {Astar Lev and Yoram Braw and Tomer Elbaum and Michael Wagner and Yuri Rassovsky},
doi = {10.1177/1087054720972786},
year = {2022},
date = {2022-01-01},
journal = {Journal of Attention Disorders},
volume = {26},
number = {2},
pages = {245--255},
abstract = {Objective: The use of continuous performance tests (CPTs) for assessing ADHD related cognitive impairment is ubiquitous. Novel psychophysiological measures may enhance the data that is derived from CPTs and thereby improve clinical decision-making regarding diagnosis and treatment. As part of the current study, we integrated an eye tracker with the MOXO-dCPT and assessed the utility of eye movement measures to differentiate ADHD patients and healthy controls. Method: Adult ADHD patients and gender/age-matched healthy controls performed the MOXO-dCPT while their eye movements were monitored (n = 33 per group). Results: ADHD patients spent significantly more time gazing at irrelevant regions, both on the screen and outside of it, than healthy controls. The eye movement measures showed adequate ability to classify ADHD patients. Moreover, a scale that combined eye movement measures enhanced group prediction, compared to the sole use of conventional MOXO-dCPT indices. Conclusions: Integrating an eye tracker with CPTs is a feasible way of enhancing diagnostic precision and shows initial promise for clarifying the cognitive profile of ADHD patients. Pending replication, these findings point toward a promising path for the evolution of existing CPTs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
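
The scale "that combined eye movement measures" mentioned here can be illustrated generically as a classifier over gaze features; the features, simulated data, and logistic-regression model below are assumptions for illustration, not the published scale.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Invented gaze features for 33 ADHD patients and 33 controls:
# percent time on irrelevant screen regions, percent time off screen.
X = np.column_stack([
    np.concatenate([rng.normal(18, 5, 33), rng.normal(10, 5, 33)]),
    np.concatenate([rng.normal(8, 3, 33), rng.normal(4, 3, 33)]),
])
y = np.array([1] * 33 + [0] * 33)  # 1 = ADHD, 0 = control

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated classification accuracy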

Koji Kuraoka; Kae Nakamura

Facial temperature and pupil size as indicators of internal state in primates Journal Article

In: Neuroscience Research, 2022.

@article{Kuraoka2022,
title = {Facial temperature and pupil size as indicators of internal state in primates},
author = {Koji Kuraoka and Kae Nakamura},
doi = {10.1016/j.neures.2022.01.002},
year = {2022},
date = {2022-01-01},
journal = {Neuroscience Research},
publisher = {Elsevier Ireland Ltd and Japan Neuroscience Society},
abstract = {Studies in human subjects have revealed that autonomic responses provide objective and biologically relevant information about cognitive and affective states. Measures of autonomic responses can also be applied to studies of non-human primates, which are neuro-anatomically and physically similar to humans. Facial temperature and pupil size are measured remotely and can be applied to physiological experiments in primates, preferably in a head-fixed condition. However, detailed guidelines for the use of these measures in non-human primates is lacking. Here, we review the neuronal circuits and methodological considerations necessary for measuring and analyzing facial temperature and pupil size in non-human primates. Previous studies have shown that the modulation of these measures primarily reflects sympathetic reactions to cognitive and emotional processes, including alertness, attention, and mental effort, over different time scales. Integrated analyses of autonomic, behavioral, and neurophysiological data in primates are promising methods that reflect multiple dimensions of emotion and could potentially provide tools for understanding the mechanisms underlying neuropsychiatric disorders and vulnerabilities characterized by cognitive and affective disturbances.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nadezhda Kerimova; Pavel Sivokhin; Diana Kodzokova; Karine Nikogosyan; Vasily Klucharev

Visual processing of green zones in shared courtyards during renting decisions: An eye-tracking study Journal Article

In: Urban Forestry and Urban Greening, vol. 68, pp. 127460, 2022.

@article{Kerimova2022,
title = {Visual processing of green zones in shared courtyards during renting decisions: An eye-tracking study},
author = {Nadezhda Kerimova and Pavel Sivokhin and Diana Kodzokova and Karine Nikogosyan and Vasily Klucharev},
doi = {10.1016/j.ufug.2022.127460},
year = {2022},
date = {2022-01-01},
journal = {Urban Forestry and Urban Greening},
volume = {68},
pages = {127460},
publisher = {Elsevier GmbH},
abstract = {We used an eye-tracking technique to investigate the effect of green zones and car ownership on the attractiveness of the courtyards of multistorey apartment buildings. Two interest groups—20 people who owned a car and 20 people who did not own a car—observed 36 images of courtyards. Images were digitally modified to manipulate the spatial arrangement of key courtyard elements: green zones, parking lots, and children's playgrounds. The participants were asked to rate the attractiveness of courtyards during hypothetical renting decisions. Overall, we investigated whether visual exploration and appraisal of courtyards differed between people who owned a car and those who did not. The participants in both interest groups gazed longer at perceptually salient playgrounds and parking lots than at greenery. We also observed that participants gazed significantly longer at the greenery in courtyards rated as most attractive than those rated as least attractive. They gazed significantly longer at parking lots in courtyards rated as least attractive than those rated as most attractive. Using regression analysis, we further investigated the relationship between gaze fixations on courtyard elements and the attractiveness ratings of courtyards. The model confirmed a significant positive relationship between the number and duration of fixations on greenery and the attractiveness estimates of courtyards, while the model showed an opposite relationship for the duration of fixations on parking lots. Interestingly, the positive association between fixations on greenery and the attractiveness of courtyards was significantly stronger for participants who owned cars than for those who did not. These findings confirmed that the more people pay attention to green areas, the more positively they evaluate urban areas. The results also indicate that urban greenery may differentially affect the preferences of interest groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ufug.2022.127460

Frauke Heins; Markus Lappe

Flexible use of post-saccadic visual feedback in oculomotor learning Journal Article

In: Journal of Vision, vol. 22, no. 1, pp. 1–16, 2022.

Abstract | Links | BibTeX

@article{Heins2022,
title = {Flexible use of post-saccadic visual feedback in oculomotor learning},
author = {Frauke Heins and Markus Lappe},
doi = {10.1167/jov.22.1.3},
year = {2022},
date = {2022-01-01},
journal = {Journal of Vision},
volume = {22},
number = {1},
pages = {1--16},
abstract = {Saccadic eye movements bring objects of interest onto our fovea. These gaze shifts are essential for visual perception of our environment and the interaction with the objects within it. They precede our actions and are thus modulated by current goals. It is assumed that saccadic adaptation, a recalibration process that restores saccade accuracy in case of error, is mainly based on an implicit comparison of expected and actual post-saccadic position of the target on the retina. However, there is increasing evidence that task demands modulate saccade adaptation and that errors in task performance may be sufficient to induce changes to saccade amplitude. We investigated whether human participants are able to flexibly use different information sources within the post-saccadic visual feedback in a task-dependent fashion. Using intra-saccadic manipulation of the visual input, participants were either presented with congruent post-saccadic information, indicating the saccade target unambiguously, or incongruent post-saccadic information, creating conflict between two possible target objects. Using different task instructions, we found that participants were able to modify their saccade behavior such that they achieved the goal of the task. They succeeded in decreasing saccade gain or maintaining it, depending on what was necessary for the task, irrespective of whether the post-saccadic feedback was congruent or incongruent. It appears that action intentions prime task-relevant feature dimensions and thereby facilitate the selection of the relevant information within the post-saccadic image. Thus, participants use post-saccadic feedback flexibly, depending on their intentions and pending actions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/jov.22.1.3

Erin Goddard; Thomas A. Carlson; Alexandra Woolgar

Spatial and feature-selective attention have distinct, interacting effects on population-level tuning Journal Article

In: Journal of Cognitive Neuroscience, vol. 34, no. 2, pp. 290–312, 2022.

Abstract | Links | BibTeX

@article{Goddard2022,
title = {Spatial and feature-selective attention have distinct, interacting effects on population-level tuning},
author = {Erin Goddard and Thomas A. Carlson and Alexandra Woolgar},
doi = {10.1162/jocn_a_01796},
year = {2022},
date = {2022-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {34},
number = {2},
pages = {290--312},
abstract = {Attention can be deployed in different ways: When searching for a taxi in New York City, we can decide where to attend (e.g., to the street) and what to attend to (e.g., yellow cars). Although we use the same word to describe both processes, nonhuman primate data suggest that these produce distinct effects on neural tuning. This has been challenging to assess in humans, but here we used an opportunity afforded by multivariate decoding of MEG data. We found that attending to an object at a particular location and attending to a particular object feature produced effects that interacted multiplicatively. The two types of attention induced distinct patterns of enhancement in occipital cortex, with feature-selective attention producing relatively more enhancement of small feature differences and spatial attention producing relatively larger effects for larger feature differences. An information flow analysis further showed that stimulus representations in occipital cortex were Granger-caused by coding in frontal cortices earlier in time and that the timing of this feedback matched the onset of attention effects. The data suggest that spatial and feature-selective attention rely on distinct neural mechanisms that arise from frontal-occipital information exchange, interacting multiplicatively to selectively enhance task-relevant information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01796

Marco Esposito; Clarissa Ferrari; Claudia Fracassi; Carlo Miniussi; Debora Brignani

Responsiveness to left‐prefrontal tDCS varies according to arousal levels Journal Article

In: European Journal of Neuroscience, pp. 1–45, 2022.

Abstract | Links | BibTeX

@article{Esposito2022,
title = {Responsiveness to left‐prefrontal tDCS varies according to arousal levels},
author = {Marco Esposito and Clarissa Ferrari and Claudia Fracassi and Carlo Miniussi and Debora Brignani},
doi = {10.1111/ejn.15584},
year = {2022},
date = {2022-01-01},
journal = {European Journal of Neuroscience},
pages = {1--45},
abstract = {Over the past two decades, the postulated modulatory effects of transcranial direct current stimulation (tDCS) on the human brain have been extensively investigated. However, recent concerns about the reliability of tDCS effects have been raised, principally due to reduced replicability and to interindividual variability in response to tDCS. These inconsistencies are likely due to the interplay between the level of induced cortical excitability and unaccounted structural and state-dependent functional factors. On these grounds, we aimed at verifying whether the behavioural effects induced by a common tDCS montage (F3-rSOA) were influenced by the participants' arousal levels, as part of a broader mechanism of state-dependency. Pupillary dynamics were recorded during an auditory oddball task while applying either a sham or real tDCS. The tDCS effects were evaluated as a function of subjective and physiological arousal predictors (STAI-Y State scores and pre-stimulus pupil size, respectively). We showed that prefrontal tDCS hindered task learning effects on response speed such that performance improvement occurred during sham, but not real stimulation. Moreover, both subjective and physiological arousal predictors significantly explained performance during real tDCS, with interaction effects showing performance improvement only with moderate arousal levels; likewise, pupil response was affected by real tDCS according to the ongoing levels of arousal, with reduced dilation during higher arousal trials. These findings highlight the potential role of arousal in shaping the neuromodulatory outcome, thus emphasizing a more careful interpretation of null or negative results while also encouraging more individually tailored tDCS applications based on arousal levels, especially in clinical populations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/ejn.15584

Mina Elhamiasl; Gabriella Silva; Andrea M. Cataldo; Hillary Hadley; Erik Arnold; James W. Tanaka; Tim Curran; Lisa S. Scott

Dissociations between performance and visual fixations after subordinate- and basic-level training with novel objects Journal Article

In: Vision Research, vol. 191, pp. 107971, 2022.

Abstract | Links | BibTeX

@article{Elhamiasl2022,
title = {Dissociations between performance and visual fixations after subordinate- and basic-level training with novel objects},
author = {Mina Elhamiasl and Gabriella Silva and Andrea M. Cataldo and Hillary Hadley and Erik Arnold and James W. Tanaka and Tim Curran and Lisa S. Scott},
doi = {10.1016/j.visres.2021.107971},
year = {2022},
date = {2022-01-01},
journal = {Vision Research},
volume = {191},
pages = {107971},
publisher = {Elsevier Ltd},
abstract = {Previous work suggests that subordinate-level object training improves exemplar-level perceptual discrimination over basic-level training. However, the extent to which visual fixation strategies and the use of visual features, such as color and spatial frequency (SF), change with improved discrimination was not previously known. In the current study, adults (n = 24) completed 6 days of training with 2 families of computer-generated novel objects. Participants were trained to identify one object family at the subordinate level and the other object family at the basic level. Before and after training, discrimination accuracy and visual fixations were measured for trained and untrained exemplars. To examine the impact of training on visual feature use, image color and SF were manipulated and tested before and after training. Discrimination accuracy increased for the object family trained at the subordinate-level, but not for the family trained at the basic level. This increase was seen for all image manipulations (color, SF) and generalized to untrained exemplars within the trained family. Both subordinate- and basic-level training increased average fixation duration and saccadic amplitude and decreased the number of total fixations. Collectively, these results suggest a dissociation between discrimination accuracy, indicative of recognition, and the associated pattern of changes present for visual fixations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2021.107971

Lorenzo Diana; Giulia Scotti; Edoardo N Aiello; Patrick Pilastro; Aleksandra K Eberhard-Moscicka; René M Müri; Nadia Bolognini

Conventional and HD-tDCS may (or may not) modulate overt attentional orienting: An integrated spatio-temporal approach and methodological reflection Journal Article

In: Brain Sciences, vol. 12, no. 71, pp. 1–20, 2022.

Abstract | BibTeX

@article{Diana2022,
title = {Conventional and HD-tDCS may (or may not) modulate overt attentional orienting: An integrated spatio-temporal approach and methodological reflection},
author = {Lorenzo Diana and Giulia Scotti and Edoardo N Aiello and Patrick Pilastro and Aleksandra K Eberhard-Moscicka and René M Müri and Nadia Bolognini},
year = {2022},
date = {2022-01-01},
journal = {Brain Sciences},
volume = {12},
number = {71},
pages = {1--20},
abstract = {Transcranial Direct Current Stimulation (tDCS) has been employed to modulate visuospatial attentional asymmetries; however, further investigation is needed to characterize tDCS-associated variability in more ecological settings. In the present research, we tested the effects of offline, anodal conventional tDCS (Experiment 1) and HD-tDCS (Experiment 2) delivered over the posterior parietal cortex (PPC) and Frontal Eye Field (FEF) of the right hemisphere in healthy participants. Attentional asymmetries were measured by means of an eye tracking-based, ecological paradigm, that is, a Free Visual Exploration task of naturalistic pictures. Data were analyzed from a spatiotemporal perspective. In Experiment 1, a pre-post linear mixed model (LMM) indicated a leftward attentional shift after PPC tDCS; this effect was not confirmed when the individual baseline performance was considered. In Experiment 2, FEF HD-tDCS was shown to induce a significant leftward shift of gaze position, which emerged after 6 s of picture exploration and lasted for 200 ms. The present results do not allow us to conclude on a clear efficacy of offline conventional tDCS and HD-tDCS in modulating overt visuospatial attention in an ecological setting. Nonetheless, our findings highlight a complex relationship among stimulated area, focality of stimulation, spatiotemporal aspects of deployment of attention, and the role of individual baseline performance in shaping the effects of tDCS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alasdair D. F. Clarke; Jessica L. Irons; Warren James; Andrew B. Leber; Amelia R. Hunt

Stable individual differences in strategies within, but not between, visual search tasks Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 75, no. 2, pp. 289–296, 2022.

Abstract | Links | BibTeX

@article{Clarke2022,
title = {Stable individual differences in strategies within, but not between, visual search tasks},
author = {Alasdair D. F. Clarke and Jessica L. Irons and Warren James and Andrew B. Leber and Amelia R. Hunt},
doi = {10.1177/1747021820929190},
year = {2022},
date = {2022-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {75},
number = {2},
pages = {289--296},
abstract = {A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here, we ask whether an individual's strategy and performance in one search task is correlated with how they perform in the other two. We tested 64 observers and found that even though the test–retest reliability of the tasks was high, an observer's performance and strategy in one task was not predictive of their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search, we therefore need to account not only for differences between individuals but also how individuals interact with the search task and context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/1747021820929190

Alexis Cheviet; Jana Masselink; Eric Koun; Roméo Salemme; Markus Lappe; Caroline Froment-Tilikete; Denis Pélisson

Cerebellar signals drive motor adjustments and visual perceptual changes during forward and backward adaptation of reactive saccades Journal Article

In: Cerebral Cortex, pp. 1–21, 2022.

Abstract | Links | BibTeX

@article{Cheviet2022,
title = {Cerebellar signals drive motor adjustments and visual perceptual changes during forward and backward adaptation of reactive saccades},
author = {Alexis Cheviet and Jana Masselink and Eric Koun and Roméo Salemme and Markus Lappe and Caroline Froment-Tilikete and Denis Pélisson},
doi = {10.1093/cercor/bhab455},
year = {2022},
date = {2022-01-01},
journal = {Cerebral Cortex},
pages = {1--21},
abstract = {Saccadic adaptation (SA) is a cerebellar-dependent learning of motor commands (MC), which aims at preserving saccade accuracy. Since SA alters visual localization during fixation and even more so across saccades, it could also involve changes of target and/or saccade visuospatial representations, the latter (CDv) resulting from a motor-to-visual transformation (forward dynamics model) of the corollary discharge of the MC. In the present study, we investigated if, in addition to its established role in adaptive adjustment of MC, the cerebellum could contribute to the adaptation-associated perceptual changes. Transfer of backward and forward adaptation to spatial perceptual performance (during ocular fixation and trans-saccadically) was assessed in eight cerebellar patients and eight healthy volunteers. In healthy participants, both types of SA altered MC as well as internal representations of the saccade target and of the saccadic eye displacement. In patients, adaptation-related adjustments of MC and adaptation transfer to localization were strongly reduced relative to healthy participants, unraveling abnormal adaptation-related changes of target and CDv. Importantly, the estimated changes of CDv were totally abolished following forward session but mainly preserved in backward session, suggesting that an internal model ensuring trans-saccadic localization could be located in the adaptation-related cerebellar networks or in downstream networks, respectively.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1093/cercor/bhab455

Frederick H. F. Chan; Hin Suen; Antoni B. Chan; Janet H. Hsiao; Tom J. Barry

The effects of attentional and interpretation biases on later pain outcomes among younger and older adults: A prospective study Journal Article

In: European Journal of Pain, vol. 26, no. 1, pp. 181–196, 2022.

Abstract | Links | BibTeX

@article{Chan2022,
title = {The effects of attentional and interpretation biases on later pain outcomes among younger and older adults: A prospective study},
author = {Frederick H. F. Chan and Hin Suen and Antoni B. Chan and Janet H. Hsiao and Tom J. Barry},
doi = {10.1002/ejp.1853},
year = {2022},
date = {2022-01-01},
journal = {European Journal of Pain},
volume = {26},
number = {1},
pages = {181--196},
abstract = {Background: Studies examining the effect of biased cognitions on later pain outcomes have primarily focused on attentional biases, leaving the role of interpretation biases largely unexplored. Also, few studies have examined pain-related cognitive biases in elderly persons. The current study aims to fill these research gaps. Methods: Younger and older adults with and without chronic pain (N = 126) completed an interpretation bias task and a free-viewing task of injury and neutral scenes at baseline. Participants' pain intensity and disability were assessed at baseline and at a 6-month follow-up. A machine-learning data-driven approach to analysing eye movement data was adopted. Results: Eye movement analyses revealed two common attentional pattern subgroups for scene-viewing: an “explorative” group and a “focused” group. At baseline, participants with chronic pain endorsed more injury-/illness-related interpretations compared to pain-free controls, but they did not differ in eye movements on scene images. Older adults interpreted illness-related scenarios more negatively compared to younger adults, but there was also no difference in eye movements between age groups. Moreover, negative interpretation biases were associated with baseline but not follow-up pain disability, whereas a focused gaze tendency for injury scenes was associated with follow-up but not baseline pain disability. Additionally, there was an indirect effect of interpretation biases on pain disability 6 months later through attentional bias for pain-related images. Conclusions: The present study provided evidence for pain status and age group differences in injury-/illness-related interpretation biases. Results also revealed distinct roles of interpretation and attentional biases in pain chronicity. Significance: Adults with chronic pain endorsed more injury-/illness-related interpretations than pain-free controls. Older adults endorsed more illness interpretations than younger adults. A more negative interpretation bias indirectly predicted pain disability 6 months later through hypervigilance towards pain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/ejp.1853

Philippa Broadbent; Daniel E. Schoth; Christina Liossi

Association between attentional bias to experimentally induced pain and to pain-related words in healthy individuals: The moderating role of interpretation bias Journal Article

In: Pain, vol. 163, no. 2, pp. 319–333, 2022.

Abstract | Links | BibTeX

@article{Broadbent2022,
title = {Association between attentional bias to experimentally induced pain and to pain-related words in healthy individuals: The moderating role of interpretation bias},
author = {Philippa Broadbent and Daniel E. Schoth and Christina Liossi},
doi = {10.1097/j.pain.0000000000002318},
year = {2022},
date = {2022-01-01},
journal = {Pain},
volume = {163},
number = {2},
pages = {319--333},
abstract = {Attentional bias to pain-related information may contribute to chronic pain maintenance. It is theoretically predicted that attentional bias to pain-related language derives from attentional bias to painful sensations; however, the complex interconnection between these types of attentional bias has not yet been tested. This study aimed to investigate the association between attentional bias to pain words and attentional bias to the location of pain, as well as the moderating role of pain-related interpretation bias in this association. Fifty-four healthy individuals performed a visual probe task with pain-related and neutral words, during which eye movements were tracked. In a subset of trials, participants were presented with a cold pain stimulus on one hand. Pain-related interpretation and memory biases were also assessed. Attentional bias to pain words and attentional bias to the pain location were not significantly correlated, although the association was significantly moderated by interpretation bias. A combination of pain-related interpretation bias and attentional bias to painful sensations was associated with avoidance of pain words. In addition, first fixation durations on pain words were longer when the pain word and cold pain stimulus were presented on the same side of the body, as compared to on opposite sides. This indicates that congruency between the locations of pain and pain-related information may strengthen attentional bias. Overall, these findings indicate that cognitive biases to pain-related information interact with cognitive biases to somatosensory information. The implications of these findings for attentional bias modification interventions are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1097/j.pain.0000000000002318

Carlos Alós-Ferrer; Alexander Ritschel

Attention and salience in preference reversals Journal Article

In: Experimental Economics, pp. 1–28, 2022.

Abstract | Links | BibTeX

@article{AlosFerrer2022,
title = {Attention and salience in preference reversals},
author = {Carlos Alós-Ferrer and Alexander Ritschel},
doi = {10.1007/s10683-021-09740-9},
year = {2022},
date = {2022-01-01},
journal = {Experimental Economics},
pages = {1--28},
publisher = {Springer US},
abstract = {We investigate the implications of Salience Theory for the classical preference reversal phenomenon, where monetary valuations contradict risky choices. It has been stated that one factor behind reversals is that monetary valuations of lotteries are inflated when elicited in isolation, and that they should be reduced if an alternative lottery is present and draws attention. We conducted two preregistered experiments, an online choice study (N=256) and an eye-tracking study (N = 64), in which we investigated salience and attention in preference reversals, manipulating salience through the presence or absence of an alternative lottery during evaluations. We find that the alternative lottery draws attention, and that fixations on that lottery influence the evaluation of the target lottery as predicted by Salience Theory. The effect, however, is of a modest magnitude and fails to translate into an effect on preference reversal rates in either experiment. We also use transitions (eye movements) across outcomes of different lotteries to study attention on the states of the world underlying Salience Theory, but we find no evidence that larger salience results in more transitions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s10683-021-09740-9

2021

Sarah Chabal; Sayuri Hayakawa; Viorica Marian

How a picture becomes a word: Individual differences in the development of language-mediated visual search Journal Article

In: Cognitive Research: Principles and Implications, vol. 6, no. 2, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Chabal2021,
title = {How a picture becomes a word: Individual differences in the development of language-mediated visual search},
author = {Sarah Chabal and Sayuri Hayakawa and Viorica Marian},
doi = {10.1186/s41235-020-00268-9},
year = {2021},
date = {2021-12-01},
journal = {Cognitive Research: Principles and Implications},
volume = {6},
number = {2},
pages = {1--10},
publisher = {Springer International Publishing},
abstract = {Over the course of our lifetimes, we accumulate extensive experience associating the things that we see with the words we have learned to describe them. As a result, adults engaged in a visual search task will often look at items with labels that share phonological features with the target object, demonstrating that language can become activated even in non-linguistic contexts. This highly interactive cognitive system is the culmination of our linguistic and visual experiences—and yet, our understanding of how the relationship between language and vision develops remains limited. The present study explores the developmental trajectory of language-mediated visual search by examining whether children can be distracted by linguistic competitors during a non-linguistic visual search task. Though less robust compared to what has been previously observed with adults, we find evidence of phonological competition in children as young as 8 years old. Furthermore, the extent of language activation is predicted by individual differences in linguistic, visual, and domain-general cognitive abilities, with the greatest phonological competition observed among children with strong language abilities combined with weaker visual memory and inhibitory control. We propose that linguistic expertise is fundamental to the development of language-mediated visual search, but that the rate and degree of automatic language activation depends on interactions among a broader network of cognitive abilities.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1186/s41235-020-00268-9

Jasmine R. Aziz; Samantha R. Good; Raymond M. Klein; Gail A. Eskes

Role of aging and working memory in performance on a naturalistic visual search task Journal Article

In: Cortex, vol. 136, pp. 28–40, 2021.

Abstract | Links | BibTeX

@article{Aziz2021,
title = {Role of aging and working memory in performance on a naturalistic visual search task},
author = {Jasmine R. Aziz and Samantha R. Good and Raymond M. Klein and Gail A. Eskes},
doi = {10.1016/j.cortex.2020.12.003},
year = {2021},
date = {2021-12-01},
journal = {Cortex},
volume = {136},
pages = {28--40},
publisher = {Elsevier Ltd},
abstract = {Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18–35 yrs) and older (n = 48; aged 55–78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cortex.2020.12.003

Mikael Rubin; Michael J. Telch

Pupillary response to affective voices: Physiological responsivity and posttraumatic stress disorder Journal Article

In: Journal of Traumatic Stress, vol. 34, no. 1, pp. 182–189, 2021.

Abstract | Links | BibTeX

@article{Rubin2021a,
title = {Pupillary response to affective voices: Physiological responsivity and posttraumatic stress disorder},
author = {Mikael Rubin and Michael J. Telch},
doi = {10.1002/jts.22574},
year = {2021},
date = {2021-02-01},
journal = {Journal of Traumatic Stress},
volume = {34},
number = {1},
pages = {182--189},
abstract = {Posttraumatic stress disorder (PTSD) is related to dysfunctional emotional processing, thus motivating the search for physiological indices that can elucidate this process. Toward this aim, we compared pupillary response patterns in response to angry and fearful auditory stimuli among 99 adults, some with PTSD (n = 14), some trauma-exposed without PTSD (TE; n = 53), and some with no history of trauma exposure (CON; n = 32). We hypothesized that individuals with PTSD would show more pupillary response to angry and fearful auditory stimuli compared to those in the TE and CON groups. Among participants who had experienced a traumatic event, we explored the association between PTSD symptoms and pupillary response; contrary to our prediction, individuals with PTSD displayed the least pupillary response to fearful auditory stimuli compared to those in the TE},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/jts.22574

Ariel Zylberberg

Decision prioritization and causal reasoning in decision hierarchies Book

2021.

Abstract | Links | BibTeX

@book{Zylberberg2021,
title = {Decision prioritization and causal reasoning in decision hierarchies},
author = {Ariel Zylberberg},
doi = {10.1371/journal.pcbi.1009688},
year = {2021},
date = {2021-01-01},
booktitle = {PLoS Computational Biology},
volume = {17},
number = {12},
pages = {1--39},
abstract = {From cooking a meal to finding a route to a destination, many real-life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with 10^7 latent states), participants were able to plan efficiently in the task. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying human ability to reason over decision hierarchies.},
keywords = {},
pubstate = {published},
tppubtype = {book}
}

  • doi:10.1371/journal.pcbi.1009688

Kristin Marie Zimmermann; Kirsten Daniela Schmidt; Franziska Gronow; Jens Sommer; Frank Leweke; Andreas Jansen

Seeing things differently: Gaze shapes neural signal during mentalizing according to emotional awareness Journal Article

In: NeuroImage, vol. 238, pp. 1–14, 2021.

Abstract | Links | BibTeX

@article{Zimmermann2021,
title = {Seeing things differently: Gaze shapes neural signal during mentalizing according to emotional awareness},
author = {Kristin Marie Zimmermann and Kirsten Daniela Schmidt and Franziska Gronow and Jens Sommer and Frank Leweke and Andreas Jansen},
doi = {10.1016/j.neuroimage.2021.118223},
year = {2021},
date = {2021-01-01},
journal = {NeuroImage},
volume = {238},
pages = {1--14},
publisher = {Elsevier Inc.},
abstract = {Studies on social cognition often use complex visual stimuli to assess neural processes attributed to abilities like “mentalizing” or “Theory of Mind” (ToM). During the processing of these stimuli, eye gaze, however, shapes neural signal patterns. Individual differences in neural operations on social cognition may therefore be obscured if individuals' gaze behavior differs systematically. These obstacles can be overcome by the combined analysis of neural signal and natural viewing behavior. Here, we combined functional magnetic resonance imaging (fMRI) with eye-tracking to examine effects of unconstrained gaze on neural ToM processes in healthy individuals with differing levels of emotional awareness, i.e. alexithymia. First, as previously described for emotional tasks, people with higher alexithymia levels look less at eyes in both ToM and task-free viewing contexts. Further, we find that neural ToM processes are not affected by individual differences in alexithymia per se. Instead, depending on alexithymia levels, gaze on critical stimulus aspects reversely shapes the signal in medial prefrontal cortex (MPFC) and anterior temporoparietal junction (TPJ) as distinct nodes of the ToM system. These results emphasize that natural selective attention affects fMRI patterns well beyond the visual system. Our study implies that, whenever using a task with multiple degrees of freedom in scan paths, ignoring the latter might obscure important conclusions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuroimage.2021.118223

Yijing Zhuang; Li Gu; Jingchang Chen; Zixuan Xu; Lily Y. L. Chan; Lei Feng; Qingqing Ye; Shenglan Zhang; Jin Yuan; Jinrong Li

The integration of eye tracking responses for the measurement of contrast sensitivity: A proof of concept study Journal Article

In: Frontiers in Neuroscience, vol. 15, pp. 710578, 2021.

Abstract | Links | BibTeX

@article{Zhuang2021b,
title = {The integration of eye tracking responses for the measurement of contrast sensitivity: A proof of concept study},
author = {Yijing Zhuang and Li Gu and Jingchang Chen and Zixuan Xu and Lily Y. L. Chan and Lei Feng and Qingqing Ye and Shenglan Zhang and Jin Yuan and Jinrong Li},
doi = {10.3389/fnins.2021.710578},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Neuroscience},
volume = {15},
pages = {710578},
abstract = {Contrast sensitivity (CS) is important when assessing functional vision. However, current techniques for assessing CS are not suitable for young children or non-verbal individuals because they require reliable, subjective perceptual reports. This study explored the feasibility of applying eye tracking technology to quantify CS as a first step toward developing a testing paradigm that will not rely on observers' behavioral or language abilities. Using a within-subject design, 27 healthy young adults completed CS measures for three spatial frequencies with best-corrected vision and lens-induced optical blur. Monocular CS was estimated using a five-alternative, forced-choice grating detection task. Thresholds were measured using eye movement responses and conventional key-press responses. CS measured using eye movements compared well with results obtained using key-press responses [Pearson's r (best-corrected) = 0.966, P < 0.001]. Good test–retest variability was evident for the eye-movement-based measures (Pearson's r = 0.916, P < 0.001) with a coefficient of repeatability of 0.377 log CS across different days. This study provides a proof of concept that eye tracking can be used to automatically record eye gaze positions and accurately quantify human spatial vision. Future work will update this paradigm by incorporating the preferential looking technique into the eye tracking methods, optimizing the CS sampling algorithm and adapting the methodology to broaden its use on infants and non-verbal individuals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3389/fnins.2021.710578
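
The test–retest statistics reported in this entry (a Pearson correlation between sessions and a coefficient of repeatability in log CS units) can be illustrated with a short sketch. The sketch assumes the common Bland–Altman definition of the coefficient of repeatability (1.96 times the standard deviation of the within-observer differences); the arrays of log CS thresholds are invented for illustration and are not the study's data.

import numpy as np

# Hypothetical log contrast-sensitivity thresholds for the same observers,
# measured with the eye-movement method on two different days.
session1 = np.array([1.52, 1.61, 1.48, 1.70, 1.55, 1.63])
session2 = np.array([1.49, 1.66, 1.52, 1.64, 1.58, 1.60])

# Test-retest association between the two sessions.
pearson_r = np.corrcoef(session1, session2)[0, 1]

# Bland-Altman coefficient of repeatability: 1.96 x SD of the paired
# differences (a common definition; the paper's exact computation may differ).
cor = 1.96 * np.std(session2 - session1, ddof=1)

print(f"Pearson r = {pearson_r:.3f}, coefficient of repeatability = {cor:.3f} log CS")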

Ran Zhuang; Yanyan Tu; Xiangzhen Wang; Yanju Ren; Richard A. Abrams

Contributions of gains and losses to attentional capture and disengagement: evidence from the gap paradigm Journal Article

In: Experimental Brain Research, vol. 239, no. 11, pp. 3381–3395, 2021.

Abstract | Links | BibTeX

@article{Zhuang2021a,
title = {Contributions of gains and losses to attentional capture and disengagement: evidence from the gap paradigm},
author = {Ran Zhuang and Yanyan Tu and Xiangzhen Wang and Yanju Ren and Richard A. Abrams},
doi = {10.1007/s00221-021-06210-9},
year = {2021},
date = {2021-01-01},
journal = {Experimental Brain Research},
volume = {239},
number = {11},
pages = {3381--3395},
publisher = {Springer Berlin Heidelberg},
abstract = {It is known that movements of visual attention are influenced by features in a scene, such as colors, that are associated with value or with loss. The present study examined the detailed nature of these attentional effects by employing the gap paradigm—a technique that has been used to separately reveal changes in attentional capture and shifting, and changes in attentional disengagement. In four experiments, participants either looked toward or away from stimuli with colors that had been associated either with gains or with losses. We found that participants were faster to look to colors associated with gains and slower to look away from them, revealing effects of gains on both attentional capture and attentional disengagement. On the other hand, participants were both slower to look to features associated with loss, and faster to look away from such features. The pattern of results suggested, however, that the latter finding was not due to more rapid disengagement from loss-associated colors, but instead to more rapid shifting of attention away from such colors. Taken together, the results reveal a complex pattern of effects of gains and losses on the disengagement, capture, and shifting of visual attention, revealing a remarkable flexibility of the attention system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-021-06210-9

Qian Zhuang; Xiaoxiao Zheng; Benjamin Becker; Wei Lei; Xiaolei Xu; Keith M. Kendrick

Intranasal vasopressin like oxytocin increases social attention by influencing top-down control, but additionally enhances bottom-up control Journal Article

In: Psychoneuroendocrinology, vol. 133, pp. 105412, 2021.

Abstract | Links | BibTeX

@article{Zhuang2021,
title = {Intranasal vasopressin like oxytocin increases social attention by influencing top-down control, but additionally enhances bottom-up control},
author = {Qian Zhuang and Xiaoxiao Zheng and Benjamin Becker and Wei Lei and Xiaolei Xu and Keith M. Kendrick},
doi = {10.1016/j.psyneuen.2021.105412},
year = {2021},
date = {2021-01-01},
journal = {Psychoneuroendocrinology},
volume = {133},
pages = {105412},
publisher = {Elsevier Ltd},
abstract = {The respective roles of the neuropeptides arginine vasopressin (AVP) and oxytocin (OXT) in modulating social cognition and for therapeutic intervention in autism spectrum disorder have not been fully established. In particular, while numerous studies have demonstrated effects of oxytocin in promoting social attention, the role of AVP has not been examined. The present study employed a randomized, double-blind, placebo (PLC)-controlled between-subject design to explore the social- and emotion-specific effects of AVP on both bottom-up and top-down attention processing with a validated emotional anti-saccade eye-tracking paradigm in 80 healthy male subjects (PLC = 40},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.psyneuen.2021.105412

Yikang Zhu; Lihua Xu; Wenzheng Wang; Qian Guo; Shan Chen; Caidi Zhang; Tianhong Zhang; Xiaochen Hu; Paul Enck; Chunbo Li; Jianhua Sheng; Jijun Wang

Gender differences in attentive bias during social information processing in schizophrenia: An eye-tracking study Journal Article

In: Asian Journal of Psychiatry, vol. 66, pp. 1–6, 2021.

Abstract | Links | BibTeX

@article{Zhu2021b,
title = {Gender differences in attentive bias during social information processing in schizophrenia: An eye-tracking study},
author = {Yikang Zhu and Lihua Xu and Wenzheng Wang and Qian Guo and Shan Chen and Caidi Zhang and Tianhong Zhang and Xiaochen Hu and Paul Enck and Chunbo Li and Jianhua Sheng and Jijun Wang},
doi = {10.1016/j.ajp.2021.102871},
year = {2021},
date = {2021-01-01},
journal = {Asian Journal of Psychiatry},
volume = {66},
pages = {1--6},
publisher = {Elsevier B.V.},
abstract = {Interpersonal communication is a specific scenario in which patients with psychiatric symptoms may manifest different behavioral patterns due to psychopathology. This was a pilot study using eye-tracking technology to investigate attentive bias during social information processing in schizophrenia. We enrolled 39 patients with schizophrenia from Shanghai Mental Health Center and 42 age-, gender- and education-matched healthy controls. The experiment was a free-viewing task, in which pictures depicting three degrees of interpersonal communication were shown. We used two measures: 1) initial fixation duration, 2) total gaze duration. The Positive and Negative Syndrome Scale (PANSS) was used to determine symptom severity. The ratio of first fixation duration for pictures of communicating vs. non-communicating persons was significantly lower in patients than in controls (Mann-Whitney U = 512},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ajp.2021.102871

Ruomeng Zhu; Mateo Obregón; Hamutal Kreiner; Richard Shillcock

Small temporal asynchronies between the two eyes in binocular reading: Crosslinguistic data and the implications for ocular prevalence Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 7, pp. 3035–3045, 2021.

Abstract | Links | BibTeX

@article{Zhu2021,
title = {Small temporal asynchronies between the two eyes in binocular reading: Crosslinguistic data and the implications for ocular prevalence},
author = {Ruomeng Zhu and Mateo Obregón and Hamutal Kreiner and Richard Shillcock},
doi = {10.3758/s13414-021-02286-1},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {7},
pages = {3035--3045},
publisher = {Attention, Perception, & Psychophysics},
abstract = {We investigated small temporal nonalignments between the two eyes' fixations in the reading of English and Chinese. We define nine different patterns of asynchrony and report their spatial distribution across the screen of text. We interpret them in terms of their implications for ocular prevalence—prioritizing the input from one eye over the input from the other eye in higher perception/cognition, even when binocular fusion has occurred. The data are strikingly similar across the two very different orthographies. Asynchronies, in which one eye begins the fixation earlier and/or ends it later, occur most frequently in the hemifield corresponding to that eye. We propose that such small asynchronies cue higher processing to prioritize the input from that eye, during and after binocular fusion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-021-02286-1

Ying Joey Zhou; Luca Iemi; Jan-Mathijs Schoffelen; Floris P. Lange; Saskia Haegens

Alpha oscillations shape sensory representation and perceptual sensitivity Journal Article

In: Journal of Neuroscience, vol. 41, no. 46, pp. 1–43, 2021.

Abstract | Links | BibTeX

@article{Zhou2021i,
title = {Alpha oscillations shape sensory representation and perceptual sensitivity},
author = {Ying Joey Zhou and Luca Iemi and Jan-Mathijs Schoffelen and Floris P. Lange and Saskia Haegens},
doi = {10.1523/jneurosci.1114-21.2021},
year = {2021},
date = {2021-01-01},
journal = {Journal of Neuroscience},
volume = {41},
number = {46},
pages = {1--43},
abstract = {Alpha activity (8–14 Hz) is the dominant rhythm in the awake brain and is thought to play an important role in setting the internal state of the brain. Previous work has associated states of decreased alpha power with enhanced neural excitability. However, evidence is mixed on whether and how such excitability enhancement modulates sensory signals of interest versus noise differently, and what, if any, are the consequences for subsequent perception. Here, human subjects (male and female) performed a visual detection task in which we manipulated their decision criteria in a blockwise manner. Although our manipulation led to substantial criterion shifts, these shifts were not reflected in prestimulus alpha band changes. Rather, lower prestimulus alpha power in occipital-parietal areas improved perceptual sensitivity and enhanced information content decodable from neural activity patterns. Additionally, oscillatory alpha phase immediately before stimulus presentation modulated accuracy. Together, our results suggest that alpha band dynamics modulate sensory signals of interest more strongly than noise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1523/jneurosci.1114-21.2021

Yan Bang Zhou; Qiang Li; Hong Zhi Liu

Visual attention and time preference reversals Journal Article

In: Judgment and Decision Making, vol. 16, no. 4, pp. 1010–1038, 2021.

Abstract | BibTeX

@article{Zhou2021g,
title = {Visual attention and time preference reversals},
author = {Yan Bang Zhou and Qiang Li and Hong Zhi Liu},
year = {2021},
date = {2021-01-01},
journal = {Judgment and Decision Making},
volume = {16},
number = {4},
pages = {1010--1038},
abstract = {Time preference reversal refers to systematic inconsistencies between preferences and bids for intertemporal options. From the two eye-tracking studies (N1 = 60},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xiaomei Zhou; Shruti Vyas; Jinbiao Ning; Margaret C. Moulson

Naturalistic face learning in infants and adults Journal Article

In: Psychological Science, pp. 1–17, 2021.

Abstract | Links | BibTeX

@article{Zhou2021f,
title = {Naturalistic face learning in infants and adults},
author = {Xiaomei Zhou and Shruti Vyas and Jinbiao Ning and Margaret C. Moulson},
doi = {10.1177/09567976211030630},
year = {2021},
date = {2021-01-01},
journal = {Psychological Science},
pages = {1--17},
abstract = {Everyday face recognition presents a difficult challenge because faces vary naturally in appearance as a result of changes in lighting, expression, viewing angle, and hairstyle. We know little about how humans develop the ability to learn faces despite natural facial variability. In the current study, we provide the first examination of attentional mechanisms underlying adults' and infants' learning of naturally varying faces. Adults (n = 48) and 6- to 12-month-old infants (n = 48) viewed videos of models reading a storybook; the facial appearance of these models was either high or low in variability. Participants then viewed the learned face paired with a novel face. Infants showed adultlike prioritization of face over nonface regions; both age groups fixated the face region more in the high- than low-variability condition. Overall, however, infants showed less ability to resist contextual distractions during learning, which potentially contributed to their lack of discrimination between the learned and novel faces. Mechanisms underlying face learning across natural variability are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/09567976211030630

Wei Zhou; Aiping Wang; Ming Yan

Eye movements and the perceptual span among skilled Uighur readers Journal Article

In: Vision Research, vol. 182, pp. 20–26, 2021.

Abstract | Links | BibTeX

@article{Zhou2021e,
title = {Eye movements and the perceptual span among skilled Uighur readers},
author = {Wei Zhou and Aiping Wang and Ming Yan},
doi = {10.1016/j.visres.2021.01.005},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {182},
pages = {20--26},
publisher = {Elsevier Ltd},
abstract = {In the present study, we explored the perceptual span of skilled Uighur readers during their natural reading of sentences. The Uighur script is based on Arabic letters and it runs horizontally from right to left, offering a test to understand the effect of text direction. We utilized the gaze contingent moving window paradigm, in which legible text was provided only within a window that moved in synchrony with readers' eyes while all other letters were masked. The size of the window was manipulated systematically to determine the smallest size that allowed readers to show normal reading behaviors. Comparisons of window conditions with the baseline condition showed that the Uighur readers reached asymptotic performance in reading speed and gaze duration when windows revealed at least five letters to the right and twelve letters to the left of the currently fixated one. The present study is the first to document the size of the perceptual span in a horizontally leftwards running script. Cross-script comparisons with prior findings suggest that the size of the perceptual span for a certain writing system is likely influenced by its reading direction and visual complexity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2021.01.005
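
The gaze-contingent moving-window manipulation described in this entry can be sketched as a simple masking function: only the letters inside an asymmetric window around the currently fixated character stay legible, and everything else is replaced by a mask. The function below is a toy illustration (the window sizes, mask character, and the before/after naming are placeholders); a real experiment would apply this logic inside the eye tracker's display loop on every gaze sample.

def moving_window(text: str, fixated_index: int, window_before: int,
                  window_after: int, mask: str = "x") -> str:
    """Mask every letter outside the window around the fixated character.

    window_before/window_after count characters in string order; for a
    right-to-left script such as Uighur they would map onto the rightward
    and leftward extents of the perceptual span, respectively.
    """
    start = max(0, fixated_index - window_before)
    end = min(len(text), fixated_index + window_after + 1)
    return "".join(
        ch if (start <= i < end or ch == " ") else mask  # spaces stay visible
        for i, ch in enumerate(text)
    )

# Toy example with the asymmetry reported here: 5 letters on one side of
# fixation and 12 on the other (values illustrative, Latin text for clarity).
print(moving_window("the quick brown fox jumps over the lazy dog",
                    fixated_index=16, window_before=5, window_after=12))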

Shou Han Zhou; Gerard Loughnane; Redmond O'connell; Mark A. Bellgrove; Trevor T. J. Chong

Distractors selectively modulate electrophysiological markers of perceptual decisions Journal Article

In: Journal of Cognitive Neuroscience, vol. 33, no. 6, pp. 1020–1031, 2021.

Abstract | Links | BibTeX

@article{Zhou2021d,
title = {Distractors selectively modulate electrophysiological markers of perceptual decisions},
author = {Shou Han Zhou and Gerard Loughnane and Redmond O'connell and Mark A. Bellgrove and Trevor T. J. Chong},
doi = {10.1162/jocn_a_01703},
year = {2021},
date = {2021-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {33},
number = {6},
pages = {1020--1031},
abstract = {Current models of perceptual decision-making assume that choices are made after evidence in favor of an alternative accumulates to a given threshold. This process has recently been revealed in human EEG recordings, but an unresolved issue is how these neural mechanisms are modulated by competing, yet task-irrelevant, stimuli. In this study, we tested 20 healthy participants on a motion direction discrimination task. Participants monitored two patches of random dot motion simultaneously presented on either side of fixation for periodic changes in an upward or downward motion, which could occur equiprobably in either patch. On a random 50% of trials, these periods of coherent vertical motion were accompanied by simultaneous task-irrelevant, horizontal motion in the contralateral patch. Our data showed that these distractors selectively increased the amplitude of early target selection responses over scalp sites contralateral to the distractor stimulus, without impacting on responses ipsilateral to the distractor. Importantly, this modulation mediated a decrement in the subsequent buildup rate of a neural signature of evidence accumulation and accounted for a slowing of RTs. These data offer new insights into the functional interactions between target selection and evidence accumulation signals, and their susceptibility to task-irrelevant distractors. More broadly, these data neurally inform future models of perceptual decision-making by highlighting the influence of early processing of competing stimuli on the accumulation of perceptual evidence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/jocn_a_01703

Peng Zhou; Jiawei Shi; Likan Zhan

Real-time comprehension of garden-path constructions by preschoolers: A Mandarin perspective Journal Article

In: Applied Psycholinguistics, vol. 42, no. 1, pp. 181–205, 2021.

Abstract | Links | BibTeX

@article{Zhou2021c,
title = {Real-time comprehension of garden-path constructions by preschoolers: A Mandarin perspective},
author = {Peng Zhou and Jiawei Shi and Likan Zhan},
doi = {10.1017/S0142716420000697},
year = {2021},
date = {2021-01-01},
journal = {Applied Psycholinguistics},
volume = {42},
number = {1},
pages = {181--205},
abstract = {The present study investigated whether 4- and 5-year-old Mandarin-speaking children are able to process garden-path constructions in real time when the working memory burden associated with revision and reanalysis is kept to a minimum. In total, 25 4-year-olds, 25 5-year-olds, and 30 adults were tested using the visual-world paradigm of eye tracking. The obtained eye gaze patterns reflect that the 4- and 5-year-olds, like the adults, committed to an initial misinterpretation and later successfully revised their initial interpretation. The findings show that preschool children are able to revise and reanalyze their initial commitment and then arrive at the correct interpretation using the later-encountered linguistic information when processing the garden-path constructions in the current study. The findings also suggest that although the 4-year-olds successfully processed the garden-path constructions in real time, they were not as effective as the 5-year-olds and the adults in revising and reanalyzing their initial mistaken interpretation when later encountering the critical linguistic cue. Taken together, our findings call for a fine-grained model of child sentence processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1017/S0142716420000697

Hong Zhou; Xia Wang; Di Ma; Yanyan Jiang; Fan Li; Yunchuang Sun; Jing Chen; Wei Sun; Elmar H. Pinkhardt; Bernhard Landwehrmeyer; Albert Ludolph; Lin Zhang; Guiping Zhao; Zhaoxia Wang

The differential diagnostic value of a battery of oculomotor evaluation in Parkinson's Disease and Multiple System Atrophy Journal Article

In: Brain and Behavior, vol. 11, no. 7, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Zhou2021a,
title = {The differential diagnostic value of a battery of oculomotor evaluation in Parkinson's Disease and Multiple System Atrophy},
author = {Hong Zhou and Xia Wang and Di Ma and Yanyan Jiang and Fan Li and Yunchuang Sun and Jing Chen and Wei Sun and Elmar H. Pinkhardt and Bernhard Landwehrmeyer and Albert Ludolph and Lin Zhang and Guiping Zhao and Zhaoxia Wang},
doi = {10.1002/brb3.2184},
year = {2021},
date = {2021-01-01},
journal = {Brain and Behavior},
volume = {11},
number = {7},
pages = {1--10},
abstract = {Introduction: Clinical diagnosis of Parkinsonism is still challenging, and the diagnostic biomarkers of Multiple System Atrophy (MSA) are scarce. This study aimed to investigate the diagnostic value of combined eye movement tests in patients with Parkinson's disease (PD) and those with MSA. Methods: We enrolled 96 PD patients, 33 MSA patients (18 with MSA-P and 15 with MSA-C), and 40 healthy controls who had their horizontal ocular movements measured. The multiple-step pattern of memory-guided saccade (MGS), the hypometria/hypermetria of the reflexive saccade, the abnormal saccade in smooth pursuit movement (SPM), gaze-evoked nystagmus, and square-wave jerks in the gaze-holding test were qualitatively analyzed. The reflexive saccadic parameters and gain of SPM were also quantitatively analyzed. Results: The MGS test showed that patients with either diagnosis had a significantly higher incidence of the multiple-step pattern compared with controls (68.6% and 65.2% vs. 2.5%, p < .05, for PD and MSA vs. controls, respectively). The reflexive saccade test showed that MSA patients had a markedly higher incidence of abnormal saccades (63.6%, both hypometria and hypermetria) than PD patients and controls (33.3% and 7.5%, respectively, hypometria) (p < .05). The SPM test showed that PD patients had mildly decreased gain, with 28.1% presenting “saccade intrusions”, and that MSA patients had significantly decreased gain, with 51.5% presenting “catch-up saccades” (p < .05). Only MSA patients showed gaze-evoked nystagmus (24.2%) and square-wave jerks (6.1%) in the gaze-holding test (p < .05). Conclusions: A panel of eye movement tests may help to differentiate PD from MSA. The combined presence of hypometria and hypermetria in saccadic eye movement, the impaired gain of smooth pursuit movement with “catch-up saccades,” gaze-evoked nystagmus, square-wave jerks in the gaze-holding test, and the multiple-step pattern in MGS may provide clues to the diagnosis of MSA.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/brb3.2184

Feng Zhou; X. Jessie Yang; Joost C. F. Winter

Using eye-tracking data to predict situation awareness in real time during takeover transitions in conditionally automated driving Journal Article

In: IEEE Transactions on Intelligent Transportation Systems, pp. 1–12, 2021.

Abstract | Links | BibTeX

@article{Zhou2021,
title = {Using eye-tracking data to predict situation awareness in real time during takeover transitions in conditionally automated driving},
author = {Feng Zhou and X. Jessie Yang and Joost C. F. Winter},
doi = {10.1109/TITS.2021.3069776},
year = {2021},
date = {2021-01-01},
journal = {IEEE Transactions on Intelligent Transportation Systems},
pages = {1--12},
abstract = {Situation awareness (SA) is critical to improving takeover performance during the transition period from automated driving to manual driving. Although many studies measured SA during or after the driving task, few studies have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree ensemble machine learning model, named LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, in order to understand what factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, which further improved the model performance of LightGBM through feature selection. We standardized SA between 0 and 1 by aggregating three performance measures (i.e., placement, distance, and speed estimation of vehicles with regard to the ego-vehicle) of SA in recreating simulated driving scenarios, after 33 participants viewed 32 videos with six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed other selected machine learning models, having a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a 0.719 correlation coefficient between the predicted SA and the ground truth. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our proposed model provided important implications on how to monitor and predict SA in real time in automated driving using eye-tracking data.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1109/TITS.2021.3069776
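
The modelling pipeline this entry describes (gradient-boosted trees on eye-tracking features, explained with SHAP values and evaluated with RMSE, MAE, and a correlation against ground truth) can be roughed out as below. Everything here is a placeholder rather than the authors' setup: the synthetic features, the SA score, and the hyperparameters are invented, the sketch requires the lightgbm and shap packages, and the authors' own code is linked in the abstract.

import numpy as np
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)

# Placeholder eye-tracking features (e.g., fixation counts, dwell times,
# pupil size per takeover video) and a situation-awareness score in [0, 1].
X = rng.normal(size=(500, 6))
y = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 2]
                          + rng.normal(scale=0.3, size=500))))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)
r = np.corrcoef(y_test, pred)[0, 1]
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}, r = {r:.3f}")

# SHAP values attribute each prediction to the input features; averaging
# their absolute values gives a global ranking of feature influence, which
# can then drive feature selection as described in the abstract.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(np.abs(shap_values).mean(axis=0))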

Junming Zheng; Muhammad Waqqas Khan Tarin; Denghui Jiang; Min Li; Jing Ye; Lingyan Chen; Tianyou He; Yushan Zheng

Which ornamental features of bamboo plants will attract the people most? Journal Article

In: Urban Forestry and Urban Greening, vol. 61, pp. 127101, 2021.

Abstract | Links | BibTeX

@article{Zheng2021b,
title = {Which ornamental features of bamboo plants will attract the people most?},
author = {Junming Zheng and Muhammad Waqqas Khan Tarin and Denghui Jiang and Min Li and Jing Ye and Lingyan Chen and Tianyou He and Yushan Zheng},
doi = {10.1016/j.ufug.2021.127101},
year = {2021},
date = {2021-01-01},
journal = {Urban Forestry and Urban Greening},
volume = {61},
pages = {127101},
publisher = {Elsevier GmbH},
abstract = {Plant structure and architecture have a significant influence on how people interpret them. Bamboo plants have highly ornamental attributes, but the traits that attract people the most are still unknown. Therefore, to assess people's preference for the ornamental features of bamboo plants, eye-tracking measures (fixation count, percent of dwell time, pupil size, and saccade amplitude) and a questionnaire survey about subjective preference were collected from ninety college students. The results showed that subjective ratings of stem color, leaf stripes, and stem stripes had a significant positive correlation with the fixation count. The pupil size and saccade amplitude for different ornamental features were not correlated with the subjective ratings. According to a random forest model, fixation count was the most influential aspect affecting subjective ratings. Based on the integrated eye-tracking measures and subjective ratings, we conclude that people prefer ornamental features such as green stems, green stems with irregular yellow stripes or yellow stems with narrow green stripes, leaves with fewer stripes, normal stems, and trees. In addition, people prefer natural traits, for instance, green stems, normal stems, and trees, related to latent conscious belief and evolutionary adaptation. Abnormal traits, such as leaf stripes and stem stripes, attract people's visual attention and interest, increasing the fixation count and the percentage of dwell time. This study has significant implications for landscape experts in the design and maintenance of ornamental bamboo plantations in China as well as in other areas of the world.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ufug.2021.127101
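
The abstract above reports a random-forest analysis in which fixation count emerged as the most influential eye-tracking measure for predicting subjective ratings. A minimal sketch of that kind of feature-importance ranking is given below; the feature names mirror the measures listed in the abstract, but the data and model settings are invented for illustration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical per-stimulus eye-tracking measures and a subjective rating.
feature_names = ["fixation_count", "dwell_time_pct", "pupil_size", "saccade_amplitude"]
X = rng.normal(size=(200, len(feature_names)))
rating = 0.7 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, rating)

# Impurity-based importances rank which measure best predicts the ratings
# (with this synthetic data, fixation_count should come out on top).
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")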

Annie Zheng; Jessica A. Church

A developmental eye tracking investigation of cued task switching performance Journal Article

In: Child Development, vol. 92, no. 4, pp. 1652–1672, 2021.

Abstract | Links | BibTeX

@article{Zheng2021,
title = {A developmental eye tracking investigation of cued task switching performance},
author = {Annie Zheng and Jessica A. Church},
doi = {10.1111/cdev.13478},
year = {2021},
date = {2021-01-01},
journal = {Child Development},
volume = {92},
number = {4},
pages = {1652--1672},
abstract = {Children perform worse than adults on tests of cognitive flexibility, which is a component of executive function. To assess what aspects of a cognitive flexibility task (cued switching) children have difficulty with, investigators tested where eye gaze diverged over age. Eye-tracking was used as a proxy for attention during the preparatory period of each trial in 48 children ages 8–16 years and 51 adults ages 18–27 years. Children fixated more often and longer on the cued rule, and made more saccades between rule and response options. Behavioral performance correlated with gaze location and saccades. Mid-adolescents were similar to adults, supporting the slow maturation of cognitive flexibility. Lower preparatory control and associated lower cognitive flexibility task performance in development may particularly relate to rule processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/cdev.13478

Yi Zhang; Ke Xu; Zhongling Pi; Jiumin Yang

Instructor's position affects learning from video lectures in Chinese context: an eye-tracking study Journal Article

In: Behaviour and Information Technology, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Zhang2021j,
title = {Instructor's position affects learning from video lectures in Chinese context: an eye-tracking study},
author = {Yi Zhang and Ke Xu and Zhongling Pi and Jiumin Yang},
doi = {10.1080/0144929X.2021.1910731},
year = {2021},
date = {2021-01-01},
journal = {Behaviour and Information Technology},
pages = {1--10},
publisher = {Taylor & Francis},
abstract = {Although more and more online courses use video lectures that feature an instructor and slides, there are few specific guidelines for designing these video lectures. This experiment tested whether the instructor should appear on the screen and whether her position on the screen (left, middle, right of the content on the slides) influenced students. Students were randomly assigned to watch one of four video lectures on the topic of sleep. The results showed that the video lectures with an instructor's presence (regardless of position) motivated students more than the video lecture without an instructor presence did. Learning performance and satisfaction were highest when the instructor appeared on the right side of the screen. Furthermore, eye movement data showed that compared to students in all other conditions, students in the middle condition paid more attention to the instructor and less attention to the learning content, and switched more between instructor and learning content. The findings highlight the positive effects of the instructor appearing on the right side of the screen in video lectures with slides.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/0144929X.2021.1910731

Yan-Bo Zhang; Peng-Chong Wang; Yun Ma; Xiang-Yun Yang; Fan-Qiang Meng; Simon A Broadley; Jing Sun; Zhan-Jiang Li

Using eye movements in the dot-probe paradigm to investigate attention bias in illness anxiety disorder Journal Article

In: World Journal of Psychiatry, vol. 11, no. 3, pp. 73–86, 2021.

Abstract | Links | BibTeX

@article{Zhang2021i,
title = {Using eye movements in the dot-probe paradigm to investigate attention bias in illness anxiety disorder},
author = {Yan-Bo Zhang and Peng-Chong Wang and Yun Ma and Xiang-Yun Yang and Fan-Qiang Meng and Simon A Broadley and Jing Sun and Zhan-Jiang Li},
doi = {10.5498/wjp.v11.i3.73},
year = {2021},
date = {2021-01-01},
journal = {World Journal of Psychiatry},
volume = {11},
number = {3},
pages = {73--86},
abstract = {BACKGROUND: Illness anxiety disorder (IAD) is a common, distressing, and debilitating condition with the key feature being a persistent conviction of the possibility of having one or more serious or progressive physical disorders. Because eye movements are guided by visual-spatial attention, eye-tracking technology is a comparatively direct, continuous measure of attention direction and speed when stimuli are oriented. Researchers have tried to identify selective visual attention biases by tracking eye movements within dot-probe paradigms because the dot-probe paradigm can distinguish these attentional biases more clearly. AIM: To examine the association between IAD and biased processing of illness-related information. METHODS: A case-control study design was used to record eye movements of individuals with IAD and healthy controls while participants viewed a set of pictures from four categories (illness-related, socially threatening, positive, and neutral images). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze that was initially fixated on the picture per image category. RESULTS: The eye movement of the participants in the IAD group was characterized by an avoidance bias in initial orienting to illness-related pictures. There was no evidence of individuals with IAD spending significantly more time viewing illness-related images compared with other images. Patients with IAD had an attention bias at the early stage and overall attentional avoidance. In addition, this study found that patients with significant anxiety symptoms showed attention bias in the late stages of attention processing. CONCLUSION: Illness-related information processing biases appear to be a robust feature of IAD and may have an important role in explaining the etiology and maintenance of the disorder.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.5498/wjp.v11.i3.73

Close

Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu

Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction Journal Article

In: British Journal of Educational Technology, vol. 52, no. 2, pp. 606–618, 2021.

@article{Zhang2021k,
title = {Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction},
author = {Xinru Zhang and Zhongling Pi and Chenyu Li and Weiping Hu},
doi = {10.1111/bjet.13045},
year = {2021},
date = {2021-01-01},
journal = {British Journal of Educational Technology},
volume = {52},
number = {2},
pages = {606--618},
abstract = {Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction in online groups. Practitioner Notes What is already known about this topic The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xiaoli Zhang; Julie D. Golomb

Neural representations of covert attention across saccades: Comparing pattern similarity to shifting and holding attention during fixation Journal Article

In: eNeuro, vol. 8, no. 2, pp. 1–19, 2021.

@article{Zhang2021g,
title = {Neural representations of covert attention across saccades: Comparing pattern similarity to shifting and holding attention during fixation},
author = {Xiaoli Zhang and Julie D. Golomb},
doi = {10.1523/ENEURO.0186-20.2021},
year = {2021},
date = {2021-01-01},
journal = {eNeuro},
volume = {8},
number = {2},
pages = {1--19},
abstract = {We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location (“hold attention”) or shifted attention to another location midway through the trial (“shift attention”). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the “retinotopic attention” condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the “spatiotopic attention” condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention “hold” and “shift” signals across different regions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

TianHong Zhang; YingYu Yang; LiHua Xu; XiaoChen Tang; YeGang Hu; Xin Xiong; YanYan Wei; HuiRu Cui; YingYing Tang; HaiChun Liu; Tao Chen; Zhi Liu; Li Hui; ChunBo Li; XiaoLi Guo; JiJun Wang

Inefficient integration during multiple facial processing in pre-morbid and early phases of psychosis Journal Article

In: The World Journal of Biological Psychiatry, pp. 1–13, 2021.

@article{Zhang2021f,
title = {Inefficient integration during multiple facial processing in pre-morbid and early phases of psychosis},
author = {TianHong Zhang and YingYu Yang and LiHua Xu and XiaoChen Tang and YeGang Hu and Xin Xiong and YanYan Wei and HuiRu Cui and YingYing Tang and HaiChun Liu and Tao Chen and Zhi Liu and Li Hui and ChunBo Li and XiaoLi Guo and JiJun Wang},
doi = {10.1080/15622975.2021.2011402},
year = {2021},
date = {2021-01-01},
journal = {The World Journal of Biological Psychiatry},
pages = {1--13},
publisher = {Taylor & Francis},
abstract = {Objectives: We used eye-tracking to evaluate multiple facial context processing and event-related potential (ERP) to evaluate multiple facial recognition in individuals at clinical high risk (CHR) for psychosis. Methods: In total, 173 subjects (83 CHRs and 90 healthy controls [HCs]) were included and their emotion perception performances were assessed. A total of 40 CHRs and 40 well-matched HCs completed an eye-tracking task where they viewed pictures depicting a person in the foreground, presented as context-free, context-compatible, and context-incompatible. During the two-year follow-up, 26 CHRs developed psychosis, including 17 individuals who developed first-episode schizophrenia (FES). Eighteen well-matched HCs were made to complete the face number detection ERP task with image stimuli of one, two, or three faces. Results: Compared to the HC group, the CHR group showed reduced visual attention to contextual processing when viewing multiple faces. With the increasing complexity of contextual faces, the differences in eye-tracking characteristics also increased. In the ERP task, the N170 amplitude decreased with a higher face number in FES patients, while it increased with a higher face number in HCs. Conclusions: Individuals in the very early phase of psychosis showed facial processing deficits with supporting evidence of different scan paths during context processing and disruption of N170 during multiple facial recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Luming Zhang; Zhigeng Pan; Ling Shao

Semi-supervised perception augmentation for aerial photo topologies understanding Journal Article

In: IEEE Transactions on Image Processing, vol. 30, pp. 7803–7814, 2021.

@article{Zhang2021d,
title = {Semi-supervised perception augmentation for aerial photo topologies understanding},
author = {Luming Zhang and Zhigeng Pan and Ling Shao},
doi = {10.1109/TIP.2021.3079820},
year = {2021},
date = {2021-01-01},
journal = {IEEE Transactions on Image Processing},
volume = {30},
pages = {7803--7814},
abstract = {Intelligently understanding the sophisticated topological structures from aerial photographs is a useful technique in aerial image analysis. Conventional methods cannot fulfill this task due to the following challenges: 1) the topology number of an aerial photo increases exponentially with the topology size, which requires a fine-grained visual descriptor to discriminatively represent each topology; 2) identifying visually/semantically salient topologies within each aerial photo in a weakly-labeled context, owing to the unaffordable human resources required for pixel-level annotation; and 3) designing a cross-domain knowledge transfer module to augment aerial photo perception, since multi-resolution aerial photos are taken asynchronously in practice. To handle the above problems, we propose a unified framework to understand aerial photo topologies, focusing on representing each aerial photo by a set of visually/semantically salient topologies based on human visual perception and further employing them for visual categorization. Specifically, we first extract multiple atomic regions from each aerial photo, and thereby graphlets are built to capture each aerial photo topologically. Then, a weakly-supervised ranking algorithm selects a few semantically salient graphlets by seamlessly encoding multiple image-level attributes. Toward a visualizable and perception-aware framework, we construct a gaze shifting path (GSP) by linking the top-ranking graphlets. Finally, we derive the deep GSP representation, and formulate a semi-supervised and cross-domain SVM to partition each aerial photo into multiple categories. The SVM utilizes the global composition from low-resolution counterparts to enhance the deep GSP features from high-resolution aerial photos which are partially-annotated. Extensive visualization results and categorization performance comparisons have demonstrated the competitiveness of our approach.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Li Zhang; Guoli Yan; Valerie Benson

The influence of emotional face distractors on attentional orienting in Chinese children with autism spectrum disorder Journal Article

In: PLoS ONE, vol. 16, no. 5, pp. 1–14, 2021.

@article{Zhang2021c,
title = {The influence of emotional face distractors on attentional orienting in Chinese children with autism spectrum disorder},
author = {Li Zhang and Guoli Yan and Valerie Benson},
doi = {10.1371/journal.pone.0250998},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {5},
pages = {1--14},
abstract = {The current study examined how emotional faces impact on attentional control at both involuntary and voluntary levels in children with and without autism spectrum disorder (ASD). A non-face single target was either presented in isolation or synchronously with emotional face distractors namely angry, happy and neutral faces. ASD and typically developing children made more erroneous saccades towards emotional distractors relative to neutral distractors in parafoveal and peripheral conditions. Remote distractor effects were observed on saccade latency in both groups regardless of distractor type, whereby time taken to initiate an eye movement to the target was longest in central distractor conditions, followed by parafoveal and peripheral distractor conditions. The remote distractor effect was greater for angry faces compared to happy faces in the ASD group. Proportions of failed disengagement trials from central distractors, for the first saccade, were higher in the angry distractor condition compared with the other two distractor conditions in ASD, and this effect was absent for the typical group. Eye movement results suggest difficulties in disengaging from fixated angry faces in ASD. Atypical disengagement from angry faces at the voluntary level could have consequences for the development of higher-level socio-communicative skills in ASD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Fan Zhang; Zhicheng Lin; Yang Zhang; Ming Zhang

Behavioral evidence for attention selection as entrained synchronization without awareness. Journal Article

In: Journal of Experimental Psychology: General, vol. 150, no. 9, pp. 1–12, 2021.

@article{Zhang2021b,
title = {Behavioral evidence for attention selection as entrained synchronization without awareness.},
author = {Fan Zhang and Zhicheng Lin and Yang Zhang and Ming Zhang},
doi = {10.1037/xge0000825},
year = {2021},
date = {2021-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {150},
number = {9},
pages = {1--12},
abstract = {Animal physiological and human neuroimaging studies have established a link between attention and γ-band (30–90 Hz) oscillations and synchronizations. However, a behavioral link between entrained γ-band oscillations and attention has been fraught with technical challenges. In particular, while entrainment at mid-γ band (40–70 Hz) has been claimed to be privileged in evoking attentional modulations without awareness, the effect may be attributed to display artifacts. Here, by exploiting isoluminant chromatic flicker without luminance modulation and not subject to these artifacts, we tested attentional attraction by chromatic flicker too fast to perceive. Awareness of flicker was subjectively and objectively tested with a high-powered design and evaluated with traditional and Bayesian statistics. Across 2 experiments in human participants, we observed—and also replicated—that 30-Hz chromatic flicker outside mid-γ band attracted attention, resulting in a facilitation effect at a 50 ms interstimulus interval (ISI) and an inhibition effect at a 500 ms ISI. The attention test was confirmed to be more sensitive to the cue than the direct cue-localization task was. We further showed that these attention effects were absent for 50-Hz chromatic flicker. These results provide strong direct evidence against a privileged role of mid-γ band in unconscious attention, but are consistent with known cortical responses to chromatic flicker in early visual cortex. Taken together, our findings provide behavioral evidence that entrained synchronization may serve as a mechanism for bottom-up attention selection and that chromatic flicker},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Beizhen Zhang; Janis Ying Ying Kan; Mingpo Yang; Xiaochun Wang; Jiahao Tu; Michael Christopher Dorris

Transforming absolute value to categorical choice in primate superior colliculus during value-based decision making Journal Article

In: Nature Communications, vol. 12, no. 1, pp. 3410, 2021.

@article{Zhang2021a,
title = {Transforming absolute value to categorical choice in primate superior colliculus during value-based decision making},
author = {Beizhen Zhang and Janis Ying Ying Kan and Mingpo Yang and Xiaochun Wang and Jiahao Tu and Michael Christopher Dorris},
doi = {10.1038/s41467-021-23747-z},
year = {2021},
date = {2021-01-01},
journal = {Nature Communications},
volume = {12},
number = {1},
pages = {3410},
publisher = {Springer US},
abstract = {Value-based decision making involves choosing from multiple options with different values. Despite extensive studies on value representation in various brain regions, the neural mechanism for how multiple value options are converted to motor actions remains unclear. To study this, we developed a multi-value foraging task with a varying menu of items in non-human primates using eye movements that dissociates value and choice, and conducted electrophysiological recording in the midbrain superior colliculus (SC). SC neurons encoded “absolute” value, independent of available options, during late fixation. In addition, SC neurons also represented a value threshold, modulated by available options, different from the conventional motor threshold. Electrical stimulation of SC neurons biased choices in a manner predicted by the difference between the value representation and the value threshold. These results reveal a neural mechanism directly transforming absolute values to categorical choices within SC, supporting highly efficient value-based decision making critical for real-world economic behaviors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Paul Zerr; Surya Gayet; Floris Esschert; Mitchel Kappen; Zoril Olah; Stefan Van der Stigchel

The development of retro-cue benefits with extensive practice: Implications for capacity estimation and attentional states in visual working memory Journal Article

In: Memory and Cognition, vol. 49, no. 5, pp. 1036–1049, 2021.

@article{Zerr2021,
title = {The development of retro-cue benefits with extensive practice: Implications for capacity estimation and attentional states in visual working memory},
author = {Paul Zerr and Surya Gayet and Floris Esschert and Mitchel Kappen and Zoril Olah and Stefan Van der Stigchel},
doi = {10.3758/s13421-021-01138-5},
year = {2021},
date = {2021-01-01},
journal = {Memory and Cognition},
volume = {49},
number = {5},
pages = {1036--1049},
publisher = {Memory & Cognition},
abstract = {Accessing the contents of visual short-term memory (VSTM) is compromised by information bottlenecks and visual interference between memorization and recall. Retro-cues, displayed after the offset of a memory stimulus and prior to the onset of a probe stimulus, indicate the test item and improve performance in VSTM tasks. It has been proposed that retro-cues aid recall by transferring information from a high-capacity memory store into visual working memory (multiple-store hypothesis). Alternatively, retro-cues could aid recall by redistributing memory resources within the same (low-capacity) working memory store (single-store hypothesis). If retro-cues provide access to a memory store with a capacity exceeding the set size, then, given sufficient training in the use of the retro-cue, near-ceiling performance should be observed. To test this prediction, 10 observers each performed 12 hours across 8 sessions in a retro-cue change-detection task (40,000+ trials total). The results provided clear support for the single-store hypothesis: retro-cue benefits (difference between a condition with and without retro-cues) emerged after a few hundred trials and then remained constant throughout the testing sessions, consistently improving performance by two items, rather than reaching ceiling performance. Surprisingly, we also observed a general increase in performance throughout the experiment in conditions with and without retro-cues, calling into question the generalizability of change-detection tasks in assessing working memory capacity as a stable trait of an observer (data and materials are available at osf.io/9xr82 and github.com/paulzerr/retrocues). In summary, the present findings suggest that retro-cues increase capacity estimates by redistributing memory resources across memoranda within a low-capacity working memory store.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
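
Capacity in change-detection tasks of this kind is conventionally estimated with Cowan's K; the abstract does not state which estimator the authors used, so the formula below is offered only as the standard reference point. For set size N, hit rate H, and false-alarm rate FA,

K = N \times (H - FA).

For a hypothetical block with N = 6, H = 0.85, and FA = 0.25, this gives K = 6 \times 0.60 = 3.6 items.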

Tao Zeng; Yating Mu; Taoyan Zhu

Structural priming from simple arithmetic to Chinese ambiguous structures: evidence from eye movement Journal Article

In: Cognitive Processing, vol. 22, no. 2, pp. 185–207, 2021.

@article{Zeng2021a,
title = {Structural priming from simple arithmetic to Chinese ambiguous structures: evidence from eye movement},
author = {Tao Zeng and Yating Mu and Taoyan Zhu},
doi = {10.1007/s10339-020-01003-4},
year = {2021},
date = {2021-01-01},
journal = {Cognitive Processing},
volume = {22},
number = {2},
pages = {185--207},
publisher = {Springer Berlin Heidelberg},
abstract = {This article explores the domain generality of hierarchical representation between linguistic and mathematical cognition by adopting the structural priming paradigm in an eye-tracking reading experiment. The experiment investigated whether simple arithmetic equations with high attachment (e.g., (7 + 2) × 3 + 1) or low attachment (e.g., 7 + 2 × 3 + 1) influence language users' interpretation of Chinese ambiguous structures (NP1 + He + NP2 + De + NP3; Quantifier + NP1 + De + NP2; NP1 + Kan/WangZhe + NP2 + AP). On the one hand, behavioral results showed that high-attachment primes led to more high-attachment interpretations, while low-attachment primes led to more low-attachment interpretations. On the other hand, the eye movement data indicated that structural priming greatly helped to reduce dwell time on the ambiguous structure. There were structural priming effects from simple arithmetic to three different structures in Chinese, which provided new evidence of cross-domain priming from simple arithmetic to language. Besides the attachment priming effect at the global level, online sentence integration at the local level was found to be structure-dependent, as indicated by some differences in eye movement measures. Our results have provided some evidence for the Representational Account.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
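
For reference, the two example equations in this abstract resolve differently under standard order of operations, which is what forces the different hierarchical groupings: the high-attachment prime (7 + 2) × 3 + 1 evaluates as 9 × 3 + 1 = 28, whereas the low-attachment prime 7 + 2 × 3 + 1 evaluates as 7 + 6 + 1 = 14.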

Tao Zeng; Wen Mao; Yarong Gao

An eye-tracking study of structural priming from abstract arithmetic to Chinese structure NP1 + You + NP2 + Hen + AP Journal Article

In: Journal of Psycholinguistic Research, pp. 1–26, 2021.

@article{Zeng2021,
title = {An eye-tracking study of structural priming from abstract arithmetic to Chinese structure NP1 + You + NP2 + Hen + AP},
author = {Tao Zeng and Wen Mao and Yarong Gao},
doi = {10.1007/s10936-021-09819-7},
year = {2021},
date = {2021-01-01},
journal = {Journal of Psycholinguistic Research},
pages = {1--26},
publisher = {Springer US},
abstract = {The present study attempted to explore the abstract priming effects from mathematical equations to the Mandarin Chinese structure NP1 + You + NP2 + Hen + AP in an on-line comprehension task, with the aim of identifying the mechanism underlying these effects. The results revealed that compared with baseline priming conditions, participants tended to choose more high-attachment options in high-attachment priming conditions and more low-attachment options in low-attachment priming conditions. This difference reached significance, which provided evidence for a shared structural representation across mathematical and linguistic domains. Additionally, the fixation sequences during arithmetic calculation reflected that these equations were processed hierarchically and could be extracted in parallel instead of being scanned in a sequential left-to-right order. Our results have provided some evidence for the Representational Account.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tania S. Zamuner; Theresa Rabideau; Margarethe Mcdonald; H. Henny Yeung

Developmental change in children's speech processing of auditory and visual cues: An eyetracking study Journal Article

In: Journal of Child Language, pp. 1–25, 2021.

@article{Zamuner2021,
title = {Developmental change in children's speech processing of auditory and visual cues: An eyetracking study},
author = {Tania S. Zamuner and Theresa Rabideau and Margarethe Mcdonald and H. Henny Yeung},
doi = {10.1017/s0305000921000684},
year = {2021},
date = {2021-01-01},
journal = {Journal of Child Language},
pages = {1--25},
abstract = {This study investigates how children aged two to eight years (N = 129) and adults (N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between apparent successes of visual speech processing in young children in visual-looking tasks and apparent difficulties of speech processing in older children from explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only) or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier on /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xinger Yu; Timothy D. Hanks; Joy J. Geng

Attentional guidance and match decisions rely on different template information during visual search Journal Article

In: Psychological Science, pp. 1–16, 2021.

@article{Yu2021,
title = {Attentional guidance and match decisions rely on different template information during visual search},
author = {Xinger Yu and Timothy D. Hanks and Joy J. Geng},
doi = {10.1177/09567976211032225},
year = {2021},
date = {2021-01-01},
journal = {Psychological Science},
pages = {1--16},
abstract = {When searching for a target object, we engage in a continuous “look-identify” cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students ( Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Seng Bum Michael Yoo; Jiaxin Cindy Tu; Benjamin Yost Hayden

Multicentric tracking of multiple agents by anterior cingulate cortex during pursuit and evasion Journal Article

In: Nature Communications, vol. 12, pp. 1–14, 2021.

@article{Yoo2021a,
title = {Multicentric tracking of multiple agents by anterior cingulate cortex during pursuit and evasion},
author = {Seng Bum Michael Yoo and Jiaxin Cindy Tu and Benjamin Yost Hayden},
doi = {10.1038/s41467-021-22195-z},
year = {2021},
date = {2021-01-01},
journal = {Nature Communications},
volume = {12},
pages = {1--14},
publisher = {Springer US},
abstract = {Successful pursuit and evasion require rapid and precise coordination of navigation with adaptive motor control. We hypothesize that the dorsal anterior cingulate cortex (dACC), which communicates bidirectionally with both the hippocampal complex and premotor/motor areas, would serve a mapping role in this process. We recorded responses of dACC ensembles in two macaques performing a joystick-controlled continuous pursuit/evasion task. We find that dACC carries two sets of signals, (1) world-centric variables that together form a representation of the position and velocity of all relevant agents (self, prey, and predator) in the virtual world, and (2) avatar-centric variables, i.e. self-prey distance and angle. Both sets of variables are multiplexed within an overlapping set of neurons. Our results suggest that dACC may contribute to pursuit and evasion by computing and continuously updating a multicentric representation of the unfolding task state, and support the hypothesis that it plays a high-level abstract role in the control of behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Kyung Yoo; Jeongyeol Ahn; Sang-Hun Lee

The confounding effects of eye blinking on pupillometry, and their remedy Journal Article

In: PLoS ONE, vol. 16, no. 12, pp. 1–32, 2021.

@article{Yoo2021,
title = {The confounding effects of eye blinking on pupillometry, and their remedy},
author = {Kyung Yoo and Jeongyeol Ahn and Sang-Hun Lee},
doi = {10.1371/journal.pone.0261463},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {12},
pages = {1--32},
abstract = {Pupillometry, thanks to its strong relationship with cognitive factors and recent advancements in measuring techniques, has become popular among cognitive or neural scientists as a tool for studying the physiological processes involved in mental or neural processes. Despite this growing popularity of pupillometry, the methodological understanding of pupillometry is limited, especially regarding potential factors that may threaten pupillary measurements' validity. Eye blinking can be a factor because it frequently occurs in a manner dependent on many cognitive components and induces a pulse-like pupillary change consisting of constriction and dilation with substantive magnitude and length. We set out to characterize the basic properties of this “blink-locked pupillary response (BPR),” including the shape and magnitude of BPR and their variability across subjects and blinks, as the first step of studying the confounding nature of eye blinking. Then, we demonstrated how the dependency of eye blinking on cognitive factors could confound, via BPR, the pupillary responses that are supposed to reflect the cognitive states of interest. By building a statistical model of how the confounding effects of eye blinking occur, we proposed a probabilistic-inference algorithm of de-confounding raw pupillary measurements and showed that the proposed algorithm selectively removed BPR and enhanced the statistical power of pupillometry experiments. Our findings call for attention to the presence and confounding nature of BPR in pupillometry. The algorithm we developed here can be used as an effective remedy for the confounding effects of BPR on pupillometry.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
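
The remedy proposed in this abstract is a probabilistic-inference algorithm that selectively removes the blink-locked pupillary response; that algorithm is not reproduced here. For comparison, the sketch below shows a much simpler, commonly used preprocessing step: masking blink samples (often reported as zeros or missing values by eye trackers) together with a short padding window around them, and interpolating linearly across the gap. The sample rate and padding values are illustrative assumptions.

# Minimal sketch of conventional blink interpolation; not the de-confounding
# algorithm proposed in the cited paper.
import numpy as np

def interpolate_blinks(pupil, sample_rate_hz=500, pad_ms=100):
    # pupil: 1-D array of pupil-size samples; blinks assumed to appear as
    # zeros or non-finite values in the trace.
    pupil = np.asarray(pupil, dtype=float)
    bad = ~np.isfinite(pupil) | (pupil <= 0)
    pad = int(pad_ms * sample_rate_hz / 1000)
    for i in np.flatnonzero(bad):
        bad[max(i - pad, 0):i + pad + 1] = True  # widen the mask around each blink
    good = np.flatnonzero(~bad)
    if good.size == 0:
        return pupil  # nothing usable to interpolate from
    out = pupil.copy()
    out[bad] = np.interp(np.flatnonzero(bad), good, pupil[good])
    return out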

Panpan Yao; Adrian Staub; Xingshan Li

Predictability eliminates neighborhood effects during Chinese sentence reading Journal Article

In: Psychonomic Bulletin & Review, pp. 1–10, 2021.

@article{Yao2021d,
title = {Predictability eliminates neighborhood effects during Chinese sentence reading},
author = {Panpan Yao and Adrian Staub and Xingshan Li},
doi = {10.3758/s13423-021-01966-1},
year = {2021},
date = {2021-01-01},
journal = {Psychonomic Bulletin & Review},
pages = {1--10},
publisher = {Psychonomic Bulletin & Review},
abstract = {Previous research has demonstrated effects of both orthographic neighborhood size and neighbor frequency in word recognition in Chinese. A large neighborhood—where neighborhood size is defined by the number of words that differ from a target word by a single character—appears to facilitate word recognition, while the presence of a higher-frequency neighbor has an inhibitory effect. The present study investigated modulation of these effects by a word's predictability in context. In two eye-movement experiments, the predictability of a target word in each sentence was manipulated. Target words differed in their neighborhood size (Experiment 1) and in whether they had a higher-frequency neighbor (Experiment 2). The study replicated the previously observed effects of neighborhood size and neighbor frequency when the target word was unpredictable, but in both experiments neighborhood effects were absent when the target was predictable. These results suggest that when a word is preactivated by context, the activation of its neighbors may be diminished to such an extent that these neighbors do not effectively compete for selection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
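
The neighborhood definition used in this abstract, the number of words that differ from the target by a single character, can be made concrete with a small sketch; the word list and the equal-length restriction are assumptions made for illustration.

# Minimal sketch of the neighborhood-size count described in the abstract:
# neighbors are same-length words that differ from the target in exactly one
# character position. 'lexicon' is a hypothetical word list.
def neighborhood_size(target, lexicon):
    return sum(
        1
        for word in lexicon
        if len(word) == len(target)
        and word != target
        and sum(a != b for a, b in zip(word, target)) == 1
    )

For a two-character target, for example, any listed two-character word sharing exactly one character in the same position would count as a neighbor.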

Panpan Yao; Reem Alkhammash; Xingshan Li

Plausibility and syntactic reanalysis in processing novel noun-noun combinations during Chinese reading: evidence from native and non-native speakers Journal Article

In: Scientific Studies of Reading, pp. 1–19, 2021.

@article{Yao2021b,
title = {Plausibility and syntactic reanalysis in processing novel noun-noun combinations during Chinese reading: evidence from native and non-native speakers},
author = {Panpan Yao and Reem Alkhammash and Xingshan Li},
doi = {10.1080/10888438.2021.2020796},
year = {2021},
date = {2021-01-01},
journal = {Scientific Studies of Reading},
pages = {1--19},
publisher = {Routledge},
abstract = {We aimed to tackle the question of the time course of the plausibility effect in the online processing of Chinese nouns in temporarily ambiguous structures, and whether L2ers can immediately use the plausibility information generated from classifier-noun associations in analyzing ambiguous structures. Two eye-tracking experiments were conducted to explore how native Chinese speakers (Experiment 1) and high-proficiency Dutch-Chinese learners (Experiment 2) online process 4-character novel noun-noun combinations in Chinese. In each pair of nominal phrases (Numeral + Classifier + Noun1 + Noun2), the plausibility of Classifier-Noun1 varied (plausible vs. implausible) while the whole nominal phrases were always plausible. Results showed that the plausibility of Classifier-Noun1 associations had an immediate effect on Noun1, and a reversed effect on Noun2 for both groups of participants. These findings indicated that plausibility plays an immediate role in incremental semantic integration during online processing of Chinese. Similar to native Chinese speakers, high-proficiency L2ers can also use the plausibility information of classifier-noun associations in syntactic reanalysis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/10888438.2021.2020796

Beier Yao; Martin Rolfs; Christopher McLaughlin; Emily L. Isenstein; Sylvia B. Guillory; Hannah Grosman; Deborah A. Kashy; Jennifer H. Foss-Feig; Katharine N. Thakkar

Oculomotor corollary discharge signaling is related to repetitive behavior in children with autism spectrum disorder Journal Article

In: Journal of Vision, vol. 21, no. 8, pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Yao2021,
title = {Oculomotor corollary discharge signaling is related to repetitive behavior in children with autism spectrum disorder},
author = {Beier Yao and Martin Rolfs and Christopher McLaughlin and Emily L. Isenstein and Sylvia B. Guillory and Hannah Grosman and Deborah A. Kashy and Jennifer H. Foss-Feig and Katharine N. Thakkar},
doi = {10.1167/jov.21.8.9},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {8},
pages = {1--20},
abstract = {Corollary discharge (CD) signals are “copies” of motor signals sent to sensory regions that allow animals to adjust sensory consequences of self-generated actions. Autism spectrum disorder (ASD) is characterized by sensory and motor deficits, which may be underpinned by altered CD signaling. We evaluated oculomotor CD using the blanking task, which measures the influence of saccades on visual perception, in 30 children with ASD and 35 typically developing (TD) children. Participants were instructed to make a saccade to a visual target. Upon saccade initiation, the presaccadic target disappeared and reappeared to the left or right of the original position. Participants indicated the direction of},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/jov.21.8.9

Jiumin Yang; Yi Zhang; Zhongling Pi; Yaohui Xie

Students' achievement motivation moderates the effects of interpolated pre-questions on attention and learning from video lectures Journal Article

In: Learning and Individual Differences, vol. 91, pp. 1–9, 2021.

Abstract | Links | BibTeX

@article{Yang2021,
title = {Students' achievement motivation moderates the effects of interpolated pre-questions on attention and learning from video lectures},
author = {Jiumin Yang and Yi Zhang and Zhongling Pi and Yaohui Xie},
doi = {10.1016/j.lindif.2021.102055},
year = {2021},
date = {2021-01-01},
journal = {Learning and Individual Differences},
volume = {91},
pages = {1--9},
publisher = {Elsevier Inc.},
abstract = {The study tested achievement motivation as a moderator of the relationship between pre-interpolated questions and learning from video lectures. Participants were 63 university students who were selected from a group of 123 volunteers, based on having high (n = 31) or low (n = 32) scores on the Achievement Motivation Scale. The students in each group were randomly assigned to view an instructional video with or without interpolated pre-questions. Visual attention was assessed by eye tracking measures of fixation duration and first time to fixation, and learning performance was assessed by tests of retention and transfer. The results of ANCOVAs showed that after controlling for prior knowledge, students with high achievement motivation benefitted more from the pre-questions than students with low achievement motivation. Among students with high achievement motivation, there was longer fixation duration to the learning materials and better transfer in the pre-questions condition than in the no-questions condition, but these differences based on video type were not apparent among students with low achievement. The findings have practical implications: interpolated pre-questions in video learning appear to be helpful for highly motivated students, and the benefit is seen in transfer rather than retention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.lindif.2021.102055

Victoria Yaneva; Brian E. Clauser; Amy Morales; Miguel Paniagua

Using eye‐tracking data as part of the validity argument for multiple‐choice questions: A demonstration Journal Article

In: Journal of Educational Measurement, pp. 1–23, 2021.

Abstract | Links | BibTeX

@article{Yaneva2021,
title = {Using eye‐tracking data as part of the validity argument for multiple‐choice questions: A demonstration},
author = {Victoria Yaneva and Brian E. Clauser and Amy Morales and Miguel Paniagua},
doi = {10.1111/jedm.12304},
year = {2021},
date = {2021-01-01},
journal = {Journal of Educational Measurement},
pages = {1--23},
abstract = {Eye-tracking technology can create a record of the location and duration of visual fixations as a test-taker reads test questions. Although the cognitive process the test-taker is using cannot be directly observed, eye-tracking data can support inferences about these unobserved cognitive processes. This type of information has the potential to support improved test design and to contribute to an overall validity argument for the inferences and uses made based on test scores. Although several authors have referred to the potential usefulness of eye-tracking data, there are relatively few published studies that provide examples of that use. In this paper, we report the results of an eye-tracking study designed to evaluate how the presence of the options in multiple-choice questions impacts the way medical students responded to questions designed to evaluate clinical reasoning. Examples of the types of data that can be extracted are presented. We then discuss the implications of these results for evaluating the validity of inferences made based on the type of items used in this study.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/jedm.12304

Chuyao Yan; Tao He; Zhiguo Wang

Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps Journal Article

In: Psychonomic Bulletin & Review, vol. 28, no. 4, pp. 1243–1251, 2021.

Abstract | Links | BibTeX

@article{Yan2021,
title = {Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps},
author = {Chuyao Yan and Tao He and Zhiguo Wang},
doi = {10.3758/s13423-021-01893-1},
year = {2021},
date = {2021-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {28},
number = {4},
pages = {1243--1251},
publisher = {Psychonomic Bulletin & Review},
abstract = {How does the brain maintain spatial attention despite the retinal displacement of objects by saccades? A possible solution is to use the vector of an upcoming saccade to compensate for the shift of objects on eye-centered (retinotopic) brain maps. In support of this hypothesis, previous studies have revealed attentional effects at the future retinal locus of an attended object, just before the onset of saccades. A critical yet unresolved theoretical issue is whether predictively remapped attentional effects would persist long enough on eye-centered brain maps, so no external input (goal, expectation, reward, memory, etc.) is needed to maintain spatial attention immediately following saccades. The present study examined this issue with inhibition of return (IOR), an attentional effect that reveals itself in both world-centered and eye-centered coordinates, and predictively remaps before saccades. In the first task, a saccade was introduced to a cueing task (“nonreturn-saccade task”) to show that IOR is coded in world-centered coordinates following saccades. In a second cueing task, two consecutive saccades were executed to trigger remapping and to dissociate the retinal locus relevant to remapping from the cued retinal locus (“return-saccade” task). IOR was observed at the remapped retinal locus 430-ms following the (first) saccade that triggered remapping. A third cueing task (“no-remapping” task) further revealed that the lingering IOR effect left by remapping was not confounded by the attention spillover. These results together show that predictive remapping leaves a robust attentional trace on eye-centered brain maps. This retinotopic trace is sufficient to sustain spatial attention for a few hundred milliseconds following saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

How does the brain maintain spatial attention despite the retinal displacement of objects by saccades? A possible solution is to use the vector of an upcoming saccade to compensate for the shift of objects on eye-centered (retinotopic) brain maps. In support of this hypothesis, previous studies have revealed attentional effects at the future retinal locus of an attended object, just before the onset of saccades. A critical yet unresolved theoretical issue is whether predictively remapped attentional effects would persist long enough on eye-centered brain maps, so no external input (goal, expectation, reward, memory, etc.) is needed to maintain spatial attention immediately following saccades. The present study examined this issue with inhibition of return (IOR), an attentional effect that reveals itself in both world-centered and eye-centered coordinates, and predictively remaps before saccades. In the first task, a saccade was introduced to a cueing task (“nonreturn-saccade task”) to show that IOR is coded in world-centered coordinates following saccades. In a second cueing task, two consecutive saccades were executed to trigger remapping and to dissociate the retinal locus relevant to remapping from the cued retinal locus (“return-saccade” task). IOR was observed at the remapped retinal locus 430-ms following the (first) saccade that triggered remapping. A third cueing task (“no-remapping” task) further revealed that the lingering IOR effect left by remapping was not confounded by the attention spillover. These results together show that predictive remapping leaves a robust attentional trace on eye-centered brain maps. This retinotopic trace is sufficient to sustain spatial attention for a few hundred milliseconds following saccades.

Close

  • doi:10.3758/s13423-021-01893-1

Close

Jumpei Yamashita; Hiroki Terashima; Makoto Yoneya; Kazushi Maruya; Hidetaka Koya; Haruo Oishi; Hiroyuki Nakamura; Takatsune Kumada

Pupillary fluctuation amplitude before target presentation reflects short-term vigilance level in Psychomotor Vigilance Tasks Journal Article

In: PLoS ONE, vol. 16, no. 9, pp. 1–22, 2021.

Abstract | Links | BibTeX

@article{Yamashita2021,
title = {Pupillary fluctuation amplitude before target presentation reflects short-term vigilance level in Psychomotor Vigilance Tasks},
author = {Jumpei Yamashita and Hiroki Terashima and Makoto Yoneya and Kazushi Maruya and Hidetaka Koya and Haruo Oishi and Hiroyuki Nakamura and Takatsune Kumada},
doi = {10.1371/journal.pone.0256953},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {9},
pages = {1--22},
abstract = {Our daily activities require vigilance. Therefore, it is useful to externally monitor and predict our vigilance level using a straightforward method. It is known that the vigilance level is linked to pupillary fluctuations via Locus Coeruleus and Norepinephrine (LC-NE) system. However, previous methods of estimating long-term vigilance require monitoring pupillary fluctuations at rest over a long period. We developed a method of predicting the short-term vigilance level by monitoring pupillary fluctuation for a shorter period consisting of several seconds. The LC activity also fluctuates at a timescale of seconds. Therefore, we hypothesized that the short-term vigilance level could be estimated using pupillary fluctuations in a short period and quantified their amplitude as the Micro-Pupillary Unrest Index (M-PUI). We found an intra-individual trial-by-trial positive correlation between Reaction Time (RT) reflecting the short-term vigilance level and M-PUI in the period immediately before the target onset in a Psychomotor Vigilance Task (PVT). This relationship was most evident when the fluctuation was smoothed by a Hanning window of approximately 50 to 100 ms (including cases of down-sampled data at 100 and 50 Hz), and M-PUI was calculated in the period up to one or two seconds before the target onset. These results suggest that M-PUI can monitor and predict fluctuating levels of vigilance. M-PUI is also useful for examining pupillary fluctuations in a short period for elucidating the psychophysiological mechanisms of short-term vigilance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0256953
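
The M-PUI described above is a computable summary of short pre-target pupil traces, although the abstract does not spell out its exact formula. Below is a minimal Python sketch, assuming a PUI-style definition (cumulative absolute change of the Hanning-smoothed trace per second); the function and variable names are illustrative, not the authors'.

import numpy as np

def mpui(pupil_trace, sample_rate_hz, window_ms=100.0):
    # Hanning-smooth the trace, then take cumulative absolute change per second.
    # This is an assumed, PUI-style formula; Yamashita et al.'s exact definition may differ.
    pupil = np.asarray(pupil_trace, dtype=float)
    win_len = max(3, int(round(window_ms / 1000.0 * sample_rate_hz)))
    window = np.hanning(win_len)
    window /= window.sum()                      # unit-gain smoothing kernel
    smoothed = np.convolve(pupil, window, mode="same")
    duration_s = len(pupil) / sample_rate_hz
    return np.sum(np.abs(np.diff(smoothed))) / duration_s

# Example: 2 s of hypothetical pupil samples at 500 Hz before target onset.
rng = np.random.default_rng(0)
trace = 4.0 + 0.05 * rng.standard_normal(1000)
print(mpui(trace, sample_rate_hz=500))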

Hongge Xu; Jing Samantha Pan; Xiaoye Michael Wang; Geoffrey P. Bingham

Information for perceiving blurry events: Optic flow and color are additive Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 1, pp. 389–398, 2021.

Abstract | Links | BibTeX

@article{Xu2021,
title = {Information for perceiving blurry events: Optic flow and color are additive},
author = {Hongge Xu and Jing Samantha Pan and Xiaoye Michael Wang and Geoffrey P. Bingham},
doi = {10.3758/s13414-020-02135-7},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {1},
pages = {389--398},
publisher = {Attention, Perception, & Psychophysics},
abstract = {Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static blurry grayscale displays have been shown to be recognized only when and after presented with optic flow. In this study, we investigate the effects of optic flow and color on identifying blurry events by studying the identification accuracy and eye-movement patterns. Three types of color displays were tested: grayscale, original colors, or rearranged colors (where the RGB values of the original colors were adjusted). In each color condition, participants identified 12 blurry events in five experimental phases. In the first two phases, static blurry images were presented alone or sequentially with a motion mask between consecutive frames, and identification was poor. In Phase 3, where optic flow was added, identification was comparably good. In Phases 4 and 5, motion was removed, but identification remained good. Thus, optic flow improved event identification during and after its presentation. Color also improved performance, where participants were consistently better at identifying color displays than grayscale or rearranged color displays. Importantly, the effects of optic flow and color were additive. Finally, in both motion and postmotion phases, a significant portion of eye fixations fell in strong optic flow areas, suggesting that participants continued to look where flow was available even after it stopped. We infer that optic flow specified depth structure in the blurry image structure and yielded an improvement in identification from static blurry images.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-020-02135-7

Jia Qiong Xie; Detlef H. Rost; Fu Xing Wang; Jin Liang Wang; Rebecca L. Monk

The association between excessive social media use and distraction: An eye movement tracking study Journal Article

In: Information and Management, vol. 58, no. 2, pp. 1–12, 2021.

Abstract | Links | BibTeX

@article{Xie2021a,
title = {The association between excessive social media use and distraction: An eye movement tracking study},
author = {Jia Qiong Xie and Detlef H. Rost and Fu Xing Wang and Jin Liang Wang and Rebecca L. Monk},
doi = {10.1016/j.im.2020.103415},
year = {2021},
date = {2021-01-01},
journal = {Information and Management},
volume = {58},
number = {2},
pages = {1--12},
publisher = {Elsevier B.V.},
abstract = {Drawing on the scan-and-shift hypothesis and the scattered attention hypothesis, this article attempted to explore the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In Study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. In Study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users when performing the modified Stroop task. The results showed that excessive microblog users had greater difficulty suppressing interference information than non-microblog users, resulting in poorer performance. Theoretical and practical implications are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.im.2020.103415

Guangming Xie; Wenbo Du; Hongping Yuan; Yushi Jiang

Promoting reviewer-related attribution: Moderately complex presentation of mixed opinions activates the analytic process Journal Article

In: Sustainability, vol. 13, no. 2, pp. 1–28, 2021.

Abstract | Links | BibTeX

@article{Xie2021,
title = {Promoting reviewer-related attribution: Moderately complex presentation of mixed opinions activates the analytic process},
author = {Guangming Xie and Wenbo Du and Hongping Yuan and Yushi Jiang},
doi = {10.3390/su13020441},
year = {2021},
date = {2021-01-01},
journal = {Sustainability},
volume = {13},
number = {2},
pages = {1--28},
abstract = {Using metacognition and dual process theories, this paper studied the role of types of presentation of mixed opinions in mitigating negative impacts of online word of mouth (WOM) dispersion on consumer's purchasing decisions. Two studies were implemented, respectively. By employing an eye-tracking approach, study 1 recorded consumer's attention to WOM dispersion. The results show that the activation of the analytic system can improve reviewer-related attribution options. In study 2, three kinds of presentation of mixed opinions originating from China's leading online platform were compared. The results demonstrated that mixed opinions expressed in moderately complex form, integrating average ratings and reviewers' impressions of products, was effective in promoting reviewer-related attribution choices. However, too-complicated presentation types of WOM dispersion can impose excessively on consumers' cognitive load and eventually fail to activate the analytic system for promoting reviewer-related attribution choices. The main contribution of this paper lies in that consumer attribution-related choices are supplemented, which provides new insights into information consistency in consumer research. The managerial and theoretical significance of this paper are discussed in order to better understand the purchasing decisions of consumers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/su13020441

Xue-Zhen Xiao; Gaoding Jia; Aiping Wang

Semantic preview benefit of Tibetan-Chinese bilinguals during Chinese reading Journal Article

In: Language Learning and Development, pp. 1–15, 2021.

Abstract | Links | BibTeX

@article{Xiao2021a,
title = {Semantic preview benefit of Tibetan-Chinese bilinguals during Chinese reading},
author = {Xue-Zhen Xiao and Gaoding Jia and Aiping Wang},
doi = {10.1080/15475441.2021.2003198},
year = {2021},
date = {2021-01-01},
journal = {Language Learning and Development},
pages = {1--15},
publisher = {Psychology Press},
abstract = {When reading Chinese, skilled native readers regularly gain a preview benefit (PB) when the parafoveal word is orthographically or semantically related to the target word. Evidence shows that non-native, beginning Chinese readers can obtain an orthographic PB during Chinese reading, which indicates the parafoveal processing of low-level visual information. However, whether non-native Chinese readers who are more proficient in Chinese can make use of high-level parafoveal information remains unknown. Therefore, this study examined parafoveal processing during Chinese reading among Tibetan-Chinese bilinguals with high Chinese proficiency and compared their PB effects with those from native Chinese readers. Tibetan-Chinese bilinguals demonstrated both orthographic and semantic PB but did not show phonological PB and only differed from native Chinese readers in the identical PB when preview characters were identical to the targets. These findings demonstrate that non-native Chinese readers can extract semantic information from parafoveal preview during Chinese reading and highlight the modulation of parafoveal processing efficiency by reading proficiency. The results are in line with the direct route to access the mental lexicon of visual Chinese characters among non-native Chinese speakers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/15475441.2021.2003198

Jingmei Xiao; Jing Huang; Yujun Long; Xiaoyi Wang; Ying Wang; Ye Yang; Gangrui Hei; Mengxi Sun; Jin Zhao; Li Li; Tiannan Shao; Weiyan Wang; Dongyu Kang; Chenchen Liu; Peng Xie; Yuyan Huang; Renrong Wu; Jingping Zhao

Optimizing and individualizing the pharmacological treatment of first-episode Schizophrenic patients: Study protocol for a multicenter clinical trial Journal Article

In: Frontiers in Psychiatry, vol. 12, no. February, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Xiao2021,
title = {Optimizing and individualizing the pharmacological treatment of first-episode Schizophrenic patients: Study protocol for a multicenter clinical trial},
author = {Jingmei Xiao and Jing Huang and Yujun Long and Xiaoyi Wang and Ying Wang and Ye Yang and Gangrui Hei and Mengxi Sun and Jin Zhao and Li Li and Tiannan Shao and Weiyan Wang and Dongyu Kang and Chenchen Liu and Peng Xie and Yuyan Huang and Renrong Wu and Jingping Zhao},
doi = {10.3389/fpsyt.2021.611070},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Psychiatry},
volume = {12},
number = {February},
pages = {1--10},
abstract = {Introduction: Affecting ~1% of the world population, schizophrenia is known as one of the costliest and most burdensome diseases worldwide. Antipsychotic medications are the main treatment for schizophrenia to control psychotic symptoms and efficiently prevent new crises. However, due to poor compliance, 74% of patients with schizophrenia discontinue medication within 1.5 years, which severely affects recovery and prognosis. Through research on intra- and interindividual variability based on a psychopathology–neuropsychology–neuroimage–genetics–physiology–biochemistry model, our main objective is to investigate an optimized and individualized antipsychotic-treatment regimen and precision treatment for first-episode schizophrenic patients. Methods and Analysis: The study is performed in 20 representative hospitals in China. Three subprojects are included. In subproject 1, 1,800 first-episode patients with schizophrenia are randomized into six different antipsychotic monotherapy groups (olanzapine, risperidone, aripiprazole, ziprasidone, amisulpride, and haloperidol) for an 8-week treatment. By identifying a set of potential biomarkers associated with antipsychotic treatment response, we intend to build a prediction model, which includes neuroimaging, epigenetics, environmental stress, neurocognition, eye movement, electrophysiology, and neurological biochemistry indexes. In subproject 2, apart from verifying the prediction model established in subproject 1 based on an independent cohort of 1,800 first-episode patients with schizophrenia, we recruit patients from a verification cohort who did not get an effective response after an 8-week antipsychotic treatment into a randomized double-blind controlled trial with minocycline (200 mg per day) and sulforaphane (3 tablets per day) to explore add-on treatment for patients with schizophrenia. Two hundred forty participants are anticipated to be enrolled for each group. In subproject 3, we intend to carry out one trial to construct an intervention strategy for metabolic syndrome induced by antipsychotic treatment and another one to build a prevention strategy for patients at a high risk of metabolic syndrome, which combines metformin and lifestyle intervention. Two hundred participants are anticipated to be enrolled for each group. Ethics and Dissemination: The study protocol has been approved by the Medical Ethics Committee of the Second Xiangya Hospital of Central South University (No. 2017027). Results will be disseminated in peer-reviewed journals and at international conferences. Trial Registration: This trial has been registered on ClinicalTrials.gov (NCT03451734). The protocol version is V.1.0 (April 23, 2017).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3389/fpsyt.2021.611070

Yanfang Xia; Filip Melinscak; Dominik R. Bach

Saccadic scanpath length: an index for human threat conditioning Journal Article

In: Behavior Research Methods, vol. 53, no. 4, pp. 1426–1439, 2021.

Abstract | Links | BibTeX

@article{Xia2021,
title = {Saccadic scanpath length: an index for human threat conditioning},
author = {Yanfang Xia and Filip Melinscak and Dominik R. Bach},
doi = {10.3758/s13428-020-01490-5},
year = {2021},
date = {2021-01-01},
journal = {Behavior Research Methods},
volume = {53},
number = {4},
pages = {1426--1439},
publisher = {Behavior Research Methods},
abstract = {Threat-conditioned cues are thought to capture overt attention in a bottom-up process. Quantification of this phenomenon typically relies on cue competition paradigms. Here, we sought to exploit gaze patterns during exclusive presentation of a visual conditioned stimulus, in order to quantify human threat conditioning. To this end, we capitalized on a summary statistic of visual search during CS presentation, scanpath length. During a simple delayed threat conditioning paradigm with full-screen monochrome conditioned stimuli (CS), we observed shorter scanpath length during CS+ compared to CS- presentation. Retrodictive validity, i.e., effect size to distinguish CS+ and CS-, was maximized by considering a 2-s time window before US onset. Taking into account the shape of the scan speed response resulted in similar retrodictive validity. The mechanism underlying shorter scanpath length appeared to be longer fixation duration and more fixation on the screen center during CS+ relative to CS- presentation. These findings were replicated in a second experiment with similar setup, and further confirmed in a third experiment using full-screen patterns as CS. This experiment included an extinction session during which scanpath differences appeared to extinguish. In a fourth experiment with auditory CS and instruction to fixate screen center, no scanpath length differences were observed. In conclusion, our study suggests scanpath length as a visual search summary statistic, which may be used as complementary measure to quantify threat conditioning with retrodictive validity similar to that of skin conductance responses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13428-020-01490-5
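
Scanpath length, the summary statistic used above, reduces to the summed Euclidean distance between successive fixation positions within an analysis window (retrodictive validity peaked with a 2-s window before US onset). A minimal sketch, with an assumed fixation representation rather than the authors' actual pipeline:

import math

def scanpath_length(fixations, t_start, t_end):
    # fixations: ordered list of (onset_time_s, x_px, y_px) tuples (assumed format).
    pts = [(x, y) for t, x, y in fixations if t_start <= t < t_end]
    # Sum of straight-line distances between consecutive fixations in the window.
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Example: fixations during an 8-s CS; the window is the 2 s before US onset.
fix = [(5.1, 512, 384), (6.3, 600, 390), (7.2, 580, 500)]
print(scanpath_length(fix, t_start=6.0, t_end=8.0))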

Jordana S. Wynn; Bradley R. Buchsbaum; Jennifer D. Ryan

Encoding and retrieval eye movements mediate age differences in pattern completion Journal Article

In: Cognition, pp. 1–13, 2021.

Abstract | Links | BibTeX

@article{Wynn2021,
title = {Encoding and retrieval eye movements mediate age differences in pattern completion},
author = {Jordana S. Wynn and Bradley R. Buchsbaum and Jennifer D. Ryan},
doi = {10.1016/j.cognition.2021.104746},
year = {2021},
date = {2021-01-01},
journal = {Cognition},
pages = {1--13},
publisher = {Elsevier B.V.},
abstract = {Older adults often mistake new information as ‘old', yet the mechanisms underlying this response bias remain unclear. Typically, false alarms by older adults are thought to reflect pattern completion – the retrieval of a previously encoded stimulus in response to partial input. However, other work suggests that age-related retrieval errors can be accounted for by deficient encoding processes. In the present study, we used eye movement monitoring to quantify age-related changes in behavioral pattern completion as a function of eye movements during both encoding and partially cued retrieval. Consistent with an age-related encoding deficit, older adults executed more gaze fixations and more similar eye movements across repeated image presentations than younger adults, and such effects were predictive of subsequent recognition memory. Analysis of eye movements at retrieval further indicated that in response to partial lure cues, older adults reactivated the similar studied image, indexed by the similarity between encoding and retrieval gaze patterns, and did so more than younger adults. Critically, reactivation of encoded image content via eye movements was associated with lure false alarms in older adults, providing direct evidence for a pattern completion bias. Together, these findings suggest that age-related changes in both encoding and retrieval processes, indexed by eye movements, underlie older adults' increased vulnerability to memory errors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cognition.2021.104746
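
The encoding-retrieval gaze similarity referred to above can be operationalized in several ways; one common choice (an assumption here, not necessarily the authors' exact method) is the correlation between Gaussian-smoothed fixation-density maps from the two phases. A minimal sketch:

import numpy as np

def density_map(fixations, shape=(48, 64), sigma=2.0):
    # fixations: list of (row, col) positions already scaled to map coordinates (assumed format).
    m = np.zeros(shape)
    for r, c in fixations:
        m[int(r), int(c)] += 1.0
    ax = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # Separable Gaussian blur applied along columns, then rows.
    m = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, m)
    m = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, m)
    return m

def gaze_similarity(encoding_fix, retrieval_fix):
    # Pearson correlation between the two smoothed fixation-density maps.
    a = density_map(encoding_fix).ravel()
    b = density_map(retrieval_fix).ravel()
    return float(np.corrcoef(a, b)[0, 1])

print(gaze_similarity([(10, 12), (30, 40)], [(11, 13), (29, 41)]))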

Yu Wu; Zhixiong Zhuo; Qunyue Liu; Kunyong Yu; Qitang Huang; Jian Liu

The relationships between perceived design intensity, preference, restorativeness and eye movements in designed urban green space Journal Article

In: International Journal of Environmental Research and Public Health, vol. 18, no. 20, pp. 1–16, 2021.

Abstract | Links | BibTeX

@article{Wu2021e,
title = {The relationships between perceived design intensity, preference, restorativeness and eye movements in designed urban green space},
author = {Yu Wu and Zhixiong Zhuo and Qunyue Liu and Kunyong Yu and Qitang Huang and Jian Liu},
doi = {10.3390/ijerph182010944},
year = {2021},
date = {2021-01-01},
journal = {International Journal of Environmental Research and Public Health},
volume = {18},
number = {20},
pages = {1--16},
abstract = {Recent research has demonstrated that landscape design intensity impacts individuals' landscape preferences, which may influence their eye movement. Due to the close relationship between restorativeness and landscape preference, we further explore the relationships between design intensity, preference, restorativeness and eye movements. Specifically, using manipulated images as stimuli for 200 students as participants, the effect of urban green space (UGS) design intensity on landscapes' preference, restorativeness, and eye movement was examined. The results demonstrate that landscape design intensity could contribute to preference and restorativeness and that there is a significant positive relationship between design intensity and eye-tracking metrics, including dwell time percent, fixation percent, fixation count, and visited ranking. Additionally, preference was positively related to restorativeness, dwell time percent, fixation percent, and fixation count, and there is a significant positive relationship between restorativeness and fixation percent. We obtained the most feasible regression equations between design intensity and preference, restorativeness, and eye movement. These results provide a set of guidelines for improving UGS design to achieve its greatest restorative potential and shed new light on the use of eye-tracking technology in landscape perception studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/ijerph182010944

Yingying Wu; Zhenxing Wang; Wanru Lin; Zengyan Ye; Rong Lian

Visual salience accelerates lexical processing and subsequent integration: an eye-movement study Journal Article

In: Journal of Cognitive Psychology, vol. 33, no. 2, pp. 146–156, 2021.

Abstract | Links | BibTeX

@article{Wu2021d,
title = {Visual salience accelerates lexical processing and subsequent integration: an eye-movement study},
author = {Yingying Wu and Zhenxing Wang and Wanru Lin and Zengyan Ye and Rong Lian},
doi = {10.1080/20445911.2021.1879817},
year = {2021},
date = {2021-01-01},
journal = {Journal of Cognitive Psychology},
volume = {33},
number = {2},
pages = {146--156},
publisher = {Taylor & Francis},
abstract = {This study examined how visual salience affects the processing of the salient information it highlights (hereafter called visually salient information), as well as its connection with associated content during online reading. Participants were asked to read descriptive concepts that contained a two-character key concept term with a short definition, and subsequently complete a memory test. The visual salience of the key concept terms was manipulated. The results show that visual salience shortened the reading times of key concept terms, as well as the go-past times of the concept definition. In addition, improving the visual salience of the key concept terms helped subjects in the subsequent memory test to make quicker and more accurate judgments regarding incorrect concepts. These results indicate that visual salience accelerates the lexical processing of visually salient information and helps readers build faster and more elaborate connections between visually salient information and associated content in the subsequent integration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/20445911.2021.1879817
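
Go-past time (regression-path duration), one of the measures reported above, is a standard reading measure: all fixation time from first entering an interest area until the gaze first moves beyond its right boundary, including regressions to earlier text. A minimal sketch with an assumed fixation representation:

def go_past_time(fixations, region_start, region_end):
    # fixations: ordered list of (x_position_px, duration_ms) tuples (assumed format).
    total, entered = 0, False
    for x, dur in fixations:
        if not entered:
            if region_start <= x < region_end:
                entered = True
                total += dur
        elif x >= region_end:       # gaze has moved past the region: stop counting
            break
        else:
            total += dur            # fixations in the region or regressions to earlier text
    return total

# Example: interest area spans x = 300-400 px; one regression before moving on.
fix = [(120, 210), (330, 250), (90, 180), (350, 230), (470, 200)]
print(go_past_time(fix, 300, 400))  # 250 + 180 + 230 = 660 ms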

Xiaogang Wu; Aijun Wang; Ming Zhang

How the size of exogenous attentional cues alters visual performance: From response gain to contrast gain Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 74, no. 10, pp. 1773–1783, 2021.

Abstract | Links | BibTeX

@article{Wu2021b,
title = {How the size of exogenous attentional cues alters visual performance: From response gain to contrast gain},
author = {Xiaogang Wu and Aijun Wang and Ming Zhang},
doi = {10.1177/17470218211024829},
year = {2021},
date = {2021-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {74},
number = {10},
pages = {1773--1783},
abstract = {The normalisation model of attention (NMoA) predicts that the attention gain pattern is mediated by changes in the size of the attentional field and stimuli. However, existing studies have not measured gain patterns when the relative sizes of stimuli are changed. To investigate the NMoA, the present study manipulated the attentional field size, namely, the exogenous cue size. Moreover, we assessed whether the relative rather than the absolute size of the attentional field matters, either by holding the target size constant and changing the cue size (Experiments 1–3) or by holding the cue size constant and changing the target size (Experiment 4), in a spatial cueing paradigm of psychophysical procedures. The results show that the gain modulations changed from response gain to contrast gain when the precue size changed from small to large relative to the target size (Experiments 1–3). Moreover, when the target size was once again made larger than the precue size, there was still a change in response gain (Experiment 4). These results suggest that the size of exogenous cues plays an important role in adjusting the attentional field and that relative changes rather than absolute changes to exogenous cue size determine gain modulation. These results are consistent with the prediction of the NMoA and provide novel insights into gain modulations of visual selective attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/17470218211024829
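
The response-gain versus contrast-gain distinction discussed above is conventionally illustrated with a Naka-Rushton contrast-response function, where attention either scales the maximum response (response gain) or shifts the semi-saturation contrast (contrast gain). The sketch below shows only that textbook distinction; it is not the study's fitted model, and the parameter values are arbitrary.

import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    # Response as a function of stimulus contrast c (0-1).
    return r_max * c ** n / (c ** n + c50 ** n)

contrast = np.linspace(0.05, 1.0, 5)
baseline = naka_rushton(contrast)
response_gain = naka_rushton(contrast, r_max=1.5)   # attention scales the peak response
contrast_gain = naka_rushton(contrast, c50=0.1)     # attention lowers c50 (leftward shift)
for c, b, rg, cg in zip(contrast, baseline, response_gain, contrast_gain):
    print(f"c={c:.2f}  baseline={b:.3f}  response gain={rg:.3f}  contrast gain={cg:.3f}")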

Ching-Lin Lin Wu; Shu-Ling Ling Peng; Hsueh-Chih Chih Chen

Why Can People Effectively Access Remote Associations? Eye Movements during Chinese Remote Associates Problem Solving Journal Article

In: Creativity Research Journal, vol. 33, no. 2, pp. 158–167, 2021.

Abstract | Links | BibTeX

@article{Wu2021,
title = {Why Can People Effectively Access Remote Associations? Eye Movements during Chinese Remote Associates Problem Solving},
author = {Ching-Lin Lin Wu and Shu-Ling Ling Peng and Hsueh-Chih Chih Chen},
doi = {10.1080/10400419.2020.1856579},
year = {2021},
date = {2021-01-01},
journal = {Creativity Research Journal},
volume = {33},
number = {2},
pages = {158--167},
publisher = {Routledge},
abstract = {An increasing number of studies have explored the process of how subjects solve problems through remote association. Most research has investigated the relationship between an individual's response to semantic search during the think-aloud operation and the individual's reply performance. Few studies, however, have examined the process of obtaining objective physiological indices. Eye-tracking technology is a powerful tool with which to dissect the process of problem solving, with tracked fixation indices that reflect an individual's internal cognitive mechanisms. This study, based on participants' fixation order for various stimulus words, was the first to introduce the concept of association search span, a concept that can be further divided into distributed association and centralized association. This study recorded 62 participants' eye movement indices in an eye-tracking experiment. The results showed that participants with higher remote association ability used more distributed associations and fewer centralized associations. The results indicated that the stronger remote association ability a participant has, the more likely that participant is to form associations with different stimulus words. It was also found that flexible thinking plays a vital role in the generation of remote associations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/10400419.2020.1856579

Chao Jung Wu; Chia Yu Liu; Chung Hsuan Yang; Yu Cin Jian

Eye-movements reveal children's deliberative thinking and predict performance on arithmetic word problems Journal Article

In: European Journal of Psychology of Education, vol. 36, no. 1, pp. 91–108, 2021.

Abstract | Links | BibTeX

@article{Wu2021f,
title = {Eye-movements reveal children's deliberative thinking and predict performance on arithmetic word problems},
author = {Chao Jung Wu and Chia Yu Liu and Chung Hsuan Yang and Yu Cin Jian},
doi = {10.1007/s10212-020-00461-w},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Psychology of Education},
volume = {36},
number = {1},
pages = {91--108},
publisher = {European Journal of Psychology of Education},
abstract = {Despite decades of research on the close link between eye movements and human cognitive processes, the exact nature of the link between eye movements and deliberative thinking in problem-solving remains unknown. Thus, this study explored the critical eye-movement indicators of deliberative thinking and investigated whether visual behaviors could predict performance on arithmetic word problems of various difficulties. An eye tracker and test were employed to collect 69 sixth-graders' eye-movement behaviors and responses. No significant difference was found between the successful and unsuccessful groups on the simple problems, but on the difficult problems, the successful problem-solvers demonstrated significantly greater gaze aversion, longer fixations, and spontaneous reflections. Notably, the model incorporating RT-TFD, NOF of 500 ms, and pupil size indicators could best predict participants' performance, with an overall hit rate of 74%, rising to 80% when reading comprehension screening test scores were included. These results reveal the solvers' engagement strategies or show that successful problem-solvers were well aware of problem difficulty and could regulate their cognitive resources efficiently. This study sheds light on the development of an adapted learning system with embedded eye tracking to further predict students' visual behaviors, provide real-time feedback, and improve their problem-solving performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s10212-020-00461-w
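
The predictive model in this entry combines gaze and pupil measures (RT-TFD, number of fixations of at least 500 ms, and pupil size) to classify solvers versus non-solvers, with an overall hit rate of about 74%. A minimal sketch of one way such a classifier could be set up is given below; the entry does not specify the model type, so logistic regression is used here as a stand-in, and the column names and data are synthetic, not the authors' dataset or code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 69  # the study tested 69 sixth-graders; the rows below are synthetic

# Hypothetical predictors: total fixation duration on the relevant text (s),
# number of fixations lasting at least 500 ms, and mean pupil size (a.u.).
X = np.column_stack([
    rng.normal(30, 8, n),    # RT-TFD
    rng.poisson(12, n),      # NOF of 500 ms
    rng.normal(3.5, 0.4, n), # pupil size
])
y = rng.integers(0, 2, n)    # 1 = solved the difficult problem, 0 = did not

# Cross-validated hit rate of the classifier (the paper reports ~74%;
# purely synthetic data will of course hover around chance).
clf = LogisticRegression(max_iter=1000)
hit_rate = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated hit rate: {hit_rate:.2f}")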

Maren-Isabel Wolf; Maximilian Bruchmann; Gilles Pourtois; Sebastian Schindler; Thomas Straube

Top-down Modulation of early visual processing in V1: Dissociable neurophysiological effects of spatial attention, attentional load and task-relevance Journal Article

In: Cerebral Cortex, pp. 1–17, 2021.

Abstract | Links | BibTeX

@article{Wolf2021a,
title = {Top-down Modulation of early visual processing in V1: Dissociable neurophysiological effects of spatial attention, attentional load and task-relevance},
author = {Maren-Isabel Wolf and Maximilian Bruchmann and Gilles Pourtois and Sebastian Schindler and Thomas Straube},
doi = {10.1093/cercor/bhab342},
year = {2021},
date = {2021-01-01},
journal = {Cerebral Cortex},
pages = {1--17},
abstract = {Until today, there is an ongoing discussion if attention processes interact with the information processing stream already at the level of the C1, the earliest visual electrophysiological response of the cortex. We used two highly powered experiments (each N = 52) and examined the effects of task relevance, spatial attention, and attentional load on individual C1 amplitudes for the upper or lower visual hemifield. Bayesian models revealed evidence for the absence of load effects but substantial modulations by task-relevance and spatial attention. When the C1-eliciting stimulus was a task-irrelevant, interfering distracter, we observed increased C1 amplitudes for spatially unattended stimuli. For spatially attended stimuli, different effects of task-relevance for the two experiments were found. Follow-up exploratory single-trial analyses revealed that subtle but systematic deviations from the eye-gaze position at stimulus onset between conditions substantially influenced the effects of attention and task relevance on C1 amplitudes, especially for the upper visual field. For the subsequent P1 component, attentional modulations were clearly expressed and remained unaffected by these deviations. Collectively, these results suggest that spatial attention, unlike load or task relevance, can exert dissociable top-down modulatory effects at the C1 and P1 levels.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1093/cercor/bhab342

Christian Wolf; Markus Lappe

Salient objects dominate the central fixation bias when orienting toward images Journal Article

In: Journal of Vision, vol. 21, no. 8, pp. 1–21, 2021.

Abstract | Links | BibTeX

@article{Wolf2021,
title = {Salient objects dominate the central fixation bias when orienting toward images},
author = {Christian Wolf and Markus Lappe},
doi = {10.1167/jov.21.8.23},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {8},
pages = {1--21},
abstract = {Short-latency saccades are often biased toward salient objects or toward the center of images, for example, when inspecting photographs of natural scenes. Here, we measured the contribution of salient objects and central fixation bias to visual selection over time. Participants made saccades to images containing one salient object on a structured background and were instructed to either look at (i) the image center, (ii) the salient object, or (iii) at a cued position halfway in between the two. Results revealed, first, an early involuntary bias toward the image center irrespective of strategic behavior or the location of objects in the image. Second, the salient object bias was stronger than the center bias and prevailed over the latter when they directly competed for visual selection. In a second experiment, we tested whether the center bias depends on how well the image can be segregated from the monitor background. We asked participants to explore images that either did or did not contain a salient object while we manipulated the contrast between image background and monitor background to make the image borders more or less visible. The initial orienting toward the image was not affected by the image-monitor contrast, but only by the presence of objects—with a strong bias toward the center of images containing no object. Yet, a low image-monitor contrast reduced this center bias during the subsequent image exploration},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/jov.21.8.23

Toby Wise; Yunzhe Liu; Fatima Chowdhury; Raymond J. Dolan

Model-based aversive learning in humans is supported by preferential task state reactivation Journal Article

In: Science Advances, vol. 7, no. 31, pp. 1–15, 2021.

Abstract | Links | BibTeX

@article{Wise2021,
title = {Model-based aversive learning in humans is supported by preferential task state reactivation},
author = {Toby Wise and Yunzhe Liu and Fatima Chowdhury and Raymond J. Dolan},
doi = {10.1126/sciadv.abf9616},
year = {2021},
date = {2021-01-01},
journal = {Science Advances},
volume = {7},
number = {31},
pages = {1--15},
abstract = {Harm avoidance is critical for survival, yet little is known regarding the neural mechanisms supporting avoidance in the absence of trial-and-error experience. Flexible avoidance may be supported by a mental model (i.e., model-based), a process for which neural reactivation and sequential replay have emerged as candidate mechanisms. During an aversive learning task, combined with magnetoencephalography, we show prospective and retrospective reactivation during planning and learning, respectively, coupled to evidence for sequential replay. Specifically, when individuals plan in an aversive context, we find preferential reactivation of subsequently chosen goal states. Stronger reactivation is associated with greater hippocampal theta power. At outcome receipt, unchosen goal states are reactivated regardless of outcome valence. Replay of paths leading to goal states was modulated by outcome valence, with aversive outcomes associated with stronger reverse replay than safe outcomes. Our findings are suggestive of avoidance involving simulation of unexperienced states through hippocampally mediated reactivation and replay.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1126/sciadv.abf9616

Matthew B. Winn; Katherine H. Teece

Listening effort is not the same as speech intelligibility score Journal Article

In: Trends in Hearing, vol. 25, pp. 1–26, 2021.

Abstract | Links | BibTeX

@article{Winn2021a,
title = {Listening effort is not the same as speech intelligibility score},
author = {Matthew B. Winn and Katherine H. Teece},
doi = {10.1177/23312165211027688},
year = {2021},
date = {2021-01-01},
journal = {Trends in Hearing},
volume = {25},
pages = {1--26},
abstract = {Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks where specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size among 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/23312165211027688
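
Pupillometric listening-effort measures of the kind used in this entry are typically expressed as change in pupil size relative to a pre-stimulus baseline. A minimal sketch of that baseline correction is shown below; the sampling rate, window length, and synthetic trace are illustrative assumptions, not the authors' processing pipeline.

import numpy as np

def baseline_corrected_dilation(trace, fs, baseline_s=1.0):
    """Express a pupil trace as proportional change from its pre-stimulus baseline.

    trace      : 1-D array of pupil-diameter samples, stimulus onset at t = baseline_s
    fs         : sampling rate in Hz
    baseline_s : length of the pre-stimulus baseline window in seconds
    """
    trace = np.asarray(trace, dtype=float)
    n_base = int(round(baseline_s * fs))
    baseline = np.nanmean(trace[:n_base])
    return (trace - baseline) / baseline

# One synthetic trial: 5 s sampled at 60 Hz, with a small dilation peaking at 2.5 s.
rng = np.random.default_rng(1)
t = np.arange(300) / 60.0
trial = 3.0 + 0.2 * np.exp(-((t - 2.5) ** 2)) + rng.normal(0, 0.01, t.size)
corrected = baseline_corrected_dilation(trial, fs=60.0)
print(f"peak dilation: {np.nanmax(corrected[60:]):.3f} (proportion of baseline)")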

Lena Wimmer; Gregory Currie; Stacie Friend; Heather Jane Ferguson

Testing correlates of lifetime exposure to print fiction following a multi-method approach: Evidence from young and older readers Journal Article

In: Imagination, Cognition and Personality, vol. 41, no. 1, pp. 54–86, 2021.

Abstract | Links | BibTeX

@article{Wimmer2021,
title = {Testing correlates of lifetime exposure to print fiction following a multi-method approach: Evidence from young and older readers},
author = {Lena Wimmer and Gregory Currie and Stacie Friend and Heather Jane Ferguson},
doi = {10.1177/0276236621996244},
year = {2021},
date = {2021-01-01},
journal = {Imagination, Cognition and Personality},
volume = {41},
number = {1},
pages = {54--86},
abstract = {Two pre-registered studies investigated associations of lifetime exposure to fiction, applying a battery of self-report, explicit and implicit indicators. Study 1 ( N = 150 university students) tested the relationships between exposure to fiction and social and moral cognitive abilities in a lab setting, using a correlational design. Results failed to reveal evidence for enhanced social or moral cognition with increasing lifetime exposure to narrative fiction. Study 2 followed a cross-sectional design and compared 50–80 year-old fiction experts ( N = 66), non-fiction experts ( N = 53), and infrequent readers ( N = 77) regarding social cognition, general knowledge, imaginability, and creativity in an online setting. Fiction experts outperformed the remaining groups regarding creativity, but not regarding social cognition or imaginability. In addition, both fiction and non-fiction experts demonstrated higher general knowledge than infrequent readers. Taken together, the present results do not support theories postulating benefits of narrative fiction for social cognition, but suggest that reading fiction may be associated with a specific gain in creativity, and that print (fiction or non-fiction) exposure has a general enhancement effect on world knowledge.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/0276236621996244

James P. Wilmott; Melchi M. Michel

Transsaccadic integration of visual information is predictive, attention-based, and spatially precise Journal Article

In: Journal of Vision, vol. 21, no. 8, pp. 1–26, 2021.

Abstract | Links | BibTeX

@article{Wilmott2021,
title = {Transsaccadic integration of visual information is predictive, attention-based, and spatially precise},
author = {James P. Wilmott and Melchi M. Michel},
doi = {10.1167/jov.21.8.14},
year = {2021},
date = {2021-01-01},
journal = {Journal of Vision},
volume = {21},
number = {8},
pages = {1--26},
abstract = {Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal “psychophysical kernel” characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/jov.21.8.14
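
Reverse correlation, the analysis named in this entry, estimates a psychophysical kernel by averaging the stimulus fluctuations separately for each response category and taking the difference. The schematic version below uses synthetic luminance noise and a simulated observer; it illustrates the technique only and is not the authors' analysis code.

import numpy as np

rng = np.random.default_rng(2)
n_trials, n_frames, n_locations = 2000, 30, 3  # hypothetical design: 3 flickering patches

# Zero-mean luminance increments per trial, frame, and location.
noise = rng.normal(0.0, 1.0, size=(n_trials, n_frames, n_locations))

# Simulated observer: the "lighter/darker" decision is driven by the patch at
# index 0 during the late frames, plus internal noise.
evidence = noise[:, 15:, 0].sum(axis=1) + rng.normal(0.0, 3.0, n_trials)
resp_lighter = evidence > 0

# Psychophysical kernel: mean noise on "lighter" trials minus mean noise on
# "darker" trials, per frame and location. Frames and locations that actually
# drive the decision show up as positive deflections.
kernel = noise[resp_lighter].mean(axis=0) - noise[~resp_lighter].mean(axis=0)
print(kernel.shape)  # (30, 3): frames x locations
print(kernel[20])    # a late frame: the largest weight sits at location 0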

Lauren Williams; Ann Carrigan; William Auffermann; Megan Mills; Anina Rich; Joann Elmore; Trafton Drew

The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology Journal Article

In: Psychonomic Bulletin & Review, vol. 28, no. 2, pp. 503–511, 2021.

Abstract | Links | BibTeX

@article{Williams2021a,
title = {The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology},
author = {Lauren Williams and Ann Carrigan and William Auffermann and Megan Mills and Anina Rich and Joann Elmore and Trafton Drew},
doi = {10.3758/s13423-020-01826-4},
year = {2021},
date = {2021-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {28},
number = {2},
pages = {503--511},
publisher = {Psychonomic Bulletin & Review},
abstract = {Retrospectively obvious events are frequently missed when attention is engaged in another task—a phenomenon known as inattentional blindness. Although the task characteristics that predict inattentional blindness rates are relatively well understood, the observer characteristics that predict inattentional blindness rates are largely unknown. Previously, expert radiologists showed a surprising rate of inattentional blindness to a gorilla photoshopped into a CT scan during lung-cancer screening. However, inattentional blindness rates were higher for a group of naïve observers performing the same task, suggesting that perceptual expertise may provide protection against inattentional blindness. Here, we tested whether expertise in radiology predicts inattentional blindness rates for unexpected abnormalities that were clinically relevant. Fifty radiologists evaluated CT scans for lung cancer. The final case contained a large (9.1 cm) breast mass and lymphadenopathy. When their attention was focused on searching for lung nodules, 66% of radiologists did not detect breast cancer and 30% did not detect lymphadenopathy. In contrast, only 3% and 10% of radiologists (N = 30), respectively, missed these abnormalities in a follow-up study when searching for a broader range of abnormalities. Neither experience, primary task performance, nor search behavior predicted which radiologists missed the unexpected abnormalities. These findings suggest perceptual expertise does not protect against inattentional blindness, even for unexpected stimuli that are within the domain of expertise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13423-020-01826-4

Lauren H. Williams; Ann J. Carrigan; Megan Mills; William F. Auffermann; Anina N. Rich; Trafton Drew

Characteristics of expert search behavior in volumetric medical image interpretation Journal Article

In: Journal of Medical Imaging, vol. 8, no. 04, pp. 1–24, 2021.

Abstract | Links | BibTeX

@article{Williams2021,
title = {Characteristics of expert search behavior in volumetric medical image interpretation},
author = {Lauren H. Williams and Ann J. Carrigan and Megan Mills and William F. Auffermann and Anina N. Rich and Trafton Drew},
doi = {10.1117/1.jmi.8.4.041208},
year = {2021},
date = {2021-01-01},
journal = {Journal of Medical Imaging},
volume = {8},
number = {04},
pages = {1--24},
abstract = {Purpose: Experienced radiologists have enhanced global processing ability relative to novices, allowing experts to rapidly detect medical abnormalities without performing an exhaustive search. However, evidence for global processing models is primarily limited to two-dimensional image interpretation, and it is unclear whether these findings generalize to volumetric images, which are widely used in clinical practice. We examined whether radiologists searching volumetric images use methods consistent with global processing models of expertise. In addition, we investigated whether search strategy (scanning/drilling) differs with experience level.
Approach: Fifty radiologists with a wide range of experience evaluated chest computed-tomography scans for lung nodules while their eye movements and scrolling behaviors were tracked. Multiple linear regressions were used to determine: (1) how search behaviors differed with years of experience and the number of chest CTs evaluated per week and (2) which search behaviors predicted better performance.
Results: Contrary to global processing models based on 2D images, experience was unrelated to measures of global processing (saccadic amplitude, coverage, time to first fixation, search time, and depth passes) in this task. Drilling behavior was associated with better accuracy than scanning behavior when controlling for observer experience. Greater image coverage was a strong predictor of task accuracy.
Conclusions: Global processing ability may play a relatively small role in volumetric image interpretation, where global scene statistics are not available to radiologists in a single glance. Rather, in volumetric images, it may be more important to engage in search strategies that support a more thorough search of the image.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1117/1.jmi.8.4.041208
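
This entry describes multiple linear regressions relating search behaviour and experience to detection performance. A minimal sketch of that style of analysis is given below with synthetic, hypothetically named variables (image coverage, a scanning-versus-drilling index, years of experience); it is not the study's data or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 50  # the study had 50 radiologists; these rows are synthetic

df = pd.DataFrame({
    "accuracy":   rng.uniform(0.5, 1.0, n),   # nodule-detection accuracy
    "coverage":   rng.uniform(0.4, 0.95, n),  # proportion of the volume fixated
    "drilling":   rng.uniform(0.0, 1.0, n),   # 1 = pure drilling, 0 = pure scanning
    "experience": rng.integers(1, 30, n),     # years of experience
})

# Do coverage and strategy predict accuracy once experience is controlled for?
model = smf.ols("accuracy ~ coverage + drilling + experience", data=df).fit()
print(model.summary().tables[1])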

Benedict Wild; Stefan Treue

Comparing the influence of stimulus size and contrast on the perception of moving gratings and random dot patterns-A registered report protocol Journal Article

In: PLoS ONE, vol. 16, no. 6, pp. 1–10, 2021.

Abstract | Links | BibTeX

@article{Wild2021,
title = {Comparing the influence of stimulus size and contrast on the perception of moving gratings and random dot patterns-A registered report protocol},
author = {Benedict Wild and Stefan Treue},
doi = {10.1371/journal.pone.0253067},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {6},
pages = {1--10},
abstract = {Modern accounts of visual motion processing in the primate brain emphasize a hierarchy of different regions within the dorsal visual pathway, especially primary visual cortex (V1) and the middle temporal area (MT). However, recent studies have called the idea of a processing pipeline with fixed contributions to motion perception from each area into doubt. Instead, the role that each area plays appears to depend on properties of the stimulus as well as perceptual history. We propose to test this hypothesis in human subjects by comparing motion perception of two commonly used stimulus types: Drifting sinusoidal gratings (DSGs) and random dot patterns (RDPs). To avoid potential biases in our approach we are pre-registering our study. We will compare the effects of size and contrast levels on the perception of the direction of motion for DSGs and RDPs. In addition, based on intriguing results in a pilot study, we will also explore the effects of a post-stimulus mask. Our approach will offer valuable insights into how motion is processed by the visual system and guide further behavioral and neurophysiological research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0253067

Thomas D. W. Wilcockson; Emmanuel M. Pothos; Ashley M. Osborne; Trevor J. Crawford

Top-down and bottom-up attentional biases for smoking-related stimuli: Comparing dependent and non-dependent smokers Journal Article

In: Addictive Behaviors, vol. 118, pp. 1–7, 2021.

Abstract | Links | BibTeX

@article{Wilcockson2021,
title = {Top-down and bottom-up attentional biases for smoking-related stimuli: Comparing dependent and non-dependent smokers},
author = {Thomas D. W. Wilcockson and Emmanuel M. Pothos and Ashley M. Osborne and Trevor J. Crawford},
doi = {10.1016/j.addbeh.2021.106886},
year = {2021},
date = {2021-01-01},
journal = {Addictive Behaviors},
volume = {118},
pages = {1--7},
publisher = {Elsevier Ltd},
abstract = {Introduction: Substance use causes attentional biases for substance-related stimuli. Both bottom-up (preferential processing) and top-down (inhibitory control) processes are involved in attentional biases. We explored these aspects of attentional bias by using dependent and non-dependent cigarette smokers in order to see whether these two groups would differ in terms of general inhibitory control, bottom-up attentional bias, and top-down attentional biases. This enables us to see whether consumption behaviour would affect these cognitive responses to smoking-related stimuli. Methods: Smokers were categorised as either dependent (N = 26) or non-dependent (N = 34) smokers. A further group of non-smokers (N = 32) were recruited to act as controls. Participants then completed a behavioural inhibition task with general stimuli, a smoking-related eye tracking version of the dot-probe task, and an eye-tracking inhibition task with smoking-related stimuli. Results: Results indicated that dependent smokers had decreased inhibition and increased attentional bias for smoking-related stimuli (and not control stimuli). By contrast, a decreased inhibition for smoking-related stimuli (in comparison to control stimuli) was not observed for non-dependent smokers. Conclusions: Preferential processing of substance-related stimuli may indicate usage of a substance, whereas poor inhibitory control for substance-related stimuli may only emerge if dependence develops. The results suggest that how people engage with substance abuse is important for top-down attentional biases.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.addbeh.2021.106886
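
Eye-tracking dot-probe bias of the kind analysed in this entry is often summarised as the share of dwell time spent on the substance-related image relative to total dwell time on both images. A minimal sketch of that scoring is shown below; the column names and data are hypothetical, and the authors' exact bias measure may differ.

import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 60  # synthetic trials for one participant

trials = pd.DataFrame({
    "dwell_smoking_ms": rng.gamma(2.0, 300.0, n),  # dwell time on the smoking image
    "dwell_neutral_ms": rng.gamma(2.0, 300.0, n),  # dwell time on the matched neutral image
})

# Bias score: proportion of total dwell time spent on smoking-related stimuli.
# 0.5 = no bias; values above 0.5 indicate an attentional bias toward smoking cues.
total = trials["dwell_smoking_ms"] + trials["dwell_neutral_ms"]
bias = (trials["dwell_smoking_ms"] / total).mean()
print(f"dwell-time bias score: {bias:.2f}")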

Marlee Whybird; Rachel Coats; Tessa Vuister; Sophie Harrison; Samantha Booth; Melanie Burke

The role of the posterior parietal cortex on cognition: An exploratory study Journal Article

In: Brain Research, vol. 1764, pp. 1–11, 2021.

Abstract | Links | BibTeX

@article{Whybird2021,
title = {The role of the posterior parietal cortex on cognition: An exploratory study},
author = {Marlee Whybird and Rachel Coats and Tessa Vuister and Sophie Harrison and Samantha Booth and Melanie Burke},
doi = {10.1016/j.brainres.2021.147452},
year = {2021},
date = {2021-01-01},
journal = {Brain Research},
volume = {1764},
pages = {1--11},
publisher = {Elsevier B.V.},
abstract = {Theta burst stimulation (TBS) is a form of repetitive transcranial magnetic stimulation (rTMS) that can be used to increase (intermittent TBS) or reduce (continuous TBS) cortical excitability. The current study provides a preliminary report of the effects of iTBS and cTBS in healthy young adults, to investigate the causal role of the posterior parietal cortex (PPC) during the performance of four cognitive functions: attention, inhibition, sequence learning and working memory. A 2 × 2 repeated measures design was incorporated using hemisphere (left/right) and TBS type (iTBS/cTBS) as the independent variables. 20 participants performed the cognitive tasks both before and after TBS stimulation in 4 counterbalanced experimental sessions (left cTBS, right cTBS, left iTBS and right iTBS) spaced 1 week apart. No change in performance was identified for the attentional cueing task after TBS stimulation, however TBS applied to the left PPC decreased reaction time when inhibiting a reflexive response. The sequence learning task revealed differential effects for encoding of the sequence versus the learnt items. cTBS on the right hemisphere resulted in faster responses to learnt sequences, and iTBS on the right hemisphere reduced reaction times during the initial encoding of the sequence. The reaction times in the 2-back working memory task were increased when TBS stimulation was applied to the right hemisphere. Results reveal clear differential effects for tasks explored, and more specifically where TBS stimulation on right PPC could provide a potential for further investigation into improving oculomotor learning by inducing plasticity-like mechanisms in the brain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.brainres.2021.147452

Stephen Whitmarsh; Christophe Gitton; Veikko Jousmäki; Jérôme Sackur; Catherine Tallon-Baudry

Neuronal correlates of the subjective experience of attention Journal Article

In: European Journal of Neuroscience, no. January, pp. 1–18, 2021.

Abstract | Links | BibTeX

@article{Whitmarsh2021,
title = {Neuronal correlates of the subjective experience of attention},
author = {Stephen Whitmarsh and Christophe Gitton and Veikko Jousmäki and Jérôme Sackur and Catherine Tallon-Baudry},
doi = {10.1111/ejn.15395},
year = {2021},
date = {2021-01-01},
journal = {European Journal of Neuroscience},
number = {January},
pages = {1--18},
abstract = {The effect of top–down attention on stimulus-evoked responses and alpha oscillations and the association between arousal and pupil diameter are well established. However, the relationship between these indices, and their contribution to the subjective experience of attention, remains largely unknown. Participants performed a sustained (10–30 s) attention task in which rare (10%) targets were detected within continuous tactile stimulation (16 Hz). Trials were followed by attention ratings on an 8-point visual scale. Attention ratings correlated negatively with contralateral somatosensory alpha power and positively with pupil diameter. The effect of pupil diameter on attention ratings extended into the following trial, reflecting a sustained aspect of attention related to vigilance. The effect of alpha power did not carry over to the next trial and furthermore mediated the association between pupil diameter and attention ratings. Variations in steady-state amplitude reflected stimulus processing under the influence of alpha oscillations but were only weakly related to subjective ratings of attention. Together, our results show that both alpha power and pupil diameter are reflected in the subjective experience of attention, albeit on different time spans, while continuous stimulus processing might not contribute to the experience of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/ejn.15395
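
The mediation result reported in this entry (alpha power mediating the association between pupil diameter and attention ratings) can be illustrated in its simplest form as a product-of-coefficients analysis. The sketch below uses synthetic, z-scored stand-ins whose signs merely mimic the reported directions, estimated with ordinary least squares; it is not the authors' single-trial pipeline.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500  # synthetic trials

pupil  = rng.normal(0, 1, n)                               # predictor: pupil diameter
alpha  = -0.5 * pupil + rng.normal(0, 1, n)                # mediator: somatosensory alpha power
rating = -0.4 * alpha + 0.1 * pupil + rng.normal(0, 1, n)  # outcome: attention rating

# Path a: predictor -> mediator
a = sm.OLS(alpha, sm.add_constant(pupil)).fit().params[1]
# Path b and direct effect c': outcome regressed on mediator and predictor
fit_b = sm.OLS(rating, sm.add_constant(np.column_stack([alpha, pupil]))).fit()
b, c_prime = fit_b.params[1], fit_b.params[2]
# Total effect c: outcome regressed on the predictor alone
c = sm.OLS(rating, sm.add_constant(pupil)).fit().params[1]

print(f"indirect effect a*b = {a * b:.3f}, direct c' = {c_prime:.3f}, total c = {c:.3f}")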

Peter S. Whitehead; Younis Mahmoud; Paul Seli; Tobias Egner

Mind wandering at encoding, but not at retrieval, disrupts one-shot stimulus-control learning Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 7, pp. 2968–2982, 2021.

Abstract | Links | BibTeX

@article{Whitehead2021,
title = {Mind wandering at encoding, but not at retrieval, disrupts one-shot stimulus-control learning},
author = {Peter S. Whitehead and Younis Mahmoud and Paul Seli and Tobias Egner},
doi = {10.3758/s13414-021-02343-9},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {7},
pages = {2968--2982},
abstract = {The one-shot pairing of a stimulus with a specific cognitive control process, such as task switching, can bind the two together in memory. The episodic control-binding hypothesis posits that the formation of temporary stimulus-control bindings, which are held in event-files supported by episodic memory, can guide the contextually appropriate application of cognitive control. Across two experiments, we sought to examine the role of task-focused attention in the encoding and implementation of stimulus-control bindings in episodic event-files. In Experiment 1, we obtained self-reports of mind wandering during encoding and implementation of stimulus-control bindings. Results indicated that, whereas mind wandering during the implementation of stimulus-control bindings does not decrease their efficacy, mind wandering during the encoding of these control-state associations interferes with their successful deployment at a later point. In Experiment 2, we complemented these results by using trial-by-trial pupillometry to measure attention, again demonstrating that attention levels at encoding predict the subsequent implementation of stimulus-control bindings better than attention levels at implementation. These results suggest that, although encoding stimulus-control bindings in episodic memory requires active attention and engagement, once encoded, these bindings are automatically deployed to guide behavior when the stimulus recurs. These findings expand our understanding of how cognitive control processes are integrated into episodic event files.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-021-02343-9

Wen Wen; Yangming Zhang; Sheng Li

Gaze dynamics of feature-based distractor inhibition under prior-knowledge and expectations Journal Article

In: Attention, Perception, and Psychophysics, vol. 83, no. 6, pp. 2430–2440, 2021.

Abstract | Links | BibTeX

@article{Wen2021,
title = {Gaze dynamics of feature-based distractor inhibition under prior-knowledge and expectations},
author = {Wen Wen and Yangming Zhang and Sheng Li},
doi = {10.3758/s13414-021-02308-y},
year = {2021},
date = {2021-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {83},
number = {6},
pages = {2430--2440},
publisher = {Attention, Perception, & Psychophysics},
abstract = {Prior information about distractor facilitates selective attention to task-relevant items and helps the optimization of oculomotor planning. In the present study, we capitalized on gaze-position decoding to examine the dynamics of attentional deployment in a feature-based attentional task that involved two groups of dots (target/distractor dots) moving toward different directions. In Experiment 1, participants were provided with target cues indicating the moving direction of target dots. The results showed that participants were biased toward the cued direction and tracked the target dots throughout the task period. In Experiment 2 and Experiment 3, participants were provided with cues that informed the moving direction of distractor dots. When the distractor cue varied on a trial-by-trial basis (Experiment 2), participants continuously monitored the distractor's direction. However, when the to-be-ignored distractor direction remained constant (Experiment 3), participants would strategically bias their attention to the distractor's direction before the cue onset to reduce the cost of redeployment of attention between trials and reactively suppress further attraction evoked by distractors during the stimulus-on stage. This functional dissociation reflected the distinct influence that expectation produced on ocular control. Taken together, these results suggest that monitoring the distractor's feature is a prerequisite for feature-based attentional inhibition, and this process is facilitated by the predictability of the distractor's feature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
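
The gaze-position decoding used in this study is model-based; purely as a hypothetical illustration of the underlying idea (quantifying how strongly gaze drifts with the cued versus the to-be-ignored motion direction), one could project sample-to-sample gaze displacement onto each dot field's motion vector. The sketch below is not the authors' pipeline, and the function name and inputs are assumptions:

import numpy as np

def motion_bias_index(gaze_xy, target_dir_deg, distractor_dir_deg):
    # gaze_xy: (n_samples, 2) array of gaze positions in degrees of visual angle
    # *_dir_deg: motion directions in degrees (0 = rightward, 90 = upward)
    def unit(angle_deg):
        return np.array([np.cos(np.deg2rad(angle_deg)), np.sin(np.deg2rad(angle_deg))])
    disp = np.diff(gaze_xy, axis=0)                    # sample-to-sample gaze displacement
    along_target = disp @ unit(target_dir_deg)         # displacement along the target motion
    along_distractor = disp @ unit(distractor_dir_deg) # displacement along the distractor motion
    denom = np.abs(along_target).sum() + np.abs(along_distractor).sum() + 1e-12
    # +1 = gaze drifts entirely with the target field, -1 = entirely with the distractor field
    return (along_target.sum() - along_distractor.sum()) / denom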


Thomas G. G. Wegner; Jan Grenzebach; Alexandra Bendixen; Wolfgang Einhäuser

Parameter dependence in visual pattern-component rivalry at onset and during prolonged viewing Journal Article

In: Vision Research, vol. 182, pp. 69–88, 2021.


@article{Wegner2021,
title = {Parameter dependence in visual pattern-component rivalry at onset and during prolonged viewing},
author = {Thomas G. G. Wegner and Jan Grenzebach and Alexandra Bendixen and Wolfgang Einhäuser},
doi = {10.1016/j.visres.2020.12.006},
year = {2021},
date = {2021-01-01},
journal = {Vision Research},
volume = {182},
pages = {69--88},
abstract = {In multistability, perceptual interpretations (“percepts”) of ambiguous stimuli alternate over time. There is considerable debate as to whether similar regularities govern the first percept after stimulus onset and percepts during prolonged presentation. We address this question in a visual pattern-component rivalry paradigm by presenting two overlaid drifting gratings, which participants perceived as individual gratings passing in front of each other (“segregated”) or as a plaid (“integrated”). We varied the enclosed angle (“opening angle”) between the gratings (experiments 1 and 2) and stimulus orientation (experiment 2). The relative number of integrated percepts increased monotonically with opening angle. The point of equality, where half of the percepts were integrated, was at a smaller opening angle at onset than during prolonged viewing. The functional dependence of the relative number of integrated percepts on opening angle showed a steeper curve at onset than during prolonged viewing. Dominance durations of integrated percepts were longer at onset than during prolonged viewing and increased with opening angle. The general pattern persisted when stimuli were rotated (experiment 2), despite some perceptual preference for cardinal motion directions over oblique directions. Analysis of eye movements, specifically the slow phase of the optokinetic nystagmus (OKN), confirmed the veridicality of participants' reports and provided a temporal characterization of percept formation after stimulus onset. Together, our results show that the first percept after stimulus onset exhibits a different dependence on stimulus parameters than percepts during prolonged viewing. This underlines the distinct role of the first percept in multistability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
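
The OKN-based analysis referenced in this abstract exploits the fact that the slow phase of the optokinetic nystagmus tends to follow the currently dominant motion percept. As a rough, hypothetical sketch (not the authors' analysis), slow-phase velocity can be approximated by discarding fast phases with a simple velocity threshold:

import numpy as np

def okn_slow_phase_velocity(x_deg, hz=500.0, fast_thresh=30.0):
    # x_deg: 1-D horizontal gaze position trace in degrees; hz: sampling rate in Hz
    # fast_thresh: velocity (deg/s) above which samples are treated as fast resetting phases
    vel = np.diff(x_deg) * hz              # instantaneous velocity in deg/s
    slow = vel[np.abs(vel) < fast_thresh]  # keep only slow-phase samples
    return np.median(slow) if slow.size else np.nan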


Yuehua Wang; Shulan Lu; Derek Harter

Towards collaborative and intelligent learning environments based on eye tracking data and learning analytics: A Survey Journal Article

In: IEEE Access, vol. 9, pp. 137991–138002, 2021.


@article{Wang2021m,
title = {Towards collaborative and intelligent learning environments based on eye tracking data and learning analytics: A Survey},
author = {Yuehua Wang and Shulan Lu and Derek Harter},
doi = {10.1109/ACCESS.2021.3117780},
year = {2021},
date = {2021-01-01},
journal = {IEEE Access},
volume = {9},
pages = {137991--138002},
publisher = {IEEE},
abstract = {The current pandemic has significantly impacted educational practices, modifying many aspects of how and when we learn. In particular, remote learning and the use of digital platforms have greatly increased in importance. Online teaching and e-learning provide many benefits for information retention and schedule flexibility in our on-demand world while breaking down barriers caused by geographic location, physical facilities, transportation issues, or physical impediments. However, educators and researchers have noticed that students face a learning and performance decline as a result of this sudden shift to online teaching and e-learning from classrooms around the world. In this paper, we focus on reviewing eye-tracking techniques and systems, data collection and management methods, datasets, and multi-modal learning data analytics for promoting pervasive and proactive learning in educational environments. We then describe and discuss the crucial challenges and open issues of current learning environments and data learning methods. The review and discussion show the potential of transforming traditional ways of teaching and learning in the classroom, and the feasibility of adaptively driving learning processes using eye-tracking, data science, multimodal learning analytics, and artificial intelligence. These findings call for further attention and research on collaborative and intelligent learning systems, plug-and-play devices and software modules, data science, and learning analytics methods for promoting the evolution of face-to-face learning and e-learning environments and enhancing student collaboration, engagement, and success.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Yiheng Wang; Yanping Liu

Can longer gaze duration determine risky investment decisions? An interactive perspective Journal Article

In: Journal of Eye Movement Research, vol. 14, no. 4, pp. 1–8, 2021.


@article{Wang2021k,
title = {Can longer gaze duration determine risky investment decisions? An interactive perspective},
author = {Yiheng Wang and Yanping Liu},
doi = {10.16910/JEMR.14.4.3},
year = {2021},
date = {2021-01-01},
journal = {Journal of Eye Movement Research},
volume = {14},
number = {4},
pages = {1--8},
abstract = {Can longer gaze duration determine risky investment decisions? Recent studies have tested how gaze influences peopleʼs decisions and the boundary of the gaze effect. The current experiment used adaptive gaze-contingent manipulation by adding a self-determined option to test whether longer gaze duration can determine risky investment decisions. The results showed that both the expected value of each option and the gaze duration influenced peopleʼs decisions. This result was consistent with the attentional diffusion model (aDDM) proposed by Krajbich et al. (2010), which suggests that gaze can influence the choice process by amplifying the value of the choice. Therefore, the gaze duration would influence the decision when people do not have a clear preference. The result also showed that the similarity between options and the computational difficulty would also influence the gaze effect. This result was inconsistent with prior research that used option similarities to represent difficulty, suggesting that both similarity between options and computational difficulty induce different underlying mechanisms of decision difficulty.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
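
The aDDM cited in this abstract (Krajbich et al., 2010) treats choice as a drift-diffusion process in which the momentary drift discounts the value of whichever option is currently unattended. The simulation below is a minimal sketch of that idea; the parameter values (d, theta, sigma, bound) are illustrative assumptions, not estimates from either paper:

import numpy as np

def simulate_addm_trial(v_left, v_right, fixations, d=0.002, theta=0.3,
                        sigma=0.02, bound=1.0, rng=None):
    # v_left, v_right: subjective values of the two options (arbitrary units)
    # fixations: iterable of ('left' | 'right', duration_in_ms) pairs
    # d, theta, sigma, bound: illustrative drift scaling, attentional discount,
    # per-millisecond noise SD, and decision boundary
    rng = np.random.default_rng() if rng is None else rng
    rdv, t = 0.0, 0                                 # relative decision value, elapsed ms
    for side, dur in fixations:
        for _ in range(int(dur)):
            t += 1
            if side == 'left':                      # unattended option discounted by theta
                drift = d * (v_left - theta * v_right)
            else:
                drift = d * (theta * v_left - v_right)
            rdv += drift + rng.normal(0.0, sigma)
            if rdv >= bound:
                return 'left', t                    # upper boundary crossed -> choose left
            if rdv <= -bound:
                return 'right', t                   # lower boundary crossed -> choose right
    return None, t                                  # no decision before fixations ran out

For instance, simulate_addm_trial(3.0, 1.0, [('left', 400), ('right', 300)]) will typically return a 'left' choice within the first fixation, since evidence for the attended, higher-valued option accumulates fastest.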


Xi Wang; Kenneth Holmqvist; Marc Alexa

A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm Journal Article

In: Behavior Research Methods, vol. 53, no. 5, pp. 2049–2068, 2021.


@article{Wang2021i,
title = {A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm},
author = {Xi Wang and Kenneth Holmqvist and Marc Alexa},
doi = {10.3758/s13428-020-01513-1},
year = {2021},
date = {2021-01-01},
journal = {Behavior Research Methods},
volume = {53},
number = {5},
pages = {2049--2068},
publisher = {Behavior Research Methods},
abstract = {We present an algorithmic method for aligning recall fixations with encoding fixations, to be used in looking-at-nothing paradigms that either record recall eye movements during silence or want to speed up data analysis with recordings of recall data during speech. The algorithm utilizes a novel consensus-based elastic matching algorithm to estimate which encoding fixations correspond to later recall fixations. This is not a scanpath comparison method, as fixation sequence order is ignored and only position configurations are used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We then evaluate the performance of our algorithm by investigating whether the recalled objects identified by the algorithm correspond with independent assessments of what objects in the image are marked as subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. This result is exemplified in four groups of use cases: to investigate the roles of low-level visual features, faces, signs and text, and people of different sizes, in recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions in the images. Examples also illustrate how the algorithm can differentiate between image objects that have been fixated during silent recall vs. those objects that have not been visually attended, even though they were fixated during encoding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
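
The consensus-based elastic matching procedure itself is more involved than can be reproduced here; purely as a point of reference, a naive baseline that assigns each recall fixation to the spatially nearest encoding fixation (ignoring temporal order, as the abstract notes the algorithm does) might look like the following hypothetical sketch. It is not the published algorithm:

import numpy as np

def nearest_encoding_fixation(recall_xy, encoding_xy):
    # recall_xy: (n_recall, 2) recall fixation positions (e.g., pixels or degrees)
    # encoding_xy: (n_encoding, 2) encoding fixation positions in the same units
    diffs = recall_xy[:, None, :] - encoding_xy[None, :, :]  # pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)                   # pairwise Euclidean distances
    return np.argmin(dists, axis=1)                          # index of nearest encoding fixation per recall fixation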


Wendy Wang; Meaghan Clough; Owen White; Neil Shuey; Anneke Van Der Walt; Joanne Fielding

Detecting cognitive impairment in idiopathic intracranial hypertension using ocular motor and neuropsychological testing Journal Article

In: Frontiers in Neurology, vol. 12, pp. 772513, 2021.


@article{Wang2021h,
title = {Detecting cognitive impairment in idiopathic intracranial hypertension using ocular motor and neuropsychological testing},
author = {Wendy Wang and Meaghan Clough and Owen White and Neil Shuey and Anneke Van Der Walt and Joanne Fielding},
doi = {10.3389/fneur.2021.772513},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Neurology},
volume = {12},
pages = {772513},
abstract = {Objective: To determine whether cognitive impairments in patients with Idiopathic Intracranial Hypertension (IIH) are correlated with changes in visual processing, weight, waist circumference, mood or headache, and whether they change over time. Methods: Twenty-two newly diagnosed IIH patients participated, with a subset assessed longitudinally at 3 and 6 months. Both conventional and novel ocular motor tests of cognition were included: Symbol Digit Modalities Test (SDMT), Stroop Colour and Word Test (SCWT), Digit Span, California Verbal Learning Test (CVLT), prosaccade (PS) task, antisaccade (AS) task, interleaved antisaccade-prosaccade (AS-PS) task. Patients also completed headache, mood, and visual functioning questionnaires. Results: IIH patients performed more poorly than controls on the SDMT (p< 0.001), SCWT (p = 0.021), Digit Span test (p< 0.001) and CVLT (p = 0.004) at baseline, and generated a higher proportion of AS errors in both the AS (p< 0.001) and AS-PS tasks (p = 0.007). Further, IIH patients exhibited prolonged latencies on the cognitively complex AS-PS task (p = 0.034). While weight, waist circumference, headache and mood did not predict performance on any experimental measure, increased retinal nerve fibre layer (RNFL) was associated with AS error rate on both the block [F(3, 19)=3.22},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tianlu Wang; Lena M. Hofbauer; Dante Mantini; Céline R. Gillebert

Behavioural and neural effects of eccentricity and visual field during covert visuospatial attention Journal Article

In: Neuroimage: Reports, vol. 1, no. 3, pp. 100039, 2021.


@article{Wang2021g,
title = {Behavioural and neural effects of eccentricity and visual field during covert visuospatial attention},
author = {Tianlu Wang and Lena M. Hofbauer and Dante Mantini and Céline R. Gillebert},
doi = {10.1016/j.ynirp.2021.100039},
year = {2021},
date = {2021-01-01},
journal = {Neuroimage: Reports},
volume = {1},
number = {3},
pages = {100039},
abstract = {The attentional priority map plays a key role in the distribution of attention, and is modulated by bottom-up sensory as well as top-down task-dependent factors. The intraparietal sulcus (IPS) is a key candidate to hold a neural representation of the attentional priority map. In the current study, we examined the role of the IPS during covert attention to spatial locations with high or low eccentricity in one or both visual hemifields. To this end, eighteen neurologically healthy participants performed a cued letter discrimination task in which they were endogenously cued to attend to a location at 5° or 10° eccentricity in the left and/or right visual field. We briefly presented a four-letter target array and subsequently probed perceptual performance while acquiring event-related functional MRI data. While behavioural results showed greater letter discrimination performance at the low eccentricity compared to the high eccentricity location, no neural effect of eccentricity was observed. The results further showed that attending to one visual hemifield produced higher activation in the left parietal and occipital cortex compared to attending bilaterally. Future studies may consider increasing the involvement of top-down control of attention to the cued location to study the neural effect of eccentricity, e.g., through manipulating the task difficulty.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jinxia Wang; Xiaoying Sun; Jiachen Lu; Hao Ran Dou; Yi Lei

Generalization gradients for fear and disgust in human associative learning Journal Article

In: Scientific Reports, vol. 11, pp. 14210, 2021.


@article{Wang2021e,
title = {Generalization gradients for fear and disgust in human associative learning},
author = {Jinxia Wang and Xiaoying Sun and Jiachen Lu and Hao Ran Dou and Yi Lei},
doi = {10.1038/s41598-021-93544-7},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {14210},
publisher = {Nature Publishing Group UK},
abstract = {Previous research indicates that excessive fear is a critical feature in anxiety disorders; however, recent studies suggest that disgust may also contribute to the etiology and maintenance of some anxiety disorders. It remains unclear if differences exist between these two threat-related emotions in conditioning and generalization. Evaluating different patterns of fear and disgust learning would facilitate a deeper understanding of how anxiety disorders develop. In this study, 32 college students completed threat conditioning tasks, including conditioned stimuli paired with frightening or disgusting images. Fear and disgust were divided into two randomly ordered blocks to examine differences by recording subjective US expectancy ratings and eye movements in the conditioning and generalization process. During conditioning, differing US expectancy ratings (fear vs. disgust) were found only on CS-, which may demonstrate that fear is associated with inferior discrimination learning. During the generalization test, participants exhibited greater US expectancy ratings to fear-related GS1 (generalized stimulus) and GS2 relative to disgust GS1 and GS2. Fear led to longer reaction times than disgust in both phases, and the pupil size and fixation duration for fear stimuli were larger than for disgust stimuli, suggesting that disgust generalization has a steeper gradient than fear generalization. These findings provide preliminary evidence for differences between fear- and disgust-related stimuli in conditioning and generalization, and suggest insights into treatment for anxiety and other fear- or disgust-related disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

