EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2022 (with some early 2023s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
Wenyi Zhang; Yang Xie; Tianming Yang
In: Nature Communications, vol. 13, no. 1, pp. 1–12, 2022.
The orbitofrontal cortex (OFC) encodes value and plays a key role in value-based decision-making. However, the attentional modulation of the OFC's value encoding is poorly understood. We trained two monkeys to detect a luminance change at a cued location between a pair of visual stimuli, which were over-trained pictures associated with different amounts of juice reward and, thus, different reward salience. Both the monkeys' behavior and dorsolateral prefrontal cortex neuronal activity indicated that the monkeys actively directed their spatial attention toward the cued stimulus during the task. However, the OFC's neuronal responses were dominated by the stimulus with higher reward salience and encoded its value. The value of the less salient stimulus was only weakly represented, regardless of spatial attention. The results demonstrate that reward and spatial attention are distinctly represented in the prefrontal cortex and that the OFC maintains a stable representation of reward salience that is minimally affected by attention.
TianHong Zhang; YingYu Yang; LiHua Xu; XiaoChen Tang; YeGang Hu; Xin Xiong; YanYan Wei; HuiRu Cui; YingYing Tang; HaiChun Liu; Tao Chen; Zhi Liu; Li Hui; ChunBo Li; XiaoLi Guo; JiJun Wang
In: The World Journal of Biological Psychiatry, vol. 23, no. 5, pp. 1–13, 2022.
Objectives: We used eye-tracking to evaluate multiple facial context processing and event-related potential (ERP) to evaluate multiple facial recognition in individuals at clinical high risk (CHR) for psychosis. Methods: In total, 173 subjects (83 CHRs and 90 healthy controls [HCs]) were included and their emotion perception performance was assessed. A total of 40 CHRs and 40 well-matched HCs completed an eye-tracking task in which they viewed pictures depicting a person in the foreground, presented as context-free, context-compatible, and context-incompatible. During the two-year follow-up, 26 CHRs developed psychosis, including 17 individuals who developed first-episode schizophrenia (FES). Eighteen well-matched HCs completed the face number detection ERP task with image stimuli of one, two, or three faces. Results: Compared to the HC group, the CHR group showed reduced visual attention to contextual processing when viewing multiple faces. With increasing complexity of the contextual faces, the differences in eye-tracking characteristics also increased. In the ERP task, the N170 amplitude decreased with a higher face number in FES patients, while it increased with a higher face number in HCs. Conclusions: Individuals in the very early phase of psychosis showed facial processing deficits, with supporting evidence of different scan paths during context processing and disruption of the N170 during multiple facial recognition.
Kaining Zhang; Ethan S. Bromberg-Martin; Fatih Sogukpinar; Kim Kocher; Ilya E. Monosov
Surprise and recency in novelty detection in the primate brain Journal Article
In: Current Biology, vol. 32, no. 10, pp. 2160–2173, 2022.
Primates and other animals must detect novel objects. However, the neuronal mechanisms of novelty detection remain unclear. Prominent theories propose that visual object novelty is either derived from the computation of recency (how long ago a stimulus was experienced) or is a form of sensory surprise (stimulus unpredictability). Here, we use high-channel electrophysiology in primates to show that in many primate prefrontal, temporal, and subcortical brain areas, object novelty detection is intertwined with the computations of recency and sensory surprise. Also, distinct circuits could be engaged by expected versus unexpected sensory surprise. Finally, we studied neuronal novelty-to-familiarity transformations during learning across many days. We found a diversity of timescales in neurons' learning rates and between-session forgetting rates, both within and across brain areas, that are well suited to support flexible behavior and learning in response to novelty. Our findings show that novelty sensitivity arises on multiple timescales across single neurons due to diverse but related computations of sensory surprise and recency and shed light on the computational underpinnings of novelty detection in the primate brain.
Kaihong Zhang; Zaiqing Chen; Jianbing Chen; Xiaoqiao Huang; Lijun Yun; Yonghang Tai
In: Journal of the Society for Information Display, vol. 30, no. 11, pp. 808–817, 2022.
Color asymmetry of the left and right views is a common phenomenon in stereoscopic three-dimensional (S3D) displays, which can lead to visual discomfort. Because visual discomfort is a subjective sensation, we hypothesized that variations of pupil diameter while viewing S3D images are related to experienced visual comfort. To test this hypothesis, we conducted eye-tracking experiments on humans viewing hue-asymmetric stereoscopic images while simultaneously collecting their judgment scores of experienced visual discomfort. From the collected eye-tracking data, pupil diameter variations were extracted using a normalization formula. Results show that changing the hue asymmetry level causes a significant change in pupil diameter variations. There was a strong negative correlation between pupil diameter variation and the visual comfort assessment (VCA) score (Pearson's r = −0.8936). We conclude that as the hue asymmetry level of a stereoscopic image increases, the pupil diameter of the participant becomes smaller and visual comfort decreases. A visual comfort prediction model based on pupil diameter variation fitted the data well; it predicts that viewers experience visual discomfort when the pupil diameter variation exceeds 20.57%.
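The abstract does not reproduce the normalization formula itself. The sketch below shows one common choice, baseline-relative percent change, together with a plain Pearson correlation, purely to illustrate the kind of analysis described; the function names and the baseline convention are assumptions, not the authors' code.

```python
import numpy as np

def pupil_variation(pupil_trace, baseline):
    """Percent change of mean pupil diameter relative to a baseline value.

    Illustrative assumption only: the paper's exact normalization formula
    is not given in the abstract.
    """
    return 100.0 * (np.mean(pupil_trace) - baseline) / baseline

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))
```

Correlating per-image pupil variations against VCA scores with `pearson_r` would yield the kind of negative coefficient the study reports.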
Huiyuan Zhang; Jing Samantha Pan
In: Journal of Vision, vol. 22, no. 10, pp. 1–23, 2022.
Traditional visual search tasks in the laboratory typically involve looking for targets in 2D displays with exemplar views of objects. In real life, visual search commonly entails 3D objects in 3D spaces with nonperpendicular viewing and relative motion between observers and search array items, both of which transform objects' projected images in lawful but unpredictable ways. Furthermore, observers often do not have to memorize a target before searching, but may refer to it while searching, for example, holding a picture of someone while looking for them in a crowd. Extending the traditional visual search task, in this study, we investigated the effects of image transformation as a result of perspective change yielded by discrete viewing angle change (Experiment 1) or continuous rotation of the search array (Experiment 2) and of having external references on visual search performance. Results showed that when searching among 3D objects with a non-zero viewing angle, performance was similar to searching among 2D exemplar views of objects; when searching for 3D targets in rotating arrays in virtual reality, performance was similar to searching in stationary arrays. In general, discrete or continuous perspective change did not affect the search outcomes in terms of accuracy, response time, and self-rated confidence, or the search process in terms of eye movement patterns. Therefore, visual search does not require the exact match of retinal images. Additionally, being able to see the target during the search improved search accuracy and observers' confidence. It increased search time because, as revealed by the eye movements, observers actively checked back on the reference target. Thus, visual search is an embodied process that involves real-time information exchange between the observers and the environment.
Han Zhang; Nicola C. Anderson; Kevin F. Miller
In: Attention, Perception, and Psychophysics, vol. 84, no. 4, pp. 1130–1150, 2022.
During scene viewing, semantic information in the scene has been shown to play a dominant role in guiding fixations compared to visual salience (e.g., Henderson & Hayes, 2017). However, scene viewing is sometimes disrupted by cognitive processes unrelated to the scene. For example, viewers sometimes engage in mind-wandering, or having thoughts unrelated to the current task. How do meaning and visual salience account for fixation allocation when the viewer is mind-wandering, and does it differ from when the viewer is on-task? We asked participants to study a series of real-world scenes in preparation for a later memory test. Thought probes occasionally occurred after a subset of scenes to assess whether participants were on-task or mind-wandering. We used salience maps (Graph-Based Visual Saliency; Harel, Koch, & Perona, 2007) and meaning maps (Henderson & Hayes, 2017) to represent the distribution of visual salience and semantic richness in the scene, respectively. Because visual salience and meaning were represented similarly, we could directly compare how well they predicted fixation allocation. Our results indicate that fixations prioritized meaningful over visually salient regions in the scene during mind-wandering just as during attentive viewing. These results held across the entire viewing time. A re-analysis of an independent study (Krasich, Huffman, Faber, & Brockmole, Journal of Vision, 20(9), 10, 2020) showed similar results. Therefore, viewers appear to prioritize meaningful regions over visually salient regions in real-world scenes even during mind-wandering.
Bo Zhang; Yuji Naya
In: Data in Brief, vol. 43, pp. 1–8, 2022.
A dataset consisting of whole-brain fMRI (functional magnetic resonance imaging)/MEG (magnetoencephalography) images, eye tracking files, and behavioral records from healthy adult human participants when they performed a spatial-memory paradigm in a virtual environment was collected to investigate the neural representation of the cognitive map defined by the unique spatial relationship of three objects, as well as the neural dynamics of the cognitive map following the task demand from localizing self-location to remembering the target location relative to the self-body. The dataset, including both fMRI and MEG, was also used to investigate the neural networks involved in representing a target within and outside the visual field. The dataset included 19 and 12 university students at Peking University for the fMRI and MEG experiments, respectively (fMRI: 12 women, 7 men; MEG: 4 women, 8 men). The average ages of those participants were 24.9 years (fMRI: 18–30 years) and 22.5 years (MEG: 19–25 years), respectively. fMRI BOLD and T1-weighted images were acquired using a 3T Siemens Prisma scanner (Siemens, Erlangen, Germany) equipped with a 20-channel receiver head coil. MEG neuromagnetic data were acquired using a 275-channel MEG system (CTF MEG, Canada). The dataset could be further used to investigate a range of neural mechanisms involved in human spatial cognition or to develop a bioinspired deep neural network to enhance machines' abilities in spatial processing.
Bei Zhang; Ralph Weidner; Fredrik Allenmark; Sabine Bertleff; Gereon R. Fink; Zhuanghua Shi; Hermann J. Müller
In: Cerebral Cortex, vol. 32, no. 13, pp. 2729–2744, 2022.
Observers can learn locations where salient distractors appear frequently to reduce potential interference - an effect attributed to better suppression of distractors at frequent locations. But how distractor suppression is implemented in the visual cortex and within the frontoparietal attention networks remains unclear. We used fMRI and a regional distractor-location learning paradigm with two types of distractors defined in either the same (orientation) or a different (color) dimension to the target to investigate this issue. fMRI results showed that BOLD signals in early visual cortex were significantly reduced for distractors (as well as targets) occurring at the frequent versus rare locations, mirroring behavioral patterns. This reduction was more robust with same-dimension distractors. Crucially, behavioral interference was correlated with distractor-evoked visual activity only for same- (but not different-) dimension distractors. Moreover, with different- (but not same-) dimension distractors, a color-processing area within the fusiform gyrus was activated more when a distractor was present in the rare region versus being absent and more with a distractor in the rare versus frequent locations. These results support statistical learning of frequent distractor locations involving regional suppression in early visual cortex and point to differential neural mechanisms of distractor handling with different- versus same-dimension distractors.
Viktoria Zemliak; W. Joseph MacInnes
The spatial leaky competing accumulator model Journal Article
In: Frontiers in Computer Science, vol. 4, pp. 1–11, 2022.
The Leaky Competing Accumulator model (LCA) of Usher and McClelland is able to simulate the time course of perceptual decision making between an arbitrary number of stimuli. Reaction times, such as saccadic latencies, produce a typical distribution that is skewed toward longer latencies, and accumulator models have shown excellent fit to these distributions. We propose a new implementation called the Spatial Leaky Competing Accumulator (SLCA), which can be used to predict the timing of subsequent fixation durations during a visual task. SLCA uses a pre-existing saliency map as input and represents accumulation neurons as a two-dimensional grid to generate predictions in visual space. The SLCA builds on several biologically motivated parameters: leakage, recurrent self-excitation, randomness, and non-linearity, and we also test two implementations of lateral inhibition. Global lateral inhibition, as implemented in the original model of Usher and McClelland, is applied to all competing neurons, while a local implementation allows only inhibition of immediate neighbors. We trained versions of the SLCA with both global and local lateral inhibition using a genetic algorithm and compared their performance in simulating the human fixation latency distribution in a foraging task. Although both implementations were able to produce a positively skewed latency distribution, only the local SLCA was able to match the human data distribution from the foraging task. We discuss our model's potential in models of salience and priority, and its benefits compared to other models such as the leaky integrate-and-fire network.
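As a rough illustration of the accumulator dynamics the SLCA builds on, here is one Euler step of an Usher-and-McClelland-style LCA with global lateral inhibition. Parameter values and function names are illustrative assumptions, not the fitted values or code from the paper.

```python
import numpy as np

def lca_step(x, inputs, leak=0.2, beta=0.2, dt_tau=0.1, noise_sd=0.0, rng=None):
    """One Euler step of a leaky competing accumulator (illustrative sketch).

    x       -- current activations (1D array, one accumulator per stimulus)
    inputs  -- external input to each accumulator (e.g., salience values)
    leak    -- decay rate k; beta -- global lateral-inhibition weight
    """
    rng = rng or np.random.default_rng()
    inhibition = beta * (x.sum() - x)            # global inhibition from all rivals
    dx = (inputs - leak * x - inhibition) * dt_tau
    if noise_sd:
        dx += noise_sd * np.sqrt(dt_tau) * rng.standard_normal(x.shape)
    return np.maximum(x + dx, 0.0)               # non-linearity: rectify at zero
```

In a spatial variant like the SLCA, `x` would be a 2D grid driven by a saliency map, and the local-inhibition version would restrict `inhibition` to immediate grid neighbors rather than summing over all units.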
Andrea M. Zawoyski; Scott P. Ardoin; Katherine S. Binder
In: School Psychology, pp. 1–8, 2022.
Teachers often encourage students to use test-taking strategies during reading comprehension assessments, but these strategies are not always evidence-based. One common strategy involves teaching students to read the questions before reading the associated passage. Research findings comparing the passage-first (PF) and questions-first (QF) strategies are mixed. The present study employed eye-tracking technology to record 84 third- and fourth-grade participants' eye movements (EMs) as they read a passage and responded to multiple-choice (MC) questions using PF and QF strategies in a within-subject design. Although there were no significant differences between groups in accuracy on MC questions, EM measures revealed that the PF condition was superior to the QF condition for elementary readers in terms of efficiency in reading and responding to questions. These findings suggest that the PF strategy supports a more comprehensive understanding of the text. Ultimately, within the PF condition, students required less time to obtain the same accuracy outcomes they attained when reading in the QF condition. School psychologists can improve reading comprehension instruction by emphasizing the importance of teaching children to gain meaning from the text rather than search the passage for answers to MC questions.
Farnaz Zamani Esfahlani; Lisa Byrge; Jacob Tanner; Olaf Sporns; Daniel P. Kennedy; Richard F. Betzel
In: NeuroImage, vol. 263, pp. 1–12, 2022.
The interaction between brain regions changes over time, which can be characterized using time-varying functional connectivity (tvFC). The common approach to estimate tvFC uses sliding windows and offers limited temporal resolution. An alternative method is to use the recently proposed edge-centric approach, which enables the tracking of moment-to-moment changes in co-fluctuation patterns between pairs of brain regions. Here, we first examined the dynamic features of edge time series and compared them to those in the sliding window tvFC (sw-tvFC). Then, we used edge time series to compare subjects with autism spectrum disorder (ASD) and healthy controls (CN). Our results indicate that relative to sw-tvFC, edge time series captured rapid and bursty network-level fluctuations that synchronize across subjects during movie-watching. The results from the second part of the study suggested that the magnitude of peak amplitude in the collective co-fluctuations of brain regions (estimated as root sum square (RSS) of edge time series) is similar in CN and ASD. However, the trough-to-trough duration in RSS signal is greater in ASD, compared to CN. Furthermore, an edge-wise comparison of high-amplitude co-fluctuations showed that the within-network edges exhibited greater magnitude fluctuations in CN. Our findings suggest that high-amplitude co-fluctuations captured by edge time series provide details about the disruption of functional brain dynamics that could potentially be used in developing new biomarkers of mental disorders.
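The edge-centric construction described above can be sketched in a few lines: each edge's time series is the element-wise product of the two regions' z-scored signals, and the root sum square (RSS) over edges at each time point indexes collective co-fluctuation amplitude. Variable names are ours; this sketches the general method, not the authors' pipeline.

```python
import numpy as np

def edge_time_series(ts):
    """Edge-centric co-fluctuation time series (illustrative sketch).

    ts: (time, regions) array of regional BOLD signals.
    Returns the (time, edges) co-fluctuation matrix and its per-timepoint RSS.
    """
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)  # z-score each region
    i, j = np.triu_indices(ts.shape[1], k=1)     # one edge per region pair
    edges = z[:, i] * z[:, j]                    # moment-to-moment co-fluctuation
    rss = np.sqrt((edges ** 2).sum(axis=1))      # collective amplitude envelope
    return edges, rss
```

A useful sanity check on this construction: the time-average of each edge's series equals the Pearson correlation between the two regions, so static functional connectivity is recovered exactly.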
Xinger Yu; Timothy D. Hanks; Joy J. Geng
In: Psychological Science, vol. 33, no. 1, pp. 105–120, 2022.
When searching for a target object, we engage in a continuous “look-identify” cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.
Wenyuan Yu; Wenhui Sun; Nai Ding
In: NeuroImage, vol. 255, pp. 1–10, 2022.
Natural scenes contain multi-modal information, which is integrated to form a coherent perception. Previous studies have demonstrated that cross-modal information can modulate neural encoding of low-level sensory features. These studies, however, mostly focus on the processing of single sensory events or rhythmic sensory sequences. Here, we investigate how the neural encoding of basic auditory and visual features is modulated by cross-modal information when participants watch movie clips primarily composed of non-rhythmic events. We presented audiovisual congruent and audiovisual incongruent movie clips, and since attention can modulate cross-modal interactions, we separately analyzed high- and low-arousal movie clips. We recorded neural responses using electroencephalography (EEG) and employed the temporal response function (TRF) to quantify the neural encoding of auditory and visual features. The neural encoding of the sound envelope was enhanced in the audiovisual congruent condition compared with the incongruent condition, but this effect was only significant for high-arousal movie clips. In contrast, audiovisual congruency did not significantly modulate the neural encoding of visual features, e.g., luminance or visual motion. In summary, our findings demonstrate asymmetrical cross-modal interactions during the processing of natural scenes that lack rhythmicity: Congruent visual information enhances low-level auditory processing, while congruent auditory information does not significantly modulate low-level visual processing.
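A minimal sketch of a TRF-style analysis, assuming a single stimulus feature (e.g., the sound envelope) and one EEG channel: regress the response onto time-lagged copies of the feature with ridge regularization. The function name, lag convention, and regularization choice are assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_trf(stimulus, eeg, max_lag, ridge=1.0):
    """Ridge-regularized temporal response function estimate (sketch).

    stimulus -- 1D stimulus feature (e.g., sound envelope), in samples
    eeg      -- 1D neural response of the same length
    max_lag  -- maximum lag in samples; returns one weight per lag 0..max_lag
    """
    T = len(stimulus)
    X = np.zeros((T, max_lag + 1))
    for lag in range(max_lag + 1):               # build lagged design matrix
        X[lag:, lag] = stimulus[:T - lag]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(max_lag + 1), X.T @ eeg)
    return w
```

Comparing the TRF weights (or the prediction accuracy of the fitted model) between congruent and incongruent conditions is the kind of contrast the study reports for the sound envelope.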
Gongchen Yu; James P. Herman; Leor N. Katz; Richard J. Krauzlis
In: eLife, vol. 11, pp. 1–14, 2022.
Recent evidence suggests that microsaccades are causally linked to the attention-related modulation of neurons—specifically, that microsaccades toward the attended location are required for the subsequent changes in firing rate. These findings have raised questions about whether attention-related modulation is due to different states of attention as traditionally assumed or might instead be a secondary effect of microsaccades. Here, in two rhesus macaques, we tested the relationship between microsaccades and attention-related modulation in the superior colliculus (SC), a brain structure crucial for allocating attention. We found that attention-related modulation emerged even in the absence of microsaccades, was already present prior to microsaccades toward the cued stimulus, and persisted through the suppression of activity that accompanied all microsaccades. Nonetheless, consistent with previous findings, we also found significant attention-related modulation when microsaccades were directed toward, rather than away from, the cued location. Thus, despite the clear links between microsaccades and attention, microsaccades are not necessary for attention-related modulation, at least not in the SC. They do, however, provide an additional marker for the state of attention, especially at times when attention is shifting from one location to another.
Mansoureh Seyed Yousefi; Farnoush Reisi; Mohammad Reza Daliri; Vahid Shalchyan
In: IEEE Access, vol. 10, pp. 1–12, 2022.
Stress is a common disorder in human societies, and numerous studies have been conducted on its early diagnosis. Previous studies have shown that it is possible to diagnose stress using eye-tracking data. This study aimed to develop a new method for discriminating "stress" from "relaxation" based on eye-tracker parameters and the electrodermal activity signal, and to achieve higher accuracy than previous research. We used a Stroop task and a mathematical stressor task in which stress elements were placed in a novel design, separating stress from relaxation in the Stroop task and evaluating three levels of stress in the mathematical task. In the present study, we recorded the eye-tracking data of fifteen participants and thoroughly investigated pupil diameter (PD) and electrodermal activity (EDA) features to discriminate different stress states. After preprocessing, several features were extracted and selected. The features were then used for classification with support vector machine, linear discriminant analysis, and k-nearest neighbor classifiers. The linear discriminant analysis classifier, with an accuracy of 88.43% on the Stroop task and 91.10% on the mathematical task, outperformed the other classifiers when using PD and EDA features. PD features also proved more reliable and better able to differentiate stress from relaxation than traditional EDA features.
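As an illustration of the classification step, a minimal two-class Fisher linear discriminant can be written directly in NumPy. This is a simplified stand-in for the paper's LDA pipeline: the feature matrix, labels, and function names are assumed for the sketch.

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher linear discriminant (e.g., stress vs. relaxation).

    X: (samples, features) matrix of PD/EDA features; y: binary labels.
    Returns a projection vector and a midpoint decision threshold.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: pooled, unnormalized covariance of the two classes
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return w, thresh

def fisher_lda_predict(X, w, thresh):
    """Assign class 1 to samples projecting above the midpoint threshold."""
    return (X @ w > thresh).astype(int)
```

In practice the reported accuracies would come from cross-validated evaluation of such a classifier on the selected PD and EDA features, not from training-set performance.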
Harun Yörük; Benjamin J. Tamber-Rosenau
In: Visual Cognition, vol. 30, no. 7, pp. 490–505, 2022.
In visual crowding, an item representation is degraded by adjacent flanker items. Recently, the related phenomenon of visual working memory (VWM) crowding has been used to evaluate shared mechanisms between memory and perception. However, some previous studies that investigated VWM crowding suggested that it stemmed from encoding, rather than memory maintenance. In the current study, we evaluated two measures in simultaneously-presented arrays: anisotropy for radially vs. tangentially configured arrays, and effect of target to array proximity (array middle vs. edge targets). Simultaneously presented arrays evoked effects in both measures. We then compared the data from the current study to that from our previous study that used sequential presentation, and thus avoided encoding-based explanations for crowding. We predicted that we would observe greater crowding for simultaneous than sequential presentation because simultaneous arrays allow for two opportunities for crowding—encoding and maintenance—while sequential arrays only allow for maintenance-based crowding. Surprisingly, we observed that both measures were similar across simultaneous and sequential arrays. These results indicate that VWM crowding does not have an additive error mechanism across encoding and maintenance. Moreover, the anisotropy result suggests that both simultaneous and sequential array VWM crowding is influenced by retinotopy in the early visual cortex.
Sang-Ah Yoo; Julio C. Martinez-Trujillo; Stefan Treue; John K. Tsotsos; Mazyar Fallah
In: BMC Biology, vol. 20, no. 1, pp. 1–19, 2022.
Background: Feature-based attention prioritizes the processing of the attended feature while strongly suppressing the processing of nearby ones. This creates a non-linearity or "attentional suppressive surround" predicted by the Selective Tuning model of visual attention. However, previously reported effects of feature-based attention on neuronal responses are linear, e.g., feature-similarity gain. Here, we investigated this apparent contradiction using neurophysiological and psychophysical approaches. Results: Responses of motion direction-selective neurons in area MT/MST of monkeys were recorded during a motion task. When attention was allocated to a stimulus moving in the neurons' preferred direction, response tuning curves showed a minimum for directions 60–90° away from the preferred direction, an attentional suppressive surround. This effect was modeled via the interaction of two Gaussian fields representing excitatory narrowly tuned and inhibitory widely tuned inputs into a neuron, with feature-based attention predominantly increasing the gain of inhibitory inputs. We further showed, using a motion repulsion paradigm in humans, that feature-based attention produces a similar non-linearity in motion discrimination performance. Conclusions: Our results link the gain modulation of neuronal inputs and tuning curves examined through the feature-similarity gain lens to the attentional impact on neural population responses predicted by the Selective Tuning model, providing a unified framework for the documented effects of feature-based attention on neuronal responses and behavior.
Aspen H. Yoo; Alfredo Bolaños; Grace E. Hallenbeck; Masih Rahmati; Thomas C. Sprague; Clayton E. Curtis
In: Journal of Cognitive Neuroscience, vol. 34, no. 2, pp. 365–379, 2022.
Humans allocate visual working memory (WM) resource according to behavioral relevance, resulting in more precise memories for more important items. Theoretically, items may be maintained by feature-tuned neural populations, where the relative gain of the populations encoding each item determines precision. To test this hypothesis, we compared the amplitudes of delay period activity in the different parts of retinotopic maps representing each of several WM items, predicting the amplitudes would track behavioral priority. Using fMRI, we scanned participants while they remembered the location of multiple items over a WM delay and then reported the location of one probed item using a memory-guided saccade. Importantly, items were not equally probable to be probed (0.6, 0.3, 0.1, 0.0), which was indicated with a precue. We analyzed fMRI activity in 10 visual field maps in occipital, parietal, and frontal cortex known to be important for visual WM. In early visual cortex, but not association cortex, the amplitude of BOLD activation within voxels corresponding to the retinotopic location of visual WM items increased with the priority of the item. Interestingly, these results were contrasted with a common finding that higher-level brain regions had greater delay period activity, demonstrating a dissociation between the absolute amount of activity in a brain area and the activity of different spatially selective populations within it. These results suggest that the distribution of WM resources according to priority sculpts the relative gains of neural populations that encode items, offering a neural mechanism for how prioritization impacts memory precision.
Atsushi Yokoi; Jeffrey Weiler
Pupil diameter tracked during motor adaptation in humans Journal Article
In: Journal of Neurophysiology, vol. 128, no. 5, pp. 1224–1243, 2022.
Pupil diameter is known as a noninvasive window into individuals' internal states. Despite the growing use of pupillometry in cognitive learning studies, it receives little attention in motor learning studies. Here, we characterized the pupil responses in a short-term reach adaptation paradigm by measuring pupil diameter of human participants while they adapted to abrupt, gradual, or switching force field conditions. Our results demonstrate how surprise and uncertainty reflected in pupil diameter develop during motor adaptation. Pupil diameter, under constant illumination, is known to reflect individuals' internal states, such as surprise about observation and environmental uncertainty. Despite the growing use of pupillometry in cognitive learning studies as an additional measure for examining internal states, few studies have used pupillometry in human motor learning studies. Here, we provide the first detailed characterization of pupil diameter changes in a short-term reach adaptation paradigm. We measured pupil changes in 121 human participants while they adapted to abrupt, gradual, or switching force field conditions. Sudden increases in movement error caused by the introduction/reversal of the force field resulted in strong phasic pupil dilation during movement accompanied by a transient increase in tonic premovement baseline pupil diameter in subsequent trials. In contrast, pupil responses were reduced when the force field was gradually introduced, indicating that large, unexpected errors drove the changes in pupil responses. Interestingly, however, error-induced pupil responses gradually became insensitive after experiencing multiple force field reversals. We also found an association between baseline pupil diameter and incidental knowledge of the gradually introduced perturbation. Finally, in all experiments, we found a strong co-occurrence of larger baseline pupil diameter with slower reaction and movement times after each rest break.
Collectively, these results suggest that tonic baseline pupil diameter reflects one's belief about environmental uncertainty, whereas phasic pupil dilation during movement reflects surprise about a sensory outcome (i.e., movement error), and both effects are modulated by novelty. Our results provide a new approach for nonverbally assessing participants' internal states during motor learning.
Emanuela Yeung; Dimitrios Askitis; Velisar Manea; Victoria Southgate
In: Open Mind: Discoveries in Cognitive Science, vol. 6, pp. 232–249, 2022.
The capacity to take another's perspective appears to be present from early in life, with young infants ostensibly able to predict others' behaviour even when the self and other perspective are at odds. Yet, infants' abilities are difficult to reconcile with the well-known problems that older children have with ignoring their own perspective. Here we show that it is the development of the self-perspective, at around 18 months, that creates a perspective conflict between self and other during a non-verbal perspective-tracking scenario. Using mirror self-recognition as a measure of self-awareness and pupil dilation to index conflict processing, our results show that mirror recognisers perceive greater conflict during action anticipation, specifically in a high inhibitory demand condition, in which conflict between self and other should be particularly salient.
Rachel Yep; Matthew L. Smorenburg; Heidi C. Riek; Olivia G. Calancie; Ryan H. Kirkpatrick; Julia E. Perkins; Jeff Huang; Brian C. Coe; Donald C. Brien; Douglas P. Munoz
Interleaved pro/anti-saccade behavior across the lifespan Journal Article
In: Frontiers in Aging Neuroscience, vol. 14, pp. 1–15, 2022.
The capacity for inhibitory control is an important cognitive process that undergoes dynamic changes over the course of the lifespan. Robust characterization of this trajectory, considering age continuously and using flexible modeling techniques, is critical to advance our understanding of the neural mechanisms that differ in healthy aging and neurological disease. The interleaved pro/anti-saccade task (IPAST), in which pro- and anti-saccade trials are randomly interleaved within a block, provides a simple and sensitive means of assessing the neural circuitry underlying inhibitory control. We utilized IPAST data collected from a large cross-sectional cohort of normative participants (n = 604, 5–93 years of age), standardized pre-processing protocols, generalized additive modeling, and change point analysis to investigate the effect of age on saccade behavior and identify significant periods of change throughout the lifespan. Maturation of IPAST measures occurred throughout adolescence, while subsequent decline began as early as the mid-20s and continued into old age. Considering pro-saccade correct responses and anti-saccade direction errors made at express (short) and regular (long) latencies was crucial in differentiating developmental and aging processes. We additionally characterized the effect of age on voluntary override time, a novel measure describing the time at which voluntary processes begin to overcome automated processes on anti-saccade trials. Drawing on converging animal neurophysiology, human neuroimaging, and computational modeling literature, we propose potential frontal-parietal and frontal-striatal mechanisms that may mediate the behavioral changes revealed in our analysis. We liken the models presented here to “cognitive growth curves” which have important implications for improved detection of neurological disease states that emerge during vulnerable windows of developing and aging.
Panpan Yao; Adrian Staub; Xingshan Li
In: Psychonomic Bulletin & Review, vol. 29, no. 1, pp. 243–252, 2022.
Previous research has demonstrated effects of both orthographic neighborhood size and neighbor frequency in word recognition in Chinese. A large neighborhood—where neighborhood size is defined by the number of words that differ from a target word by a single character—appears to facilitate word recognition, while the presence of a higher-frequency neighbor has an inhibitory effect. The present study investigated modulation of these effects by a word's predictability in context. In two eye-movement experiments, the predictability of a target word in each sentence was manipulated. Target words differed in their neighborhood size (Experiment 1) and in whether they had a higher-frequency neighbor (Experiment 2). The study replicated the previously observed effects of neighborhood size and neighbor frequency when the target word was unpredictable, but in both experiments neighborhood effects were absent when the target was predictable. These results suggest that when a word is preactivated by context, the activation of its neighbors may be diminished to such an extent that these neighbors do not effectively compete for selection.
Panpan Yao; Reem Alkhammash; Xingshan Li
In: Scientific Studies of Reading, vol. 26, no. 5, pp. 390–408, 2022.
We aimed to address the time course of the plausibility effect in the on-line processing of Chinese nouns in temporarily ambiguous structures, and to ask whether L2ers can immediately use the plausibility information generated from classifier-noun associations when analyzing ambiguous structures. Two eye-tracking experiments were conducted to explore how native Chinese speakers (Experiment 1) and high-proficiency Dutch-Chinese learners (Experiment 2) process 4-character novel noun-noun combinations in Chinese on-line. In each pair of nominal phrases (Numeral+Classifier+Noun1+Noun2), the plausibility of the Classifier-Noun1 association varied (plausible vs. implausible) while the whole nominal phrase was always plausible. Results showed that the plausibility of Classifier-Noun1 associations had an immediate effect on Noun1 and a reversed effect on Noun2 for both groups of participants. These findings indicate that plausibility plays an immediate role in incremental semantic integration during on-line processing of Chinese. Like native Chinese speakers, high-proficiency L2ers can also use the plausibility information of classifier-noun associations in syntactic reanalysis.
Xiaozhi Yang; Ian Krajbich
In: Psychological Review, pp. 1–19, 2022.
When making decisions, how people allocate their attention influences their choices. One empirical finding is that people are more likely to choose the option that they have looked at more. This relation has been formalized with the attentional drift-diffusion model (aDDM; Krajbich et al., 2010). However, options often have multiple attributes, and attention is also thought to govern the relative weighting of those attributes (Roe et al., 2001). Little is known about how these two distinct features of the choice process interact; we still lack a model (and tests of that model) that incorporate both option- and attribute-wise attention. Here, we propose a multi-attribute attentional drift-diffusion model (maaDDM) to account for attentional discount factors on both options and attributes. We then use five eye-tracking datasets (two-alternative, two-attribute preferential tasks) from different choice domains to test the model. We find very stable option-level and attribute-level attentional discount factors across datasets, though nonfixated options are consistently discounted more than nonfixated attributes. Additionally, we find that people generally discount the nonfixated attribute of the nonfixated option in a multiplicative way, and so that feature is consistently discounted the most. Finally, we also find that gaze allocation reflects attribute weights, with more gaze to higher-weighted attributes. In summary, our work uncovers an intricate interplay between attribute weights, gaze processes, and preferential choice.
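The multiplicative discounting structure described in this abstract can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' fitted model; the parameter names (`theta` for option-level and `phi` for attribute-level discounting) are assumptions for the sketch.

```python
def drift(values, gaze_opt, gaze_att, w, theta, phi):
    """Signed drift toward option 0 in a two-option, two-attribute
    aDDM-style model with multiplicative attentional discounting.

    values[i][j] -- subjective value of attribute j for option i
    gaze_opt, gaze_att -- currently fixated option and attribute (0 or 1)
    w -- attribute weights (w[0] + w[1] == 1)
    theta -- discount on the non-fixated option (0..1)
    phi -- discount on the non-fixated attribute (0..1)
    """
    def weighted_value(i):
        total = 0.0
        for j in range(2):
            v = w[j] * values[i][j]
            if j != gaze_att:
                v *= phi      # attribute-level discounting
            if i != gaze_opt:
                v *= theta    # option-level discounting
            total += v
        return total
    # the non-fixated attribute of the non-fixated option receives
    # theta * phi, so it is discounted the most
    return weighted_value(0) - weighted_value(1)
```

With `theta = phi = 1` there is no discounting and the expression reduces to an ordinary weighted-value comparison; the compounded `theta * phi` term captures the abstract's finding that the nonfixated attribute of the nonfixated option is discounted most.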
Qianli Yang; Zhongqiao Lin; Wenyi Zhang; Jianshu Li; Xiyuan Chen; Jiaqi Zhang; Tianming Yang
In: eLife, vol. 11, pp. 1–39, 2022.
Humans can often handle daunting tasks with ease by developing a set of strategies to reduce decision-making into simpler problems. The ability to use heuristic strategies demands an advanced level of intelligence and has not been demonstrated in animals. Here, we trained macaque monkeys to play the classic video game Pac-Man. The monkeys' decision-making may be described with a strategy-based hierarchical decision-making model with over 90% accuracy. The model reveals that the monkeys adopted the take-the-best heuristic by using one dominating strategy for their decision-making at a time and formed compound strategies by assembling the basis strategies to handle particular game situations. With the model, the computationally complex but fully quantifiable Pac-Man behavior paradigm provides a new approach to understanding animals' advanced cognition.
Jingwen Yang; Zelin Chen; Guoxin Qiu; Xiangyu Li; Caixia Li; Kexin Yang; Zhuanggui Chen; Leyan Gao; Shuo Lu
Exploring the relationship between children's facial emotion processing characteristics and speech communication ability using deep learning on eye tracking and speech performance measures Journal Article
In: Computer Speech and Language, vol. 76, pp. 1–19, 2022.
Efficient facial emotion recognition (FER) plays a significant role in successful human communication and is closely associated with multiple speech communication disorders (SCD) in children. Despite this relevance, little is known about how speech communication abilities (SCA) and FER are correlated, or about their underlying mechanism. To address this, we monitored the eye movements of 115 children while they watched human faces with different emotions and designed a machine-learning-based SCD prediction model to explore the underlying pattern of eye movements during the FER task as well as their correlation with SCA. Strong and detailed correlations were found between different dimensions of SCA and various eye-movement features. A group of FER gazing patterns was found to be highly sensitive to the possibility of children's SCD. The SCD prediction model reached an accuracy as high as 88.9%, offering a possible technique for fast SCD screening in children.
Victoria Yaneva; Brian E. Clauser; Amy Morales; Miguel Paniagua
In: Advances in Health Sciences Education, vol. 27, no. 5, pp. 1401–1422, 2022.
Understanding the response process used by test takers when responding to multiple-choice questions (MCQs) is particularly important in evaluating the validity of score interpretations. Previous authors have recommended eye-tracking technology as a useful approach for collecting data on the processes test takers use to respond to test questions. This study proposes a new method for evaluating alternative score interpretations by using eye-tracking data and machine learning. We collect eye-tracking data from 26 students responding to clinical MCQs. Analysis is performed by providing 119 eye-tracking features as input for a machine-learning model aiming to classify correct and incorrect responses. The predictive power of various combinations of features within the model is evaluated to understand how different feature interactions contribute to the predictions. The emerging eye-movement patterns indicate that incorrect responses are associated with working from the options to the stem. By contrast, correct responses are associated with working from the stem to the options, spending more time on reading the problem carefully, and a more decisive selection of a response option. The results suggest that the behaviours associated with correct responses are aligned with the real-world model used for score interpretation, while those associated with incorrect responses are not. To the best of our knowledge, this is the first study to perform data-driven, machine-learning experiments with eye-tracking data for the purpose of evaluating score interpretation validity.
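As a toy illustration of this style of analysis (the data and classifier here are entirely hypothetical; the study's own pipeline is not specified beyond its use of 119 eye-tracking features), one could compare how well different feature subsets separate correct from incorrect responses via leave-one-out accuracy of a simple classifier:

```python
def nearest_centroid_loo(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier.

    X -- list of feature vectors (e.g., per-response eye-tracking features)
    y -- list of labels (e.g., 1 = correct response, 0 = incorrect)
    """
    correct = 0
    for i in range(len(X)):
        # hold out sample i, fit centroids on the rest
        Xtr = [x for j, x in enumerate(X) if j != i]
        ytr = [l for j, l in enumerate(y) if j != i]
        centroids = {}
        for label in set(ytr):
            rows = [x for x, l in zip(Xtr, ytr) if l == label]
            centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
        dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        pred = min(centroids, key=lambda l: dist(X[i], centroids[l]))
        correct += pred == y[i]
    return correct / len(X)
```

Evaluating this over different subsets of feature columns would mimic, in spirit, the paper's comparison of feature combinations, though the authors' actual model and features differ.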
Jumpei Yamashita; Hiroki Terashima; Makoto Yoneya; Kazushi Maruya; Haruo Oishi; Takatsune Kumada
In: PLoS ONE, vol. 17, no. 10, pp. 1–23, 2022.
Understanding temporal fluctuations in attention can benefit scientific knowledge and real-life applications. Temporal attention studies have typically used reaction time (RT), which can be measured only after a target presentation, as an index of attention level. We have proposed the Micro-Pupillary Unrest Index (M-PUI), based on pupillary fluctuation amplitude, to estimate RT before the target presentation. However, which temporal attention effects the M-PUI reflects remains unclear. We examined whether the M-PUI shows two types of temporal attention effects initially reported for RTs in variable foreperiod tasks: the variable foreperiod effect (FP effect) and the sequential effect (SE effect). The FP effect refers to a decrease in RT as the foreperiod of the current trial increases, whereas the SE effect refers to an increase in RT in the early part of the foreperiod of the current trial when the foreperiod of the previous trial was longer. We used a simple reaction task with medium-term variable foreperiods (the Psychomotor Vigilance Task) and found that the M-PUI primarily reflects the FP effect. Inter-individual analyses showed that the FP effect on the M-PUI, unlike other eye movement indices, is correlated with the FP effect on RT. These results suggest that the M-PUI is a potentially powerful tool for investigating temporal attention fluctuations for a partly unpredictable target.
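For reference, the classical pupillary unrest index on which such measures build is, roughly, the cumulative absolute change of a downsampled pupil-diameter trace. The sketch below is a simplified illustration only; the bin size and the absence of the authors' micro-scale refinements are simplifying assumptions.

```python
def pupillary_unrest_index(samples, bin_size=16):
    """Toy pupillary unrest index.

    Averages the pupil-diameter trace within consecutive bins (a crude
    low-pass filter), then sums the absolute differences between
    successive bin means. Larger values indicate more pupillary unrest.
    """
    means = [sum(samples[i:i + bin_size]) / bin_size
             for i in range(0, len(samples) - bin_size + 1, bin_size)]
    return sum(abs(b - a) for a, b in zip(means, means[1:]))
```

A flat trace yields an index of zero, while fluctuations between bins accumulate into a larger index.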
Cheng Xue; Lily E. Kramer; Marlene R. Cohen
Dynamic task-belief is an integral part of decision-making Journal Article
In: Neuron, vol. 110, no. 15, pp. 2503–2511, 2022.
Natural decisions involve two seemingly separable processes: inferring the relevant task (task-belief) and performing the believed-relevant task. The assumed separability has led to the traditional practice of studying task-switching and perceptual decision-making individually. Here, we used a novel paradigm to manipulate and measure macaque monkeys' task-belief and demonstrated inextricable neuronal links between flexible task-belief and perceptual decision-making. We showed that in animals, but not in artificial networks that performed as well or better than the animals, stronger task-belief is associated with better perception. Correspondingly, recordings from neuronal populations in cortical areas 7a and V1 revealed that stronger task-belief is associated with better discriminability of the believed-relevant, but not the believed-irrelevant, feature. Perception also impacts belief updating; noise fluctuations in V1 help explain how task-belief is updated. Our results demonstrate that complex tasks and multi-area recordings can reveal fundamentally new principles of how biology affects behavior in health and disease.
Ying Xu; Jia-Qiong Xie; Fu-Xing Wang; Rebecca L Monk; James Gaskin; Jin-Liang Wang
In: Social Science Computer Review, pp. 1–19, 2022.
Social media, such as microblogs, have become an important source for people to obtain information. However, we know little about how this influences our comprehension of online information. Based on cognitive load theory, this research explores whether and how two important features of Weibo, namely the feedback function and information fragmentation, increase cognitive load and may in turn hinder users' information comprehension on Weibo. A 2 (feedback or non-feedback) × 2 (strong-interference or weak-interference information) between-participants experimental design was used. Our results revealed that the Weibo feedback function and interference information exerted a negative impact on information comprehension by increasing cognitive load. These results deepen our understanding of the impact of Weibo features on online information comprehension and suggest the mechanism by which this occurs. The findings have implications for how to minimize the potential costs of using Weibo and maximize the adaptive development of social media.
Chen Xing; Yajuan Zhang; Hongliang Lu; Xia Zhu; Danmin Miao
In: Frontiers in Neuroscience, vol. 16, pp. 1–22, 2022.
Many studies have illustrated the close relationship between anxiety disorders and attentional functioning, but the relationship between trait anxiety and attentional bias remains controversial. This study examines the effect of trait anxiety on the time course of attention to emotional stimuli using materials from the International Affective Picture System. Participants with high vs. low trait anxiety (HTA vs. LTA) viewed four categories of pictures simultaneously: dysphoric, threatening, positive, and neutral. Their eye-movements for each emotional stimulus were recorded for static and dynamic analysis. Data were analyzed using a mixed linear model and growth curve analysis. Specifically, the HTA group showed a greater tendency to avoid threatening stimuli and more pupil diameter variation in the early period of stimulus presentation (0–7.9 s). The HTA group also showed a stronger attentional bias toward positive and dysphoric stimuli in the middle and late period of stimulus presentation (7.9–30 s). These results suggest that trait anxiety has a significant temporal effect on attention to emotional stimuli, and that this effect mainly manifests after 7 s. In finding stronger attentional avoidance of threatening stimuli and more changes in neural activity, as well as a stronger attentional bias toward positive stimuli, this study provides novel insights on the relationship between trait anxiety and selective attention.
Zedong Xie; Meng Zhang; Zunping Ma
In: Current Issues in Tourism, pp. 1–16, 2022.
Tourism research has always sought to find ways to improve tourists' experience evaluation and create added value for them. However, the academic community has focused on the on-site and post-travel stages of tourists, and neglected the pre-travel stage. This study examines the influence of guided mental simulation of an upcoming tourist experience on subsequent on-site tourist experience and experience evaluation. The research simulated real-world experience with tour videos shot from the first-person perspective, and measured the variables using both eye movements and self-reporting. Multivariate ANOVA and multigroup analysis were then performed on the data. The results showed that a process simulation of tourists having an engagement experience and an outcome simulation of tourists having a sight-seeing experience resulted in a higher engagement level and higher emotional response during the on-site experience, higher evaluation of the experience, and a greater impact of engagement level on their evaluation. This study expands the research on tourists' psychological experience in the pre-travel stage. Results indicate that the period from the moment consumers book or purchase the tourist product to the moment they actually embark on the tourist experience is a valuable marketing window.
Weizhen Xie; JC Lynne Lu Sing; Ana Martinez-Flores; Weiwei Zhang
In: Emotion, vol. 22, no. 1, pp. 179–197, 2022.
This study examines how induced negative arousal influences the consolidation of fragile sensory inputs into durable working memory (WM) representations. Participants performed a visual WM change detection task with different amounts of encoding time manipulated by random pattern masks inserted at different levels of memory-and-mask Stimulus Onset Asynchrony (SOA). Prior to the WM task, negative or neutral emotion was induced using audio clips from the International Affective Digital Sounds (IADS). Pupillometry was simultaneously recorded to provide an objective measure of induced arousal. Self-report measures of early-life stress (i.e., adverse childhood experiences) and current mood states (i.e., depressed mood and anxious feeling) were also collected as covariates. We find that participants initially remember a comparable number of WM items under a short memory-and-mask SOA of 100 ms across emotion conditions, but then encode more items into WM at a longer memory-and-mask SOA of 333 ms under induced negative arousal. These findings suggest that induced negative arousal speeds up WM consolidation. Yet, induced negative arousal does not seem to significantly affect participants' WM storage capacity estimated from a separate no mask condition. Furthermore, this emotional effect on WM consolidation speed is moderated by key affect-related individual differences. Participants who have greater pupil responses to negative IADS sounds or have more early-life stress show faster WM consolidation under induced negative arousal. Collectively, our findings reveal a critical role of phasic adrenergic responses in the rapid consolidation of visual WM content and identify potential moderators of this association.
Jin Xie; Ting Yan; Jie Zhang; Zhengyu Ma; Huihui Zhou
In: Neuroscience Bulletin, vol. 38, no. 10, pp. 1183–1198, 2022.
Active exploratory behaviors have often been associated with theta oscillations in rodents, while theta oscillations during active exploration in non-human primates are still not well understood. We recorded neural activities in the frontal eye field (FEF) and V4 simultaneously when monkeys performed a free-gaze visual search task. Saccades were strongly phase-locked to theta oscillations of V4 and FEF local field potentials, and the phase-locking was dependent on saccade direction. The spiking probability of V4 and FEF units was significantly modulated by the theta phase in addition to the time-locked modulation associated with the evoked response. V4 and FEF units showed significantly stronger responses following saccades initiated at their preferred phases. Granger causality and ridge regression analysis showed modulatory effects of theta oscillations on saccade timing. Together, our study suggests phase-locking of saccades to the theta modulation of neural activity in visual and oculomotor cortical areas, in addition to the theta phase locking caused by saccade-triggered responses.
Jordana S. Wynn; Ruben D. I. Van Genugten; Signy Sheldon; Daniel L. Schacter
Schema-related eye movements support episodic simulation Journal Article
In: Consciousness and Cognition, vol. 100, pp. 1–9, 2022.
Recent work indicates that eye movements support the retrieval of episodic memories by reactivating the spatiotemporal context in which they were encoded. Although similar mechanisms have been thought to support simulation of future episodes, there is currently no evidence favoring this proposal. In the present study, we investigated the role of eye movements in episodic simulation by comparing the gaze patterns of individual participants imagining future scene and event scenarios to across-participant gaze templates for those same scenarios, reflecting their shared features (i.e., schemas). Our results provide novel evidence that eye movements during episodic simulation in the face of distracting visual noise are (1) schema-specific and (2) predictive of simulation success. Together, these findings suggest that eye movements support episodic simulation via reinstatement of scene and event schemas, and more broadly, that interactions between the memory and oculomotor effector systems may underlie critical cognitive processes including constructive episodic simulation.
Xiuyun Wu; Miriam Spering
In: PLoS ONE, vol. 17, no. 9, pp. 1–22, 2022.
Human smooth pursuit eye movements and motion perception behave similarly when observers track and judge the motion of simple objects, such as dots. But moving objects in our natural environment are complex and contain internal motion. We ask how pursuit and perception integrate the motion of objects with motion that is internal to the object. Observers (n = 20) tracked a moving random-dot kinematogram with their eyes and reported the object's perceived direction. Objects moved horizontally with vertical shifts of 0, ±3, ±6, or ±9° and contained internal dots that were static or moved ±90° up/down. Results show that whereas pursuit direction was consistently biased in the direction of the internal dot motion, perceptual biases differed between observers. Interestingly, the perceptual bias was related to the magnitude of the pursuit bias (r = 0.75): perceptual and pursuit biases were directionally aligned in observers that showed a large pursuit bias, but went in opposite directions in observers with a smaller pursuit bias. Dissociations between perception and pursuit might reflect different functional demands of the two systems. Pursuit integrates all available motion signals in order to maximize the ability to monitor and collect information from the whole scene. Perception needs to recognize and classify visual information, thus segregating the target from its context. Ambiguity in whether internal motion is part of the scene or contributes to object motion might have resulted in individual differences in perception. The perception-pursuit correlation suggests shared early-stage motion processing or perception-pursuit interactions.
Shengyi Wu; Tommy Blanchard; Emily Meschke; Richard N. Aslin; Benjamin Y. Hayden; Celeste Kidd
In: Biology Letters, vol. 18, pp. 1–5, 2022.
Normative learning theories dictate that we should preferentially attend to informative sources, but only up to the point that our limited learning systems can process their content. Humans, including infants, show this predicted strategic deployment of attention. Here, we demonstrate that rhesus monkeys, much like humans, attend to events of moderate surprisingness over both more and less surprising events. They do this in the absence of any specific goal or contingent reward, indicating that the behavioural pattern is spontaneous. We suggest this U-shaped attentional preference represents an evolutionarily preserved strategy for guiding intelligent organisms toward material that is maximally useful for learning.
Anna M. Wright; Jorge A. Salas; Kelly E. Carter; Daniel T. Levin
In: Learning and Instruction, vol. 79, pp. 1–9, 2022.
Recent research has tested whether Eye Movement Modeling Examples (EMMEs) can effectively cue attention and improve learning. However, the effects of EMMEs are variable, and the degree to which viewers follow these cues remains unclear. In the current paper, we compared screen-captured instructional videos that included an EMME in the form of a transparent circular overlay depicting the instructor's gaze location with identical videos that lacked this cue. We observed that EMMEs drove viewer saccades to cued locations and resulted in shorter distances between viewer gaze and the EMME, but learning performance and video preference were unaffected by the presence of an EMME. We argue that EMMEs can effectively guide attention, but the range of circumstances under which they improve learning may be limited.
Jae Hyung Woo; Habiba Azab; Andrew Jahn; Benjamin Hayden; Joshua W. Brown
In: Cognitive, Affective and Behavioral Neuroscience, vol. 22, no. 5, pp. 952–968, 2022.
The anterior cingulate cortex (ACC) has been implicated in a number of functions, including performance monitoring and decision-making involving effort. The prediction of responses and outcomes (PRO) model has provided a unified account of much human and monkey ACC data involving anatomy, neurophysiology, EEG, fMRI, and behavior. We explored the computational nature of ACC with the PRO model, extending it to account specifically for both human and macaque monkey decision-making under risk, including both behavioral and neural data. We show that the PRO model can account for a number of additional effects related to outcome prediction, decision-making under risk, and gambling behavior. In particular, we show that the ACC represents the variance of uncertain outcomes, suggesting a link between ACC function and mean-variance theories of decision making. The PRO model provides a unified account of a large set of data regarding the ACC.
Brent Wolter; Chi Yui Leung; Shaoxin Wang; Shifa Chen; Junko Yamashita
In: Cognitive Linguistics, vol. 33, no. 4, pp. 623–657, 2022.
Visual search studies have shown that East Asians rely more on information gathered through their extrafoveal (i.e., peripheral) vision than do Western Caucasians, who tend to rely more on information gathered using their foveal (i.e., central) vision. However, the reasons for this remain unclear. Cognitive linguists suggest that the difference is attributable to linguistic variation, while cultural psychologists contend it is due to cultural factors. The current study used eye-tracking data collected during a visual search task to compare these explanations by leveraging a semantic difference against a cultural difference to determine which view best explained strategies used on the task. The task was administered to Chinese, American, and Japanese participants with a primary focus on the Chinese participants' behaviors, since the semantic difference aligned the Chinese participants with the Americans, while their cultural affiliation aligned them with the Japanese participants. The results indicated that the Chinese group aligned more closely with the American group on most measures, suggesting that semantic differences were more important than cultural affiliation on this particular task. However, there were some results that could not be accounted for by the semantic differences, suggesting that linguistic and cultural factors might affect visual search strategies concurrently.
Maren-Isabel Wolf; Maximilian Bruchmann; Gilles Pourtois; Sebastian Schindler; Thomas Straube
In: Cerebral Cortex, vol. 32, no. 10, pp. 2112–2128, 2022.
There is an ongoing discussion about whether attention processes interact with the information-processing stream as early as the C1, the earliest visual electrophysiological response of the cortex. We used two highly powered experiments (each N = 52) and examined the effects of task relevance, spatial attention, and attentional load on individual C1 amplitudes for the upper or lower visual hemifield. Bayesian models revealed evidence for the absence of load effects but substantial modulations by task relevance and spatial attention. When the C1-eliciting stimulus was a task-irrelevant, interfering distracter, we observed increased C1 amplitudes for spatially unattended stimuli. For spatially attended stimuli, different effects of task relevance were found across the two experiments. Follow-up exploratory single-trial analyses revealed that subtle but systematic deviations from the eye-gaze position at stimulus onset between conditions substantially influenced the effects of attention and task relevance on C1 amplitudes, especially for the upper visual field. For the subsequent P1 component, attentional modulations were clearly expressed and remained unaffected by these deviations. Collectively, these results suggest that spatial attention, unlike load or task relevance, can exert dissociable top-down modulatory effects at the C1 and P1 levels.
Christian Wolf; Markus Lappe
In: Attention, Perception, and Psychophysics, pp. 1–19, 2022.
Visual selection is characterized by a trade-off between speed and accuracy. Speed or accuracy of the selection process can be affected by higher level factors—for example, expecting a reward, obtaining task-relevant information, or seeing an intrinsically relevant target. Recently, motivation by reward has been shown to simultaneously increase speed and accuracy, thus going beyond the speed–accuracy-trade-off. Here, we compared the motivating abilities of monetary reward, task-relevance, and image content to simultaneously increase speed and accuracy. We used a saccadic distraction task that required suppressing a distractor and selecting a target. Across different blocks successful target selection was followed either by (i) a monetary reward, (ii) obtaining task-relevant information, or (iii) seeing the face of a famous person. Each block additionally contained the same number of irrelevant trials lacking these consequences, and participants were informed about the upcoming trial type. We found that postsaccadic vision of a face affected neither speed nor accuracy, suggesting that image content does not affect visual selection via motivational mechanisms. Task relevance increased speed but decreased selection accuracy, an observation compatible with a classical speed–accuracy trade-off. Motivation by reward, however, simultaneously increased response speed and accuracy. Saccades in all conditions deviated away from the distractor, suggesting that the distractor was suppressed, and this deviation was strongest in the reward block. Drift-diffusion modelling revealed that task-relevance affected behavior by affecting decision thresholds, whereas motivation by reward additionally increased the rate of information uptake. The present findings thus show that the three consequences differ in their motivational abilities.
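The dissociation the abstract describes (decision thresholds versus the rate of information uptake) can be reproduced in a toy diffusion simulation. This is a generic DDM sketch under assumed parameters, not the authors' fitted model.

```python
import random

def simulate_ddm(drift, threshold, dt=0.01, noise=1.0, rng=None, max_t=10.0):
    """One diffusion trial: evidence starts at 0 and accumulates until it
    hits +threshold (correct) or -threshold (error). Returns (correct, RT)."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= threshold, t

def summarize(drift, threshold, n=2000, seed=1):
    """Accuracy and mean RT over n simulated trials."""
    rng = random.Random(seed)
    trials = [simulate_ddm(drift, threshold, rng=rng) for _ in range(n)]
    accuracy = sum(correct for correct, _ in trials) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return accuracy, mean_rt
```

Lowering the threshold speeds responses but costs accuracy, the classical speed-accuracy trade-off the authors associate with task relevance, whereas increasing the drift rate improves both speed and accuracy, as they report for reward.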
Christian Wolf; Artem V. Belopolsky; Markus Lappe
In: iScience, vol. 25, no. 9, pp. 1–16, 2022.
Humans visually inspect the world with their fovea and select new parts of the scene using saccadic eye movements. Foveal inspection and the decision of where and when to look next proceed simultaneously, but there is mixed evidence concerning their independence. Here, we tested their interdependence using drift-diffusion modeling. Participants first made a saccade to a predetermined inspection target and subsequently decided between two selection targets. We found that the inspected target's meaningfulness and the opportunity to preview it peripherally affects fixation durations and the upcoming saccadic selection. Drift-diffusion modeling showed that meaningfulness and the absence of peripheral preview can both delay the subsequent saccadic decision process and affect the rate at which peripheral information is accumulated. Our results thus show that foveal inspection and peripheral selection are dependent on each other and that peripheral information can be maintained across the saccade to influence subsequent eye movement decisions.
Seth B. Winward; James Siklos-Whillans; Roxane J. Itier
In: Neuroimage: Reports, vol. 2, no. 4, pp. 1–17, 2022.
Recent ERP research using a gaze-contingent paradigm suggests the face-sensitive N170 component is modulated by the presence of a face outline, the number of parafoveal facial features, and the type of feature in parafovea (Parkington and Itier, 2019). The present study re-analyzed these data using robust mass univariate statistics available through the LIMO toolbox, allowing the examination of the ERP signal across all electrodes and time points. We replicated the finding that the presence of a face outline significantly reduced ERP latencies and amplitudes, suggesting it is an important cue to the prototypical face template. However, we found that this effect began around 114 ms, and was maximal during the P1-N170 and N170-P2 intervals. The number of features present in parafovea also impacted the entire waveform, with systematic reductions in amplitude and latency as the number of features increased. This effect was maximal around 120 ms during the P1-N170 interval and around 170 ms between the N170 and P2. The ERP response was also modulated by feature type; contrary to previous findings, this effect was maximal around 200 ms and the P2 peak. Although we provide partial replication of the previous results on the N170, the effects were more temporally distributed in the present analysis. These effects were generally maximal before and after the N170 and were the weakest at the N170 peak itself. This re-analysis demonstrates that classical ERP analysis can obscure important aspects of face processing beyond the N170 peak, and that tools like mass univariate statistics are needed to shed light on the whole time-course of face processing.
Thomas Wilschut; Sebastiaan Mathôt
In: Journal of Cognition, vol. 5, no. 1, pp. 1–12, 2022.
Recent studies have found that visual working memory (VWM) for color shows a categorical bias: Observers typically remember colors as more prototypical to the category they belong to than they actually are. Here, we further examine color-category effects on VWM using pupillometry. Participants remembered a color for later reproduction on a color wheel. During the retention interval, a colored probe was presented, and we measured the pupil constriction in response to this probe, assuming that the strength of constriction reflects the visual saliency of the probe. We found that the pupil initially constricted most strongly for non-matching colors that were maximally different from the memorized color; this likely reflects a lack of visual adaptation for these colors, which renders them more salient than memory-matching colors (which were shown before). Strikingly, this effect reversed later in time, such that pupil constriction was more prolonged for memory-matching colors as compared to non-matching colors; this likely reflects that memory-matching colors capture attention more strongly, and perhaps for a longer time, than non-matching colors do. We found no effects of color categories on pupil constriction: After controlling for color distance, (non-matching) colors from the same category as the memory color did not result in a different pupil response as compared to colors from a different category; however, we did find that behavioral responses were biased by color categories. In summary, we found that pupil constriction to colored probes reflects both visual adaptation and VWM content, but, unlike behavioral measures, is not notably affected by color categories.
James P. Wilmott; Mukesh Makwana; Joo-Hyun Song
In: Attention, Perception, & Psychophysics, vol. 84, no. 5, pp. 1538–1552, 2022.
To successfully interact with objects in complex and crowded environments, we often perform visual search to detect or identify a relevant target (or targets) among distractors. Previous studies have reported a redundancy gain when two targets instead of one are presented in a simple target detection task. However, research is scant about the role of multiple targets in target discrimination tasks, especially in the context of visual search. Here, we address this question and investigate its underlying mechanisms in a pop-out search paradigm. In Experiment 1, we directly compared visual search performance for one or two targets for detection or discrimination tasks. We found that two targets led to a redundancy gain for detection, whereas they led to a redundancy cost for discrimination. To understand the basis for the redundancy cost observed in discrimination tasks for multiple targets, we further investigated the role of perceptual grouping (Experiment 2) and stimulus–response feature compatibility (Experiment 3). We determined that the strength of perceptual grouping among homogeneous distractors was attenuated when two targets were present compared with one. We also found that response compatibility between two targets contributed more to the redundancy cost compared with perceptual compatibility. Taken together, our results show how pop-out search involving two targets is modulated by the level of feature processing, perceptual grouping, and compatibility of perceptual and response features.
Konstantin F. Willeke; Araceli R. Cardenas; Joachim Bellet; Ziad M. Hafed
In: PNAS, vol. 119, no. 24, pp. 1–9, 2022.
The foveal visual image region provides the human visual system with the highest acuity. However, it is unclear whether such a high-fidelity representational advantage is maintained when foveal image locations are committed to short-term memory. Here, we describe a paradoxically large distortion in foveal target location recall by humans. We briefly presented small but high-contrast points of light at eccentricities ranging from 0.1 to 12°, while subjects maintained their line of sight on a stable target. After a brief memory period, the subjects indicated the remembered target locations via computer-controlled cursors. The biggest localization errors, in terms of both directional deviations and amplitude percentage overshoots or undershoots, occurred for the most foveal targets, and such distortions were still present, albeit with qualitatively different patterns, when subjects shifted their gaze to indicate the remembered target locations. Foveal visual images are severely distorted in short-term memory.
Jaimie C. Wilkie; Nathan A. Ryckman; Lynette J. Tippett; Anthony J. Lambert
In: Neuropsychologia, vol. 168, pp. 1–12, 2022.
Visual orienting was studied in a patient (FM) with parietal-occipital damage due to oligodendroglioma and associated surgery, and in eighteen control participants. The ability of FM and control participants to shift attention in response to spatial landmark cues, and in response to cues that recruit endogenous orienting via encoding of cue identity, was assessed. According to the unified model of vision and attention (Lambert, A. et al., Journal of Experimental Psychology: Human Perception & Performance, 44, 412–432) FM should find it difficult to orient attention in response to spatial landmarks due to impaired functioning of the dorsal visual stream; but shifting attention in response to cue identity, encoded via the ventral visual stream, should be spared. Consistent with these predictions, FM was unable to shift attention in the landmark cueing task, but shifted attention effectively in response to identity cues; and her visual orienting performance differed reliably from controls. These findings complement our earlier observation of preserved orienting towards landmark cues in a patient with bilateral damage to the ventral visual stream, and add to a growing body of evidence in support of the unified model of vision and attention.
Marilena Wilding; Christof Körner; Anja Ischebeck; Natalia Zaretskaya
In: NeuroImage, vol. 257, pp. 1–10, 2022.
The constructive nature of human perception sometimes leads us to perceive rather complex impressions from simple sensory input: for example, recognizing animal contours in cloud formations or seeing living creatures in shadows of objects. A special type of bistable stimuli gives us a rare opportunity to study the neural mechanisms behind this process. Such stimuli can be visually interpreted either as simple or as more complex illusory content on the basis of the same sensory input. Previous studies demonstrated increased activity in the superior parietal cortex during the perception of an illusory Gestalt impression compared to a simpler interpretation. Here, we examined the role of slow fluctuations of resting-state fMRI activity in shaping the subsequent illusory interpretation by investigating activity related to the illusory Gestalt not only during, but also prior to its perception. We presented 31 participants with a bistable motion stimulus, which can be perceived either as four moving dot pairs (local) or two moving illusory squares (global). fMRI was used to measure brain activity in a slow event-related design. We observed stronger IPS and putamen responses to the stimulus when participants perceived the global interpretation compared to the local, confirming the findings of previous studies. Most importantly, we also observed that the global stimulus interpretation was preceded by increased activity of the bilateral dorsal insula, which is known to process saliency and gate information for conscious access. Our data suggest an important role of the dorsal insula in shaping complex illusory interpretations of the sensory input.
Benedict Wild; Amr Maamoun; Yifan Mayr; Ralf Brockhausen; Stefan Treue
In: Scientific Data, vol. 9, pp. 1–10, 2022.
Establishing the cortical neural representation of visual stimuli is a central challenge of systems neuroscience. Publicly available data would allow a broad range of scientific analyses and hypothesis testing, but are rare and largely focused on the early visual system. To address the shortage of open data from higher visual areas, we provide a comprehensive dataset from a neurophysiology study in macaque monkey visual cortex that includes a complete record of extracellular action potential recordings from the extrastriate medial superior temporal (MST) area, behavioral data, and detailed stimulus records. It includes spiking activity of 172 single neurons recorded in 139 sessions from 4 hemispheres of 3 rhesus macaque monkeys. The data was collected across 3 experiments, designed to characterize the response properties of MST neurons to complex motion stimuli. This data can be used to elucidate visual information processing at the level of single neurons in a high-level area of primate visual cortex. Providing open access to this dataset also promotes the 3R-principle of responsible animal research.
Antonius Wiehler; Francesca Branzoli; Isaac Adanyeguh; Fanny Mochel; Mathias Pessiglione
In: Current Biology, vol. 32, no. 16, pp. 3564–3575, 2022.
Behavioral activities that require control over automatic routines typically feel effortful and result in cognitive fatigue. Beyond subjective report, cognitive fatigue has been conceived as an inflated cost of cognitive control, objectified by more impulsive decisions. However, the origins of such control cost inflation with cognitive work are heavily debated. Here, we suggest a neuro-metabolic account: the cost would relate to the necessity of recycling potentially toxic substances accumulated during cognitive control exertion. We validated this account using magnetic resonance spectroscopy (MRS) to monitor brain metabolites throughout an approximate workday, during which two groups of participants performed either high-demand or low-demand cognitive control tasks, interleaved with economic decisions. Choice-related fatigue markers were only present in the high-demand group, with a reduction of pupil dilation during decision-making and a preference shift toward short-delay and little-effort options (a low-cost bias captured using computational modeling). At the end of the day, high-demand cognitive work resulted in higher glutamate concentration and glutamate/glutamine diffusion in a cognitive control brain region (lateral prefrontal cortex [lPFC]), relative to low-demand cognitive work and to a reference brain region (primary visual cortex [V1]). Taken together with previous fMRI data, these results support a neuro-metabolic model in which glutamate accumulation triggers a regulation mechanism that makes lPFC activation more costly, explaining why cognitive control is harder to mobilize after a strenuous workday.
Stephen Whitmarsh; Christophe Gitton; Veikko Jousmäki; Jérôme Sackur; Catherine Tallon-Baudry
Neuronal correlates of the subjective experience of attention
In: European Journal of Neuroscience, vol. 55, no. 11-12, pp. 3465–3482, 2022.
The effect of top–down attention on stimulus-evoked responses and alpha oscillations and the association between arousal and pupil diameter are well established. However, the relationship between these indices, and their contribution to the subjective experience of attention, remains largely unknown. Participants performed a sustained (10–30 s) attention task in which rare (10%) targets were detected within continuous tactile stimulation (16 Hz). Trials were followed by attention ratings on an 8-point visual scale. Attention ratings correlated negatively with contralateral somatosensory alpha power and positively with pupil diameter. The effect of pupil diameter on attention ratings extended into the following trial, reflecting a sustained aspect of attention related to vigilance. The effect of alpha power did not carry over to the next trial and furthermore mediated the association between pupil diameter and attention ratings. Variations in steady-state amplitude reflected stimulus processing under the influence of alpha oscillations but were only weakly related to subjective ratings of attention. Together, our results show that both alpha power and pupil diameter are reflected in the subjective experience of attention, albeit on different time spans, while continuous stimulus processing might not contribute to the experience of attention.
Mirjam C. M. Wever; Lisanne A. E. M. Houtum; Loes H. C. Janssen; Wilma G. M. Wentholt; Iris M. Spruit; Marieke S. Tollenaar; Geert Jan Will; Bernet M. Elzinga
In: NeuroImage, vol. 260, pp. 1–12, 2022.
Eye contact is crucial for the formation and maintenance of social relationships, and plays a key role in facilitating a strong parent-child bond. However, the precise neural and affective mechanisms through which eye contact impacts on parent-child relationships remain elusive. We introduce a task to assess parents' neural and affective responses to prolonged direct and averted gaze coming from their own child, and an unfamiliar child and adult. While in the scanner, 79 parents (n = 44 mothers and n = 35 fathers) were presented with prolonged (16-38 s) videos of their own child, an unfamiliar child, an unfamiliar adult, and themselves (i.e., targets), facing the camera with a direct or an averted gaze. We measured BOLD-responses, tracked parents' eye movements during the videos, and asked them to report on their mood and feelings of connectedness with the targets after each video. Parents reported improved mood and increased feelings of connectedness after prolonged exposure to direct versus averted gaze and these effects were amplified for unfamiliar targets compared to their own child, due to high affect and connectedness ratings after videos of their own child. Neuroimaging results showed that the sight of one's own child was associated with increased activity in middle occipital gyrus, fusiform gyrus and inferior frontal gyrus relative to seeing an unfamiliar child or adult. While we found no robust evidence of specific neural correlates of eye contact (i.e., contrast direct > averted gaze), an exploratory parametric analysis showed that dorsomedial prefrontal cortex (dmPFC) activity increased linearly with duration of eye contact (collapsed across all “other” targets). Eye contact-related dmPFC activity correlated positively with increases in feelings of connectedness, suggesting that this region may drive feelings of connectedness during prolonged eye contact with others. 
These results underline the importance of prolonged eye contact for affiliative processes and provide first insights into its neural correlates. This may pave the way for new research in individuals or pairs in whom affiliative processes are disrupted.
Jacob A. Westerberg; Michelle S. Schall; Alexander Maier; Geoffrey F. Woodman; Jeffrey D. Schall
In: eLife, vol. 11, pp. 1–23, 2022.
Cognitive operations are widely studied by measuring electric fields through EEG and ECoG. However, despite their widespread use, the neural circuitry giving rise to these signals remains unknown because the functional architecture of cortical columns producing attention-associated electric fields has not been explored. Here we detail the laminar cortical circuitry underlying an attention-associated electric field measured over posterior regions of the brain in humans and monkeys. First, we identified visual cortical area V4 as one plausible contributor to this attention-associated electric field through inverse modeling of cranial EEG in macaque monkeys performing a visual attention task. Next, we performed laminar neurophysiological recordings on the prelunate gyrus and identified the electric-field-producing dipoles as synaptic activity in distinct cortical layers of area V4. Specifically, activation in the extragranular layers of cortex resulted in the generation of the attention-associated dipole. Feature selectivity of a given cortical column determined the overall contribution to this electric field. Columns selective for the attended feature contributed more to the electric field than columns selective for a different feature. Lastly, the laminar profile of synaptic activity generated by V4 was sufficient to produce an attention-associated signal measurable outside of the column. These findings suggest that the top-down recipient cortical layers produce an attention-associated electric field that can be measured extracortically with the relative contribution of each column depending upon the underlying functional architecture.
Stephanie Wermelinger; Lea Moersdorf; Moritz M. Daum
In: Infancy, vol. 27, no. 5, pp. 937–962, 2022.
The COVID-19 pandemic has been influencing people's social life substantially. Everybody, including infants and children, needed to adapt to changes in social interactions (e.g., social distancing) and to seeing other people wearing facial masks. In this study, we investigated whether these pandemic-related changes influenced 12- to 15-month-old infants' reactions to observed gaze shifts (i.e., their gaze following). In two eye-tracking tasks, we measured infants' gaze-following behavior during the pandemic (with-COVID-19-experience sample) and compared it to data of infants tested before the pandemic (no-COVID-19-experience sample). Overall, the results indicated no significant differences between the two samples. However, in one sub-task infants in the with-COVID-19-experience sample looked longer at the eyes of a model compared to the no-COVID-19-experience sample. Within the with-COVID-19-experience sample, the amount of mask exposure and the number of contacts without mask were not related to infants' gaze-following behavior. We speculate that even though infants encounter fewer different people during the pandemic and are increasingly exposed to people wearing facial masks, they still also see non-covered faces. These contacts might be sufficient to provide infants with the social input they need to develop social and emotional competencies such as gaze following.
Stephanie Wermelinger; Lea Moersdorf; Simona Ammann; Moritz M. Daum
In: Frontiers in Psychology, vol. 13, pp. 1–13, 2022.
During the COVID-19 pandemic people were increasingly obliged to wear facial masks and to reduce the number of people they met in person. In this study, we asked how these changes in social interactions are associated with young children's emotional development, specifically their emotion recognition via the labeling of emotions. Preschoolers labeled emotional facial expressions of adults (Adult Faces Task) and children (Child Faces Task) in fully visible faces. In addition, we assessed children's COVID-19-related experiences (i.e., time spent with people wearing masks, number of contacts without masks) and recorded children's gaze behavior during emotion labeling. We compared different samples of preschoolers (4.00–5.75 years): The data for the no-COVID-19-experience sample were taken from studies conducted before the pandemic (Adult Faces Task: N = 40; Child Faces Task: N = 30). The data for the with-COVID-19-experience sample (N = 99) were collected during the COVID-19 pandemic in Switzerland between June and November 2021. The results did not indicate differences in children's labeling behavior between the two samples except for fearful adult faces. Children with COVID-19-experience more often labeled fearful faces correctly compared to children with no COVID-19 experience. Furthermore, we found no relations between children's labeling behavior, their individual COVID-19-related experiences, and their gaze behavior. These results suggest that, even though the children had experienced differences in the amount and variability of facial input due to the pandemic, they still received enough input from visible faces to be able to recognize and label different emotions.
Wen Wen; Zhibang Huang; Yin Hou; Sheng Li
In: Journal of Neuroscience, vol. 42, no. 24, pp. 4927–4936, 2022.
Performing visual search tasks requires optimal attention deployment to promote targets and inhibit distractors. Rejection templates based on the distractor's feature can be built to constrain the search process. We measured electroencephalography (EEG) of human participants of both sexes when they performed a visual search task in conditions where the distractor cues were constant within a block (fixed-cueing) or changed on a trial-by-trial basis (varied-cueing). In the fixed-cueing condition, sustained decoding of the cued colors could be achieved during the retention interval and the participants with higher decoding accuracy showed larger suppression benefits of the distractor cueing in the search period. In the varied-cueing condition, the cued color could only be transiently decoded after its onset and the higher decoding accuracy was observed from the participants who demonstrated lower suppression benefit. The differential neural representations of the to-be-ignored color in the two cueing conditions as well as their reverse associations with behavioral performance implied that rejection templates were formed in the fixed-cueing condition but not in the varied-cueing condition. Additionally, we observed stronger posterior alpha lateralization and mid-frontal theta/beta power during the retention interval of the varied-cueing condition, indicating the cognitive costs in template formation caused by the trialwise change of distractor colors. Taken together, our findings revealed the neural markers associated with the critical roles of distractor consistency in linking template formation to successful inhibition.
Lisa Weller; Aleks Pieczykolan; Lynn Huestegge
In: Cognition, vol. 225, pp. 1–8, 2022.
Performing two actions at the same time usually hampers performance. Previous studies have demonstrated a strong impact of the particular effector systems on performance in multiple action control situations. However, an open question is whether performance is generally better or worse in situations in which two actions within the same effector system are coordinated (intra-modal actions: e.g., two pedal or two manual actions) compared to situations requiring two different effector systems (cross-modal actions: e.g., a manual combined with a vocal action). Performance differences can be predicted, among others, in light of encapsulation accounts. Encapsulation of modules on the output side of processing would suggest that actions in two different modules can be triggered simultaneously without significant interference between the actions. Thus, cross-modal actions should lead to better performance compared to intra-modal actions. We investigated this issue in two basic experiments, in which participants responded to a single stimulus (thereby maximizing control over input and central processing stages) with one or two either intra-modal or cross-modal responses (manual-manual vs. manual-oculomotor/manual-vocal in Experiment 1/2, respectively). The results represent clear evidence for a performance advantage of intra-modal over cross-modal action control across both effector system combinations and independent of the particular spatial compatibility relation between responses. The results suggest performance benefits by taking advantage of integrated, holistic representations of intra-modal action compounds.
Dominik Welke; Edward A. Vessel
In: NeuroImage, vol. 256, pp. 1–19, 2022.
Free gaze and moving images are typically avoided in EEG experiments due to the expected generation of artifacts and noise. Yet for a growing number of research questions, loosening these rigorous restrictions would be beneficial. Among these is research on visual aesthetic experiences, which often involve open-ended exploration of highly variable stimuli. Here we systematically compare the effect of conservative vs. more liberal experimental settings on various measures of behavior, brain activity and physiology in an aesthetic rating task. Our primary aim was to assess EEG signal quality. 43 participants either maintained fixation or were allowed to gaze freely, and viewed either static images or dynamic (video) stimuli consisting of dance performances or nature scenes. A passive auditory background task (auditory steady-state response; ASSR) was added as a proxy measure for overall EEG recording quality. We recorded EEG, ECG and eye tracking data, and participants rated their aesthetic preference and state of boredom on each trial. Whereas both behavioral ratings and gaze behavior were affected by task and stimulus manipulations, EEG SNR was barely affected and generally robust across all conditions, despite only minimal preprocessing and no trial rejection. In particular, we show that using video stimuli does not necessarily result in lower EEG quality and can, on the contrary, significantly reduce eye movements while increasing both the participants' aesthetic response and general task engagement. We see these as encouraging results indicating that — at least in the lab — more liberal experimental conditions can be adopted without significant loss of signal quality.
Hannah B. Weinberg-Wolf; Nick Fagan; Olga Dal Monte; Steve W. C. Chang
In: Journal of Neuroscience, vol. 42, no. 4, pp. 670–681, 2022.
To competently navigate the world, individuals must flexibly balance distinct aspects of social gaze, orienting toward others and inhibiting orienting responses, depending on the context. These behaviors are often disrupted amongst patient populations treated with serotonergic drugs. However, those in the field lack a clear understanding of how the serotonergic system mediates social orienting and inhibiting behaviors. Here, we tested how increasing central concentrations of serotonin with the direct precursor 5-hydroxytryptophan (5-HTP) would modulate the ability of rhesus macaques (both sexes) to use eye movements to flexibly orient to, or inhibit orienting to, faces. Systemic administrations of 5-HTP effectively increased central serotonin levels and impaired flexible orientation and inhibition. Critically, 5-HTP selectively impaired the ability of monkeys to inhibit orienting to face images, whereas it similarly impaired orienting to face and control images. 5-HTP also caused monkeys to perseverate on their gaze responses, making them worse at flexibly switching between orienting and inhibiting behaviors. Furthermore, the effects of 5-HTP on performance correlated with a constriction of the pupil, an increased time to initiate trials, and an increased reaction time, suggesting that the disruptive effects of 5-HTP on social gaze behaviors are likely driven by a downregulation of arousal and motivational states. Together, these findings provide causal evidence for a modulatory relationship between 5-HTP and social gaze behaviors in nonhuman primates and offer translational insights for the role of the serotonergic system in social gaze.
Emily R. Weichart; Matthew Galdo; Vladimir M. Sloutsky; Brandon M. Turner
In: Psychological Review, vol. 129, no. 5, pp. 1104–1143, 2022.
Two fundamental difficulties when learning novel categories are deciding (a) what information is relevant and (b) when to use that information. Although previous theories have specified how observers learn to attend to relevant dimensions over time, those theories have largely remained silent about how attention should be allocated on a within-trial basis, which dimensions of information should be sampled, and how the temporal order of information sampling influences learning. Here, we use the adaptive attention representation model (AARM) to demonstrate that a common set of mechanisms can be used to specify: (a) how the distribution of attention is updated between trials over the course of learning and (b) how attention dynamically shifts among dimensions within a trial. We validate our proposed set of mechanisms by comparing AARM's predictions to observed behavior in four case studies, which collectively encompass different theoretical aspects of selective attention. We use both eye-tracking and choice response data to provide a stringent test of how attention and decision processes dynamically interact during category learning. Specifically, how does attention to selected stimulus dimensions give rise to decision dynamics, and in turn, how do decision dynamics influence which dimensions are attended to via gaze fixations?
Zi-Han Wei; Qiu-Yue Li; Ci-Juan Liang; Hong-Zhi Liu
In: Frontiers in Psychology, vol. 13, pp. 1–10, 2022.
According to the dual-system theories, the decisions in an ultimatum game (UG) are governed by the automatic System 1 and the controlled System 2. The former drives the preference for fairness, whereas the latter drives the self-interest motive. However, the association between the contributions of the two systems in UG and the cognitive process needs more direct evidence. In the present study, we used the process dissociation procedure to estimate the contributions of the two systems and recorded participants' eye movements to examine the cognitive processes underlying UG decisions. Results showed that the estimated contributions of the two systems are uncorrelated and that they demonstrate a dissociated pattern of associations with third variables, such as reaction time (RT) and mean fixation duration (MFD). Furthermore, the relative time advantage (RTA) and the transitions between the two payoffs can predict the final UG decisions. Our findings provide evidence for the independent contributions of preference for fairness (System 1) and self-interest maximizing (System 2) inclinations to UG and shed light on the underlying processes.
Jelena M. Wehrli; Yanfang Xia; Samuel Gerster; Dominik R. Bach
Measuring human trace fear conditioning
In: Psychophysiology, vol. 59, no. 12, pp. 1–13, 2022.
Trace fear conditioning is an important research paradigm to model aversive learning in biological or clinical scenarios, where predictors (conditioned stimuli, CS) and aversive outcomes (unconditioned stimuli, US) are separated in time. The optimal measurement of human trace fear conditioning, and in particular of memory retention after consolidation, is currently unclear. We conducted two identical experiments (N1 = 28
Abigail L. M. Webb; Jordi M. Asher; Paul B. Hibbard
In: Vision Research, vol. 198, pp. 1–11, 2022.
The present study explores the threat bias for fearful facial expressions using saccadic latency, with a particular focus on the role of low-level facial information, including spatial frequency and contrast. In a simple localisation task, participants were presented with spatially-filtered versions of neutral, fearful, angry and happy faces. Together, our findings show that saccadic responses are not biased toward fearful expressions compared to neutral, angry or happy counterparts, regardless of their spatial frequency content. Saccadic response times are, however, significantly influenced by the spatial frequency and contrast of facial stimuli. We discuss the implications of these findings for the threat bias literature, and the extent to which image processing can be expected to influence behavioural responses to socially-relevant facial stimuli.
I. K. Wardhani; B. H. Janssen; C. N. Boehler
In: Acta Psychologica, vol. 224, pp. 1–10, 2022.
The present study investigated the effect of background luminance on the self-reported valence ratings of auditory stimuli, as suggested by some earlier work. A secondary aim was to better characterise the effect of auditory valence on pupillary responses, on which the literature is inconsistent. Participants were randomly presented with sounds of different valence categories (negative, neutral, and positive) obtained from the IADS-E database. At the same time, the background luminance of the computer screen (in blue hue) was manipulated across three levels (i.e., low, medium, and high), with pupillometry confirming the expected strong effect of luminance on pupil size. Participants were asked to rate the valence of the presented sound under these different luminance levels. On a behavioural level, we found evidence for an effect of background luminance on the self-reported valence rating, with generally more positive ratings as background luminance increased. Turning to valence effects on pupil size, irrespective of background luminance, interestingly, we observed that pupils were smallest in the positive valence and the largest in negative valence condition, with neutral valence in between. In sum, the present findings provide evidence concerning a relationship between luminance perception (and hence pupil size) and self-reported valence of auditory stimuli, indicating a possible cross-modal interaction of auditory valence processing with completely task-irrelevant visual background luminance. We furthermore discuss the potential for future applications of the current findings in the clinical field.
I. K. Wardhani; C. N. Boehler; S. Mathôt
In: Perception, vol. 51, no. 6, pp. 370–387, 2022.
When the pupil dilates, the amount of light that falls onto the retina increases. However, in daily life, this does not make the world look brighter. Here we asked whether pupil size (resulting from active pupil movement) influences subjective brightness in the absence of indirect cues that, in daily life, support brightness constancy. We measured the subjective brightness of a tester stimulus relative to a referent as a function of pupil size during tester presentation. In Experiment 1, we manipulated pupil size through a secondary working-memory task (larger pupils with higher load and after errors). We found some evidence that the tester was perceived as darker, rather than brighter, when pupils were larger. In Experiment 2, we presented a red or blue display (larger pupils following red displays). We again found that the tester was perceived as darker when pupils were larger. We speculate that the visual system takes pupil size into account when making brightness judgments. Finally, we highlight the challenges associated with manipulating pupil size. In summary, the current study (as well as a recent pharmacological study on the same topic by another team) represents an intriguing first step towards understanding the role of pupil size in brightness perception.
Shamini Warda; Jaana Simola; Devin B. Terhune
Pupillometry tracks errors in interval timing Journal Article
In: Behavioral Neuroscience, vol. 13, no. 2, pp. 495–502, 2022.
Recent primate studies suggest a potential link between pupil size and subjectively elapsed duration. Here, we sought to investigate the relationship between pupil size and perceived duration in human participants performing two temporal bisection tasks in the subsecond and suprasecond interval ranges. In the subsecond task, pupil diameter was greater during stimulus processing when shorter intervals were overestimated but also during and after stimulus offset when longer intervals were underestimated. By contrast, in the suprasecond task, larger pupil diameter was observed only in the late stimulus offset phase prior to response prompts when longer intervals were underestimated. This pattern of results suggests that pupil diameter relates to an error monitoring mechanism in interval timing. These results are at odds with a direct relationship between pupil size and the perception of duration but suggest that pupillometric variation might play a key role in signifying errors related to temporal judgments.
Colleen B. Ward; Jennifer E. Mack
In: Journal of Communication Disorders, vol. 100, pp. 1–14, 2022.
Introduction: We tested whether aphasia self-disclosure via an aphasia ID card impacts (1) how non-aphasic listeners initially process language produced by a speaker with aphasia and (2) learning of the speaker's error patterns over time. Methods: In this eye-tracking experiment, 27 young adults followed instructions recorded by a speaker with nonfluent aphasia while viewing a target picture and a distractor. The Card group (n = 14) was shown a simulated aphasia ID card for the speaker and the No Card group (n = 13) was not. The task was divided into Pre-Observation and Post-Observation blocks. Between blocks, participants observed the speaker making semantic paraphasias. Eye-tracking analyses compared the time course of target advantage (reflecting competition from the distractor picture) and workspace advantage (reflecting attention to task) between groups and blocks. Results: Pre-Observation, the Card group had a higher target advantage than the No Card group in the post-response window (i.e., after participants had responded), indicating sustained attention to the speaker's language. Across blocks, there was evidence that the Card group (but not the No Card group) learned that the speaker makes semantic paraphasias. Conclusions: Aphasia ID cards impacted listeners' processing of language produced by a speaker with nonfluent aphasia. Increased patience and attentiveness may underlie both the Card group's sustained attention to the speaker as well as learning of the speaker's error patterns. Further research should address whether these changes impact communication success between people with aphasia (PWA) and new conversation partners.
Shuai Wang; Jialing Li; Siyu Wang; Wei Wang; Can Mi; Wenjing Xiong; Zhengjia Xu; Longxing Tang; Yanzhang Li
In: Frontiers in Psychology, vol. 13, pp. 1–9, 2022.
Individuals with high risk of internet gaming disorder (HIGD) show abnormal psychological performance in response inhibition, impulse control, and emotion regulation, and are considered to be at the high-risk stage of internet gaming disorder (IGD). The identification of this population mainly relies on clinical scales, which are less accurate. This study aimed to explore whether these performances are highly accurate for discriminating HIGD from low-risk individuals. An eye-tracking-based anti-saccade task, the Barratt impulsiveness scale (BIS), and the Wong and Law emotional intelligence scale (WLEIS) were used to evaluate psychological performance in 57 individuals with HIGD and 52 matched individuals with low risk of internet gaming disorder (LIGD). The HIGD group showed significantly increased BIS total (t = −2.875
Shuai Wang; Jialing Li; Siyu Wang; Can Mi; Wei Wang; Zhengjia Xu; Wenjing Xiong; Longxing Tang; Yanzhang Li
In: Frontiers in Psychiatry, vol. 13, pp. 1–10, 2022.
Background: Escapism-based motivation (EBM) is considered one of the diagnostic criteria for internet gaming disorder (IGD). However, how EBM affects the population at high risk of IGD (HIGD) remains unclear. Methods: An initial number of 789 college students participated in the general, internet gaming behavior, and motivation surveys. After multiple evaluations, 57 individuals were identified as HIGD (25 with EBM, H-EBM; 32 with non-EBM, H-nEBM). In addition, 51 no-gaming individuals were included as the control group (CONTR). The cohorts completed the psychological assessments and eye-tracking tests, and analyses of group differences, correlations, and influencing factors of the indicators were performed. Results: The Barratt impulsiveness score of H-nEBM and H-EBM was significantly higher than that of CONTR (MD = 3.605
Maya Zhe Wang; Benjamin Y. Hayden; Sarah R. Heilbronner
In: Nature Communications, vol. 13, no. 1, pp. 1–12, 2022.
Economic choice requires many cognitive subprocesses, including stimulus detection, valuation, motor output, and outcome monitoring; many of these subprocesses are associated with the central orbitofrontal cortex (cOFC). Prior work has largely assumed that the cOFC is a single region with a single function. Here, we challenge that unified view with convergent anatomical and physiological results from rhesus macaques. Anatomically, we show that the cOFC can be subdivided according to its much stronger (medial) or weaker (lateral) bidirectional anatomical connectivity with the posterior cingulate cortex (PCC). We call these subregions cOFCm and cOFCl, respectively. These two subregions have notable functional differences. Specifically, cOFCm shows enhanced functional connectivity with PCC, as indicated by both spike-field coherence and mutual information. The cOFCm-PCC circuit, but not the cOFCl-PCC circuit, shows signatures of relaying choice signals from a non-spatial comparison framework to a spatially framed organization and shows a putative bidirectional mutually excitatory pattern.
Jie Wang; Jiaming Shi; Xin Wen; Liang Xu; Ke Zhao; Fuyang Tao; Wenbiao Zhao; Xiuying Qian
In: Computers and Security, vol. 121, pp. 1–14, 2022.
The rapid increase in the use of mobile technology and online communication has facilitated more opportunities for social interactions as well as for online fraud. Warnings are one of the last lines of defense in transaction security. Many warnings used in anti-fraud processes are often ineffective due to habituation and the trial-and-error method used in their design. Following psychological theories of persuasion and warning design principles, in this paper, we design fourteen warnings and examine their effectiveness in an eye-tracker experiment (Study 1) and in an online A/B test on the Alipay platform (Study 2). Based on the communication-human information processing (C-HIP) model, Study 1 found that pictorial signal icons and persuasion strategies significantly improved the effectiveness of warnings. Specifically, pictorial signal icons attracted users' attention better than the conventional signal icons, and warnings with authority, social influence, diversion, questioning, and multiple strategies performed better than those without a persuasion strategy. Study 2 showed that our warnings performed better than the original Alipay warnings. The overall case rate was reduced by 33.2%, avoiding at least 30 million yuan in economic losses. Our work contributes to the field of security warning design with both theoretical and practical value and provides an important reference for future research.
Jiahui Wang; Abigail Stebbins; Richard E. Ferdig
In: Computers and Education, vol. 178, pp. 1–13, 2022.
Research has provided evidence of the significant promise of using educational games for learning. However, there is limited understanding of how individual differences (e.g., self-efficacy and prior knowledge) affect visual processing of game elements and learning from an educational game. This study aimed to address these gaps by: a) examining the effects of students' self-efficacy and prior knowledge on learning from a physics game; and b) exploring how learners with distinct levels of self-efficacy and prior knowledge differ in their visual behavior with respect to the game elements. The visual behavior of 69 undergraduate students was recorded as they played an educational game focusing on Newtonian mechanics. Individual differences in self-efficacy in learning physics and prior knowledge were assessed prior to the game, while a comprehension test was administered immediately after gameplay. Wilcoxon signed-rank tests showed that all participants significantly improved in their understanding of Newtonian mechanics. Mann-Whitney U tests indicated learning gains were not significantly different between the groups with varying levels of prior knowledge or self-efficacy. Additionally, a series of Mann-Whitney U tests of the eye tracking data suggested the learners with high self-efficacy tended to pay more attention to the motion map - a critical navigation component of the game. Further, the high prior knowledge individuals excelled in attentional control abilities and exhibited effective visual processing strategies. The study concludes with important implications for the future design of educational games and developing individualized instructional support in game-based learning.
In: Behaviour and Information Technology, pp. 1–15, 2022.
Existing evidence suggests that learners with differences in attention and cognition might respond to the same media in differential ways. The current study focused on one format of video design – instructor visibility – and explored the moderating effects of working memory capacity on learning from such video design and whether learners with high and low working memory capacity attended to the instructor's visuals differently. Participants watched a video either with or without the instructor's visuals on the screen, while their visual attention was recorded simultaneously. After the video, participants responded to a learning test that measured retention and transfer. Although the results did not show that working memory capacity moderated the instructor visibility effects on learning or influenced learners' visual attention to the instructor's visuals, the findings did indicate that working memory capacity was a positive predictor of retention performance regardless of the video design. Discussions and implications of the findings are provided.
Chao Wang; Mitchell Reid Pond LaPointe; Shree Venkateshan; Guang Zhao; Weidong Tao; Hong-Jin Sun; Bruce Milliken
In: Quarterly Journal of Experimental Psychology, vol. 76, no. 1, pp. 117–132, 2022.
Measures of attentional capture are sensitive to attentional control settings. Recent research suggests that such control settings can be linked associatively to specific items. Rapid item-specific retrieval of these control settings can then modulate measures of attentional capture. However, the processes that produce this item-specific control of attentional capture are unclear. The current study addressed this issue by examining eye-movement patterns associated with the item-specific proportion congruency effect (ISPC). Participants searched for a shape singleton target in search displays that also contained a colour singleton—the colour singleton was either the same item as the shape singleton (congruent trials) or a different item (incongruent trials). The relative proportions of congruent and incongruent trials were manipulated separately for two distinct item types that were randomly intermixed. Response times (RTs) were faster on congruent than incongruent trials, and this congruency effect was larger for high-proportion congruent (HPC) than low-proportion congruent (LPC) items. Eye movement data revealed a higher proportion of saccades towards the distractor and longer dwell times on the distractor in the HPC condition. These results suggest that item-specific associative learning can influence the strength of representation of the task goal (e.g., find the odd shape), a form of selection history effect in visual search.
Andi Wang; Ana Pellicer-Sánchez
In: Language Learning, vol. 72, no. 3, pp. 765–805, 2022.
This study examined the effectiveness of bilingual subtitles relative to captions, subtitles, and no subtitles for incidental vocabulary learning. Learners' processing of novel words in the subtitles and its relationship to learning gains were also explored. While their eye movements were recorded, 112 intermediate to advanced Chinese learners of English watched a documentary in one of 4 conditions: bilingual subtitles, captions, L1 subtitles, and no subtitles. Vocabulary pretests and posttests assessed the participants' knowledge of the target vocabulary for form recognition, meaning recall, and meaning recognition. Results suggested an advantage for bilingual subtitles over captions for meaning recognition and over L1 subtitles for meaning recall. Bilingual subtitles were less effective than captions for form recognition. Participants in the bilingual subtitles group spent more time reading the Chinese translations of the target items than the English target words. The amount of attention to the English target words (but not to the translations) predicted learning gains.
A. J. Walters; A. Lithopoulos; E. M. Tennant; S. Weissman; A. E. Latimer-Cheung
In: Public Health Nursing, vol. 39, pp. 982–992, 2022.
Background: The Canadian 24-Hour Movement Guidelines for Children and Youth (“Guidelines”) not only pioneered the notion of an integrated movement continuum from sleep to vigorous-intensity physical activity but also introduced a new branded Guideline visual identity. Objectives: This study evaluated youths' (N = 46) attention to and thoughts about the Guidelines and the brand. Design: A cross-sectional between-participants randomized intervention design was used. Sample: Canadian youth between 10 and 17 years of age comprised the study sample. Interventions: Participants were randomly assigned to view either branded Guidelines (n = 26) or unbranded Guidelines (n = 20). Youths' eye-movements (e.g., dwell time, fixation count) were recorded during Guideline viewing. Participants completed a follow-up survey assessing brand perceptions and Guideline cognitions. Results: The branded Guidelines neither drew greater overall attention nor led to more positive brand perceptions or Guideline cognitions compared to the unbranded Guidelines. Conclusions: Exploratory analyses provide valuable, yet preliminary insight into how branding and Guideline content may shape how Guidelines are perceived and acted upon. These findings inform an agenda for future health education resources.
Kerri Walter; Peter Bex
In: PloS ONE, vol. 17, no. 11, pp. 1–16, 2022.
Growing evidence links eye movements and cognitive functioning; however, there is debate concerning what image content is fixated in natural scenes. Competing approaches have argued that low-level/feedforward and high-level/feedback factors contribute to gaze-guidance. We used one low-level model (Graph Based Visual Salience, GBVS) and a novel language-based high-level model (Global Vectors for Word Representation, GloVe) to predict gaze locations in a natural image search task, and we examined how fixated locations during this task vary under increasing levels of cognitive load. Participants (N = 30) freely viewed a series of 100 natural scenes for 10 seconds each. Between scenes, subjects identified a target object from the scene a specified number of trials (N) back among three distracter objects of the same type but from alternate scenes. The N-back was adaptive: N-back increased following two correct trials and decreased following one incorrect trial. Receiver operating characteristic (ROC) analysis of gaze locations showed that as cognitive load increased, there was a significant increase in prediction power for GBVS, but not for GloVe. Similarly, there was no significant difference in the area under the ROC between the minimum and maximum N-back achieved across subjects for GloVe (t(29) = -1.062
R. Calen Walshe; Wilson S. Geisler
In: Current Biology, vol. 32, no. 1, pp. 26–36, 2022.
The human visual system has a high-resolution fovea and a low-resolution periphery. When actively searching for a target, humans perform a covert search during each fixation, and then shift fixation (the fovea) to probable target locations. Previous studies of covert search under carefully controlled conditions provide strong evidence that for simple and small search displays, humans process all potential target locations with the same efficiency that they process those locations when individually cued on each trial. Here, we extend these studies to the case of large displays, in which the target can appear anywhere within the display. These more natural conditions reveal an attentional effect in which sensitivity in the fovea and parafovea is greatly diminished. We show that this “foveal neglect” is the expected consequence of efficiently allocating a fixed total attentional sensitivity gain across the retinotopic map in the visual cortex. We present a formal theory that explains our findings and the previous findings.
Carla A. Wall; Frederick Shic; Sreeja Varanasi; Jane E. Roberts
In: Autism Research, pp. 1–15, 2022.
Social attention is a critical skill for learning and development. Social attention difficulties are present in both non-syndromic autism spectrum disorder (nsASD) and fragile X syndrome (FXS), and our understanding of these difficulties is complicated by heterogeneity in both disorders, including co-occurring diagnoses like intellectual disability and social anxiety. Existing research largely utilizes a single index of social attention and rarely includes children with intellectual impairment or uses a cross-syndrome approach. This study investigated whether multi-trait social attention profiles including naturalistic initial eye contact, facial attention, and social scene attention differ in preschool children with nsASD and FXS matched on developmental ability (DQ) and contrasted to neurotypical (NT) controls. The relationship between DQ, ASD severity, and social anxiety and social attention profiles was also examined. Initial eye contact related to social scene attention, indicating that naturalistic social attention is consistent with responses during experimental conditions. Reduced eye contact and lower social scene attention characterized nsASD and FXS. Children with nsASD displayed less facial attention than FXS and NT children, who did not differ. Lower DQ and elevated ASD severity were associated with decreased eye contact in nsASD and FXS, and lower DQ was associated with lower social scene attention in FXS. Sex, social anxiety, and age were not associated with social attention. These findings suggest social attention profiles of children with nsASD are highly similar to, yet distinct from, those of children with FXS. Children with nsASD may present with a global social attention deficit whereas FXS profiles may reflect context-dependent social avoidance.
Josefine Waldthaler; Mikkel C. Vinding; Allison Eriksson; Per Svenningsson; Daniel Lundqvist
In: Behavioural Brain Research, vol. 422, pp. 1–12, 2022.
Deficits in response inhibition are a central feature of the highly prevalent dysexecutive syndrome found in Parkinson's disease (PD). Such deficits are related to a range of common clinically relevant symptoms including cognitive impairment as well as impulsive and compulsive behaviors. In this study, we explored the cortical dynamics underlying response inhibition during the mental preparation for the antisaccade task by recording magnetoencephalography (MEG) and eye-movements in 21 non-demented patients with early to mid-stage Parkinson's disease and 21 age-matched healthy control participants (HC). During the pre-stimulus preparatory period for antisaccades we observed: • a preparation-related increase in beta band activity in the right dorsolateral prefrontal cortex (DLPFC) of HC (n = 15) for antisaccades compared with prosaccades that was not detectable in the PD group (n = 17); • a significant attenuation of the preparation-related increase in alpha band power in bilateral FEF and reduced alpha band connectivity between the right DLPFC and right FEF in the PD group compared with HC, suggesting reduced top-down control to inhibit pre-potent activation of FEF in PD; and • a positive correlation between the magnitude of pre-stimulus beta desynchronization in FEF and subsequent antisaccade latency in PD and HC, indicating a relationship between preparatory beta band modulation and effectiveness of subsequent antisaccade execution. Taken together, the results indicate that alterations in pre-stimulus prefrontal alpha and beta activity hinder proactive response inhibition and in turn result in higher error rates and prolonged response latencies in PD.
Elena N. Waidmann; Kenji W. Koyano; Julie J. Hong; Brian E. Russ; David A. Leopold
In: Nature Communications, vol. 13, no. 1, pp. 1–13, 2022.
Humans and other primates recognize one another in part based on unique structural details of the face, including both local features and their spatial configuration within the head and body. Visual analysis of the face is supported by specialized regions of the primate cerebral cortex, which in macaques are commonly known as face patches. Here we ask whether the responses of neurons in anterior face patches, thought to encode face identity, are more strongly driven by local or holistic facial structure. We created stimuli consisting of recombinant photorealistic images of macaques, where we interchanged the eyes, mouth, head, and body between individuals. Unexpectedly, neurons in the anterior medial (AM) and anterior fundus (AF) face patches were predominantly tuned to local facial features, with minimal neural selectivity for feature combinations. These findings indicate that the high-level structural encoding of face identity rests upon populations of neurons specialized for local features.
Christopher N. Wahlheim; Michelle L. Eisenberg; David Stawarczyk; Jeffrey M. Zacks
In: Psychological Science, vol. 33, no. 5, pp. 765–781, 2022.
Memory-guided predictions can improve event comprehension by guiding attention and the eyes to the location where an actor is about to perform an action. But when events change, viewers may experience predictive-looking errors and need to update their memories. In two experiments (Ns = 38 and 98), we examined the consequences of mnemonic predictive-looking errors for comprehending and remembering event changes. University students watched movies of everyday activities with actions that were repeated exactly and actions that were repeated with changed features—for example, an actor reached for a paper towel on one occasion and a dish towel on the next. Memory guidance led to predictive-looking errors that were associated with better memory for subsequently changed event features. These results indicate that retrieving recent event features can guide predictions during unfolding events and that error signals derived from mismatches between mnemonic predictions and actual events contribute to new learning.
Ilja Wagner; Dion Henare; Jan Tünnermann; Anna Schubö; Alexander C. Schütz
In: Attention, Perception, & Psychophysics, vol. 85, pp. 23–40, 2022.
To interact with one's environment, relevant objects have to be selected as targets for saccadic eye movements. Previous studies have demonstrated that factors such as visual saliency and reward influence saccade target selection, and that humans can dynamically trade off these factors to maximize expected value during visual search. However, expected value in everyday situations not only depends on saliency and reward, but also on the required time to find objects, and the likelihood of a successful object-interaction after search. Here we studied whether search costs and the accuracy to discriminate an object feature can be traded off to maximize expected value. We designed a combined visual search and perceptual discrimination task, where participants chose whether to search for an easy- or difficult-to-discriminate target in search displays populated by distractors that shared features with either the easy or the difficult target. Participants received a monetary reward for correct discriminations and were given limited time to complete as many trials as they could. We found that participants considered their discrimination performance and the search costs when choosing targets and thereby maximized expected value. However, the accumulated reward was constrained by noise in both the choice of which target to search for, and which elements to fixate during search. We conclude that humans take into account the prospective search time and the likelihood of a successful object-interaction when deciding what to search for. However, search performance is constrained by noise in decisions about what to search for and how to search for it.
Cécile Vullings; Zachary Lively; Preeti Verghese
Saccades during visual search in macular degeneration Journal Article
In: Vision Research, vol. 201, pp. 1–14, 2022.
Macular degeneration (MD) compromises both high-acuity vision and eye movements when the foveal regions of both eyes are affected. Individuals with MD adapt to central field loss by adopting a preferred retinal locus (PRL) for fixation. Here, we investigate how individuals with bilateral MD use eye movements to search for targets in a visual scene under realistic binocular viewing conditions. Five individuals with binocular scotomata, 3 individuals with monocular scotomata and 6 age-matched controls participated in our study. We first extensively mapped the binocular scotoma with an eyetracker, while fixation was carefully monitored (Vullings & Verghese, 2020). Participants then completed a visual search task where 0, 1, or 2 Gaussian blobs were distributed randomly across a natural scene. Participants were given 10 s to actively search the display and report the number of blobs. An analysis of saccade characteristics showed that individuals with binocular scotomata made more saccades in the direction of their scotoma than controls for the same directions. Saccades in the direction of the scotoma were typically of small amplitude, and did not fully uncover the region previously hidden by the scotoma. Rather than make more saccades to explore this hidden region, participants frequently made saccades back toward newly uncovered regions. Backward saccades likely serve a similar purpose to regressive saccades exhibited during reading in MD, by inspecting previously covered regions near the direction of gaze. Our analysis suggests that the higher prevalence of backward saccades in individuals with binocular scotomata might be related to the PRL being adjacent to the scotoma.
Stella D. Voulgaropoulou; Fasya Fauzani; Janine Pfirrmann; Claudia Vingerhoets; Thérèse Amelsvoort; Dennis Hernaus
Asymmetric effects of acute stress on cost and benefit learning Journal Article
In: Psychoneuroendocrinology, vol. 138, pp. 1–10, 2022.
Background: Humans are continuously exposed to stressful challenges in everyday life. Such stressful events trigger a complex physiological reaction – the fight-or-flight response – that can hamper flexible decision-making and learning. Inspired by key neural and peripheral characteristics of the fight-or-flight response, here, we ask whether acute stress changes how humans learn about costs and benefits. Methods: Healthy adults were randomly exposed to an acute stress (age mean=23.48, 21/40 female) or no-stress control (age mean=23.80, 22/40 female) condition, after which they completed a reinforcement learning task in which they minimize cost (physical effort) and maximize benefits (monetary rewards). During the task pupillometry data were collected. A computational model of cost-benefit reinforcement learning was employed to investigate the effect of acute stress on cost and benefit learning and decision-making. Results: Acute stress improved learning to maximize rewards relative to minimizing physical effort (Condition-by-Trial Type interaction: F(1,78)= 6.53
Christoph J. Völter; Ludwig Huber
Pupil size changes reveal dogs' sensitivity to motion cues Journal Article
In: iScience, vol. 25, no. 9, pp. 1–16, 2022.
Certain motion cues like self-propulsion and speed changes allow human and nonhuman animals to quickly detect animate beings. In the current eye-tracking study, we examined whether dogs' (Canis familiaris) pupil size was influenced by such motion cues. In Experiment 1, dogs watched different videos with normal or reversed playback direction showing a human agent releasing an object. The reversed playback gave the impression that the objects were self-propelled. In Experiment 2, dogs watched videos of a rolling ball that either moved at constant or variable speed. We found that the dogs' pupil size only changed significantly over the course of the videos in the conditions with self-propelled (upward) movements (Experiment 1) or variable speed (Experiment 2). Our findings suggest that dogs orient toward self-propelled stimuli that move at variable speed, which might contribute to their detection of animate beings.
Chiara Visentin; Chiara Valzolgher; Matteo Pellegatti; Paola Potente; Francesco Pavani; Nicola Prodi
In: International Journal of Audiology, vol. 61, no. 7, pp. 561–573, 2022.
Objective: The aim of this study was to assess to what extent simultaneously-obtained measures of listening effort (task-evoked pupil dilation, verbal response time [RT], and self-rating) could be sensitive to auditory and cognitive manipulations in a speech perception task. The study also aimed to explore the possible relationship between RT and pupil dilation. Design: A within-group design was adopted. All participants were administered the Matrix Sentence Test in 12 conditions (signal-to-noise ratios [SNR] of −3, −6, −9 dB; attentional resources focussed vs divided; spatial priors present vs absent). Study sample: Twenty-four normal-hearing adults, 20–41 years old (M = 23.5), were recruited for the study. Results: A significant effect of the SNR was found for all measures. However, pupil dilation discriminated only partially between the SNRs. Neither of the cognitive manipulations was effective in modulating the measures. No relationship emerged between pupil dilation, RT and self-ratings. Conclusions: RT, pupil dilation, and self-ratings can be obtained simultaneously when administering speech perception tasks, even though some limitations remain related to the absence of a retention period after the listening phase. The sensitivity of the three measures to changes in the auditory environment differs. RTs and self-ratings proved most sensitive to changes in SNR.
Preeti Verghese; Saeideh Ghahghaei; Zachary Lively
Mapping residual stereopsis in macular degeneration Journal Article
In: Journal of Vision, vol. 22, no. 13, pp. 1–13, 2022.
Individuals with macular degeneration typically lose vision in the central region of one or both eyes. A binocular scotoma occurs when vision loss occurs in overlapping locations in both eyes, but stereopsis is impacted even in the non-overlapping region wherever the visual field in either eye is affected. We used a novel stereoperimetry protocol to measure local stereopsis across the visual field (up to 25° eccentricity) to determine how locations with functional stereopsis relate to the scotomata in the two eyes. Participants included those with monocular or binocular scotomata and age-matched controls with healthy vision. Targets (with or without depth information) were presented on a random dot background. Depth targets had true binocular disparity of 20' (crossed), whereas non-depth targets were defined by monocular cues such as contrast and dot density. Participants reported target location and whether it was in depth or flat. Local depth sensitivity (d') estimates were then combined to generate a stereopsis map. This stereopsis map was compared to the union of the monocular microperimetry estimates that mapped out the functional extent of the scotoma in each eye. The "union" prediction aligned with residual stereopsis, showing impaired stereopsis within this region and residual stereopsis outside this region. Importantly, the stereoblind region was typically more extensive than the binocular scotoma defined by the intersection (overlap) of the scotomata. This explains why individuals may have intact binocular visual fields but be severely compromised in tasks of daily living that benefit from stereopsis, such as eye-hand coordination and navigation.
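Local depth sensitivity of the kind combined into the stereopsis map is conventionally computed as d′ from hit and false-alarm rates at each tested location. A minimal sketch follows; the location coordinates and response counts are made up for illustration, and the log-linear correction is a standard convention rather than necessarily the authors' choice.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction so rates of 0 or 1 do not yield infinite z-scores."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# One d' per tested visual-field location (x, y in degrees) yields the map.
# Counts per location: (hits, misses, false alarms, correct rejections).
locations = {(0, 10): (18, 2, 3, 17), (0, -10): (9, 11, 8, 12)}
stereo_map = {loc: dprime(*counts) for loc, counts in locations.items()}
```

Comparing such a map against the union of the two monocular scotoma maps is then a pointwise overlay: stereopsis should be impaired wherever either eye's field is affected.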
Elle Heusden; Wieske Zoest; Mieke Donk; Christian N. L. Olivers
In: Psychonomic Bulletin & Review, vol. 29, pp. 1327–1337, 2022.
Human vision involves selectively directing the eyes to potential objects of interest. According to most prominent theories, selection is the quantal outcome of an ongoing competition between saliency-driven signals on the one hand, and relevance-driven signals on the other, with both types of signals continuously and concurrently projecting onto a common priority map. Here, we challenge this view. We asked participants to make a speeded eye movement towards a target orientation, which was presented together with a non-target of opposing tilt. In addition to the difference in relevance, the target and non-target also differed in saliency, with the target being either more or less salient than the non-target. We demonstrate that saliency- and relevance-driven eye movements have highly idiosyncratic temporal profiles, with saliency-driven eye movements occurring rapidly after display onset while relevance-driven eye movements occur only later. Remarkably, these types of eye movements can be fully separated in time: We find that around 250 ms after display onset, eye movements are no longer driven by saliency differences between potential targets, but also not yet driven by relevance information, resulting in a period of non-selectivity, which we refer to as the attentional limbo. Binomial modeling further confirmed that visual selection is not necessarily the outcome of a direct battle between saliency- and relevance-driven signals. Instead, selection reflects the dynamic changes in the underlying saliency- and relevance-driven processes themselves, and the time at which an action is initiated then determines which of the two will emerge as the driving force of behavior.
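The time-resolved separation of saliency- and relevance-driven selection can be illustrated by binning saccades by latency and computing, per bin, the binomial proportion of saccades directed at the more salient item. This is a simplified sketch of the general approach, not the paper's binomial model; the bin edges and the normal-approximation confidence interval are assumptions.

```python
import math

def selection_timecourse(latencies, to_salient, bin_edges):
    """Proportion of saccades landing on the more salient item per latency
    bin, with a normal-approximation binomial 95% half-width.

    Returns a list of (bin_lo, bin_hi, proportion, ci_halfwidth) tuples;
    proportion and CI are None for empty bins.
    """
    out = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        hits = [s for t, s in zip(latencies, to_salient) if lo <= t < hi]
        n = len(hits)
        if n == 0:
            out.append((lo, hi, None, None))
            continue
        p = sum(hits) / n
        se = math.sqrt(p * (1 - p) / n)
        out.append((lo, hi, p, 1.96 * se))
    return out
```

On this kind of time course, early bins dominated by saliency show proportions near 1, late relevance-driven bins fall toward 0, and an intermediate bin near chance would correspond to the "attentional limbo" period.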
Mats W. J. Es; Tom R. Marshall; Eelke Spaak; Ole Jensen; Jan-Mathijs Schoffelen
In: European Journal of Neuroscience, vol. 55, no. 11-12, pp. 3191–3208, 2022.
Sustained attention has long been thought to benefit perception in a continuous fashion, but recent evidence suggests that it affects perception in a discrete, rhythmic way. Periodic fluctuations in behavioral performance over time, and modulations of behavioral performance by the phase of spontaneous oscillatory brain activity point to an attentional sampling rate in the theta or alpha frequency range. We investigated whether such discrete sampling by attention is reflected in periodic fluctuations in the decodability of visual stimulus orientation from magnetoencephalographic (MEG) brain signals. In this exploratory study, human subjects attended one of the two grating stimuli, while MEG was being recorded. We assessed the strength of the visual representation of the attended stimulus using a support vector machine (SVM) to decode the orientation of the grating (clockwise vs. counterclockwise) from the MEG signal. We tested whether decoder performance depended on the theta/alpha phase of local brain activity. While the phase of ongoing activity in the visual cortex did not modulate decoding performance, theta/alpha phase of activity in the frontal eye fields and parietal cortex, contralateral to the attended stimulus did modulate decoding performance. These findings suggest that phasic modulations of visual stimulus representations in the brain are caused by frequency-specific top-down activity in the frontoparietal attention network, though the behavioral relevance of these effects could not be established.
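The phase-dependent decoding analysis can be illustrated on synthetic data: simulate trials whose signal strength waxes and wanes with oscillatory phase at stimulus onset, then compare decoding accuracy between phase bins. The sketch below uses a leave-one-out nearest-centroid classifier as a lightweight stand-in for the study's SVM, and all simulation parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding accuracy for binary labels
    (a lightweight stand-in for the SVM decoder used in the study)."""
    correct = 0
    for i in range(len(y)):
        mask = np.ones(len(y), bool)
        mask[i] = False  # hold out trial i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

# Simulate trials whose decodability depends on theta/alpha phase at onset.
n = 200
phase = rng.uniform(-np.pi, np.pi, n)      # oscillatory phase per trial
y = rng.integers(0, 2, n)                  # grating orientation (CW vs CCW)
snr = 1.0 + np.cos(phase)                  # signal strength varies with phase
X = snr[:, None] * (2 * y[:, None] - 1) + rng.normal(0, 1.0, (n, 4))

# Decoding accuracy per phase bin reveals the rhythmic modulation.
good = np.abs(phase) < np.pi / 2           # trials near the "preferred" phase
acc_good = decode_accuracy(X[good], y[good])
acc_bad = decode_accuracy(X[~good], y[~good])
```

In the study, such a phase-binned accuracy difference was found for theta/alpha phase in frontal and parietal regions but not in visual cortex.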
Olof J. Werf; Sanne Ten Oever; Teresa Schuhmann; Alexander T. Sack
In: European Journal of Neuroscience, vol. 55, no. 11-12, pp. 3100–3116, 2022.
Recent evidence suggests that visuospatial attentional performance is not stable over time but fluctuates in a rhythmic fashion. These attentional rhythms allow for sampling of different visuospatial locations in each cycle of this rhythm. However, it is still unclear in which paradigmatic circumstances rhythmic attention becomes evident. First, it is unclear at what spatial locations rhythmic attention occurs. Second, it is unclear how the behavioural relevance of each spatial location determines the rhythmic sampling patterns. Here, we aim to elucidate these two issues. Firstly, we aim to find evidence of rhythmic attention at the predicted (i.e. cued) location under moderately informative predictor value, replicating earlier studies. Secondly, we hypothesise that rhythmic attentional sampling behaviour will be affected by the behavioural relevance of the sampled location, ranging from non-informative to fully informative. To these aims, we used a modified Egly-Driver task with three conditions: a fully informative cue, a moderately informative cue (replication condition), and a non-informative cue. We did not find evidence of rhythmic sampling at cued locations, failing to replicate earlier studies. Nor did we find differences in rhythmic sampling under different predictive values of the cue. The current data does not allow for robust conclusions regarding the non-cued locations due to the absence of a priori hypotheses. Post-hoc explorative data analyses, however, clearly indicate that attention samples non-cued locations in a theta-rhythmic manner, specifically when the cued location bears higher behavioural relevance than the non-cued locations.
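Rhythmic attentional sampling of the kind tested here is typically assessed by spectral analysis of a detrended behavioral time course: detection accuracy as a function of cue-target interval is Fourier-transformed, and a peak in the theta band (roughly 4–8 Hz) indicates theta-rhythmic sampling. A minimal sketch, with illustrative sampling step and frequencies:

```python
import numpy as np

def sampling_spectrum(acc, dt):
    """Amplitude spectrum of a detrended behavioral accuracy time course.

    acc: accuracy per cue-target interval, sampled every dt seconds.
    Returns (frequencies in Hz, amplitudes); a peak in the 4-8 Hz band
    would indicate theta-rhythmic attentional sampling.
    """
    acc = np.asarray(acc, float)
    acc = acc - acc.mean()                      # remove the DC component
    amps = np.abs(np.fft.rfft(acc)) / len(acc)  # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(acc), dt)
    return freqs, amps
```

In practice the observed peak is compared against a surrogate (shuffled) distribution to establish significance, which is where replication attempts like this one often diverge.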
Ruud L. Brink; Keno Hagena; Niklas Wilming; Peter R. Murphy; Christian Büchel; Tobias H. Donner
In: Neuron, vol. 111, pp. 1–14, 2022.
Humans and non-human primates can flexibly switch between different arbitrary mappings from sensation to action to solve a cognitive task. It has remained unknown how the brain implements such flexible sensory-motor mapping rules. Here, we uncovered a dynamic reconfiguration of task-specific correlated variability between sensory and motor brain regions. Human participants switched between two rules for reporting visual orientation judgments during fMRI recordings. Rule switches were either signaled explicitly or inferred by the participants from ambiguous cues. We used behavioral modeling to reconstruct the time course of their belief about the active rule. In both contexts, the patterns of correlations between ongoing fluctuations in stimulus- and action-selective activity across visual- and action-related brain regions tracked participants' belief about the active rule. The rule-specific correlation patterns broke down around the time of behavioral errors. We conclude that internal beliefs about task state are instantiated in brain-wide, selective patterns of correlated variability.
Nils S. Berg; Nikki A. Lammers; Anouk R. Smits; Selma Lugtmeijer; Yair Pinto; Edward H. F. De Haan
In: Journal of Clinical and Experimental Neuropsychology, vol. 44, no. 8, pp. 580–591, 2022.
Introduction: We aimed to investigate whether associations between deficits in “mid-range” visual functions and deficits in higher-order visual cognitive functions in stroke patients are more in line with a hierarchical, two-pathway model of the visual brain, or with a patchwork model, which assumes a parallel organization with many processing routes and cross-talk. Methods: A group of 182 ischemic stroke patients was assessed with a new diagnostic set-up for the investigation of a comprehensive range of visuosensory mid-range functions: color, shape, location, orientation, correlated motion, contrast and texture. With logistic regression analyses we investigated the predictive value of these mid-range functions for deficits in visuoconstruction (Copy of the Rey-Complex Figure Test), visual emotion recognition (Ekman 60 Faces Test of the FEEST) and visual memory (computerized Doors-test). Results: Results showed that performance on most mid-range visual tasks could not predict performance on higher-order visual cognitive tasks. Correlations were low to weak. Impaired visuoconstruction and visual memory were only modestly predicted by a worse location perception. Impaired emotion perception was modestly predicted by a worse orientation perception. In addition, double dissociations were found: there were patients with selective deficits in mid-range visual functions without higher-order visual deficits and vice versa. Conclusions: Our findings are not in line with the hierarchical, two-pathway model. Instead, the findings are more in line with alternative “patchwork” models, arguing for a parallel organization with many processing routes and cross-talk. However, future studies are needed to test these alternative models.
Raphael Vallat; Başak Türker; Alain Nicolas; Perrine Ruby
In: Nature and Science of Sleep, vol. 14, pp. 265–275, 2022.
Introduction: Several results suggest that the frequency of dream recall is positively correlated with personality traits such as creativity and openness to experience. In addition, neuroimaging results have evidenced different neurophysiological profiles in high dream recallers (HR) and low dream recallers (LR) during both sleep and wakefulness, specifically within regions of the default mode network (DMN). These findings are consistent with the emerging view that dreaming and mind wandering pertain to the same family of spontaneous mental processes, subserved by the DMN. Methods: To further test this hypothesis, we measured the DMN functional connectivity during resting wakefulness, together with personality and cognitive abilities (including creativity) in 28 HR and 27 LR. Results: As expected, HR demonstrated a greater DMN connectivity than LR, higher scores of creativity, and no significant difference in memory abilities. However, there was no significant correlation between creativity scores and DMN connectivity. Discussion: These results further demonstrate that there are trait neurophysiological and psychological differences between individuals who frequently recall their dreams and those who do not. They support the forebrain and the DMN hypotheses of dreaming and leave open the possibility that increased activity in the DMN promotes creative thinking during both wakefulness and sleep. Further work is needed to test whether activity in the DMN is causally associated with creative thinking.
Cem Uran; Alina Peter; Andreea Lazar; William Barnes; Johanna Klon-Lipok; Katharine A. Shapcott; Rasmus Roese; Pascal Fries; Wolf Singer; Martin Vinck
In: Neuron, vol. 110, no. 7, pp. 1240–1257, 2022.
Predictive coding is an important candidate theory of self-supervised learning in the brain. Its central idea is that sensory responses result from comparisons between bottom-up inputs and contextual predictions, a process in which rates and synchronization may play distinct roles. We recorded from awake macaque V1 and developed a technique to quantify stimulus predictability for natural images based on self-supervised, generative neural networks. We find that neuronal firing rates were mainly modulated by the contextual predictability of higher-order image features, which correlated strongly with human perceptual similarity judgments. By contrast, V1 gamma (γ)-synchronization increased monotonically with the contextual predictability of low-level image features and emerged exclusively for larger stimuli. Consequently, γ-synchronization was induced by natural images that are highly compressible and low-dimensional. Natural stimuli with low predictability induced prominent, late-onset beta (β)-synchronization, likely reflecting cortical feedback. Our findings reveal distinct roles of synchronization and firing rates in the predictive coding of natural images.
Anne E. Urai; Tobias H. Donner
In: Nature Communications, vol. 13, no. 1, pp. 1–15, 2022.
Humans and other animals tend to repeat or alternate their previous choices, even when judging sensory stimuli presented in a random sequence. It is unclear if and how sensory, associative, and motor cortical circuits produce these idiosyncratic behavioral biases. Here, we combined behavioral modeling of a visual perceptual decision with magnetoencephalographic (MEG) analyses of neural dynamics, across multiple regions of the human cerebral cortex. We identified distinct history-dependent neural signals in motor and posterior parietal cortex. Gamma-band activity in parietal cortex tracked previous choices in a sustained fashion, and biased evidence accumulation toward choice repetition; sustained beta-band activity in motor cortex inversely reflected the previous motor action, and biased the accumulation starting point toward alternation. The parietal, not motor, signal mediated the impact of previous on current choice and reflected individual differences in choice repetition. In sum, parietal cortical signals seem to play a key role in shaping choice sequences.
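The history biases at issue can be quantified from a choice sequence as the probability of repeating the previous choice: values above 0.5 characterize a "repeater", values below 0.5 an "alternator". A minimal sketch of this descriptive statistic (not the authors' full behavioral model):

```python
def repetition_probability(choices):
    """Fraction of trials on which the current choice repeats the previous
    one; 0.5 indicates no history bias, >0.5 repetition, <0.5 alternation."""
    if len(choices) < 2:
        raise ValueError("need at least two choices")
    repeats = sum(c == prev for prev, c in zip(choices, choices[1:]))
    return repeats / (len(choices) - 1)
```

In the study, individual differences in exactly this kind of repetition statistic were reflected in sustained parietal gamma-band signals.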
Rob Udale; Moc Tram Tran; Sanjay Manohar; Masud Husain
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 48, no. 1, pp. 21–36, 2022.
Little is known about how memory resources are allocated in natural vision across sequential eye movements and fixations, as people actively extract information from the visual environment. Here, we used gaze-contingent eye tracking to examine how such resources are dynamically reallocated from old to new information entering working memory. As participants looked sequentially at items, we interrupted the process at different times by extinguishing the display as a saccade was initiated. After a brief interval, participants were probed on one of the items that had been presented. Paradoxically, across all experiments, the final (unfixated) saccade target was recalled more precisely when more items had previously been fixated, that is, with longer rather than shorter saccade sequences. This result is difficult to explain under current models of working memory because recall error, even for the final item, is typically higher as memory load increases. The findings could however be accounted for by a model that describes how resources are dynamically reallocated on a moment-by-moment basis. During each saccade, the target is encoded by consuming a proportion of currently available resources from a limited working memory, as well as by reallocating resources away from previously encoded items. These findings reveal how working memory resources are shifted across memoranda in active vision.
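The dynamic-reallocation account can be captured in a toy model: each newly fixated item claims a proportion of the currently free resource pool and additionally pulls a proportion away from every previously encoded item, with recall precision assumed proportional to the resource an item holds. The parameter names and values below are illustrative, not the authors':

```python
def allocate(n_items, claim=0.3, steal=0.5):
    """Resource held by each item after encoding n_items sequentially.

    Each new item consumes a proportion `claim` of the currently free
    resource and additionally reallocates a proportion `steal` away from
    every previously encoded item. Returns resources in encoding order,
    so the last element is the final saccade target.
    """
    free = 1.0
    held = []
    for _ in range(n_items):
        stolen = sum(h * steal for h in held)       # pulled from old items
        held = [h * (1 - steal) for h in held]      # old items lose resource
        new = claim * free + stolen                 # new item's resource
        free -= claim * free
        held.append(new)
    return held
```

When the reallocation proportion exceeds the claim on the free pool (here `steal > claim`), the final item ends up with more resource after longer sequences, mirroring the paradoxical finding that the final saccade target was recalled more precisely with longer saccade sequences.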
Tawny Tsang; Adam J. Naples; Erin C. Barney; Minhang Xie; Raphael Bernier; Geraldine Dawson; James Dziura; Susan Faja; Shafali Spurling Jeste; James C. McPartland; Charles A. Nelson; Michael Murias; Helen Seow; Catherine Sugar; Sara J. Webb; Frederick Shic; Scott P. Johnson
In: Journal of Autism and Developmental Disorders, pp. 1–10, 2022.
Visual exploration paradigms involving object arrays have been used to examine salience of social stimuli such as faces in ASD. Recent work suggests performance on these paradigms may associate with clinical features of ASD. We evaluate metrics from a visual exploration paradigm in 4-to-11-year-old children with ASD (n = 23; 18 males) and typical development (TD; n = 23; 13 males). Presented with arrays containing faces and nonsocial stimuli, children with ASD looked less at (p = 0.002) and showed fewer fixations to (p = 0.022) faces than TD children, and spent less time looking at each object on average (p = 0.004). Attention to the screen and faces correlated positively with social and cognitive skills in the ASD group (ps <.05). This work furthers our understanding of objective measures of visual exploration in ASD and its potential for quantifying features of ASD.
Audrey Trouilloud; Pauline Rossel; Cynthia Faurite; Alexia Roux-Sibilon; Louise Kauffmann; Carole Peyrin
In: Visual Cognition, vol. 30, no. 6, pp. 425–442, 2022.
The spatial resolution of the human visual field decreases considerably from the center to the periphery. However, several studies have highlighted the importance of peripheral vision for scene categorization. In Experiment 1, we investigated whether peripheral vision could influence scene categorization in central vision. We used photographs of indoor and outdoor scenes from which we extracted a central disk and a peripheral ring. Stimuli were composed of a central disk and a peripheral ring that could be either semantically congruent or incongruent. Participants had to categorize the central disk while ignoring the peripheral ring, or the peripheral ring while ignoring the central disk. Results revealed a congruence effect of peripheral vision on central vision, as strong as the reverse. In Experiment 2, we investigated the nature of the physical signal in peripheral vision that influences categorization in central vision. We used either intact, phase-preserved, or amplitude-preserved peripheral rings. Participants had to categorize the central disk while ignoring the peripheral ring. Results showed that only phase-preserved peripheral rings elicited a congruence effect as strong as the one observed with intact peripheral rings. Information contained in the phase spectrum (spatial configuration of the scene) may be critical in peripheral vision.