EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2020 (with some early 2021s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception article, please email us!
Sarah Chabal; Sayuri Hayakawa; Viorica Marian
In: Cognitive Research: Principles and Implications, 6 (2), pp. 1–10, 2021.
Over the course of our lifetimes, we accumulate extensive experience associating the things that we see with the words we have learned to describe them. As a result, adults engaged in a visual search task will often look at items with labels that share phonological features with the target object, demonstrating that language can become activated even in non-linguistic contexts. This highly interactive cognitive system is the culmination of our linguistic and visual experiences—and yet, our understanding of how the relationship between language and vision develops remains limited. The present study explores the developmental trajectory of language-mediated visual search by examining whether children can be distracted by linguistic competitors during a non-linguistic visual search task. Though less robust compared to what has been previously observed with adults, we find evidence of phonological competition in children as young as 8 years old. Furthermore, the extent of language activation is predicted by individual differences in linguistic, visual, and domain-general cognitive abilities, with the greatest phonological competition observed among children with strong language abilities combined with weaker visual memory and inhibitory control. We propose that linguistic expertise is fundamental to the development of language-mediated visual search, but that the rate and degree of automatic language activation depends on interactions among a broader network of cognitive abilities.
Jasmine R Aziz; Samantha R Good; Raymond M Klein; Gail A Eskes
In: Cortex, 136 , pp. 28–40, 2021.
Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18-35 yrs) and older (n = 48; aged 55-78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan.
Jia Qiong Xie; Detlef H Rost; Fu Xing Wang; Jin Liang Wang; Rebecca L Monk
In: Information & Management, 58 (2), pp. 1–12, 2021.
Drawing on the scan-and-shift hypothesis and the scattered attention hypothesis, this article attempted to explore the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In Study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. In Study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users when performing the modified Stroop task. The results showed that excessive microblog users had more difficulty suppressing interfering information than non-microblog users, resulting in poor performance. Theoretical and practical implications are discussed.
Guangming Xie; Wenbo Du; Hongping Yuan; Yushi Jiang
In: Sustainability, 13 (2), pp. 1–28, 2021.
Using metacognition and dual process theories, this paper studied the role of types of presentation of mixed opinions in mitigating negative impacts of online word of mouth (WOM) dispersion on consumers' purchasing decisions. Two studies were conducted. By employing an eye-tracking approach, Study 1 recorded consumers' attention to WOM dispersion. The results show that the activation of the analytic system can improve reviewer-related attribution options. In Study 2, three kinds of presentation of mixed opinions originating from China's leading online platform were compared. The results demonstrated that mixed opinions expressed in moderately complex form, integrating average ratings and reviewers' impressions of products, were effective in promoting reviewer-related attribution choices. However, too-complicated presentation types of WOM dispersion can excessively increase consumers' cognitive load and eventually fail to activate the analytic system for promoting reviewer-related attribution choices. The main contribution of this paper lies in that consumer attribution-related choices are supplemented, which provides new insights into information consistency in consumer research. The managerial and theoretical significance of this paper is discussed in order to better understand the purchasing decisions of consumers.
Ching-Lin Wu; Shu-Ling Peng; Hsueh-Chih Chen
In: Creativity Research Journal, pp. 1–10, 2021.
An increasing number of studies have explored the process of how subjects solve problems through remote association. Most research has investigated the relationship between an individual's response to semantic search during the think-aloud operation and the individual's reply performance. Few studies, however, have examined the process of obtaining objective physiological indices. Eye-tracking technology is a powerful tool with which to dissect the process of problem solving, with tracked fixation indices that reflect an individual's internal cognitive mechanisms. This study, based on participants' fixation order for various stimulus words, was the first to introduce the concept of association search span, a concept that can be further divided into distributed association and centralized association. This study recorded 62 participants' eye movement indices in an eye-tracking experiment. The results showed that participants with higher remote association ability used more distributed associations and fewer centralized associations. The results indicated that the stronger remote association ability a participant has, the more likely that participant is to form associations with different stimulus words. It was also found that flexible thinking plays a vital role in the generation of remote associations.
Mats W J van Es; Tom R Marshall; Eelke Spaak; Ole Jensen; Jan-Mathijs Schoffelen
In: European Journal of Neuroscience, pp. 1–18, 2021.
Sustained attention has long been thought to benefit perception in a continuous fashion, but recent evidence suggests that it affects perception in a discrete, rhythmic way. Periodic fluctuations in behavioral performance over time, and modulations of behavioral performance by the phase of spontaneous oscillatory brain activity, point to an attentional sampling rate in the theta or alpha frequency range. We investigated whether such discrete sampling by attention is reflected in periodic fluctuations in the decodability of visual stimulus orientation from magnetoencephalographic (MEG) brain signals. In this exploratory study, human subjects attended one of two grating stimuli while MEG was being recorded. We assessed the strength of the visual representation of the attended stimulus using a support vector machine (SVM) to decode the orientation of the grating (clockwise vs. counterclockwise) from the MEG signal. We tested whether decoder performance depended on the theta/alpha phase of local brain activity. While the phase of ongoing activity in visual cortex did not modulate decoding performance, theta/alpha phase of activity in the frontal eye fields (FEF) and parietal cortex contralateral to the attended stimulus did modulate decoding performance. These findings suggest that phasic modulations of visual stimulus representations in the brain are caused by frequency-specific top-down activity in the fronto-parietal attention network.
David Torrents-Rodas; Stephan Koenig; Metin Uengoer; Harald Lachnit
In: Biological Psychology, 159 , pp. 1–11, 2021.
We investigated whether a sudden rise in prediction error widens an individual's focus of attention by increasing ocular fixations on cues that otherwise tend to be ignored. To this end, we used a discrimination learning task including cues that were either relevant or irrelevant for predicting the outcomes. Half of participants experienced contingency reversal once they had learned to predict the outcomes (reversal group
Jörg Schorer; Nico Heibült; Stuart G Wilson; Florian Loffing
In: Psychology of Sport & Exercise, 53 , pp. 1–7, 2021.
Sleep facilitates perceptual, cognitive and motor learning; however, the role of sleep in perceptual learning in sports is as yet unclear. Here, we tested the impact of sleep on novices' visual anticipation training using a handball goalkeeping task. To this end, 30 novices were randomly divided into two groups and asked to predict the directional outcome of handball penalties presented as videos. One group did the pre-test and a single session of training in the morning, the post-test in the evening of the same day, and the retention test the next morning. Conversely, the second group started and finished in the evening. Analyses of prediction accuracy revealed that the group starting in the evening improved most between pre- and post-test (sleep in between), while the greatest improvement for the group starting in the morning was found between post- and retention test (sleep in between). Overall, our results provide first insight into the potential relevance of sleep for effective anticipation training in sports.
Marian Sauter; Nina M Hanning; Heinrich R Liesefeld; Hermann J Müller
In: Cortex, 135 , pp. 108–126, 2021.
People can learn to ignore salient distractors that occur frequently at particular locations, making them interfere less with task performance. This effect has been attributed to learnt suppression of the likely distractor locations at a pre-selective stage of attentional-priority computation. However, rather than distractors at frequent (vs rare) locations being just less likely to capture attention, attention may possibly also be disengaged faster from such distractors – a post-selective contribution to their reduced interference. Eye-movement studies confirm that learnt suppression, evidenced by a reduced rate of oculomotor capture by distractors at frequent locations, is a major factor, whereas the evidence is mixed with regard to a role of rapid disengagement. However, methodological choices in these studies limited conclusions as to the contribution of a post-capture effect. Using an adjusted design, here we positively establish the rapid-disengagement effect, while corroborating the oculomotor-capture effect. Moreover, we examine distractor-location learning effects not only for distractors defined in a different visual dimension to the search target, but also for distractors defined within the same dimension, which are known to cause particularly strong interference and probability-cueing effects. Here, we show that both oculomotor-capture and disengagement dynamics contribute to this pattern. Additionally, on distractor-absent trials, the slowed responses to targets at frequent distractor locations—which we observe only in same-, but not different-, dimension conditions—arise pre-selectively, in prolonged latencies of the very first saccade. This supports the idea that learnt suppression is implemented at a different level of priority computation with same- versus different-dimension distractors.
B Platt; A Sfärlea; C Buhl; J Loechner; J Neumüller; L Asperud Thomsen; K Starman-Wöhrle; E Salemink; G Schulte-Körne
In: Child Psychiatry & Human Development, pp. 1–20, 2021.
Attention biases (AB) are a core component of cognitive models of depression, yet it is unclear what role they play in the transgenerational transmission of depression. 44 children (9–14 years) with a high familial risk of depression (HR) were compared on multiple measures of AB with 36 children with a low familial risk of depression (LR). Their parents: 44 adults with a history of depression (HD) and 36 adults with no history of psychiatric disorder (ND) were also compared. There was no evidence of group differences in AB; neither between the HR and LR children, nor between HD and ND parents. There was no evidence of a correlation between parent and child AB. The internal consistency of the tasks varied greatly. The Dot-Probe Task showed unacceptable reliability whereas the behavioral index of the Visual-Search Task and an eye-tracking index of the Passive-Viewing Task showed better reliability. There was little correlation between the AB tasks and the tasks showed minimal convergence with symptoms of depression or anxiety. The null-findings of the current study contradict our expectations and much of the previous literature. They may be due to the poor psychometric properties associated with some of the AB indices, the unreliability of AB in general, or the relatively modest sample size. The poor reliability of the tasks in our sample suggests that caution should be taken when interpreting the positive findings of previous studies which have used similar methods and populations.
Adam J Parker; Timothy J Slattery
In: Quarterly Journal of Experimental Psychology, 74 (1), pp. 135–149, 2021.
In recent years, there has been an increase in research concerning individual differences in readers' eye movements. However, this body of work is almost exclusively concerned with the reading of single-line texts. While spelling and reading ability have been reported to influence saccade targeting and fixation times during intra-line reading, where upcoming words are available for parafoveal processing, it is unclear how these variables affect fixations adjacent to return-sweeps. We, therefore, examined the influence of spelling and reading ability on return-sweep and corrective saccade parameters for 120 participants engaged in multiline text reading. Less-skilled readers and spellers tended to launch their return-sweeps closer to the end of the line, to prefer a viewing location closer to the start of the next line, and to make more return-sweep undershoot errors. We additionally report several skill-related differences in readers' fixation durations across multiline texts. Reading ability influenced all fixations except those resulting from return-sweep error. In contrast, spelling ability influenced only those fixations following accurate return-sweeps—where parafoveal processing was not possible prior to fixation. This stands in contrast to an established body of work where fixation durations are related to reading but not spelling ability. These results indicate that lexical quality shapes the rate at which readers access meaning from text by enhancing early letter encoding, and influences saccade targeting even in the absence of parafoveal target information.
Katya Olmos-Solis; Anouk Mariette van Loon; Christian N L Olivers
In: Cortex, 135 , pp. 61–77, 2021.
To optimize task sequences, the brain must differentiate between current and prospective goals. We previously showed that currently and prospectively relevant object representations in working memory can be dissociated within object-selective cortex. Based on other recent studies indicating that a range of brain areas may be involved in distinguishing between currently relevant and prospectively relevant information in working memory, here we conducted multivoxel pattern analyses of fMRI activity in additional posterior areas (specifically early visual cortex and the intraparietal sulcus) as well as frontal areas (specifically the frontal eye fields and lateral prefrontal cortex). We assessed whether these areas represent the memory content, the current versus prospective status of the memory, or both. On each trial, participants memorized an object drawn from three different categories. The object was the target for either a first task (currently relevant), a second task (prospectively relevant), or for neither task (irrelevant). The results revealed a division of labor across brain regions: While posterior areas preferentially coded for content (i.e., the category), frontal areas carried information about the current versus prospective relevance status of the memory, irrespective of the category. Intraparietal sulcus revealed both strong category- and status-sensitivity, consistent with its hub function of combining stimulus and priority signals. Furthermore, cross-decoding analyses revealed that while current and prospective representations were similar prior to search, they became dissimilar during search, in posterior as well as frontal areas. The findings provide further evidence for a dissociation between content and control networks in working memory.
Carly Moser; Lyndsay Schmitt; Joseph Schmidt; Amanda Fairchild; Jessica Klusek
In: Brain and Cognition, 148 , pp. 1–10, 2021.
One in 113-178 females worldwide carry a premutation allele on the FMR1 gene. The FMR1 premutation is linked to neurocognitive and neuromotor impairments, although the phenotype is not fully understood, particularly with respect to age effects. This study sought to define oculomotor response inhibition skills in women with the FMR1 premutation and their association with age and fall risk. We employed an antisaccade eye-tracking paradigm to index oculomotor inhibition skills in 35 women with the FMR1 premutation and 28 control women. The FMR1 premutation group exhibited longer antisaccade latency and reduced accuracy relative to controls, indicating deficient response inhibition skills. Longer response latency was associated with older age in the FMR1 premutation and was also predictive of fall risk. Findings highlight the utility of the antisaccade paradigm for detecting early signs of age-related executive decline in the FMR1 premutation, which is related to fall risk. Findings support the need for clinical prevention efforts to decrease and delay the trajectory of age-related executive decline in women with the FMR1 premutation during midlife.
Krithika Mohan; Oliver Zhu; David Freedman
In: Neuron, 109 , pp. 1–17, 2021.
Primates excel at categorization, a cognitive process for assigning stimuli into behaviorally relevant groups. Categories are encoded in multiple brain areas and tasks, yet it remains unclear how neural encoding and dynamics support cognitive tasks with different demands. We recorded from parietal cortex during flexible switching between categorization tasks with distinct cognitive and motor demands, and also studied recurrent neural networks (RNNs) trained on the same tasks. In the one-interval categorization task (OIC), monkeys rapidly reported their decisions with a saccade. In the delayed match-to-category (DMC) task, monkeys decided whether sequentially presented stimuli were categorical matches. Neuronal category encoding generalized across tasks, but categorical encoding was more binary-like in the DMC task and more graded in the OIC task. Furthermore, analysis of the trained RNNs supports the hypothesis that binary-like encoding in the DMC task arises through compression of graded feature encoding by population attractor dynamics underlying short-term working memory.
Leanna McConnell; Wendy Troop-Gordon
In: Journal of Early Adolescence, 41 (1), pp. 97–127, 2021.
Effectively coping with peer victimization may be facilitated by deploying attention away from threat (i.e., bullies, reinforcers) and toward available support (e.g., defenders). To test this premise, 72 early adolescents (38 girls; Mage = 11.67
Karin Ludwig; Thomas Schenk
In: Cortex, 134 , pp. 333–350, 2021.
Patients with spatial neglect show an ipsilesional exploration bias. We developed a gaze-contingent intervention that aims at reducing this bias and tested its effects on visual exploration in healthy participants: During a visual search, stimuli in one half of the search display are removed when the gaze moves into this half. This leads to a relative increase in the exploration of the other half of the search display – the one that can be explored without impediments. In the first experiment, we tested whether this effect transferred to visual exploration during a change detection task (under change blindness conditions), which was the case. In a second experiment, we modified the intervention (to an intermittent application) but the original version yielded more promising results. Thus, in the third experiment, the original version was used to test the longevity of its effects and whether its repeated application produced even stronger results. To this aim, we compared two groups: the first group received the intervention once, the second group repeatedly on three consecutive days. The change detection task was administered before the intervention and at four points in time after the last intervention (directly afterwards, +1 hour, +1 day, and +4 days). The results showed long-lasting effects of the intervention, most pronounced in the second group. Here the intervention changed the bias in the visual exploration pattern significantly until the last follow-up. We conclude that the intervention shows promise for successful application in neglect patients.
Onkar Krishna; Kiyoharu Aizawa; Go Irie
Computational attention system for children, adults and elderly
In: Multimedia Tools and Applications, 80 , pp. 1055–1074, 2021.
Existing computational visual attention systems have focused on simulating and understanding the visual attention system in adults. Consequently, the impact of observer age on scene-viewing behavior has rarely been considered. This study quantitatively analyzed age-related differences in gaze landings during scene viewing for three different classes of images: natural, man-made, and fractal. Observers of different age groups showed different scene-viewing tendencies, independent of the class of image viewed. Several interesting observations are drawn from the results. First, gaze landings for the man-made dataset showed that whereas child observers focus more on the scene foreground, i.e., locations that are near, elderly observers tend to explore the scene background, i.e., locations farther in the scene. Considering this result, a framework is proposed in this paper to quantitatively measure the depth bias tendency across age groups. Second, the quantitative analysis results showed that children exhibit the lowest exploratory behavior level but the highest central bias tendency among the age groups and across the different scene categories. Third, inter-individual similarity metrics reveal that adults had significantly lower gaze consistency with children and elderly observers than with other adults for all scene categories. Finally, these analysis results were leveraged to develop a more accurate age-adapted saliency model independent of image type. The prediction accuracy suggests that our model fits the collected eye-gaze data of observers belonging to different age groups better than existing models do.
Tamás Káldi; Anna Babarczy
In: Journal of Memory and Language, 116 , pp. 104187, 2021.
Focus is a linguistic device that marks a piece of information within an utterance as most relevant, as when emphasis is placed by the speaker on a word using phonological stress, special intonation, or prosodic prominence. The question addressed in the present study is whether the use of linguistic focus is best seen as a means of directing the listener's attention. We investigated attention allocation on the part of the listener to linguistically focused elements in working memory in a series of eye-tracking experiments. We concentrated on two processes: the encoding of the focused element and its retention. Attentional load during encoding was measured by pupil dilation, and attention allocation during retention was estimated from fixations to locations of previously present visual stimuli on a blank screen. It was found that i) more attention was allocated during the processing of sentences with linguistic focus and ii) linguistically focused elements received more attention during memory retention. However, when the task demanded the sharing of attention, the advantage of the focused element during retention disappeared. Further experiments showed that when verbal stimuli whose prominence was not linguistically marked were presented, the patterns of attention allocation associated with linguistic focus during retention replicated. These results lend further support to the claim that linguistic focus is a grammaticalized means of expressing prominence, and as such, functions as an attention capturing device.
Mega B Herlambang; Fokie Cnossen; Niels A Taatgen
The effects of intrinsic motivation on mental fatigue
In: PLoS ONE, 16 (1), pp. 1–22, 2021.
There have been many studies attempting to disentangle the relation between motivation and mental fatigue. Mental fatigue occurs after performing a demanding task for a prolonged time, and many studies have suggested that motivation can counteract the negative effects of mental fatigue on task performance. To complicate matters, most mental fatigue studies looked exclusively at the effects of extrinsic motivation but not intrinsic motivation. Individuals are said to be extrinsically motivated when they perform a task to attain rewards and avoid punishments, while they are said to be intrinsically motivated when they do so for the pleasure of doing the activity. To assess whether intrinsic motivation has similar effects as extrinsic motivation, we conducted an experiment using subjective, performance, and physiological measures (heart rate variability and pupillometry). In this experiment, 28 participants solved Sudoku puzzles on a computer for three hours, with a cat video playing in the corner of the screen. The experiment consisted of 14 blocks with two alternating conditions: low intrinsic motivation and high intrinsic motivation. The main results showed that irrespective of condition, participants reported becoming fatigued over time. They performed better, invested more mental effort physiologically, and were less distracted in high-level than in low-level motivation blocks. The results suggest that, similarly to extrinsic motivation, time-on-task effects are modulated by the level of intrinsic motivation: With high intrinsic motivation, people can maintain their performance over time as they seem willing to invest more effort as time progresses than with low intrinsic motivation.
J Hartwig; A Kretschmer-trendowicz; J R Helmert; M L Jung; S Pannasch
In: International Journal of Psychophysiology, 160 , pp. 38–55, 2021.
Prospective memory (PM), the memory for delayed intentions, develops during childhood. The current study examined PM processes, such as monitoring, PM cue identification and intention retrieval with particular focus on their temporal dynamics and interrelations during successful and unsuccessful PM performance. We analysed eye movements of 6–7 and 9–10 year olds during the inspection of movie stills while they completed one of three different tasks: scene viewing followed by a snippet allocation task, a PM task and a visual search task. We also tested children's executive functions of inhibition, flexibility and working memory. We found that older children outperformed younger children in all tasks but neither age group showed variations in monitoring behaviour during the course of the PM task. In fact, neither age group monitored. According to our data, initial processes necessary for PM success take place during the first fixation on the PM cue. In PM hit trials we found prolonged fixations after the first fixation on the PM cue, and older children showed a greater efficiency in PM processes following this first PM cue fixation. Regarding executive functions, only working memory had a significant effect on children's PM performance. Across both age groups children with better working memory scores needed less time to react to the PM cue. Our data support the notion that children rely on spontaneous processes to notice the PM cue, followed by a resource intensive search for the intended action.
Carolina L Haass-Koffler; Rachel D Souza; James P Wilmott; Elizabeth R Aston; Joo-Hyun Song
In: Alcohol and Alcoholism, 56 (1), pp. 47–56, 2021.
Aims: Previous studies have shown that there may be an underlying mechanism that is common to the co-use of alcohol and tobacco, and it has been shown that treatment for alcohol use disorder can increase rates of smoking cessation. The primary aim of this study was to assess a novel methodological approach to test a simultaneous behavioral alcohol-smoking cue reactivity (CR) paradigm in people who drink alcohol and smoke cigarettes. Methods: This was a human laboratory study that utilized a novel laboratory procedure with individuals who drink heavily (≥15 drinks/week for men; ≥8 drinks/week for women) and smoke (>5 cigarettes/day). Participants completed a CR session in a bar laboratory and an eye-tracking (ET) session using their preferred alcohol beverage, cigarette brand and water. Results: In both the CR and ET sessions, there was a difference in time spent interacting with alcohol and cigarettes as compared to water (P's < 0.001), but no difference in time spent interacting between alcohol and cigarettes (P > 0.05). In the CR sessions, craving for cigarettes was significantly greater than craving for alcohol (P < 0.001); however, only time spent with alcohol, but not with cigarettes, was correlated with craving for both alcohol and cigarettes (P < 0.05). Conclusion: This study showed that it is feasible to use simultaneous cues during a CR procedure in a bar laboratory paradigm. The attention bias measured in the integrated alcohol-cigarettes ET procedure predicted participants' decision making in the CR. This novel methodological approach revealed that in people who drink heavily and smoke, alcohol cues may affect craving for both alcohol and cigarettes.
Josephine M Groot; Nya M Boayue; Gábor Csifcsák; Wouter Boekel; René Huster; Birte U Forstmann; Matthias Mittner
In: NeuroImage, 224 , pp. 1–10, 2021.
Mind wandering reflects the shift in attentional focus from task-related cognition driven by external stimuli toward self-generated and internally-oriented thought processes. Although such task-unrelated thoughts (TUTs) are pervasive and detrimental to task performance, their underlying neural mechanisms are only modestly understood. To investigate TUTs with high spatial and temporal precision, we simultaneously measured fMRI, EEG, and pupillometry in healthy adults while they performed a sustained attention task with experience sampling probes. Features of interest were extracted from each modality at the single-trial level and fed to a support vector machine that was trained on the probe responses. Compared to task-focused attention, the neural signature of TUTs was characterized by weaker activity in the default mode network but elevated activity in its anticorrelated network, stronger functional coupling between these networks, widespread increase in alpha, theta, delta, but not beta, frequency power, predominantly reduced amplitudes of late, but not early, event-related potentials, and larger baseline pupil size. In particular, information contained in dynamic interactions between large-scale cortical networks was predictive of transient changes in attentional focus above other modalities. Together, our results provide insight into the spatiotemporal dynamics of TUTs and the neural markers that may facilitate their detection.
Miguel Garcia Garcia; Katharina Rifai; Siegfried Wahl; Tamara Watson
In: Vision Research, 179 , pp. 75–84, 2021.
Progressive addition lenses introduce distortions in the peripheral visual field that alter both form and motion perception. Here we seek to understand how our peripheral visual field adapts to complex distortions. The adaptation was induced across the visual field by geometrically skewed image sequences, and aftereffects were measured via changes in perception of the double-drift illusion. The double-drift or curveball stimulus contains both local and object motion. Therefore, the aftereffects induced by geometrical distortions might be indicative of how this adaptation interacts with the local and object motion signals. In the absence of the local motion components, the adaptation to skewness modified the perceived trajectory of object motion in the opposite direction of the adaptation stimulus skew. This effect demonstrates that the environment can also tune perceived object trajectories. Testing with the full double-drift stimulus, adaptation to a skew in the opposite direction to the local motion component induced a change in perception, reducing the illusion magnitude when the stimulus was presented on the right side of the screen (the shift was not statistically significant when stimuli were on the left side). However, adaptation to the other orientation resulted in no change in the strength of the double-drift illusion (for both stimulus locations). Thus, it seems that the adaptor's orientation and the motion statistics of the stimulus jointly define the perception of the measured aftereffect. In conclusion, not only size, contrast or drifting speed affects the double-drift illusion, but also adaptation to image distortions.
Marcos Domic-Siede; Martín Irani; Joaquín Valdés; Marcela Perrone-Bertolotti; Tomás Ossandón
In: NeuroImage, 226 , pp. 1–19, 2021.
Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task, that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FMθ) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-Cingulate Cortex (MCC) and the left Anterior Cingulate Cortex (ACC) was correlated with an increase in the time needed for elaborating plans. On the other hand, left Frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect the high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans.
Minke J de Boer; Deniz Başkent; Frans W Cornelissen
In: Multisensory Research, 34 , pp. 17–47, 2021.
The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
Jonathan Daume; Peng Wang; Alexander Maye; Dan Zhang; Andreas K Engel
In: NeuroImage, 224 , pp. 1–17, 2021.
The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context.
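Inter-trial phase consistency, the key measure in this abstract, is the length of the mean resultant vector of per-trial phases at a given frequency: perfectly aligned phases give 1, uniformly scattered phases approach 0. A minimal sketch on synthetic phase values (not the study's MEG data):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase consistency: length of the mean resultant vector
    of unit phase vectors. `phases` is an array of per-trial phase angles
    in radians; the result lies in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(1)
aligned = rng.normal(loc=0.0, scale=0.3, size=500)  # tightly clustered phases
uniform = rng.uniform(-np.pi, np.pi, size=500)      # no phase alignment
print(round(itpc(aligned), 2))  # close to 1
print(round(itpc(uniform), 2))  # close to 0
```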
Frederic R Danion; James Mathew; Niels Gouirand; Eli Brenner
In: Cortex, 134 , pp. 30–42, 2021.
When tracking targets moving in various directions with one's eyes, horizontal components of pursuit are more precise than vertical ones. Is this because horizontal target motion is predicted better or because horizontal movements of the eyes are controlled more precisely? When tracking a visual target with the hand, the eyes also track the target. We investigated whether the directional asymmetries that have been found during isolated eye movements are also present during such manual tracking, and if so, whether individual participants' asymmetry in eye movements is accompanied by a similar asymmetry in hand movements. We examined the data of 62 participants who used a joystick to track a visual target with a cursor. The target followed a smooth but unpredictable trajectory in two dimensions. Both the mean gaze-target distance and the mean cursor-target distance were about 20% larger in the vertical direction than in the horizontal direction. Gaze and cursor both followed the target with a slightly longer delay in the vertical than in the horizontal direction, irrespective of the target's trajectory. The delays of gaze and cursor were correlated, as were their errors in tracking the target. Gaze clearly followed the target rather than the cursor, so the asymmetry in both eye and hand movements presumably results from better predictions of the target's horizontal than of its vertical motion. Altogether this study speaks for the presence of anisotropic predictive processes that are shared across effectors.
Vassilis Cutsuridis; Shouyong Jiang; Matt J Dunn; Anne Rosser; James Brawn; Jonathan T Erichsen
In: Chaos, 31 , pp. 1–13, 2021.
Huntington's disease (HD), a genetically determined neurodegenerative disease, is positively correlated with eye movement abnormalities in decision making. The antisaccade conflict paradigm has been widely used to study response inhibition in eye movements, and reliable performance deficits have been observed in HD subjects, including an increased number of direction errors and altered error timing. We recorded the error rates and response latencies of early HD patients and healthy age-matched controls performing the mirror antisaccade task. HD participants displayed slower and more variable antisaccade latencies and increased error rates relative to healthy controls. A competitive accumulator-to-threshold neural model was then employed to quantitatively simulate the controls' and patients' reaction latencies and error rates and uncover the mechanisms giving rise to the observed HD antisaccade deficits. Our simulations showed: 1) a more gradual and noisy rate of accumulation of evidence by HD patients is responsible for the observed prolonged and more variable antisaccade latencies in early HD; 2) the confidence level of early HD patients making a decision is unaffected by the disease; and 3) the antisaccade performance of healthy controls and early HD patients is the end product of a neural lateral competition (inhibition) between a correct and an erroneous decision process, and not the end product of a third top-down stop signal suppressing the erroneous decision process as many have speculated.
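A competitive accumulator-to-threshold model of the kind described here can be sketched as two noisy evidence accumulators with mutual lateral inhibition racing to a bound; lowering and noising the accumulation rate then produces slower, more variable latencies and more errors. All parameter values below are illustrative, not the paper's fitted parameters:

```python
import numpy as np

def race_trial(rate_correct, rate_error, noise, inhibition=0.2,
               threshold=1.0, dt=0.001, max_t=2.0, rng=None):
    """One trial of two accumulators with mutual (lateral) inhibition.
    Returns (response, latency): response 0 = correct antisaccade,
    1 = direction error. Parameters are illustrative."""
    rng = rng or np.random.default_rng()
    a = np.zeros(2)
    rates = np.array([rate_correct, rate_error])
    t = 0.0
    while t < max_t:
        # Each accumulator's growth is suppressed by its competitor's level
        drift = rates - inhibition * a[::-1]
        a += drift * dt + noise * np.sqrt(dt) * rng.normal(size=2)
        a = np.maximum(a, 0.0)  # activity cannot go negative
        if a.max() >= threshold:
            return int(a.argmax()), t
        t += dt
    return int(a.argmax()), max_t

rng = np.random.default_rng(2)
trials = [race_trial(1.2, 0.9, 0.4, rng=rng) for _ in range(200)]
errors = sum(r for r, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(errors, mean_rt)
```

Reducing `rate_correct` and raising `noise` in this sketch shifts the simulated latency distribution rightward and inflates the error rate, which is the qualitative pattern the model uses to account for the HD data.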
Francesca Ciardo; Jacopo De Angelis; Barbara F M Marino; Rossana Actis-Grosso; Paola Ricciardelli
In: Acta Psychologica, 212 , pp. 1–14, 2021.
In the present study, we examine how person categorization conveyed by the combination of multiple cues modulates joint attention. In three experiments, we tested the combinatory effect of age, sex, and social status on gaze-following behaviour and pro-social attitudes. In Experiments 1 and 2, young adults were required to perform an instructed saccade towards left or right targets while viewing a to-be-ignored distracting face (female or male) gazing left or right, that could belong to a young, middle-aged, or elderly adult of high or low social status. Social status was manipulated by semantic knowledge (Experiment 1) or through visual appearance (Experiment 2). Results showed a clear combinatory effect of person perception cues on joint attention (JA). Specifically, our results showed that age and sex cues interacted with social status information depending on the modality through which it was conveyed. In Experiment 3, we further investigated our results by testing whether the identities used in Experiments 1 and 2 triggered different pro-social behaviour. The results of Experiment 3 showed that the identities resulting as more distracting in Experiments 1 and 2 were also perceived as more in need and prompted helping behaviour. Taken together, our evidence shows a combinatorial effect of age, sex, and social status in modulating gaze-following behaviour, highlighting a complex and dynamic interplay between person categorization and joint attention.
Amarender R Bogadhi; Leor N Katz; Anil Bollimunta; David A Leopold; Richard J Krauzlis
The evolution of the primate brain is marked by a dramatic increase in the number of neocortical areas that process visual information [1]. This cortical expansion supports two hallmarks of high-level primate vision - the ability to selectively attend to particular visual features [2] and the ability to recognize a seemingly limitless number of complex visual objects [3]. Given their prominent roles in high-level vision for primates, it is commonly assumed that these cortical processes supersede the earlier versions of these functions accomplished by the evolutionarily older brain structures that lie beneath the cortex. Contrary to this view, here we show that the superior colliculus (SC), a midbrain structure conserved across all vertebrates [4], is necessary for the normal expression of attention-related modulation and object selectivity in a newly identified region of macaque temporal cortex. Using a combination of psychophysics, causal perturbations and fMRI, we identified a localized region in the temporal cortex that is functionally dependent on the SC. Targeted electrophysiological recordings in this cortical region revealed neurons with strong attention-related modulation that was markedly reduced during attention deficits caused by SC inactivation. Many of these neurons also exhibited selectivity for particular visual objects, and this selectivity was also reduced during SC inactivation. Thus, the SC exerts a causal influence on high-level visual processing in cortex at a surprisingly late stage where attention and object selectivity converge, perhaps determined by the elemental forms of perceptual processing the SC has supported since before there was a neocortex.
Minke J de Boer; Tim Jürgens; Frans W Cornelissen; Deniz Başkent
In: Vision Research, 180 , pp. 51–62, 2021.
Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss only, we degraded the visual and auditory information in audiovisual video-recordings, and presented these to a group of healthy young volunteers. These degradations were intended to approximate some aspects of vision and hearing impairment in simulation. Other aspects, related to advanced age, potential health issues, but also long-term adaptation and cognitive compensation strategies, were not included in the simulations. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual and auditory information are presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts both recognition performance and viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little (additional) effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitates performance, even though adding audio does not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior to degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.
Judith Bek; Emma Gowen; Stefan Vogt; Trevor J Crawford; Ellen Poliakoff
In: Neuropsychologia, 150 , pp. 1–11, 2021.
Action observation and imitation have been found to influence movement in people with Parkinson's disease (PD), but simple visual stimuli can also guide their movement. To investigate whether action observation may provide a more effective stimulus than other visual cues, the present study examined the effects of observing human pointing movements and simple visual stimuli on hand kinematics and eye movements in people with mild to moderate PD and age-matched controls. In Experiment 1, participants observed videos of movement sequences between horizontal positions, depicted by a simple cue with or without a moving human hand, then imitated the sequence either without further visual input (consecutive task) or while watching the video again (concurrent task). Modulation of movement duration, in accordance with changes in the observed stimulus, increased when the simple cue was accompanied by the hand and in the concurrent task, whereas modulation of horizontal amplitude was greater with the simple cue alone and in the consecutive task. Experiment 2 compared imitation of kinematically-matched dynamic biological (human hand) and non-biological (shape) stimuli, which moved with a high or low vertical trajectory. Both groups exhibited greater modulation for the hand than the shape, and differences in eye movements suggested closer tracking of the hand. Despite producing slower and smaller movements overall, the PD group showed a similar pattern of imitation to controls across tasks and conditions. The findings demonstrate that observing human action influences aspects of movement such as duration or trajectory more strongly than non-biological stimuli, particularly during concurrent imitation.
Michael J Armson; Nicholas B Diamond; Laryssa Levesque; Jennifer D Ryan; Brian Levine
In: Cognition, 206 , pp. 1–8, 2021.
There are marked individual differences in the recollection of personal past events or autobiographical memory (AM). Theory concerning the relationship between mnemonic and visual systems suggests that eye movements promote retrieval of spatiotemporal details from memory, yet assessment of this prediction within naturalistic AM has been limited. We examined the relationship of eye movements to free recall of naturalistic AM and how this relationship is modulated by individual differences in AM capacity. Participants freely recalled past episodes while viewing a blank screen under free and fixed viewing conditions. Memory performance was quantified with the Autobiographical Interview, which separates internal (episodic) and external (non-episodic) details. In Study 1, as a proof of concept, fixation rate was predictive of the number of internal (but not external) details recalled across both free and fixed viewing. In Study 2, using an experimenter-controlled staged event (a museum-style tour) the effect of fixations on free recall of internal (but not external) details was again observed. In this second study, however, the fixation-recall relationship was modulated by individual differences in autobiographical memory, such that the coupling between fixations and internal details was greater for those endorsing higher than lower episodic AM. These results suggest that those with congenitally strong AM rely on the visual system to produce episodic details, whereas those with lower AM retrieve such details via other mechanisms.
Hanna Brinkmann; Louis Williams; Raphael Rosenberg; Eugene McSorley
In: Art and Perception, 8 (1), pp. 27–48, 2020.
Throughout the 20th century, there have been many different forms of abstract painting. While works by some artists, e.g., Piet Mondrian, are usually described as static, others are described as dynamic, such as Jackson Pollock's 'action paintings'. Art historians have assumed that beholders not only conceptualise such differences in depicted dynamics but also mirror these in their viewing behaviour. In an interdisciplinary eye-tracking study, we tested this concept through investigating both the localisation of fixations (polyfocal viewing) and the average duration of fixations as well as saccade velocity, duration and path curvature. We showed 30 different abstract paintings to 40 participants - 20 laypeople and 20 experts (art students) - and used self-reporting to investigate the perceived dynamism of each painting and its relationship with (a) the average number and duration of fixations, (b) the average number, duration and velocity of saccades as well as the amplitude and curvature area of saccade paths, and (c) pleasantness and familiarity ratings. We found that the average number of fixations and saccades, saccade velocity, and pleasantness ratings increased with an increase in perceived dynamism ratings. Meanwhile, saccade duration decreased as perceived dynamism increased. Additionally, the analysis showed that experts gave higher dynamism ratings compared to laypeople and were more familiar with the artworks. These results indicate that there is a correlation between perceived dynamism in abstract painting and viewing behaviour - something that has long been assumed by art historians but had never been empirically supported.
Jacob A Westerberg; Alexander Maier; Geoffrey F Woodman; Jeffrey D Schall
Performance monitoring during visual priming Journal Article
In: Journal of Cognitive Neuroscience, 32 (3), pp. 515–526, 2020.
Repetitive performance of single-feature (efficient or popout) visual search improves RTs and accuracy. This phenomenon, known as priming of pop-out, has been demonstrated in both humans and macaque monkeys. We investigated the relationship between performance monitoring and priming of pop-out. Neuronal activity in the supplementary eye field (SEF) contributes to performance monitoring and to the generation of performance monitoring signals in the EEG. To determine whether priming depends on performance monitoring, we investigated spiking activity in SEF as well as the concurrent EEG of two monkeys performing a priming of pop-out task. We found that SEF spiking did not modulate with priming. Surprisingly, concurrent EEG did covary with priming. Together, these results suggest that performance monitoring contributes to priming of pop-out. However, this performance monitoring seems not to be mediated by SEF. This dissociation suggests that EEG indices of performance monitoring arise from multiple, functionally distinct neural generators.
Seema Gorur Prasad; Ramesh Kumar Mishra
Reward influences masked free-choice priming Journal Article
In: Frontiers in Psychology, 11 , pp. 1–15, 2020.
While it is known that reward induces attentional prioritization, it is not clear what effect reward-learning has when associated with stimuli that are not fully perceived. The masked priming paradigm has been extensively used to investigate the indirect impact of brief stimuli on response behavior. Interestingly, the effect of masked primes is observed even when participants choose their responses freely. While classical theories assume this process to be automatic, recent studies have provided evidence for attentional modulations of masked priming effects. Most such studies have manipulated bottom-up or top-down modes of attentional selection, but the role of “newer” forms of attentional control such as reward-learning and selection history remains unclear. In two experiments, with number and arrow primes, we examined whether reward-mediated attentional selection modulates masked priming when responses are chosen freely. In both experiments, we observed that primes associated with high-reward lead to enhanced free-choice priming compared to primes associated with no-reward. The effect was seen on both proportion of choices and response times, and was more evident in the faster responses. In the slower responses, the effect was diminished. Our study adds to the growing literature showing the susceptibility of masked priming to factors related to attention and executive control.
Sabrina E Twilhaar; Artem V Belopolsky; Jorrit F de Kieviet; Ruurd M van Elburg; Jaap Oosterlaan
In: Child Development, 91 (4), pp. 1272–1283, 2020.
Very preterm birth is associated with attention deficits that interfere with academic performance. A better understanding of attention processes is necessary to support very preterm born children. This study examined voluntary and involuntary attentional control in very preterm born adolescents by measuring saccadic eye movements. Additionally, these control processes were related to symptoms of inattention, intelligence, and academic performance. Participants included 47 very preterm and 61 full-term born 13-year-old adolescents. Oculomotor control was assessed using the antisaccade and oculomotor capture paradigms. Very preterm born adolescents showed deficits in antisaccade but not in oculomotor capture performance, indicating impairments in voluntary but not involuntary attentional control. These impairments mediated the relation between very preterm birth and inattention, intelligence, and academic performance.
Aave Hannus; Harold Bekkering; Frans W Cornelissen
In: Attention, Perception, and Psychophysics, 82 (1), pp. 140–152, 2020.
Visual search often requires combining information on distinct visual features such as color and orientation, but how the visual system does this is not fully understood. To better understand this, we showed observers a brief preview of part of a search stimulus—either its color or orientation—before they performed a conjunction search task. Our experimental questions were (1) whether observers would use such previews to prioritize either potential target locations or features, and (2) which neural mechanisms might underlie the observed effects. In two experiments, participants searched for a prespecified target in a display consisting of bar elements, each combining one of two possible colors and one of two possible orientations. Participants responded by making an eye movement to the selected bar. In our first experiment, we found that a preview consisting of colored bars with identical orientation improved saccadic target selection performance, while a preview of oriented gray bars substantially decreased performance. In a follow-up experiment, we found that previews consisting of discs of the same color as the bars (and thus without orientation information) hardly affected performance. Thus, performance improved only when the preview combined color and (noninformative) orientation information. Previews apparently result in a prioritization of features and conjunctions rather than of spatial locations (in the latter case, all previews should have had similar effects). Our results thus also indicate that search for, and prioritization of, combinations involve conjunctively tuned neural mechanisms. These probably reside at the level of the primary visual cortex.
Francesca Capozzi; Lauren J Human; Jelena Ristic
Attention promotes accurate impression formation Journal Article
In: Journal of Personality, 88 (3), pp. 1–11, 2020.
Objective: An ability to form accurate impressions of others is vital for adaptive social behavior in humans. Here, we examined if attending to persons more is associated with greater accuracy in personality impressions. Method: We asked 42 observers (36 females; mean age = 21 years, age range = 18–28; expected power = 0.96) to form personality impressions of unacquainted individuals (i.e., targets) from video interviews while their attentional behavior was assessed using eye tracking. We examined whether (a) attending more to targets benefited accuracy, (b) attending to specific body parts (e.g., face vs. body) drove this association, and (c) targets' ease of personality readability modulated these effects. Results: Paying more attention to a target was associated with forming more accurate personality impressions. Attention to the whole person contributed to this effect, with this association occurring independently of targets' ease of readability. Conclusions: These findings show that attending more to a person is associated with increased accuracy and thus suggest that attention promotes social adaptation by supporting accurate social perception.
Maxi Becker; Tobias Sommer; Simone Kühn
In: Human Brain Mapping, 41 (1), pp. 30–45, 2020.
In insight problem solving, solutions with AHA! experience have been assumed to be the consequence of restructuring of a problem, which usually takes place shortly before the solution. However, evidence from priming studies suggests that solutions with AHA! are not spontaneously generated during the solution process but already relate to prior subliminal processing. We test this hypothesis by conducting an fMRI study using a modified compound remote associates paradigm which incorporates semantic priming. We observe stronger brain activity in bilateral anterior insulae already shortly after trial onset in problems that were later solved with than without AHA!. This early activity was independent of semantic priming but may be related to other lexical properties of attended words helping to reduce the number of solutions to look for. In contrast, there was more brain activity in bilateral anterior insulae during solutions that were solved without than with AHA!. This timing (after trial start/during solution) × solution experience (with/without AHA!) interaction was significant. The results suggest that (a) solutions accompanied with AHA! relate to early solution-relevant processing and (b) both solution experiences differ in the timing when solution-relevant processing takes place. In this context, we discuss the potential role of the anterior insula as part of the salience network involved in problem solving by allocating attentional resources.
Quan Wang; Carla A Wall; Erin C Barney; Jessica L Bradshaw; Suzanne L Macari; Katarzyna Chawarska; Frederick Shic
In: Autism Research, 13 (1), pp. 61–73, 2020.
Young children with autism spectrum disorder (ASD) look less toward faces compared to their non-ASD peers, limiting access to social learning. Currently, no technologies directly target these core social attention difficulties. This study examines the feasibility of automated gaze modification training for improving attention to faces in 3-year-olds with ASD. Using free-viewing data from typically developing (TD) controls (n = 41), we implemented gaze-contingent adaptive cueing to redirect children with ASD toward normative looking patterns during viewing of videos of an actress. Children with ASD were randomly assigned to either (a) an adaptive Cue condition (Cue
Taylor R Hayes; John M Henderson
In: Attention, Perception, and Psychophysics, 82 (3), pp. 985–994, 2020.
How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is 'pulled' to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743–747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R2) between scene fixation density and each image saliency model's center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.
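The squared-correlation comparison described in this abstract can be sketched on synthetic maps. Below, the fixation-density map, center-bias map, and "saliency model" are all hypothetical stand-ins (a Gaussian blob plus noise), constructed so that the bias component carries most of the shared variance, analogous to the pattern the study reports:

```python
import numpy as np

def r_squared(map_a, map_b):
    """Squared Pearson correlation between two flattened spatial maps."""
    r = np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]
    return r ** 2

h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
# Scene-independent center bias: a Gaussian blob at the image center
center_bias = np.exp(-(((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
                       / (2 * 8.0 ** 2)))
rng = np.random.default_rng(3)
feature_contrast = rng.random((h, w))  # stand-in for low-level feature maps
# Synthetic fixation density dominated by center bias
fixation_density = 0.8 * center_bias + 0.2 * rng.random((h, w))
# A "full saliency model" mixing its center bias with feature contrast
saliency_model = 0.5 * center_bias + 0.5 * feature_contrast

print(r_squared(fixation_density, center_bias) >
      r_squared(fixation_density, saliency_model))  # bias alone wins here
```

With these weights, diluting the center bias with uncorrelated feature contrast lowers the model's R² against fixation density, mirroring the abstract's point that the models' explanatory power can rest largely on their spatial biases.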
Guillaume Doucet; Roberto A Gulli; Benjamin W Corrigan; Lyndon R Duong; Julio C Martinez-Trujillo
In: Hippocampus, 30 (3), pp. 192–209, 2020.
Primates use saccades to gather information about objects and their relative spatial arrangement, a process essential for visual perception and memory. It has been proposed that signals linked to saccades reset the phase of local field potential (LFP) oscillations in the hippocampus, providing a temporal window for visual signals to activate neurons in this region and influence memory formation. We investigated this issue by measuring hippocampal LFPs and spikes in two macaques performing different tasks with unconstrained eye movements. We found that LFP phase clustering (PC) in the alpha/beta (8–16 Hz) frequencies followed foveation onsets, while PC in frequencies lower than 8 Hz followed spontaneous saccades, even on a homogeneous background. Saccades to a solid grey background were not followed by increases in local neuronal firing, whereas saccades toward appearing visual stimuli were. Finally, saccade parameters correlated with LFP phase and amplitude: saccade direction correlated with delta (≤4 Hz) phase, and saccade amplitude with theta (4–8 Hz) power. Our results suggest that signals linked to saccades reach the hippocampus, producing synchronization of delta/theta LFPs without a general activation of local neurons. Moreover, some visual inputs co-occurring with saccades produce LFP synchronization in the alpha/beta bands and elevated neuronal firing. Our findings support the hypothesis that saccade-related signals enact sensory input-dependent plasticity and therefore memory formation in the primate hippocampus.
Jaana Simola; Jarmo Kuisma; Johanna K Kaakinen
In: Journal of Business Research, 111 , pp. 249–261, 2020.
We examined the effectiveness of direct and indirect advertising. Direct ads openly depict advertised products and brands. In indirect ads, the ad message requires elaboration. Eye movements were recorded while consumers viewed direct and indirect advertisements under fixed (5 s) or unlimited exposure time. Recognition of ads, brand logos and preference for brands were tested under two different delays (after 24 h or 45 min) from the ad exposure. The total viewing time was longer for the indirect ads when exposure time was unlimited. Overall, ad pictorials received more fixations and the brand preference was higher in the indirect condition. Recognition improved for brand logos of indirect ads when tested after the shorter delay. Consumers experienced indirect ads as more original, surprising, intellectually challenging and harder to interpret than direct ads. Current results indicate that indirect ads elicit cognitive elaboration that translates into higher preference and memorability for brands.
Francesca Beilharz; Andrea Phillipou; David J Castle; Susan L Rossell
Saccadic eye movements in body dysmorphic disorder Journal Article
In: Journal of Obsessive-Compulsive and Related Disorders, 25 , pp. 1–6, 2020.
Body dysmorphic disorder (BDD) is characterised by a preoccupation with perceived flaws in appearance, which significantly disrupts functioning and causes distress. The difference in self-perception characteristic of BDD has been related to a bias in visual processing across a variety of stimuli and tasks. However, it is unknown how BDD participants perform on basic saccade tasks using eye tracking. Eighteen BDD and 21 healthy control participants completed a battery of saccadic eye movement tasks (fixation, prosaccade, anti-saccade, and memory guided). No significant differences were noted between the groups regarding behavioural performance or patterns of eye movements; however, there was a trend for BDD participants to make increased anticipatory errors on the prosaccade task. Overall, BDD participants demonstrated largely intact saccadic eye movement characteristics which may differentiate BDD from other obsessive-compulsive related disorders, although future research using larger samples is required. It is consequently argued that abnormalities in visual processing apparent among people with BDD may reflect abnormalities in higher-order visual systems.
Ming Ray Liao; Brian A Anderson
Reward learning biases the direction of saccades Journal Article
In: Cognition, 196 , pp. 1–9, 2020.
The role of associative reward learning in guiding feature-based attention and spatial attention is well established. However, no studies have looked at the extent to which reward learning can modulate the direction of saccades during visual search. Here, we introduced a novel reward learning paradigm to examine whether reward-associated directions of eye movements can modulate performance in different visual search tasks. Participants had to fixate a peripheral target before fixating one of four disks that subsequently appeared in each cardinal position. This was followed by reward feedback contingent upon the direction chosen, where one direction consistently yielded a high reward. Thus, reward was tied to the direction of saccades rather than the absolute location of the stimulus fixated. Participants selected the target in the high-value direction on the majority of trials, demonstrating robust learning of the task contingencies. In an untimed visual foraging task that followed, which was performed in extinction, initial saccades were reliably biased in the previously reward-associated direction. In a second experiment, following the same training procedure, eye movements in the previously high-value direction were facilitated in a saccade-to-target task. Our findings suggest that rewarding directional eye movements biases oculomotor search patterns in a manner that is robust to extinction and generalizes across stimuli and task.
Xiao Yang Sui; Hong Zhi Liu; Li Lin Rao
In: Cognition, 195 , pp. 1–11, 2020.
Risky decisions are ubiquitous in daily life and are central to human behavior, but little attention has been devoted to exploring whether risky choice can be influenced by gaze direction. In the current study, we used gaze-contingent manipulation to manipulate an individual's gaze while he/she decided between two risky options, and we examined whether risky decisions could be biased toward a randomly determined target. We found that participants' risky choices were biased toward a randomly determined target when they were manipulated to gaze longer at the target option (Study 1
Muxuan Lyu; Kyoung Whan Choe; Omid Kardan; Hiroki P Kotabe; John M Henderson; Marc G Berman
In: Journal of Vision, 20 (9), pp. 1–17, 2020.
Computer vision-based research has shown that scene semantics (e.g., presence of meaningful objects in a scene) can predict memorability of scene images. Here, we investigated whether and to what extent overt attentional correlates, such as fixation map consistency (also called inter-observer congruency of fixation maps) and fixation counts, mediate the relationship between scene semantics and scene memorability. First, we confirmed that the higher the fixation map consistency of a scene, the higher its memorability. Moreover, both fixation map consistency and its correlation to scene memorability were the highest in the first 2 seconds of viewing, suggesting that meaningful scene features that contribute to producing more consistent fixation maps early in viewing, such as faces and humans, may also be important for scene encoding. Second, we found that the relationship between scene semantics and scene memorability was partially (but not fully) mediated by fixation map consistency and fixation counts, separately as well as together. Third, we found that fixation map consistency, fixation counts, and scene semantics significantly and additively contributed to scene memorability. Together, these results suggest that eye-tracking measurements can complement computer vision-based algorithms and improve overall scene memorability prediction.
Timo Kootstra; Jonas Teuwen; Jeroen Goudsmit; Tanja Nijboer; Michael Dodd; Stefan Van der Stigchel
In: Journal of Vision, 20 (9), pp. 1–15, 2020.
Since the seminal work of Yarbus, multiple studies have demonstrated the influence of task set and current cognitive state on oculomotor behavior. In more recent years, this field of research has expanded by evaluating the costs of abruptly switching between such different tasks. At the same time, the field of classifying oculomotor behavior has been moving toward more advanced, data-driven methods of decoding data. For the current study, we used a large dataset compiled over multiple experiments and implemented separate state-of-the-art machine learning methods for decoding both cognitive state and task-switching. We found that, by extracting a wide range of oculomotor features, we were able to implement robust classifier models for decoding both cognitive state and task-switching. Our decoding performance highlights the feasibility of this approach, even independent of image statistics. Additionally, we present a feature ranking for both models, indicating the relative magnitude of different oculomotor features for both classifiers. These rankings indicate a distinct set of important predictors for each of the two decoding problems. Finally, we discuss the implications of the current approach related to interpreting the decoding results.
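The decoding approach this abstract summarizes, classifying cognitive state from extracted oculomotor features, can be illustrated with a deliberately simple stand-in. The nearest-centroid classifier, the two features, and the task labels below are assumptions for illustration only; the study used more advanced, data-driven machine-learning models:

```python
# Toy nearest-centroid decoder over hand-picked oculomotor features:
# [mean fixation duration (ms), mean saccade amplitude (deg)].
# All values and labels are hypothetical.

def centroid(rows):
    """Mean feature vector of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_predict(train, sample):
    """train: {task_label: [feature vectors]}; returns the closest task label."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cents = {label: centroid(rows) for label, rows in train.items()}
    return min(cents, key=lambda label: dist2(cents[label], sample))

train = {
    "search":   [[180.0, 6.0], [200.0, 5.5]],  # short fixations, large saccades
    "memorize": [[300.0, 3.0], [320.0, 2.5]],  # long fixations, small saccades
}
print(nearest_centroid_predict(train, [190.0, 5.8]))  # -> "search"
```

The study's feature-ranking result corresponds, in this simplified picture, to asking which feature dimensions most separate the class centroids.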
Chou P Hung; Chloe Callahan-Flintoft; Paul D Fedele; Kim F Fluitt; Onyekachi Odoemene; Anthony J Walker; Andre V Harrison; Barry D Vaughan; Matthew S Jaswa; Min Wei
In: Journal of Vision, 20 (7), pp. 1–16, 2020.
When scanning across a scene, luminance can vary by up to 100,000-to-1 (high dynamic range, HDR), requiring multiple normalizing mechanisms spanning from the retina to the cortex to support visual acuity and recognition. Vision models based on standard dynamic range (SDR) luminance contrast ratios below 100-to-1 have limited ability to generalize to real-world scenes with HDR luminance. To characterize how orientation and luminance are linked in brain mechanisms for luminance normalization, we measured orientation discrimination of Gabor targets under HDR luminance dynamics. We report a novel phenomenon, that abrupt 10- to 100-fold darkening engages contextual facilitation, distorting the apparent orientation of a high-contrast central target. Surprisingly, facilitation was influenced by grouping by luminance similarity, as well as by the degree of luminance variability in the surround. These results challenge vision models based solely on activity normalization and raise new questions that will lead to models that perform better in real-world scenes.
Shiva Kamkar; Hamid Abrishami Moghaddam; Reza Lashgari; Lauri Oksama; Jie Li; Jukka Hyönä
In: Journal of Vision, 20 (12), pp. 1–15, 2020.
Occlusion is one of the main challenges in tracking multiple moving objects. In almost all real-world scenarios, a moving object or a stationary obstacle occludes targets partially or completely for a short or long time during their movement. A previous study (Zelinsky & Todor, 2010) reported that subjects make timely saccades toward the object in danger of being occluded. Observers make these so-called "rescue saccades" to prevent target swapping. In this study, we examined whether these saccades are helpful. To this aim, we used as the stimuli recorded videos from natural movement of zebrafish larvae swimming freely in a circular container. We considered two main types of occlusion: object-object occlusions that naturally exist in the videos, and object-occluder occlusions created by adding a stationary doughnut-shape occluder in some videos. Four different scenarios were studied: (1) no occlusions, (2) only object-object occlusions, (3) only object-occluder occlusion, or (4) both object-object and object-occluder occlusions. For each condition, two set sizes (two and four) were applied. Participants' eye movements were recorded during tracking, and rescue saccades were extracted afterward. The results showed that rescue saccades are helpful in handling object-object occlusions but had no reliable effect on tracking through object-occluder occlusions. The presence of occlusions generally increased visual sampling of the scenes; nevertheless, tracking accuracy declined due to occlusion.
Marcello Maniglia; Roshni Jogin; Kristina M Visscher; Aaron R Seitz
In: Journal of Vision, 20 (13), pp. 1–14, 2020.
Loss of central vision can be compensated for in part by increased use of peripheral vision. For example, patients with macular degeneration or those experiencing simulated central vision loss tend to develop eccentric viewing strategies for reading or other visual tasks. The factors driving this learning are still unclear and likely involve complex changes in oculomotor strategies that may differ among people and tasks. Although to date a number of studies have examined reliance on peripheral vision after simulated central vision loss, individual differences in developing peripheral viewing strategies and the extent to which they transfer to untrained tasks have received little attention. Here, we apply a recently published method of characterizing oculomotor strategies after central vision loss to understand the time course of changes in oculomotor strategies through training in 19 healthy individuals with a gaze-contingent display obstructing the central 10° of the visual field. After 10 days of training, we found mean improvements in saccadic re-referencing (the percentage of trials in which the first saccade placed the target outside the scotoma), latency of target acquisition (time interval between target presentation and a saccade putting the target outside the scotoma), and fixation stability. These results are consistent with participants developing compensatory oculomotor strategies as a result of training. However, we also observed substantial individual differences in the formation of eye movement strategies and the extent to which they transferred to an untrained task, likely reflecting both variations in learning rates and patterns of learning. This more complete characterization of peripheral looking strategies and how they change with training may help us understand individual differences in rehabilitation after central vision loss.
Raphael Vallat; Alain Nicolas; Perrine Ruby
In: Sleep, 43 (2), pp. 1–11, 2020.
Why do some individuals recall dreams every day while others hardly ever recall one? We hypothesized that sleep inertia—the transient period following awakening associated with brain and cognitive alterations—could be a key mechanism to explain interindividual differences in dream recall at awakening. To test this hypothesis, we measured the brain functional connectivity (combined electroencephalography–functional magnetic resonance imaging) and cognition (memory and mental calculation) of high dream recallers (HR
Jetro J Tuulari; Eeva Leena Kataja; Jukka M Leppänen; John D Lewis; Saara Nolvi; Tuomo Häikiö; Satu J Lehtola; Niloofar Hashempour; Jani Saunavaara; Noora M Scheinin; Riikka Korja; Linnea Karlsson; Hasse Karlsson
In: Developmental Cognitive Neuroscience, 45 , pp. 1–8, 2020.
After 5 months of age, infants begin to prioritize attention to fearful over other facial expressions. One key proposition is that the amygdala and the related early-maturing subcortical network are important for the emergence of this attentional bias – however, empirical data to support these assertions are lacking. In this prospective longitudinal study, we measured amygdala volumes from MR images in 65 healthy neonates at 2–5 weeks of gestation-corrected age and attention disengagement from fearful vs. non-fearful facial expressions at 8 months with eye tracking. Overall, infants were less likely to disengage from fearful than happy/neutral faces, demonstrating an age-typical bias for fear. Left, but not right, amygdala volume (corrected for intracranial volume) was positively associated with the likelihood of disengaging attention from fearful faces to a salient lateral distractor (r = .302
Nicolas Chevalier; Julie Anne Meaney; Hilary Joy Traut; Yuko Munakata
In: Developmental Cognitive Neuroscience, 46 , pp. 1–11, 2020.
Age-related progress in cognitive control reflects more frequent engagement of proactive control during childhood. As proactive preparation for an upcoming task is adaptive only when the task can be reliably predicted, progress in proactive control engagement may rely on more efficient use of contextual cue reliability. Developmental progress may also reflect increasing efficiency in how proactive control is engaged, making this control mode more advantageous with age. To address these possibilities, 6-year-olds, 9-year-olds, and adults completed three versions of a cued task-switching paradigm in which contextual cue reliability was manipulated. When contextual cues were reliable (but not unreliable or uninformative), all age groups showed greater pupil dilation and a more pronounced (pre)cue-locked posterior positivity associated with faster response times, suggesting adaptive engagement of proactive task selection. However, adults additionally showed a larger contingent negative variation (CNV) predicting a further reduction in response times with reliable cues, suggesting motor preparation in adults but not children. Thus, early developing use of contextual cue reliability promotes adaptiveness in proactive control engagement from early childhood; yet, less efficient motor preparation in children makes this control mode overall less advantageous in childhood than adulthood.
Darcy E Burgund
In: Visual Cognition, pp. 1–13, 2020.
Humans remember the faces of members of their own race more accurately than the faces of members of other races, in an effect known as the own-race bias. Previous studies indicate that patterns of eye fixations play an important role in this bias, but the exact nature of their influence on face memory is not clear. The present study examined the role of eye fixations on memory for racially East Asian, Black, and White faces in East Asian and White participants. Results revealed greater looking at the eyes of East Asian and White faces than the eyes of Black faces, and greater looking at the nose/mouth of Black faces than the nose/mouth of East Asian and White faces. In addition, longer time looking at the eyes of all faces predicted better memory for all faces, and longer time looking at the nose/mouth of Black faces predicted better memory for Black faces. These findings are best characterized by a model of face memory in which the eyes are critical for all faces, but certain features (e.g., nose/mouth) may be additionally important for certain race faces (e.g., Black faces).
Astar Lev; Yoram Braw; Tomer Elbaum; Michael Wagner; Yuri Rassovsky
In: Journal of Attention Disorders, pp. 1–11, 2020.
Objective: The use of continuous performance tests (CPTs) for assessing ADHD related cognitive impairment is ubiquitous. Novel psychophysiological measures may enhance the data that is derived from CPTs and thereby improve clinical decision-making regarding diagnosis and treatment. As part of the current study, we integrated an eye tracker with the MOXO-dCPT and assessed the utility of eye movement measures to differentiate ADHD patients and healthy controls. Method: Adult ADHD patients and gender/age-matched healthy controls performed the MOXO-dCPT while their eye movements were monitored (n = 33 per group). Results: ADHD patients spent significantly more time gazing at irrelevant regions, both on the screen and outside of it, than healthy controls. The eye movement measures showed adequate ability to classify ADHD patients. Moreover, a scale that combined eye movement measures enhanced group prediction, compared to the sole use of conventional MOXO-dCPT indices. Conclusions: Integrating an eye tracker with CPTs is a feasible way of enhancing diagnostic precision and shows initial promise for clarifying the cognitive profile of ADHD patients. Pending replication, these findings point toward a promising path for the evolution of existing CPTs.
Athina Manoli; Simon P Liversedge; Edmund J S Sonuga-Barke; Julie A Hadwin
In: Journal of Attention Disorders, pp. 1–12, 2020.
Objective: This study examined the synergistic effects of ADHD and anxiety symptoms on attention and inhibitory control depending on the emotional content of the stimuli. Method: Fifty-four typically developing individuals (27 children/adolescents and 27 adults) completed an eye-movement based emotional Go/No-Go task, using centrally presented (happy, angry) faces and neutral/symbolic stimuli. Sustained attention was measured through saccade latencies and saccadic omission errors (Go trials), and inhibitory control through saccadic commission errors (No-Go trials). ADHD and anxiety were assessed dimensionally. Results: Elevated ADHD symptoms were associated with more commission errors and slower saccade latencies for angry (vs. happy) faces. In contrast, angry faces were linked to faster saccade onsets when anxiety symptoms were high, and this effect prevailed when both anxiety and ADHD symptoms were high. Conclusion: Social threat impacted performance in individuals with sub-clinical anxiety and ADHD differently. The effects of anxiety on threat processing prevailed when both symptoms were high.
Tatiana Malevich; Elena Rybina; Elizaveta Ivtushok; Liubov Ardasheva; Joseph W MacInnes
In: Acta Psychologica, 208 , pp. 1–11, 2020.
Inhibition of return (IOR) represents a delay in responding to a previously inspected location and is viewed as a crucial mechanism that sways attention toward novelty in visual search. Although most visual processing occurs in retinotopic, eye-centered, coordinates, IOR must be coded in spatiotopic, environmental, coordinates to successfully serve its role as a foraging facilitator. Early studies supported this suggestion but recent results have shown that both spatiotopic and retinotopic reference frames of IOR may co-exist. The present study tested possible sources for IOR at the retinotopic location, including being part of the spatiotopic IOR gradient, being part of hemifield inhibition, and being an independent source of IOR. We conducted four experiments that varied the cue-target spatial distance (discrete and contiguous) and the response modality (manual and saccadic). In all experiments, we tested spatiotopic, retinotopic and neutral (neither spatiotopic nor retinotopic) locations. We did find IOR at both the retinotopic and spatiotopic locations but no evidence for an independent source of retinotopic IOR for either of the response modalities. In fact, we observed the spread of IOR across the entire validly cued hemifield, including at neutral locations. We conclude that these results indicate a strategy to inhibit the whole cued hemifield or suggest a large horizontal gradient around the spatiotopically cued location. Public significance statement: We perceive the visual world around us as stable despite constant shifts of the retinal image due to saccadic eye movements. In this study, we explore whether inhibition of return (IOR), a mechanism preventing us from returning to previously attended locations, operates in spatiotopic, world-centered or in retinal, eye-centered coordinates. We tested both saccadic and manual IOR at spatiotopic, retinotopic, and control locations.
We did not find an independent retinotopic source of IOR for either of the response modalities. The results suggest that IOR spreads over the whole previously attended visual hemifield or there is a large horizontal spatiotopic gradient. The current results are in line with the idea of IOR being a foraging facilitator in visual search and contribute to our understanding of spatiotopically organized aspects of visual and attentional systems.
Aleksandra Mitrovic; Lisa Mira Hegelmaier; Helmut Leder; Matthew Pelowski
In: Acta Psychologica, 209 , pp. 1–10, 2020.
Studies have routinely shown that individuals spend more time spontaneously looking at people or at mimetic scenes that they subsequently judge to be more aesthetically appealing. This “beauty demands longer looks” phenomenon is typically explained by biological relevance, personal utility, or other survival factors, with visual attraction often driven by structural features (symmetry, texture), which may signify fitness and to which most humans tend to respond similarly. However, what of objects that have less overtly adaptive relevance? Here, we consider whether people also look longer at abstract art with little associative/mimetic content that they subsequently rate for higher aesthetic appeal. We employed the “Visual aesthetic sensitivity test” (VAST), which consists of pairs of matched abstract designs with one example of each pair argued to be objectively ‘aesthetically better' in regards to low-level features, thus offering a potential contrast between ‘objective' (physical feature-based) and ‘subjective' (personal taste-based) assessments. Participants (29 women) first looked at image pairs without a specific task and then in three follow-up blocks indicated their preference within the pairs and rated the individual images for liking and for presumed ratings by an art expert. More preferred designs were looked at longer. However, longer looking only occurred in line with participants' subjective tastes. This suggests a general correlation of attention and visual beauty, which—in abstract art—may nonetheless be related to features that are not identified by experts as more generally appealing and thus may not directly map to other (more utility-related) stimuli types.
David Clewett; Camille Gasser; Lila Davachi
In: Nature Communications, 11 , pp. 1–14, 2020.
Everyday life unfolds continuously, yet we tend to remember past experiences as discrete event sequences or episodes. Although this phenomenon has been well documented, the neural mechanisms that support the transformation of continuous experience into distinct and memorable episodes remain unknown. Here, we show that changes in context, or event boundaries, elicit a burst of autonomic arousal, as indexed by pupil dilation. Event boundaries also lead to the segmentation of adjacent episodes in later memory, evidenced by changes in memory for the temporal duration, order, and perceptual details of recent event sequences. These subjective and objective changes in temporal memory are also related to distinct temporal features of pupil dilations to boundaries as well as to the temporal stability of more prolonged pupil-linked arousal states. Collectively, our findings suggest that pupil measures reflect both stability and change in ongoing mental context representations, which in turn shape the temporal structure of memory.
Tianlong Zu; John Hutson; Lester C Loschky; Sanjay N Rebello
In: Journal of Educational Psychology, 112 (7), pp. 1338–1352, 2020.
In a previous study, DeLeeuw and Mayer (2008) found support for the triarchic model of cognitive load (Sweller, Van Merrienboer, & Paas, 1998, 2019) by showing that three different metrics could be used to independently measure 3 hypothesized types of cognitive load: intrinsic, extraneous, and germane. However, 2 of the 3 metrics that the authors used were intrusive in nature because learning had to be stopped momentarily to complete the measures. The current study extends the design of DeLeeuw and Mayer (2008) by investigating whether learners' eye movement behavior can be used to measure the three proposed types of cognitive load without interrupting learning. During a 1-hr experiment, we presented a multimedia lesson explaining the mechanism of electric motors to participants who had low prior knowledge of this topic. First, we replicated the main results of DeLeeuw and Mayer (2008), providing further support for the triarchic structure of cognitive load. Second, we identified eye movement measures that differentiated the three types of cognitive load. These findings were independent of participants' working memory capacity. Together, these results provide further evidence for the triarchic nature of cognitive load (Sweller et al., 1998, 2019), and are a first step toward online measures of cognitive load that could potentially be implemented into computer assisted learning technologies.
Joshua Zonca; Giorgio Coricelli; Luca Polonio
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (2), pp. 257–279, 2020.
In our everyday life, we often need to anticipate the potential occurrence of events and their consequences. In this context, the way we represent contingencies can determine our ability to adapt to the environment. However, it is not clear how agents encode and organize available knowledge about the future to react to possible states of the world. In the present study, we investigated the process of contingency representation with three eye-tracking experiments. In Experiment 1, we introduced a novel relational-inference task in which participants had to learn and represent conditional rules regulating the occurrence of interdependent future events. A cluster analysis on early gaze data revealed the existence of 2 distinct types of encoders. A group of (sophisticated) participants built exhaustive contingency models that explicitly linked states with each of their potential consequences. Another group of (unsophisticated) participants simply learned binary conditional rules without exploring the underlying relational complexity. Analyses of individual cognitive measures revealed that cognitive reflection is associated with the emergence of either sophisticated or unsophisticated representation behavior. In Experiment 2, we observed that unsophisticated participants switched toward the sophisticated strategy after having received information about its existence, suggesting that representation behavior was modulated by strategy generation mechanisms. In Experiment 3, we showed that the heterogeneity in representation strategy emerges also in conditional reasoning with verbal sequences, indicating the existence of a general disposition in building either sophisticated or unsophisticated models of contingencies.
Artyom Zinchenko; Markus Conci; Thomas Töllner; Hermann J Müller; Thomas Geyer
In: Psychological Science, 31 (12), pp. 1–13, 2020.
Visual search is facilitated when the target is repeatedly encountered at a fixed position within an invariant (vs. randomly variable) distractor layout—that is, when the layout is learned and guides attention to the target, a phenomenon known as contextual cuing. Subsequently changing the target location within a learned layout abolishes contextual cuing, which is difficult to relearn. Here, we used lateralized event-related electroencephalogram (EEG) potentials to explore memory-based attentional guidance (N = 16). The results revealed reliable contextual cuing during initial learning and an associated EEG-amplitude increase for repeated layouts in attention-related components, starting with an early posterior negativity (N1pc, 80–180 ms). When the target was relocated to the opposite hemifield following learning, contextual cuing was effectively abolished, and the N1pc was reversed in polarity (indicative of persistent misguidance of attention to the original target location). Thus, once learned, repeated layouts trigger attentional-priority signals from memory that proactively interfere with contextual relearning after target relocation.
Artyom Zinchenko; Markus Conci; Johannes Hauser; Hermann J Müller; Thomas Geyer
In: Journal of Vision, 20 (7), pp. 1–14, 2020.
Learnt target-distractor contexts guide visual search. However, updating a previously acquired target-distractor memory subsequent to a change of the target location has been found to be rather inefficient and slow. These results show that the imperviousness of contextual memory to incorporating relocated targets is particularly pronounced when observers adopt a narrow focus of attention to perform a rather difficult form-conjunction search task. By contrast, when they adopt a broad attentional distribution, context-based memories can be updated more readily because this mode promotes the acquisition of more global contextual representations that continue to provide effective cues even after target relocation.
Josua Zimmermann; Dominik R Bach
In: Learning & Memory, 27 (4), pp. 164–172, 2020.
A reminder can render consolidated memory labile and susceptible to amnesic agents during a reconsolidation window. For the case of threat memory (also termed fear memory), it has been suggested that extinction training during this reconsolidation window has the same disruptive impact. This procedure could provide a powerful therapeutic principle for treatment of unwanted aversive memories. However, human research has yielded contradictory results. Notably, all published positive replications quantified threat memory by conditioned skin conductance responses (SCR). Yet, other studies measuring SCR and/or fear-potentiated startle failed to observe an effect of a reminder/extinction procedure on the return of fear. Here we sought to shed light on this discrepancy by using a different autonomic response, namely, conditioned pupil dilation, in addition to SCR, in a replication of the original human study. N = 71 humans underwent a three-day threat conditioning, reminder/extinction, and reinstatement procedure with two CS+, of which one was reminded. Participants successfully learned the threat association on day 1, extinguished conditioned responding on day 2, and showed reinstatement on day 3. However, there was no difference in conditioned responding between the reminded and the nonreminded CS, neither in pupil size nor in SCR. Thus, we found no evidence that a reminder trial before extinction prevents the return of threat-conditioned responding.
Jing Zhu; Zihan Wang; Tao Gong; Shuai Zeng; Xiaowei Li; Bin Hu; Jianxiu Li; Shuting Sun; Lan Zhang
In: IEEE Transactions on Nanobioscience, 19 (3), pp. 527–537, 2020.
At present, depression has become a major health burden worldwide. However, its diagnosis faces many problems, such as low patient cooperation, subjective bias, and low accuracy. A reliable and objective evaluation method is therefore needed for effective depression detection. Electroencephalogram (EEG) and eye movement (EM) data have been widely used for depression detection because they are easy to record and non-invasive. This research proposes a content-based ensemble method (CBEM) to improve depression detection accuracy; both static and dynamic CBEM were discussed. In the proposed model, the EEG or EM dataset was divided into subsets according to the context of the experiments, and a majority-vote strategy was then used to determine each subject's label. The method was validated on two datasets, one of free-viewing eye tracking and one of resting-state EEG, with 36 and 34 subjects, respectively. On these two datasets, CBEM achieves accuracies of 82.5% and 92.65%, respectively. The results show that CBEM outperforms traditional classification methods. Our findings provide an effective solution for improving the accuracy of depression identification, which in the future could be used for the auxiliary diagnosis of depression.
Jiawen Zhu; Kara Dawson; Albert D Ritzhaupt; Pavlo Pasha Antonenko
In: Journal of Educational Multimedia and Hypermedia, 29 (3), pp. 265–284, 2020.
This study investigated the effects of multimedia and modality design principles using a learning intervention about Australia with a sample of college students and employing measures of learning outcomes, visual attention, satisfaction, and mental effort. Seventy-five college students were systematically assigned to one of four conditions: a) text with pictures, b) text without pictures, c) narration with pictures, or d) narration without pictures. No significant differences were found among the four groups in learning performance, satisfaction, or self-reported mental effort, and participants rarely focused their visual attention on the representational pictures provided in the intervention. Neither the multimedia nor the modality principles held true in this study. However, participants in narration environments focused significantly more visual attention on the “Next” button, a navigational aid included on all slides. This study contributes to the research on visual attention and navigational aids in multimedia learning, and it suggests such features may cause distractions, particularly when spoken text is provided without on-screen text. The paper also offers implications for the design of multimedia learning.
In: Revista Argentina de Clinica Psicologica, 29 (2), pp. 523–529, 2020.
Eye-tracking technology has been widely adopted to capture the psychological changes of college students in the learning process. With the aid of eye-tracking technology, this paper establishes a psychological analysis model for students in online teaching. Four eye movement parameters were selected for the model: pupil diameter, fixation time, re-reading time, and retrospective time. A total of 100 college students were selected for an eye movement test in an online teaching environment, and the test data were analyzed in SPSS. The results show that the eye movement parameters are strongly affected by the key points in teaching and by the contents that interest the students; these two influencing factors can arouse and attract the students' attention in the teaching process. The research results provide an important reference for the psychological study of online teaching in colleges.
In: International Journal of Frontiers in Sociology, 2 (7), pp. 1–12, 2020.
Online travel agencies (OTAs) depend on marketing cues to reduce consumers' uncertainty perceptions of online travel-related products. The latest booking time (LBT) provided to the consumer has a significant impact on purchasing decisions. This study aims to explore the effect of LBT on consumer visual attention and booking intention, along with the moderating effect of online comment valence (OCV). Since eye movement is closely tied to the transfer of visual attention, eye tracking was used to record consumers' visual attention. Our research used a 3 (LBT: near vs. medium vs. far) × 3 (OCV: high vs. medium vs. low) design to conduct the experiments. The main findings were as follows: (1) LBT can markedly increase visual attention to the whole advertisement and improve booking intention; (2) OCV moderates the effect of LBT on both visual attention to the whole advertisement and booking intention. Only when OCV is medium or high can LBT markedly improve attention to the whole advertisement and increase consumers' booking intention. The experimental results show that OTAs can improve advertising effectiveness by adding an LBT label, but LBT has no effect when OCV is low.
Y Zhang; Q Yuan
In: Indian Journal of Pharmaceutical Sciences, 82 , pp. 32–40, 2020.
This study took a special group of trauma patients as research subjects and proposes a method, based on the fusion of the set theory model, for analysing the effect of combined biofeedback and sequential psychotherapy on these patients' cognitive function. The occurrence and development of post-traumatic stress disorder (PTSD) and its relation to cognitive function are investigated. The set theory model is used to survey the effect of combined biofeedback and sequential psychotherapy on patients with PTSD and to describe the occurrence, development, change trajectory, and time-course characteristics of the disorder. The set theory model was also employed to investigate the cognitive development characteristics of these trauma patients, and through it the psychological-behavioral mechanism underlying the occurrence and development of PTSD is revealed. The combination of biofeedback and sequential psychotherapy is then used to investigate the effect of PTSD on the cognitive function of the trauma patients. The results of this study could provide scientific advice for the placement and psychological assistance of trauma patients, a scientific basis for targeted psychological intervention and overall planning of such interventions, and scientific, objective indicators and methods for the diagnosis and assessment of traumatic psychology in patients with trauma in the future.
Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu
In: British Journal of Educational Technology, pp. 1–13, 2020.
Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups. Practitioner Notes What is already known about this topic The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort.
In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.
Hui Zhang; Ping Wang; Tinghu Kang
In: Art and Design Review, 8 , pp. 215–227, 2020.
This study compares the characteristics of the aesthetic experience of different cognitive styles in calligraphy style. The study used a cursive script and running script as experimental materials and the EyeLink 1000 Plus eye tracker to record eye movements while viewing calligraphy. The results showed that, in the overall analysis, there were differences in the field cognitive style in total fixation counts, saccade amplitude, and saccade counts, and differences in the calligraphic style in total fixation counts and saccade counts. Further local analysis found significant differences in the field cognitive style in mean pupil diameter, fixation counts, and regression-in count, and that there were differences in fixation counts and regression-in count in the calligraphic style, as well as interactions with the area of interest. The results indicate that the field cognitive style is characterized by different aesthetic experiences in calligraphy appreciation and that there are aesthetic preferences in calligraphy style.
Hanshu Zhang; Joseph W Houpt
In: Attention, Perception, and Psychophysics, 82 (7), pp. 3340–3356, 2020.
Despite the increasing focus on target prevalence in visual search research, few papers have thoroughly examined the effect of how target prevalence is communicated. Findings in the judgment and decision-making literature have demonstrated that people behave differently depending on whether probabilistic information is made explicit or learned through experience, hence there is potential for a similar difference when communicating prevalence in visual search. Our current research examined how visual search changes depending on whether the target prevalence information was explicitly given to observers or learned through experience, with additional manipulations of target reward and salience. We found that when target prevalence was low, learning prevalence from experience resulted in more target-present responses and longer search times before quitting compared to when observers were explicitly informed of the target probability. The discrepancy narrowed with increased prevalence and reversed in the high target prevalence condition. Eye-tracking results indicated that search with experience consistently resulted in longer fixation durations, with the largest difference in low-prevalence conditions. Longer search times were primarily due to observers re-visiting more items. Our work highlights the importance of exploring the influence of probability communication in future prevalence visual search studies.
Han Zhang; Chuyan Qu; Kevin F Miller; Kai S Cortina
In: Journal of experimental psychology. Learning, memory, and cognition, 46 (4), pp. 638–648, 2020.
Mind-wandering (i.e., thoughts irrelevant to the current task) occurs frequently during reading. The current study examined whether mind-wandering was associated with reduced rereading when the reader read the so-called garden-path jokes. In a garden-path joke, the reader's initial interpretation is violated by the final punchline, and the violation creates a semantic incongruity that needs to be resolved (e.g., "My girlfriend has read so many negative things about smoking. Therefore, she decided to quit reading."). Rereading text prior to the punchline can help resolve the incongruity. In a main study and a preregistered replication, participants read jokes and nonfunny controls embedded in filler texts and responded to thought probes that assessed intentional and unintentional mind-wandering. Results were consistent across the two studies: When the reader was not mind-wandering, jokes elicited more rereading (from the punchline) than the nonfunny controls did, and had a recall advantage over the nonfunny controls. During mind-wandering, however, the additional eye movement processing and the recall advantage of jokes were generally reduced. These results show that mind-wandering is associated with reduced rereading, which is important for resolving higher level comprehension difficulties.
Bao Zhang; Shuhui Liu; Cenlou Hu; Ziwen Luo; Sai Huang; Jie Sui
In: Computers in Human Behavior, 107 , pp. 1–7, 2020.
Action video game players (AVGPs) have been shown to have an enhanced cognitive control ability to reduce stimulus-driven attentional capture (e.g., from an exogenous salient distractor) compared with non-action video game players (NVGPs). Here we examined whether these benefits could extend to the memory-driven attentional capture (i.e., working memory (WM) representations bias visual attention toward a matching distractor). AVGPs and NVGPs were instructed to complete a visual search task while actively maintaining 1, 2 or 4 items in WM. There was a robust advantage to the memory-driven attentional capture in reaction time and first eye movement fixation in the AVGPs compared to the NVGPs when they had to maintain one item in WM. Moreover, the effect of memory-driven attentional capture was maintained in the AVGPs when the WM load was increased, but it was eliminated in the NVGPs. The results suggest that AVGPs may devote more attentional resources to sustaining the cognitive control rather than to suppressing the attentional capture driven by the active WM representations.
David Zeugin; Michael P Notter; Jean François Knebel; Silvio Ionta
In: Brain and Cognition, 143 , pp. 1–6, 2020.
Face recognition requires comparing the current visual input with stored mental representations of faces. Based on its role in visual recognition of faces and mental representation of the body, we hypothesized that the right temporo-parietal junction (rTPJ) could also be implicated in processing mental representations of faces. To test this hypothesis, we asked 30 neurotypical participants to perform mental rotation (laterality judgment of rotated pictures) of self- and other-face images, before and after the inhibition of the rTPJ through repetitive transcranial magnetic stimulation. After inhibition of the rTPJ, mental rotation of the self-face was slower than the other-face. In the control condition, mental rotation of self/other faces was not significantly different. This supports the idea that the role of the rTPJ extends to the mental representation of faces, specifically for the self. Since the experimental task did not require explicit recognition of identity, we propose that unconscious identity attribution also affects the mental representation of faces. The present study offers insights on the involvement of the rTPJ in the mental representation of faces and proposes that the neural substrate dedicated to mental representation of faces goes beyond the traditional visual and memory areas.
Hong Zeng; Junjie Shen; Wenming Zheng; Aiguo Song; Jia Liu
In: Journal of Healthcare Engineering, pp. 1–15, 2020.
Top-down determined visual object perception refers to the ability of a person to identify a prespecified visual target. This paper studies the technical foundation for measuring target-perceptual ability in a guided visual search task, using the EEG-based brain imaging technique. Specifically, it focuses on the feature representation learning problem for single-trial classification of fixation-related potentials (FRPs). The existing methods either capture only first-order statistics while ignoring second-order statistics in the data, or directly extract second-order statistics with covariance matrices estimated from raw FRPs that suffer from low signal-to-noise ratio. In this paper, we propose a new representation learning pipeline involving a low-level convolution subnetwork followed by a high-level Riemannian manifold subnetwork, with a novel midlevel pooling layer bridging them. In this way, the discriminative power of the first-order features can be increased by the convolution subnetwork, while the second-order information in the convolutional features can further be deeply learned with the subsequent Riemannian subnetwork. In particular, the temporal ordering of FRPs is well preserved for the components in our pipeline, which is considered to be a valuable source of discriminant information. The experimental results show that the proposed approach leads to improved classification performance and robustness to lack of data compared with state-of-the-art methods, thus making it appealing for practical applications in measuring the target-perceptual ability of cognitively impaired patients with the FRP technique.
Polina Zamarashkina; Dina V Popovkina; Anitha Pasupathy
In: Journal of Neurophysiology, 123 (6), pp. 2311–2325, 2020.
In the primate visual cortex, both the magnitude of the neuronal response and its timing can carry important information about the visual world, but studies typically focus only on response magnitude. Here, we examine the onset and offset latency of the responses of neurons in area V4 of awake, behaving macaques across several experiments in the context of a variety of stimuli and task paradigms. Our results highlight distinct contributions of stimuli and tasks to V4 response latency. We found that response onset latencies are shorter than typically cited (median = 75.5 ms), supporting a role for V4 neurons in rapid object and scene recognition functions. Moreover, onset latencies are longer for smaller stimuli and stimulus outlines, consistent with the hypothesis that longer latencies are associated with higher spatial frequency content. Strikingly, we found that onset latencies showed no significant dependence on stimulus occlusion, unlike in inferotemporal cortex, nor on task demands. Across the V4 population, onset latencies had a broad distribution, reflecting the diversity of feedforward, recurrent, and feedback connections that inform the responses of individual neurons. Response offset latencies, on the other hand, displayed the opposite tendency in their relationship to stimulus and task attributes: they are less influenced by stimulus appearance but are shorter in guided saccade tasks compared with fixation tasks. The observation that response latency is influenced by stimulus- and task-associated factors emphasizes a need to examine response timing alongside firing rate in determining the functional role of area V4. NEW & NOTEWORTHY: Onset and offset timing of neuronal responses can provide information about the visual environment and a neuron's role in visual processing and its anatomical connectivity. In the first comprehensive examination of onset and offset latencies in the intermediate visual cortical area V4, we find that neurons respond faster than previously reported, making them ideally suited to contribute to rapid object and scene recognition. While response onset reflects stimulus characteristics, the timing of response offset is influenced more by the behavioral task.
Harun Yörük; Lindsay A Santacroce; Benjamin J Tamber-Rosenau
In: Psychonomic Bulletin & Review, 27 (6), pp. 1383–1396, 2020.
The prominent sensory recruitment model argues that visual working memory (WM) is maintained via representations in the same early visual cortex brain regions that initially encode sensory stimuli, either in the identical neural populations as perceptual representations or in distinct neural populations. While recent research seems to reject the former (strong) sensory recruitment model, the latter (flexible) account remains plausible. Moreover, this flexibility could explain a recent result of high theoretical impact (Harrison & Bays, The Journal of Neuroscience, 38 (12), 3116-3123, 2018) – a failure to observe interactions between items held in visual WM – that has been taken to reject the sensory recruitment model. Harrison and Bays (The Journal of Neuroscience, 38 (12), 3116-3123, 2018) tested the sensory recruitment model by comparing the precision of memoranda in radially and tangentially oriented memory arrays. Because perceptual visual crowding effects are greater in radial than tangential arrays, they reasoned that a failure to observe such anisotropy in WM would reject the sensory recruitment model. In the present Registered Report or Replication, we replicated their study with greater sensitivity and extended their task by controlling a potential strategic confound. Specifically, participants might remap memory items to new locations, reducing interactions between proximal memoranda. To combat remapping, we cued participants to report either a memory item or its precise location – with this report cue presented only after a memory maintenance period. Our results suggest that, similar to visual perceptual crowding, location-bound visual memoranda interact with one another when remapping is prevented. Thus, our results support at least a flexible form of the sensory recruitment model.
Ashley York; Stefanie I Becker
In: Journal of Vision, 20 (4), pp. 1–16, 2020.
It is well-known that we can tune attention to specific features (e.g., colors). Originally, it was believed that attention would always be tuned to the exact feature value of the sought-after target (e.g., orange). However, subsequent studies showed that selection is often geared towards target-dissimilar items, which was variably attributed to (1) tuning attention to the relative target feature that distinguishes the target from other items in the surround (e.g., reddest item; relational tuning), (2) tuning attention to a shifted target feature that allows more optimal target selection (e.g., reddish orange; optimal tuning), or (3) broad attentional tuning and selection of the most salient item that is still similar to the target (combined similarity/saliency). The present study used a color search task and assessed gaze capture by differently coloured distractors to distinguish between the three accounts. The results of the first experiment showed that a very target-dissimilar distractor that matched the relative color of the target but was outside of the area of optimal tuning still captured very strongly. As shown by a control condition and a control experiment, bottom-up saliency modulated capture only weakly, ruling out a combined similarity-saliency account. With this, the results support the relational account that attention is tuned to the relative target feature (e.g., reddest), not an optimal feature value or the target feature.
Ashley A York; David K Sewell; Stefanie I Becker
In: Journal of Experimental Psychology: Human Perception and Performance, 46 (11), pp. 1368–1386, 2020.
Current models of attention propose that we can tune attention in a top-down controlled manner to a specific feature value (e.g., shape, color) to find specific items (e.g., a red car; feature-specific search). However, subsequent research has shown that attention is often tuned in a context-dependent manner to the relative features that distinguish a sought-after target from other surrounding nontarget items (e.g., larger, bluer, and faster; relational search). Currently, it is unknown whether search will be feature-specific or relational in search for multiple targets with different attributes. In the present study, observers had to search for 2 targets that differed either across 2 stimulus dimensions (color, motion; Experiment 1) or within the same stimulus dimension (color; Experiment 2: orange/redder or aqua/bluer). We distinguished between feature-specific and relational search by measuring eye movements to different types of irrelevant distractors (e.g., relatively matching vs. feature-matching). The results showed that attention was biased to the 2 relative features of the targets, both across different feature dimensions (i.e., motion and color) and within a single dimension (i.e., 2 colors; bluer and redder). The results were not due to automatic intertrial effects (dimension weighting or feature priming), and we found only small effects for valid precueing of the target feature, indicating that relational search for two targets was conducted with relative ease. This is the first demonstration that attention is top-down biased to the relative target features in dual target search, which shows that the relational account generalizes to multiple target search.
Tehrim Yoon; Afareen Jaleel; Alaa A Ahmed; Reza Shadmehr
In: Journal of Neurophysiology, 123 (6), pp. 2161–2172, 2020.
Decisions are made based on the subjective value that the brain assigns to options. However, subjective value is a mathematical construct that cannot be measured directly, but rather is inferred from choices. Recent results have demonstrated that reaction time, amplitude, and velocity of movements are modulated by reward, raising the possibility that there is a link between how the brain evaluates an option and how it controls movements toward that option. Here, we asked people to choose among risky options represented by abstract stimuli, some associated with gain (points in a game), and others with loss. From their choices we estimated the subjective value that they assigned to each stimulus. In probe trials, a single stimulus appeared at center, instructing subjects to make a saccade to a peripheral target. We found that the reaction time, peak velocity, and amplitude of the peripherally directed saccade varied roughly linearly with the subjective value that the participant had assigned to the central stimulus: reaction time was shorter, velocity was higher, and amplitude was larger for stimuli that the participant valued more. Naturally, participants differed in how much they valued a given stimulus. Remarkably, those who valued a stimulus more, as evidenced by their choices in decision trials, tended to move with shorter reaction time and greater velocity in response to that stimulus in probe trials. Overall, the reaction time of the saccade in response to a stimulus partly predicted the subjective value that the brain assigned to that stimulus.
Seng Bum Michael Yoo; Benjamin Y Hayden
In: Neuron, 105 (4), pp. 1–13, 2020.
Economic choice proceeds from evaluation, in which we contemplate options, to selection, in which we weigh options and choose one. These stages must be differentiated so that decision makers do not proceed to selection before evaluation is complete. We examined responses of neurons in two core reward regions, orbitofrontal (OFC) and ventromedial prefrontal cortex (vmPFC), during two-option choice with asynchronous offer presentation. Our data suggest that neurons selective during the first (presumed evaluation) and second (presumed comparison and selection) offer epochs come from a single pool. Stage transition is accompanied by a shift toward orthogonality in the low-dimensional population response manifold. Nonetheless, the relative position of each option in driving responses in the population subspace is preserved. The orthogonalization we observe supports the hypothesis that the transition from evaluation to selection leads to reorganization of response subspace and suggests a mechanism by which value-related signals are prevented from prematurely driving choice.
Hörmet Yiltiz; David J Heeger; Michael S Landy
Contingent adaptation in masking and surround suppression
In: Vision Research, 166 , pp. 72–80, 2020.
Adaptation is the process that changes a neuron's response based on recent inputs. In the traditional model, a neuron's state of adaptation depends on the recent input to that neuron alone, whereas in a recently introduced model (Hebbian normalization), adaptation depends on the structure of correlated neural firing. In particular, increased response products between pairs of neurons lead to increased mutual suppression. We test a psychophysical prediction of this model: adaptation should depend on 2nd-order statistics of input stimuli. That is, if two stimuli excite two distinct sub-populations of neurons, then presenting those stimuli simultaneously during adaptation should strengthen mutual suppression between those subpopulations. We confirm this prediction in two experiments. In the first, pairing two gratings synchronously during adaptation (i.e., a plaid) rather than asynchronously (interleaving the two gratings in time) leads to increased effectiveness of one pattern for masking the other. In the second, pairing the gratings in a center-surround configuration results in reduced apparent contrast for the central grating when paired with the same surround (as compared with a condition in which the central grating appears with a different surround at test than during adaptation). These results are consistent with the prediction that an increase in response covariance leads to greater mutual suppression between neurons. This effect is detectable both at threshold (masking) and well above threshold (apparent contrast).
Cheng Xue; Antonino Calapai; Julius Krumbiegel; Stefan Treue
In: Scientific Reports, 10 , pp. 1–10, 2020.
Small ballistic eye movements, so-called microsaccades, occur even while foveating an object. Previous studies using covert attention tasks have shown that shortly after a symbolic spatial cue, specifying a behaviorally relevant location, microsaccades tend to be directed toward the cued location. This suggests that microsaccades can serve as an index for the covert orientation of spatial attention. However, this hypothesis faces two major challenges: First, effects associated with visual spatial attention are hard to distinguish from those associated with the contemplation of foveating a peripheral stimulus. Second, it is less clear whether endogenously sustained attention alone can bias microsaccade directions without a spatial cue on each trial. To address the first issue, we investigated the direction of microsaccades in human subjects while they attended to a behaviorally relevant location and prepared a response eye movement either toward or away from this location. We find that directions of microsaccades are biased toward the attended location rather than toward the saccade target. To tackle the second issue, we verbally indicated the location to attend before the start of each block of trials, to exclude potential visual cue-specific effects on microsaccades. Our results indicate that sustained spatial attention alone reliably produces the microsaccade direction effect. Overall, our findings demonstrate that sustained spatial attention alone, even in the absence of saccade planning or a spatial cue, is sufficient to explain the direction bias observed in microsaccades.
Hongge Xu; Jing Samantha Pan; Xiaoye Michael Wang; Geoffrey P Bingham
In: Attention, Perception, and Psychophysics, pp. 1–10, 2020.
Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static blurry grayscale displays have been shown to be recognized only when and after presented with optic flow. In this study, we investigate the effects of optic flow and color on identifying blurry events by studying the identification accuracy and eye-movement patterns. Three types of color displays were tested: grayscale, original colors, or rearranged colors (where the RGB values of the original colors were adjusted). In each color condition, participants identified 12 blurry events in five experimental phases. In the first two phases, static blurry images were presented alone or sequentially with a motion mask between consecutive frames, and identification was poor. In Phase 3, where optic flow was added, identification was comparably good. In Phases 4 and 5, motion was removed, but identification remained good. Thus, optic flow improved event identification during and after its presentation. Color also improved performance, where participants were consistently better at identifying color displays than grayscale or rearranged color displays. Importantly, the effects of optic flow and color were additive. Finally, in both motion and postmotion phases, a significant portion of eye fixations fell in strong optic flow areas, suggesting that participants continued to look where flow was available even after it stopped. We infer that optic flow specified depth structure in the blurry image structure and yielded an improvement in identification from static blurry images.
Jianping Xiong; Xiaokang Jin; Weili Li
In: Frontiers in Psychology, 11 , pp. 1–11, 2020.
Regulatory focus theory uses two different motivation focus systems—promotional and preventive—to describe how individuals approach positive goals and avoid negative goals. Moreover, the regulatory focus can manifest as chronic personality characteristics and can be situationally induced by tasks or the environment. The current study employed eye-tracking methodology to investigate how individuals who differ in their chronic regulatory focus (promotional vs. preventive) process information (Experiment 1) and whether an induced experimental situation could modulate features of their information processing (Experiment 2). Both experiments used a 3 × 3 grid information-processing task, containing eight information cells and a fixation cell; half the information cells were characterized by attribute-based information, and the other half by alternative-based information. We asked the subjects to view the grid based on their personal preferences and choose one of the virtual products presented in this grid to "purchase" by the end of each trial. Results of Experiment 1 show that promotional individuals do not exhibit a clear preference between the two types of information, whereas preventive individuals tend to fixate longer on the alternative-based information. In Experiment 2, we induced the situational regulatory focus via experimental tasks before the information-processing task. The results demonstrate that the behavioral motivation is significantly enhanced, thereby increasing the depth of the preferred mode of information processing, when the chronic regulatory focus matches the situational focus. In contrast, individuals process information more thoroughly, using both processing modes, in the non-fit condition, i.e., when the focuses do not match.
Xin Yu Xie; Xing Nan Zhao; Cong Yu
In: Vision Research, 175 , pp. 51–57, 2020.
One interesting observation of perceptual learning is the asymmetric transfer between stimuli at different external noise levels: learning at zero/low noise can transfer significantly to the same stimulus at high noise, but not vice versa. The mechanisms underlying this asymmetric transfer have been investigated by psychophysical, neurophysiological, brain imaging, and computational modeling studies. One study (PNAS 113 (2016) 5724–5729) reported that rTMS stimulations of dorsal and ventral areas impair motion direction discrimination of moving dot stimuli at 40% coherent (“noisy”) and 100% coherent (zero-noise) levels, respectively. However, after direction training at 100% coherence, only rTMS stimulation of the ventral cortex is effective, disturbing direction discrimination at both coherence levels. These results were interpreted as learning-induced changes of functional specializations of visual areas. We have concerns with the behavioral data of this study. First, contrary to the report of highly location-specific motion direction learning, our replicating experiment showed substantial learning transfer (e.g., transfer/learning ratio = 81.9% vs. 14.8% at 100% coherence). Second and more importantly, we found complete transfer of direction learning from 40% to 100% coherence, a critical baseline that is missing in this study. The transfer effect suggests that similar brain mechanisms underlie motion direction processing at two coherence levels. Therefore, this study's conclusions regarding the roles of dorsal and ventral areas in motion direction processing at two coherence levels, as well as the effects of perceptual learning, are not supported by proper experimental evidence. It remains unexplained why distinct impacts of dorsal and ventral rTMS stimulations on motion direction discrimination were observed.
Xin Yu Xie; Cong Yu
In: Journal of Vision, 20 (2), pp. 1–9, 2020.
Perceptual learning, which improves stimulus discrimination, typically results from training with a single stimulus condition. Two major learning mechanisms, early cortical neural plasticity and response reweighting, have been proposed. Here we report a new format of perceptual learning that by design may have bypassed these mechanisms. Instead, it is more likely based on abstracted stimulus evidence from multiple stimulus conditions. Specifically, we had observers practice orientation discrimination with Gabors or symmetric dot patterns at up to 47 random or rotating location × orientation conditions. Although each condition received sparse trials (12 trials/session), the practice produced significant orientation learning. Learning also transferred to a Gabor at a single untrained condition with two- to three-times lower orientation thresholds. Moreover, practicing a single stimulus condition with matched trial frequency (12 trials/session) failed to produce significant learning. These results suggest that learning with multiple stimulus conditions may not come from early cortical plasticity or response reweighting with each particular condition. Rather, it may materialize through a new format of perceptual learning, in which orientation evidence invariant to particular orientations and locations is first abstracted from multiple stimulus conditions and then reweighted by later learning mechanisms. The coarse-to-fine transfer of orientation learning from multiple Gabors or symmetric dot patterns to a single Gabor also suggests the involvement of orientation concept learning by the learning mechanisms.
Xin Yu Xie; Lei Liu; Cong Yu
In: Vision Research, 174 , pp. 69–76, 2020.
Patients with central vision loss depend on peripheral vision for everyday functions. A preferred retinal locus (PRL) on the intact retina is commonly trained as a new “fovea” to help. However, reprogramming the fovea-centered oculomotor control is difficult, so saccades often bring the defunct fovea to block the target. Aligning the PRL with distant targets also requires multiple saccades and sometimes head movements. To overcome these problems, we attempted to train normal-sighted observers to form a preferred retinal annulus (PRA) around a simulated scotoma, so that they could rely on the same fovea-centered oculomotor system and make short saccades to align the PRA with the target. Observers with an invisible simulated central scotoma (5° radius) practiced making saccades to see a tumbling-E target at 10° eccentricity. The otherwise blurred E target became clear when saccades brought a scotoma-abutting clear window (2° radius) to it. The location of the clear window was either fixed for PRL training, or changing among 12 locations for PRA training. Various cues aided the saccades through training. Practice quickly established a PRL or PRA. Compared to PRL-trained observers, whose first saccades persistently blocked the target with the scotoma, PRA-trained observers produced more accurate first saccades. The benefits of more accurate PRA-based saccades also outweighed the costs of longer latency. PRA training may provide an efficient strategy to cope with central vision loss, especially for aging patients who have major difficulties adapting to a PRL.
Ye Xia; Mauro Manassi; Ken Nakayama; Karl Zipser; David Whitney
Visual crowding in driving
In: Journal of Vision, 20 (6), pp. 1–17, 2020.
Visual crowding, the deleterious influence of nearby objects on object recognition, is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation where crowding may play a critical role. In order to investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). In order to further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations.
Yanfang Xia; Filip Melinscak; Dominik R Bach
Saccadic scanpath length: an index for human threat conditioning
In: Behavior Research Methods, pp. 1–14, 2020.
Threat-conditioned cues are thought to capture overt attention in a bottom-up process. Quantification of this phenomenon typically relies on cue competition paradigms. Here, we sought to exploit gaze patterns during exclusive presentation of a visual conditioned stimulus, in order to quantify human threat conditioning. To this end, we capitalized on a summary statistic of visual search during CS presentation, scanpath length. During a simple delayed threat conditioning paradigm with full-screen monochrome conditioned stimuli (CS), we observed shorter scanpath length during CS+ compared to CS- presentation. Retrodictive validity, i.e., effect size to distinguish CS+ and CS-, was maximized by considering a 2-s time window before US onset. Taking into account the shape of the scan speed response resulted in similar retrodictive validity. The mechanism underlying shorter scanpath length appeared to be longer fixation duration and more fixation on the screen center during CS+ relative to CS- presentation. These findings were replicated in a second experiment with a similar setup, and further confirmed in a third experiment using full-screen patterns as CS. This experiment included an extinction session during which scanpath differences appeared to extinguish. In a fourth experiment with auditory CS and instruction to fixate the screen center, no scanpath length differences were observed. In conclusion, our study suggests scanpath length as a visual search summary statistic, which may be used as a complementary measure to quantify threat conditioning with retrodictive validity similar to that of skin conductance responses.
Jordana S Wynn; Jennifer D Ryan; Morris Moscovitch
In: Journal of Experimental Psychology: General, 149 (3), pp. 518–529, 2020.
In our daily lives we rely on prior knowledge to make predictions about the world around us such as where to search for and locate common objects. Yet, equally important in visual search is the ability to inhibit such processes when those predictions fail. Mounting evidence suggests that relative to younger adults, older adults have difficulty retrieving episodic memories and inhibiting prior knowledge, even when that knowledge is detrimental to the task at hand. However, the consequences of these age-related changes for visual search remain unclear. In the present study, we used eye movement monitoring to investigate whether overreliance on prior knowledge alters the gaze patterns and performance of older adults during visual search. Younger and older adults searched for target objects in congruent or incongruent locations in real-world scenes. As predicted, targets in congruent locations were detected faster than targets in incongruent locations, and this effect was enhanced in older adults. Analysis of viewing behavior revealed that prior knowledge effects emerged early in search, as evidenced by initial saccades, and continued throughout search, with greater viewing of congruent regions by older relative to younger adults, suggesting that schema biasing of online processing increases with age. Finally, both younger and older adults showed enhanced memory for the location of congruent targets and the identity of incongruent targets, with schema-guided viewing during search predicting poor memory for schema-incongruent targets in younger adults on both tasks. Our results provide novel evidence that older adults' overreliance on prior knowledge has consequences for both active vision and memory.
Jordana S Wynn; Jennifer D Ryan; Bradley R Buchsbaum
Eye movements support behavioral pattern completion
In: Proceedings of the National Academy of Sciences, 117 (11), pp. 6246–6254, 2020.
The ability to recall a detailed event from a simple reminder is supported by pattern completion, a cognitive operation performed by the hippocampus wherein existing mnemonic representations are retrieved from incomplete input. In behavioral studies, pattern completion is often inferred through the false endorsement of lure (i.e., similar) items as old. However, evidence that such a response is due to the specific retrieval of a similar, previously encoded item is severely lacking. We used eye movement (EM) monitoring during a partial-cue recognition memory task to index reinstatement of lure images behaviorally via the recapitulation of encoding-related EMs or gaze reinstatement. Participants reinstated encoding-related EMs following degraded retrieval cues and this reinstatement was negatively correlated with accuracy for lure images, suggesting that retrieval of existing representations (i.e., pattern completion) underlies lure false alarms. Our findings provide evidence linking gaze reinstatement and pattern completion and advance a functional role for EMs in memory retrieval.
Chao Jung Wu; Chia Yu Liu; Chung Hsuan Yang; Yu Cin Jian
In: European Journal of Psychology of Education, pp. 1–18, 2020.
Despite decades of research on the close link between eye movements and human cognitive processes, the exact nature of the link between eye movements and deliberative thinking in problem-solving remains unknown. Thus, this study explored the critical eye-movement indicators of deliberative thinking and investigated whether visual behaviors could predict performance on arithmetic word problems of various difficulties. An eye tracker and test were employed to collect 69 sixth-graders' eye-movement behaviors and responses. No significant difference was found between the successful and unsuccessful groups on the simple problems, but on the difficult problems, the successful problem-solvers demonstrated significantly greater gaze aversion, longer fixations, and spontaneous reflections. Notably, the model incorporating RT-TFD, NOF of 500 ms, and pupil size indicators could best predict participants' performance, with an overall hit rate of 74%, rising to 80% when reading comprehension screening test scores were included. These results reveal the solvers' engagement strategies or show that successful problem-solvers were well aware of problem difficulty and could regulate their cognitive resources efficiently. This study sheds light on the development of an adapted learning system with embedded eye tracking to further predict students' visual behaviors, provide real-time feedback, and improve their problem-solving performance.
Karlijn Woutersen; Anna C Geuzebroek; Albert V van den Berg; Jeroen Goossens
In: Investigative Ophthalmology and Visual Science, 61 (5), pp. 1–11, 2020.
PURPOSE. Postchiasmatic brain damage commonly results in an area of reduced visual sensitivity or blindness in the contralesional hemifield. Previous studies have shown that the ipsilesional visual field can be impaired too. Here, we examine whether assessing visual functioning of the “intact” ipsilesional visual field can be useful to understand difficulties experienced by patients with visual field defects. METHODS. We compared the performance of 14 patients on a customized version of the useful field of view test that presents stimuli in both hemifields but only assesses functioning of their intact visual half-field (iUFOV) with that of equivalent hemifield assessments in 17 age-matched healthy control participants. In addition, we mapped visual field sensitivity with the Humphrey Field Analyzer. Last, we used an adapted version of the National Eye Institute Visual Quality of Life-25 to measure their experienced visual quality of life. RESULTS. We found that patients performed worse on the second and third iUFOV subtests, but not on the first subtest. Furthermore, patients scored significantly worse on almost every subscale, except ocular pain. Summed iUFOV scores (assessing the intact hemifield only) and Humphrey Field Analyzer scores (assessing both hemifields combined) showed similar correlations with the subscale scores of the adapted National Eye Institute Visual Quality of Life-25. CONCLUSIONS. The iUFOV test is sensitive to deficits in the visual field that are not picked up by traditional perimetry. We therefore believe this task is of interest for patients with postchiasmatic brain lesions and should be investigated further.
Luca Wollenberg; Nina M Hanning; Heiner Deubel
In: Journal of Vision, 20 (9), pp. 1–17, 2020.
Saccadic eye movements are typically preceded by selective shifts of visual attention. Recent evidence, however, suggests that oculomotor selection can occur in the absence of attentional selection when saccades erroneously land in between nearby competing objects (saccade averaging). This study combined a saccade task with a visual discrimination task to investigate saccade target selection during episodes of competition between a saccade target and a nearby distractor. We manipulated the spatial predictability of target and distractor locations and asked participants to execute saccades upon variably delayed go-signals. This allowed us to systematically investigate the capacity to exert top-down eye movement control (as reflected in saccade endpoints) based on the spatiotemporal dynamics of visual attention during movement preparation (measured as visual sensitivity). Our data demonstrate that the predictability of target and distractor locations, despite not affecting the deployment of visual attention prior to movement preparation, largely improved the accuracy of short-latency saccades. Under spatial uncertainty, a short go-signal delay likewise enhanced saccade accuracy substantially, which was associated with a more selective deployment of attentional resources to the saccade target. Moreover, we observed a systematic relationship between the deployment of visual attention and saccade accuracy, with visual discrimination performance being significantly enhanced at the saccade target relative to the distractor only before the execution of saccades accurately landing at the saccade target. Our results provide novel insights linking top-down eye movement control to the operation of selective visual attention during movement preparation.
Christian Wolf; Markus Lappe
In: Attention, Perception, and Psychophysics, 82 (8), pp. 3863–3877, 2020.
Humans scan their visual environment using saccade eye movements. Where we look is influenced by bottom-up salience and top-down factors, like value. For reactive saccades in response to suddenly appearing stimuli, it has been shown that short-latency saccades are biased towards salience, and that top-down control increases with increasing latency. Here, we show, in a series of six experiments, that this transition towards top-down control is not determined by the time it takes to integrate value information into the saccade plan, but by the time it takes to inhibit suddenly appearing salient stimuli. Participants made consecutive saccades to three fixation crosses and a vertical bar consisting of a high-salient and a rewarded low-salient region. Endpoints on the bar were biased towards salience whenever it appeared or reappeared shortly before the last saccade was initiated. This was also true when the eye movement was already planned. When the location of the suddenly appearing salient region was predictable, saccades were aimed in the opposite direction to nullify this sudden onset effect. Successfully inhibiting salience, however, could only be achieved by previewing the target. These findings highlight the importance of inhibition for top-down eye-movement control.
Lisa Wirz; Lars Schwabe
In: Neuropsychologia, 138 , pp. 1–13, 2020.
Rapid attentional orienting toward relevant stimuli and efficient disengagement from irrelevant stimuli are critical for survival. Here, we examined the roles of memory processes, emotional arousal and acute stress in attentional disengagement. To this end, 64 healthy participants encoded negative and neutral facial expressions and, after being exposed to a stress or control manipulation, performed an attention task in which they had to disengage from these previously encoded as well as novel face stimuli. During the attention task, electroencephalography (EEG) and pupillometry data were recorded. Our results showed overall faster reaction times after acute stress and when participants had to disengage from emotionally negative or old facial expressions. Further, pupil dilations were larger in response to neutral faces. During disengagement, our EEG data revealed a reduced N2pc amplitude when participants disengaged from neutral compared to negative facial expressions when these were not presented before, as well as earlier onset latencies for the N400f (for disengagement from negative and old faces), the N2pc, and the LPP (for disengagement from negative faces). In addition, early visual processing of negative faces, as reflected in the P1 amplitude, was enhanced specifically in stressed participants. Our findings indicate that attentional disengagement is improved for negative and familiar stimuli and that stress facilitates not only attentional disengagement but also emotional processing in general. Together, these processes may represent important mechanisms enabling efficient performance and rapid threat detection.