All EyeLink Publications
All 10,000+ peer-reviewed EyeLink research publications up to 2021 (with some early 2022 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
Floor van den Berg; Jelle Brouwer; Thomas B. Tienkamp; Josje Verhagen; Merel Keijzer
In: Frontiers in Psychology, vol. 13, pp. 1-17, 2022.
Introduction: It has been proposed that bilinguals’ language use patterns are differentially associated with executive control. To further examine this, the present study relates the social diversity of bilingual language use to performance on a color-shape switching task (CSST) in a group of bilingual university students with diverse linguistic backgrounds. Crucially, this study used language entropy as a measure of bilinguals’ language use patterns. This continuous measure reflects a spectrum of language use in a variety of social contexts, ranging from compartmentalized use to fully integrated use. Methods: Language entropy for university and non-university contexts was calculated from questionnaire data on language use. Reaction times (RTs) were measured to calculate global RT and switching and mixing costs on the CSST, representing conflict monitoring, mental set shifting, and goal maintenance, respectively. In addition, this study innovatively recorded a potentially more sensitive measure of set shifting abilities, namely, pupil size during task performance. Results: Higher university entropy was related to slower global RT. Neither university entropy nor non-university entropy were associated with switching costs as manifested in RTs. However, bilinguals with more compartmentalized language use in non-university contexts showed a larger difference in pupil dilation for switch trials in comparison with non-switch trials. Mixing costs in RTs were reduced for bilinguals with higher diversity of language use in non-university contexts. No such effects were found for university entropy. Discussion: These results point to the social diversity of bilinguals’ language use as being associated with executive control, but the direction of the effects may depend on social context (university vs. non-university). Importantly, the results also suggest that some of these effects may only be detected by using more sensitive measures, such as pupil dilation.
The paper discusses theoretical and practical implications regarding the language entropy measure and the cognitive effects of bilingual experiences more generally, as well as how methodological choices can advance our understanding of these effects.
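The language entropy measure described above is, in its standard formulation for bilingualism research, the Shannon entropy of the proportions of language use within a social context. A minimal sketch in Python, assuming questionnaire responses have already been converted to proportions (the function name and validation are illustrative, not the authors' code):

```python
import math

def language_entropy(proportions):
    """Shannon entropy (in bits) over proportions of language use in one context.

    0 bits = fully compartmentalized use (a single language);
    log2(k) bits = fully integrated use of k languages.
    Zero proportions contribute nothing (0 * log 0 is taken as 0).
    """
    if abs(sum(proportions) - 1.0) > 1e-9:
        raise ValueError("proportions must sum to 1")
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# Illustrative example: one language used 70% of the time, another 30%
print(round(language_entropy([0.7, 0.3]), 3))
```

Entropy rises from 0 for one-language use toward log2(k) when k languages are used equally, so higher values index more socially integrated language use.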
Yueyuan Zheng; Xinchen Ye; Janet H. Hsiao
In: Learning and Instruction, vol. 77, pp. 101542, 2022.
We examined whether adding video and subtitles to an audio lesson facilitates its comprehension and whether the comprehension depends on participants' cognitive abilities, including working memory and executive functions, and where they looked during video viewing. Participants received lessons consisting of statements of facts under four conditions: audio-only, audio with verbatim subtitles, audio with relevant video, and audio with both subtitles and video. Comprehension was assessed as the accuracy in answering multiple-choice questions for content memory. We found that subtitles facilitated comprehension whereas video did not. In addition, comprehension of audio lessons with video depended on participants' cognitive abilities and eye movement pattern: a more centralized (looking mainly at the screen center) eye movement pattern predicted better comprehension as opposed to a distributed pattern (with distributed regions of interest). Thus, whether video facilitates comprehension of audio lessons depends on both learners' cognitive abilities and where they look during video viewing.
Aspen H. Yoo; Alfredo Bolaños; Grace E. Hallenbeck; Masih Rahmati; Thomas C. Sprague; Clayton E. Curtis
In: Journal of Cognitive Neuroscience, vol. 34, no. 2, pp. 365–379, 2022.
Humans allocate visual working memory (WM) resource according to behavioral relevance, resulting in more precise memories for more important items. Theoretically, items may be maintained by feature-tuned neural populations, where the relative gain of the populations encoding each item determines precision. To test this hypothesis, we compared the amplitudes of delay period activity in the different parts of retinotopic maps representing each of several WM items, predicting the amplitudes would track behavioral priority. Using fMRI, we scanned participants while they remembered the location of multiple items over a WM delay and then reported the location of one probed item using a memory-guided saccade. Importantly, items were not equally probable to be probed (0.6, 0.3, 0.1, 0.0), which was indicated with a precue. We analyzed fMRI activity in 10 visual field maps in occipital, parietal, and frontal cortex known to be important for visual WM. In early visual cortex, but not association cortex, the amplitude of BOLD activation within voxels corresponding to the retinotopic location of visual WM items increased with the priority of the item. Interestingly, these results were contrasted with a common finding that higher-level brain regions had greater delay period activity, demonstrating a dissociation between the absolute amount of activity in a brain area and the activity of different spatially selective populations within it. These results suggest that the distribution of WM resources according to priority sculpts the relative gains of neural populations that encode items, offering a neural mechanism for how prioritization impacts memory precision.
Jiahui Wang; Abigail Stebbins; Richard E. Ferdig
In: Computers and Education, vol. 178, pp. 104405, 2022.
Research has provided evidence of the significant promise of using educational games for learning. However, there is limited understanding of how individual differences (e.g., self-efficacy and prior knowledge) affect visual processing of game elements and learning from an educational game. This study aimed to address these gaps by: a) examining the effects of students' self-efficacy and prior knowledge on learning from a physics game; and b) exploring how learners with distinct levels of self-efficacy and prior knowledge differ in their visual behavior with respect to the game elements. The visual behavior of 69 undergraduate students was recorded as they played an educational game focusing on Newtonian mechanics. Individual differences in self-efficacy in learning physics and prior knowledge were assessed prior to the game, while a comprehension test was administered immediately after gameplay. Wilcoxon signed-rank tests showed that all participants significantly improved in their understanding of Newtonian mechanics. Mann-Whitney U tests indicated learning gains were not significantly different between the groups with varying levels of prior knowledge or self-efficacy. Additionally, a series of Mann-Whitney U tests of the eye tracking data suggested the learners with high self-efficacy tended to pay more attention to the motion map - a critical navigation component of the game. Further, the high prior knowledge individuals excelled in attentional control abilities and exhibited effective visual processing strategies. The study concludes with important implications for the future design of educational games and developing individualized instructional support in game-based learning.
Jérôme Tagu; Árni Kristjánsson
In: Quarterly Journal of Experimental Psychology, vol. 75, no. 2, pp. 260–276, 2022.
A vast amount of research has been carried out to understand how humans visually search for targets in their environment. However, this research has typically involved search for one unique target among several distractors. Although this line of research has yielded important insights into the basic characteristics of how humans explore their visual environment, this may not be a very realistic model for everyday visual orientation. Recently, researchers have used multi-target displays to assess orienting in the visual field. Eye movements in such tasks are, however, less well understood. Here, we investigated oculomotor dynamics during four visual foraging tasks differing in target crypticity (feature-based foraging vs. conjunction-based foraging) and the effector type being used for target selection (mouse foraging vs. gaze foraging). Our results show that both target crypticity and effector type affect foraging strategies. These changes are reflected in oculomotor dynamics, feature foraging being associated with focal exploration (long fixations and short-amplitude saccades), and conjunction foraging with ambient exploration (short fixations and high-amplitude saccades). These results provide important new information for existing accounts of visual attention and oculomotor control and emphasise the usefulness of foraging tasks for a better understanding of how humans orient in the visual environment.
Jérôme Tagu; Árni Kristjánsson
In: Cognition, vol. 218, pp. 1–12, 2022.
A critical question in visual foraging concerns the mechanisms driving the next target selection. Observers first identify a set of candidate targets, and then select the best option among these candidates. Recent evidence suggests that target selection relies on internal biases towards proximity (nearest target from the last selection), priming (target from the same category as the last selection) and value (target associated with high value). Here, we tested the role of eye movements in target selection, and notably whether disabling eye movements during target selection could affect search strategy. We asked observers to perform four foraging tasks differing by selection modality and target value. During gaze foraging, participants had to accurately fixate the targets to select them and could not anticipate the next selection with their eyes, while during mouse foraging they selected the targets with mouse clicks and were free to move their eyes. We moreover manipulated both target value and proximity. Our results revealed notable individual differences in search strategy, confirming the existence of internal biases towards value, proximity and priming. Critically, there were no differences in search strategy between mouse and gaze foraging, suggesting that disabling eye movements during target selection did not affect foraging behaviour. These results importantly suggest that overt orienting is not necessary for target selection. This study provides fundamental information for theoretical conceptions of attentional selection, and emphasizes the importance of covert attention for target selection during visual foraging.
Carlos Sillero‐Rejon; Osama Mahmoud; Ricardo M. Tamayo; Alvaro Arturo Clavijo‐Alvarez; Sally Adams; Olivia M. Maynard
In: Addiction, pp. 1–11, 2022.
Aims: To measure how cigarette packaging (standardised packaging and branded packaging) and health warning size affect visual attention and pack preferences among Colombian smokers and non-smokers. Design: To explore visual attention, we used an eye-tracking experiment where non-smokers, weekly smokers and daily smokers were shown cigarette packs varying in warning size (30%-pictorial on top of the text, 30%-pictorial and text side-by-side, 50%, 70%) and packaging (standardised packaging, branded packaging). We used a discrete choice experiment (DCE) to examine the impact of warning size, packaging and brand name on preferences to try, taste perceptions and perceptions of harm. Setting: Eye-tracking laboratory, Universidad Nacional de Colombia, Bogotá, Colombia. Participants: Participants (n = 175) were 18 to 40 years old. Measurements: For the eye-tracking experiment, our primary outcome measure was the number of fixations toward the health warning compared with the branding. For the DCE, outcome measures were preferences to try, taste perceptions and harm perceptions. Findings: We observed greater visual attention to warning labels on standardised versus branded packages (F[3,167] = 22.87, P < 0.001) and when warnings were larger (F[9,161] = 147.17, P < 0.001); as warning size increased, the difference in visual attention to warnings between standardised and branded packaging decreased (F[9,161] = 4.44, P < 0.001). Non-smokers visually attended toward the warnings more than smokers, but as warning size increased these differences decreased (F[6,334] = 2.92
Weikang Shi; Sébastien Ballesta; Camillo Padoa-Schioppa
In: Journal of Neuroscience, vol. 42, no. 1, pp. 33–43, 2022.
A series of studies in which monkeys chose between two juices offered in variable amounts identified in the orbitofrontal cortex (OFC) different groups of neurons encoding the value of individual options (offer value), the binary choice outcome (chosen juice) and the chosen value. These variables capture both the input and the output of the choice process, suggesting that the cell groups identified in OFC constitute the building blocks of a decision circuit. Several lines of evidence support this hypothesis. However, in previous experiments offers were presented simultaneously, raising the question of whether current notions generalize to when goods are presented or are examined in sequence. Recently, Ballesta and Padoa-Schioppa (2019) examined OFC activity under sequential offers. An analysis of neuronal responses across time windows revealed that a small number of cell groups encoded specific sequences of variables. These sequences appeared analogous to the variables identified under simultaneous offers, but the correspondence remained tentative. Thus, in the present study we examined the relation between cell groups found under sequential versus simultaneous offers. We recorded from the OFC while monkeys chose between different juices. Trials with simultaneous and sequential offers were randomly interleaved in each session. We classified cells in each choice modality and we examined the relation between the two classifications. We found a strong correspondence: in other words, the cell groups measured under simultaneous offers and under sequential offers were one and the same. This result indicates that economic choices under simultaneous or sequential offers rely on the same neural circuit. Significance statement: Research in the past 20 years has shed light on the neuronal underpinnings of economic choices. A large number of results indicates that decisions between goods are formed in a neural circuit within the orbitofrontal cortex (OFC).
In most previous studies, subjects chose between two goods offered simultaneously. Yet, in daily situations, goods available for choice are often presented or examined in sequence. Here we recorded neuronal activity in the primate OFC alternating trials under simultaneous and under sequential offers. Our analyses demonstrate that the same neural circuit supports choices in the two modalities. Hence current notions on the neuronal mechanisms underlying economic decisions generalize to choices under sequential offers.
Arunava Samaddar; Brooke S. Jackson; Christopher J. Helms; Nicole A. Lazar; Jennifer E. McDowell; Cheolwoo Park
In: Computational Statistics and Data Analysis, vol. 167, pp. 107361, 2022.
In the analysis of functional magnetic resonance imaging (fMRI) data, a common type of analysis is to compare differences across scanning sessions. A challenge to direct comparisons of this type is the low signal-to-noise ratio in fMRI data. By using the property that brain signals from a task-related experiment may exhibit a similar pattern in regions of interest across participants, a semiparametric approach under shape invariance to quantify and test the differences in sessions and groups is developed. The common function is estimated with local polynomial regression and the shape invariance model parameters are estimated using evolutionary optimization methods. The efficacy of the semiparametric approach is demonstrated on a study of brain activation changes across two sessions associated with practice-related cognitive control. The objective of the study is to evaluate neural circuitry supporting a cognitive control task, and associated practice-related changes via acquisition of blood oxygenation level dependent (BOLD) signal collected using fMRI. By using the proposed approach, BOLD signals in multiple regions of interest for control participants and participants with schizophrenia are compared as they perform a cognitive control task (known as the antisaccade task) at two sessions, and the effects of task practice in these groups are quantified.
Nuria Sagarra; Nicole Rodriguez
In: Languages, vol. 7, pp. 15, 2022.
Children acquire language more easily than adults, though it is controversial whether this faculty declines as a result of a critical period or something else. To address this question, we investigate the role of age of acquisition and proficiency on morphosyntactic processing in adult monolinguals and bilinguals. Spanish monolinguals and intermediate and advanced early and late bilinguals of Spanish read sentences with adjacent subject–verb number agreements and violations and chose one of four pictures. Eye-tracking data revealed that all groups were sensitive to the violations and attended more to more salient plural and preterit verbs than less obvious singular and present verbs, regardless of AoA and proficiency level. We conclude that the processing of adjacent SV agreement depends on perceptual salience and language use, rather than AoA or proficiency. These findings support usage-based theories of language acquisition.
Johannes Rennig; Michael S Beauchamp
In: NeuroImage, vol. 247, pp. 118796, 2022.
Regions of the human posterior superior temporal gyrus and sulcus (pSTG/S) respond to the visual mouth movements that constitute visual speech and the auditory vocalizations that constitute auditory speech, and neural responses in pSTG/S may underlie the perceptual benefit of visual speech for the comprehension of noisy auditory speech. We examined this possibility through the lens of multivoxel pattern responses in pSTG/S. BOLD fMRI data was collected from 22 participants presented with speech consisting of English sentences presented in five different formats: visual-only; auditory with and without added auditory noise; and audiovisual with and without auditory noise. Participants reported the intelligibility of each sentence with a button press and trials were sorted post-hoc into those that were more or less intelligible. Response patterns were measured in regions of the pSTG/S identified with an independent localizer. Noisy audiovisual sentences with very similar physical properties evoked very different response patterns depending on their intelligibility. When a noisy audiovisual sentence was reported as intelligible, the pattern was nearly identical to that elicited by clear audiovisual sentences. In contrast, an unintelligible noisy audiovisual sentence evoked a pattern like that of visual-only sentences. This effect was less pronounced for noisy auditory-only sentences, which evoked similar response patterns regardless of intelligibility. The successful integration of visual and auditory speech produces a characteristic neural signature in pSTG/S, highlighting the importance of this region in generating the perceptual benefit of visual speech.
Megan J. Raden; Andrew F. Jarosz
Strategy transfer on fluid reasoning tasks
In: Intelligence, vol. 91, pp. 101618, 2022.
Strategy use on reasoning tasks has consistently been shown to correlate with working memory capacity and accuracy, but it is still unclear to what degree individual preferences, working memory capacity, and features of the task itself contribute to strategy use. The present studies used eye tracking to explore the potential for strategy transfer between reasoning tasks. Study 1 demonstrated that participants are consistent in what strategy they use across reasoning tasks and that strategy transfer between tasks is possible. Additionally, post-hoc analyses identified certain ambiguous items in the figural analogies task that required participants to assess the response bank to reach a solution, which appeared to push participants towards a more response-based strategy. Study 2 utilized a between-subjects design to manipulate this “ambiguity” in figural analogies problems prior to completing the RAPM. Once again, participants transferred strategies between tasks when primed with different strategies, although this did not affect their ability to accurately solve the problem. Importantly, strategy use changed considerably depending on the ambiguity of the initial reasoning task. The results provided across the two studies suggest that participants are consistent in what strategies they employ across reasoning tasks, and that if features of the task push participants towards a different strategy, they will transfer that strategy to another reasoning task. Furthermore, to understand the role of strategy use on reasoning tasks, future work will require a diverse sample of both reasoning tasks and strategy use measures.
Alessandro Piras; Aurelio Trofè; Andrea Meoni; Milena Raffi
In: Human Movement Science, vol. 81, pp. 102905, 2022.
The role of optic flow in the control of balance in persons with Parkinson's disease (PD) has yet to be studied. Since basal ganglia are understood to have a role in controlling ocular fixation, we have hypothesized that persons with PD would exhibit impaired performance in fixation tasks, i.e., altered postural balance due to the possible relationships between postural disorders and visual perception. The aim of this preliminary study was to investigate how people affected by PD respond to optic flow stimuli presented with radial expanding motion, with the intention to see how the stimulation of different retinal portions may alter the static postural sway. We measured the body sway using center of pressure parameters recorded from two force platforms during the presentation of the foveal, peripheral and full field radial optic flow stimuli. Persons with PD had different visual responses in terms of fixational eye movement characteristics, with greater postural alteration in the sway area and in the medio-lateral direction than the age-matched control group. Balance impairment in the medio-lateral oscillation is often observed in persons with atypical Parkinsonism, but not in Parkinson's disease. Persons with PD are more dependent on visual feedback with respect to age-matched control subjects, and this could be due to their impaired peripheral kinesthetic feedback. Visual stimulation of standing posture would provide reliable signs in the differential diagnosis of Parkinsonism.
Pablo Oyarzo; David Preiss; Diego Cosmelli
In: Psychophysiology, pp. e13994, 2022.
Although eye movements during reading have been studied extensively, their variation due to attentional fluctuations such as spontaneous distractions is not well understood. Here we used a naturalistic reading task combined with an attentional sampling method to examine the effects of mind wandering, and the subsequent metacognitive awareness of its occurrence, on eye movements and pupillary dynamics. Our goal was to better understand the attentional and metacognitive processes involved in the initiation and termination of mind wandering episodes. Our results show that changes in eye behavior are consistent with underlying independent cognitive mechanisms working in tandem to sustain the attentional resources required for focused reading. In addition to changes in blink frequency, blink duration, and the number of saccades, variations in eye movements during unaware distractions point to a loss of the perceptual asymmetry that is usually observed in attentive, left-to-right reading. Also, before self-detected distractions, we observed a specific increase in pupillary diameter, indicating the likely presence of an anticipatory autonomic process that could contribute to becoming aware of the current attentional state. These findings stress the need for further research tackling the temporal structure of attentional dynamics during tasks that have a significant real-world impact.
Joel T. Martin; Annalise H. Whittaker; Stephen J. Johnston
In: European Journal of Neuroscience, vol. 44, pp. 1–22, 2022.
Baseline and task-evoked pupil measures are known to reflect the activity of the nervous system's central arousal mechanisms. With the increasing availability, affordability and flexibility of video-based eye tracking hardware, these measures may one day find practical application in real-time biobehavioral monitoring systems to assess performance or fitness for duty in tasks requiring vigilant attention. But real-world vigilance tasks are predominantly visual in their nature and most research in this area has taken place in the auditory domain. Here we explore the relationship between pupil size—both baseline and task-evoked—and behavioral performance measures in two novel vigilance tasks requiring visual target detection: 1) a traditional vigilance task involving prolonged, continuous, and uninterrupted performance (n = 28), and 2) a psychomotor vigilance task (n = 25). In both tasks, behavioral performance and task-evoked pupil responses declined as time spent on task increased, corroborating previous reports in the literature of a vigilance decrement with a corresponding reduction in task-evoked pupil measures. Also in line with previous findings, baseline pupil size did not show a consistent relationship with performance measures. We discuss our findings considering the adaptive gain theory of locus coeruleus (LC) function and question the validity of the assumption that baseline (prestimulus) pupil size and task-evoked (poststimulus) pupil measures correspond to the tonic and phasic firing modes of the LC.
Ana Marcet; Manuel Perea
In: Quarterly Journal of Experimental Psychology, vol. 75, no. 1, pp. 148–155, 2022.
Lexical stress in multisyllabic words is consistent in some languages (e.g., first syllable in Finnish), but it is variable in others (e.g., Spanish, English). To help lexical processing in a transparent language like Spanish, scholars have proposed a set of rules specifying which words require an accent mark indicating lexical stress in writing. However, recent word recognition experiments using the lexical decision task showed that word identification times were not affected by the omission of a word's accent mark in Spanish. To examine this question in a paradigm with greater ecological validity, we tested whether omitting the accent mark in a Spanish word had a deleterious effect during silent sentence reading. A target word was embedded in a sentence with its accent mark or not. Results showed no reading cost of omitting the word's accent mark in first-pass eye fixation durations, but we found a cost in the total reading time spent on the target word (i.e., including re-reading). Thus, the omission of an accent mark delays late, but not early, lexical processing in Spanish. These findings help constrain the locus of accent mark information in models of visual word recognition and reading. Furthermore, these findings offer some clues on how to simplify the Spanish rules of accentuation.
Sixin Liao; Lili Yu; Jan-Louis Kruger; Erik D. Reichle
This study investigated how semantically relevant auditory information might affect the reading of subtitles, and if such effects might be modulated by the concurrent video content. Thirty-four native Chinese speakers with English as their second language watched video with English subtitles in six conditions defined by manipulating the nature of the audio (Chinese/L1 audio vs. English/L2 audio vs. no audio) and the presence versus absence of video content. Global eye-movement analyses showed that participants tended to rely less on subtitles with Chinese or English audio than without audio, and the effects of audio were more pronounced in the presence of video presentation. Lexical processing of subtitles was not modulated by the audio. However, Chinese audio, which presumably obviated the need to read the subtitles, resulted in more superficial post-lexical processing of the subtitles relative to either the English or no audio. In contrast, English audio accentuated post-lexical processing of the subtitles compared with Chinese audio or no audio, indicating that participants might use English audio to support subtitle reading (or vice versa) and thus engaged in deeper processing of the subtitles. These findings suggest that, in multimodal reading situations, eye movements are not only controlled by processing difficulties associated with properties of words (e.g., their frequency and length) but also guided by metacognitive strategies involved in monitoring comprehension and its online modulation by different information sources.
Astar Lev; Yoram Braw; Tomer Elbaum; Michael Wagner; Yuri Rassovsky
In: Journal of Attention Disorders, vol. 26, no. 2, pp. 245–255, 2022.
Objective: The use of continuous performance tests (CPTs) for assessing ADHD related cognitive impairment is ubiquitous. Novel psychophysiological measures may enhance the data that is derived from CPTs and thereby improve clinical decision-making regarding diagnosis and treatment. As part of the current study, we integrated an eye tracker with the MOXO-dCPT and assessed the utility of eye movement measures to differentiate ADHD patients and healthy controls. Method: Adult ADHD patients and gender/age-matched healthy controls performed the MOXO-dCPT while their eye movements were monitored (n = 33 per group). Results: ADHD patients spent significantly more time gazing at irrelevant regions, both on the screen and outside of it, than healthy controls. The eye movement measures showed adequate ability to classify ADHD patients. Moreover, a scale that combined eye movement measures enhanced group prediction, compared to the sole use of conventional MOXO-dCPT indices. Conclusions: Integrating an eye tracker with CPTs is a feasible way of enhancing diagnostic precision and shows initial promise for clarifying the cognitive profile of ADHD patients. Pending replication, these findings point toward a promising path for the evolution of existing CPTs.
Timo L. Kvamme; Mesud Sarmanlu; Christopher Bailey; Morten Overgaard
In: Neuroscience, vol. 482, pp. 1–17, 2022.
Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study we used MEG-neurofeedback to train one group of subjects to increase left/right and another to increase right/left alpha power ratios in the parietal cortex. We tested for changes in audiovisual binding in a SIFI paradigm where flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group, not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash trials) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the relation between parietal alpha power (an index of attention) and its effect on audiovisual binding is dependent on the learned causal structure in the previous stimulus. The present results suggest that low alpha power biases observers towards audiovisual binding when they have learned that audiovisual signals originate from a common origin, consistent with a Bayesian causal inference account of multisensory perception.
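The left/right alpha power ratio targeted by the neurofeedback training can be illustrated with a simple spectral computation. This is only a sketch, not the authors' MEG pipeline: here the ratio is taken over summed FFT power in an assumed 8–12 Hz alpha band for two synthetic channel traces.

```python
import numpy as np

def alpha_power_ratio(left, right, fs, band=(8.0, 12.0)):
    """Left/right alpha-band power ratio for two equal-length channel traces.

    Power is summed over the FFT bins falling inside the alpha band.
    """
    freqs = np.fft.rfftfreq(len(left), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    p_left = np.sum(np.abs(np.fft.rfft(left))[mask] ** 2)
    p_right = np.sum(np.abs(np.fft.rfft(right))[mask] ** 2)
    return p_left / p_right

# Synthetic check: a 10 Hz oscillation twice as strong on the left channel
fs = 250
t = np.arange(0, 2, 1 / fs)
left = 2.0 * np.sin(2 * np.pi * 10 * t)
right = 1.0 * np.sin(2 * np.pi * 10 * t)
print(alpha_power_ratio(left, right, fs) > 1.0)  # True
```

Because power scales with amplitude squared, doubling the left-channel amplitude yields a ratio of 4; neurofeedback of this kind rewards pushing the ratio in the trained direction.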
Koji Kuraoka; Kae Nakamura
In: Neuroscience Research, 2022.
Studies in human subjects have revealed that autonomic responses provide objective and biologically relevant information about cognitive and affective states. Measures of autonomic responses can also be applied to studies of non-human primates, which are neuro-anatomically and physically similar to humans. Facial temperature and pupil size are measured remotely and can be applied to physiological experiments in primates, preferably in a head-fixed condition. However, detailed guidelines for the use of these measures in non-human primates are lacking. Here, we review the neuronal circuits and methodological considerations necessary for measuring and analyzing facial temperature and pupil size in non-human primates. Previous studies have shown that the modulation of these measures primarily reflects sympathetic reactions to cognitive and emotional processes, including alertness, attention, and mental effort, over different time scales. Integrated analyses of autonomic, behavioral, and neurophysiological data in primates are promising methods that reflect multiple dimensions of emotion and could potentially provide tools for understanding the mechanisms underlying neuropsychiatric disorders and vulnerabilities characterized by cognitive and affective disturbances.
Jan-Louis Kruger; Natalia Wisniewska; Sixin Liao
In: Applied Psycholinguistics, vol. 43, no. 1, pp. 211–236, 2022.
High subtitle speed undoubtedly impacts the viewer experience. However, little is known about how fast subtitles might impact the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers' reading behavior using word-based eye-tracking measures with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing or integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in the subtitle and to read subtitles to completion, is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. It was found that comprehension declined as speed increased. Eye movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed also caused fewer words to be reread following both horizontal eye movements (likely resulting in reduced lexical processing) and vertical eye movements (which would likely reduce higher-level comprehension and integration).
Nadezhda Kerimova; Pavel Sivokhin; Diana Kodzokova; Karine Nikogosyan; Vasily Klucharev
In: Urban Forestry and Urban Greening, vol. 68, pp. 127460, 2022.
We used an eye-tracking technique to investigate the effect of green zones and car ownership on the attractiveness of the courtyards of multistorey apartment buildings. Two interest groups—20 people who owned a car and 20 people who did not—observed 36 images of courtyards. Images were digitally modified to manipulate the spatial arrangement of key courtyard elements: green zones, parking lots, and children's playgrounds. The participants were asked to rate the attractiveness of courtyards during hypothetical renting decisions. Overall, we investigated whether visual exploration and appraisal of courtyards differed between people who owned a car and those who did not. The participants in both interest groups gazed longer at perceptually salient playgrounds and parking lots than at greenery. We also observed that participants gazed significantly longer at the greenery in courtyards rated as most attractive than those rated as least attractive. They gazed significantly longer at parking lots in courtyards rated as least attractive than those rated as most attractive. Using regression analysis, we further investigated the relationship between gaze fixations on courtyard elements and the attractiveness ratings of courtyards. The model confirmed a significant positive relationship between the number and duration of fixations on greenery and the attractiveness estimates of courtyards, while the model showed an opposite relationship for the duration of fixations on parking lots. Interestingly, the positive association between fixations on greenery and the attractiveness of courtyards was significantly stronger for participants who owned cars than for those who did not. These findings confirmed that the more people pay attention to green areas, the more positively they evaluate urban areas. The results also indicate that urban greenery may differentially affect the preferences of interest groups.
Ignace T C Hooge; Diederick C Niehorster; Marcus Nyström; Richard Andersson; Roy S Hessels
In: Behavior Research Methods, pp. 1–12, 2022.
Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with duration longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
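The selection rules the abstract recommends (discard saccade candidates below a minimal amplitude, merge the fixation candidates they separated, then keep only fixations above a minimal duration) can be illustrated with a minimal sketch. The event layout and function name below are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of post-classification selection rules: the event format
# (dicts with "start"/"end" in ms and, for saccades, "amplitude" in degrees)
# is assumed for illustration only.

def apply_selection_rules(fixations, saccades,
                          min_saccade_amplitude=1.0,    # degrees (per the abstract)
                          min_fixation_duration=60.0):  # milliseconds (per the abstract)
    # Rule 1: keep only saccades with a sufficiently large amplitude.
    kept_saccades = [s for s in saccades
                     if s["amplitude"] >= min_saccade_amplitude]
    kept_spans = {(s["start"], s["end"]) for s in kept_saccades}

    # Rule 2: merge consecutive fixation candidates that are no longer
    # separated by a retained saccade (the intervening saccade was discarded).
    merged = []
    for fix in sorted(fixations, key=lambda f: f["start"]):
        separated = any(merged and merged[-1]["end"] <= s0 and s1 <= fix["start"]
                        for s0, s1 in kept_spans)
        if merged and not separated:
            merged[-1]["end"] = fix["end"]  # absorb into the previous fixation
        else:
            merged.append(dict(fix))

    # Rule 3: select fixations that meet the minimal duration.
    kept_fixations = [f for f in merged
                      if f["end"] - f["start"] >= min_fixation_duration]
    return kept_fixations, kept_saccades
```

With a 0.5° saccade between two fixation candidates, the saccade is dropped and the two candidates merge into one long fixation, which is exactly why the abstract stresses reporting these parameter values.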
Christoph Helmchen; Björn Machner; Andreas Sprenger; David S. Zee
Monocular patching attenuates vertical nystagmus in Wernicke's encephalopathy via release of activity in subcortical visual pathways
In: Movement Disorders Clinical Practice, vol. 9, no. 1, pp. 107–109, 2022.
Frauke Heins; Markus Lappe
In: Journal of Vision, vol. 22, no. 1, pp. 1–16, 2022.
Saccadic eye movements bring objects of interest onto our fovea. These gaze shifts are essential for visual perception of our environment and the interaction with the objects within it. They precede our actions and are thus modulated by current goals. It is assumed that saccadic adaptation, a recalibration process that restores saccade accuracy in case of error, is mainly based on an implicit comparison of expected and actual post-saccadic position of the target on the retina. However, there is increasing evidence that task demands modulate saccade adaptation and that errors in task performance may be sufficient to induce changes to saccade amplitude. We investigated if human participants are able to flexibly use different information sources within the post-saccadic visual feedback in task-dependent fashion. Using intra-saccadic manipulation of the visual input, participants were either presented with congruent post-saccadic information, indicating the saccade target unambiguously, or incongruent post-saccadic information, creating conflict between two possible target objects. Using different task instructions, we found that participants were able to modify their saccade behavior such that they achieved the goal of the task. They succeeded in decreasing saccade gain or maintaining it, depending on what was necessary for the task, irrespective of whether the post-saccadic feedback was congruent or incongruent. It appears that action intentions prime task-relevant feature dimensions and thereby facilitated the selection of the relevant information within the post-saccadic image. Thus, participants use post-saccadic feedback flexibly, depending on their intentions and pending actions.
Erin Goddard; Thomas A. Carlson; Alexandra Woolgar
In: Journal of Cognitive Neuroscience, vol. 34, no. 2, pp. 290–312, 2022.
Attention can be deployed in different ways: When searching for a taxi in New York City, we can decide where to attend (e.g., to the street) and what to attend to (e.g., yellow cars). Although we use the same word to describe both processes, nonhuman primate data suggest that these produce distinct effects on neural tuning. This has been challenging to assess in humans, but here we used an opportunity afforded by multivariate decoding of MEG data. We found that attending to an object at a particular location and attending to a particular object feature produced effects that interacted multiplicatively. The two types of attention induced distinct patterns of enhancement in occipital cortex, with feature-selective attention producing relatively more enhancement of small feature differences and spatial attention producing relatively larger effects for larger feature differences. An information flow analysis further showed that stimulus representations in occipital cortex were Granger-caused by coding in frontal cortices earlier in time and that the timing of this feedback matched the onset of attention effects. The data suggest that spatial and feature-selective attention rely on distinct neural mechanisms that arise from frontal-occipital information exchange, interacting multiplicatively to selectively enhance task-relevant information.
Marco Esposito; Clarissa Ferrari; Claudia Fracassi; Carlo Miniussi; Debora Brignani
In: European Journal of Neuroscience, pp. 1–45, 2022.
Over the past two decades, the postulated modulatory effects of transcranial direct current stimulation (tDCS) on the human brain have been extensively investigated. However, recent concerns on reliability of tDCS effects have been raised, principally due to reduced replicability and to interindividual variability in response to tDCS. These inconsistencies are likely due to the interplay between the level of induced cortical excitability and unaccounted structural and state-dependent functional factors. On these grounds, we aimed at verifying whether the behavioural effects induced by a common tDCS montage (F3-rSOA) were influenced by the participants' arousal levels, as part of a broader mechanism of state-dependency. Pupillary dynamics were recorded during an auditory oddball task while applying either a sham or real tDCS. The tDCS effects were evaluated as a function of subjective and physiological arousal predictors (STAI-Y State scores and pre-stimulus pupil size, respectively). We showed that prefrontal tDCS hindered task learning effects on response speed such that performance improvement occurred during sham, but not real stimulation. Moreover, both subjective and physiological arousal predictors significantly explained performance during real tDCS, with interaction effects showing performance improvement only with moderate arousal levels; likewise, pupil response was affected by real tDCS according to the ongoing levels of arousal, with reduced dilation during higher arousal trials. These findings highlight the potential role of arousal in shaping the neuromodulatory outcome, thus emphasizing a more careful interpretation of null or negative results while also encouraging more individually tailored tDCS applications based on arousal levels, especially in clinical populations.
Mina Elhamiasl; Gabriella Silva; Andrea M. Cataldo; Hillary Hadley; Erik Arnold; James W. Tanaka; Tim Curran; Lisa S. Scott
In: Vision Research, vol. 191, pp. 107971, 2022.
Previous work suggests that subordinate-level object training improves exemplar-level perceptual discrimination over basic-level training. However, the extent to which visual fixation strategies and the use of visual features, such as color and spatial frequency (SF), change with improved discrimination was not previously known. In the current study, adults (n = 24) completed 6 days of training with 2 families of computer-generated novel objects. Participants were trained to identify one object family at the subordinate level and the other object family at the basic level. Before and after training, discrimination accuracy and visual fixations were measured for trained and untrained exemplars. To examine the impact of training on visual feature use, image color and SF were manipulated and tested before and after training. Discrimination accuracy increased for the object family trained at the subordinate-level, but not for the family trained at the basic level. This increase was seen for all image manipulations (color, SF) and generalized to untrained exemplars within the trained family. Both subordinate- and basic-level training increased average fixation duration and saccadic amplitude and decreased the number of total fixations. Collectively, these results suggest a dissociation between discrimination accuracy, indicative of recognition, and the associated pattern of changes present for visual fixations.
Lorenzo Diana; Giulia Scotti; Edoardo N Aiello; Patrick Pilastro; Aleksandra K Eberhard-Moscicka; René M Müri; Nadia Bolognini
In: Brain Sciences, vol. 12, no. 71, pp. 1–20, 2022.
Transcranial Direct Current Stimulation (tDCS) has been employed to modulate visuospatial attentional asymmetries; however, further investigation is needed to characterize tDCS-associated variability in more ecological settings. In the present research, we tested the effects of offline, anodal conventional tDCS (Experiment 1) and HD-tDCS (Experiment 2) delivered over the posterior parietal cortex (PPC) and Frontal Eye Field (FEF) of the right hemisphere in healthy participants. Attentional asymmetries were measured by means of an eye tracking-based, ecological paradigm, that is, a Free Visual Exploration task of naturalistic pictures. Data were analyzed from a spatiotemporal perspective. In Experiment 1, a pre-post linear mixed model (LMM) indicated a leftward attentional shift after PPC tDCS; this effect was not confirmed when the individual baseline performance was considered. In Experiment 2, FEF HD-tDCS was shown to induce a significant leftward shift of gaze position, which emerged after 6 s of picture exploration and lasted for 200 ms. The present results do not allow us to draw firm conclusions about the efficacy of offline conventional tDCS and HD-tDCS in modulating overt visuospatial attention in an ecological setting. Nonetheless, our findings highlight a complex relationship among stimulated area, focality of stimulation, spatiotemporal aspects of deployment of attention, and the role of individual baseline performance in shaping the effects of tDCS.
Lei Cui; Chuanli Zang; Xiaochen Xu; Wenxin Zhang; Yuhan Su; Simon P. Liversedge
In: Quarterly Journal of Experimental Psychology, vol. 75, no. 1, pp. 18–29, 2022.
We report a boundary paradigm eye movement experiment to investigate whether the predictability of the second character of a two-character compound word affects how it is processed prior to direct fixation during reading. The boundary was positioned immediately prior to the second character of the target word, which itself was either predictable or unpredictable. The preview was either a pseudocharacter (nonsense preview) or an identity preview. We obtained clear preview effects in all conditions, but more importantly, skipping probability for the second character of the target word and the whole target word from pretarget was greater when it was predictable than when it was not predictable from the preceding context. Interactive effects for later measures on the whole target word (gaze duration and go-past time) were also obtained. These results demonstrate that predictability information from preceding sentential context and information regarding the likely identity of upcoming characters are used concurrently to constrain the nature of lexical processing during natural Chinese reading.
Ruth E. Corps; Charlotte Brooke; Martin J. Pickering
In: Journal of Memory and Language, vol. 122, pp. 104298, 2022.
Comprehenders often predict what they are going to hear. But do they make the best predictions possible? We addressed this question in three visual-world eye-tracking experiments by asking when comprehenders consider perspective. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. In all three experiments, participants rapidly predicted semantic associates of the verb. But participants also predicted consistently, that is, consistent with their beliefs about what the speaker would ultimately say. They predicted consistently from the speaker's perspective in Experiment 1, their own perspective in Experiment 2, and the character's perspective in Experiment 3. This consistent effect occurred later than the associative effect. We conclude that comprehenders consider perspective when predicting, but not from the earliest moments of prediction, consistent with a two-stage account.
Alasdair D. F. Clarke; Jessica L. Irons; Warren James; Andrew B. Leber; Amelia R. Hunt
In: Quarterly Journal of Experimental Psychology, vol. 75, no. 2, pp. 289–296, 2022.
A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here, we ask whether an individual's strategy and performance in one search task is correlated with how they perform in the other two. We tested 64 observers and found that even though the test–retest reliability of the tasks was high, an observer's performance and strategy in one task was not predictive of their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search, we therefore need to account not only for differences between individuals but also how individuals interact with the search task and context.
Alexis Cheviet; Jana Masselink; Eric Koun; Roméo Salemme; Markus Lappe; Caroline Froment-Tilikete; Denis Pélisson
In: Cerebral Cortex, pp. 1–21, 2022.
Saccadic adaptation (SA) is a cerebellar-dependent learning of motor commands (MC), which aims at preserving saccade accuracy. Since SA alters visual localization during fixation and even more so across saccades, it could also involve changes of target and/or saccade visuospatial representations, the latter (CDv) resulting from a motor-to-visual transformation (forward dynamics model) of the corollary discharge of the MC. In the present study, we investigated if, in addition to its established role in adaptive adjustment of MC, the cerebellum could contribute to the adaptation-associated perceptual changes. Transfer of backward and forward adaptation to spatial perceptual performance (during ocular fixation and trans-saccadically) was assessed in eight cerebellar patients and eight healthy volunteers. In healthy participants, both types of SA altered MC as well as internal representations of the saccade target and of the saccadic eye displacement. In patients, adaptation-related adjustments of MC and adaptation transfer to localization were strongly reduced relative to healthy participants, unraveling abnormal adaptation-related changes of target and CDv. Importantly, the estimated changes of CDv were totally abolished following forward session but mainly preserved in backward session, suggesting that an internal model ensuring trans-saccadic localization could be located in the adaptation-related cerebellar networks or in downstream networks, respectively.
Yi-Ting Chen; Ming-Chou Ho
In: Learning and Individual Differences, vol. 93, pp. 102106, 2022.
Background: Extant eye-tracking studies suggest that foreign-language learners tend to read the native language captions while watching foreign-language videos. However, it remains unclear how the captions affect the learners' eye movements when watching Math videos. Purpose: While watching teaching videos, we seek to determine how the lesson type (English or Math), cognitive load (high or low), and caption type (meaningful, no captions, or meaningless) affect the dwell times and fixation counts on the captions. Methods: One hundred and eighty undergraduate students were randomly and equally assigned to six (2 lesson type × 3 caption type) conditions. Each participant watched two short teaching videos (one low load and one high load). After watching each video, a comprehension test and three self-reported items (fatigue, effort, and difficulty) regarding this particular video were given. Results: We reported more dwell times and fixation counts on the meaningful captions, compared to the meaningless captions and no captions. In the high-load condition, viewers watching an English lesson relied more on the meaningful captions than they did when watching a Math lesson. In the low-load condition, the dwell times and fixation counts on the captions were similar between the English and Math lessons. Finally, the captions did not affect the comprehension test performances after ruling out individual differences in the prior performances of English and Math. Conclusions: English language learning may rely more on the captions than is the case in learning Math. This study provides the direction for designing multimedia teaching materials in the current trend of multimedia teaching.
Frederick H. F. Chan; Hin Suen; Antoni B. Chan; Janet H. Hsiao; Tom J. Barry
In: European Journal of Pain, vol. 26, no. 1, pp. 181–196, 2022.
Background: Studies examining the effect of biased cognitions on later pain outcomes have primarily focused on attentional biases, leaving the role of interpretation biases largely unexplored. Also, few studies have examined pain-related cognitive biases in elderly persons. The current study aims to fill these research gaps. Methods: Younger and older adults with and without chronic pain (N = 126) completed an interpretation bias task and a free-viewing task of injury and neutral scenes at baseline. Participants' pain intensity and disability were assessed at baseline and at a 6-month follow-up. A machine-learning data-driven approach to analysing eye movement data was adopted. Results: Eye movement analyses revealed two common attentional pattern subgroups for scene-viewing: an “explorative” group and a “focused” group. At baseline, participants with chronic pain endorsed more injury-/illness-related interpretations compared to pain-free controls, but they did not differ in eye movements on scene images. Older adults interpreted illness-related scenarios more negatively compared to younger adults, but there was also no difference in eye movements between age groups. Moreover, negative interpretation biases were associated with baseline but not follow-up pain disability, whereas a focused gaze tendency for injury scenes was associated with follow-up but not baseline pain disability. Additionally, there was an indirect effect of interpretation biases on pain disability 6 months later through attentional bias for pain-related images. Conclusions: The present study provided evidence for pain status and age group differences in injury-/illness-related interpretation biases. Results also revealed distinct roles of interpretation and attentional biases in pain chronicity. Significance: Adults with chronic pain endorsed more injury-/illness-related interpretations than pain-free controls. Older adults endorsed more illness interpretations than younger adults. 
A more negative interpretation bias indirectly predicted pain disability 6 months later through hypervigilance towards pain.
Olivia G. Calancie; Donald C. Brien; Jeff Huang; Brian C. Coe; Linda Booij; Sarosh Khalid-Khan; Douglas P. Munoz
In: Journal of Neuroscience, vol. 42, no. 1, pp. 69–80, 2022.
When presented with a periodic stimulus, humans spontaneously adjust their movements from reacting to predicting the timing of its arrival, but little is known about how this sensorimotor adaptation changes across development. To investigate this, we analyzed saccade behavior in 114 healthy humans (ages 6–24 years) performing the visual metronome task, who were instructed to move their eyes in time with a visual target that alternated between two known locations at a fixed rate, and we compared their behavior to performance in a random task, where target onsets were randomized across five interstimulus intervals (ISIs) and thus the timing of appearance was unknown. Saccades initiated before registration of the visual target, thus in anticipation of its appearance, were labeled predictive [saccade reaction time (SRT) < 90 ms] and saccades that were made in reaction to its appearance were labeled reactive (SRT > 90 ms). Eye-tracking behavior including saccadic metrics (e.g., peak velocity, amplitude), pupil size following saccade to target, and blink behavior all varied as a function of predicting or reacting to periodic targets. Compared with reactive saccades, predictive saccades had a lower peak velocity, a hypometric amplitude, smaller pupil size, and a reduced probability of blink occurrence before target appearance. The percentage of predictive and reactive saccades changed inversely from ages 8–16, at which they reached adult levels of behavior. Differences in predictive saccades for fast and slow target rates are interpreted by differential maturation of cerebellar-thalamic-striatal pathways.
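The predictive/reactive split described above is a simple threshold rule on saccade reaction time, with the 90 ms cutoff taken from the abstract. A minimal sketch (function names and data layout are assumptions for illustration, not the authors' code):

```python
# Hedged sketch: label saccades as predictive (initiated in anticipation of
# the target, SRT below the cutoff) or reactive (made in response to it).
PREDICTIVE_SRT_CUTOFF_MS = 90  # threshold from the abstract

def label_saccades(srts_ms, cutoff=PREDICTIVE_SRT_CUTOFF_MS):
    """Return a 'predictive'/'reactive' label per saccade reaction time (ms).

    Negative SRTs (saccade launched before target onset) are by definition
    predictive, since they fall below the cutoff.
    """
    return ["predictive" if srt < cutoff else "reactive" for srt in srts_ms]

def percent_predictive(srts_ms, cutoff=PREDICTIVE_SRT_CUTOFF_MS):
    """Percentage of saccades classified as predictive, e.g. per age group."""
    labels = label_saccades(srts_ms, cutoff)
    return 100.0 * labels.count("predictive") / len(labels)
```

For example, SRTs of [-50, 40, 90, 150] ms would yield two predictive and two reactive saccades; computing `percent_predictive` per participant is one way such developmental trends could be summarized.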
Philippa Broadbent; Daniel E. Schoth; Christina Liossi
In: Pain, vol. 163, no. 2, pp. 319–333, 2022.
Attentional bias to pain-related information may contribute to chronic pain maintenance. It is theoretically predicted that attentional bias to pain-related language derives from attentional bias to painful sensations; however, the complex interconnection between these types of attentional bias has not yet been tested. This study aimed to investigate the association between attentional bias to pain words and attentional bias to the location of pain, as well as the moderating role of pain-related interpretation bias in this association. Fifty-four healthy individuals performed a visual probe task with pain-related and neutral words, during which eye movements were tracked. In a subset of trials, participants were presented with a cold pain stimulus on one hand. Pain-related interpretation and memory biases were also assessed. Attentional bias to pain words and attentional bias to the pain location were not significantly correlated, although the association was significantly moderated by interpretation bias. A combination of pain-related interpretation bias and attentional bias to painful sensations was associated with avoidance of pain words. In addition, first fixation durations on pain words were longer when the pain word and cold pain stimulus were presented on the same side of the body, as compared to on opposite sides. This indicates that congruency between the locations of pain and pain-related information may strengthen attentional bias. Overall, these findings indicate that cognitive biases to pain-related information interact with cognitive biases to somatosensory information. The implications of these findings for attentional bias modification interventions are discussed.
Rhona M. Amos; Kilian G. Seeber; Martin J. Pickering
In: Cognition, vol. 220, pp. 104987, 2022.
We report the results of an eye-tracking study which used the Visual World Paradigm (VWP) to investigate the time-course of prediction during a simultaneous interpreting task. Twenty-four L1 French professional conference interpreters and twenty-four L1 French professional translators untrained in simultaneous interpretation listened to sentences in English and interpreted them simultaneously into French while looking at a visual scene. Sentences contained a highly predictable word (e.g., The dentist asked the man to open his mouth a little wider). The visual scene comprised four objects, one of which depicted either the target object (mouth; bouche), an English phonological competitor (mouse; souris), a French phonological competitor (cork; bouchon), or an unrelated word (bone; os). We considered 1) whether interpreters and translators predict upcoming nouns during a simultaneous interpreting task, 2) whether interpreters and translators predict the form of these nouns in English and in French and 3) whether interpreters and translators manifest different predictive behaviour. Our results suggest that both interpreters and translators predict upcoming nouns, but neither group predicts the word-form of these nouns. In addition, we did not find significant differences between patterns of prediction in interpreters and translators. Thus, evidence from the visual-world paradigm shows that prediction takes place in simultaneous interpreting, regardless of training and experience. However, we were unable to establish whether word-form was predicted.
Carlos Alós-Ferrer; Alexander Ritschel
Attention and salience in preference reversals
In: Experimental Economics, pp. 1–28, 2022.
We investigate the implications of Salience Theory for the classical preference reversal phenomenon, where monetary valuations contradict risky choices. It has been stated that one factor behind reversals is that monetary valuations of lotteries are inflated when elicited in isolation, and that they should be reduced if an alternative lottery is present and draws attention. We conducted two preregistered experiments, an online choice study (N = 256) and an eye-tracking study (N = 64), in which we investigated salience and attention in preference reversals, manipulating salience through the presence or absence of an alternative lottery during evaluations. We find that the alternative lottery draws attention, and that fixations on that lottery influence the evaluation of the target lottery as predicted by Salience Theory. The effect, however, is of a modest magnitude and fails to translate into an effect on preference reversal rates in either experiment. We also use transitions (eye movements) across outcomes of different lotteries to study attention on the states of the world underlying Salience Theory, but we find no evidence that larger salience results in more transitions.
Emily J. Allen; Ghislain St-Yves; Yihan Wu; Jesse L. Breedlove; Jacob S. Prince; Logan T. Dowdle; Matthias Nau; Brad Caron; Franco Pestilli; Ian Charest; J. Benjamin Hutchinson; Thomas Naselaris; Kendrick Kay
In: Nature Neuroscience, vol. 25, no. 1, pp. 116–126, 2022.
Extensive sampling of neural activity during rich cognitive phenomena is critical for robust understanding of brain function. Here we present the Natural Scenes Dataset (NSD), in which high-resolution functional magnetic resonance imaging responses to tens of thousands of richly annotated natural scenes were measured while participants performed a continuous recognition task. To optimize data quality, we developed and applied novel estimation and denoising techniques. Simple visual inspections of the NSD data reveal clear representational transformations along the ventral visual pathway. Further exemplifying the inferential power of the dataset, we used NSD to build and train deep neural network models that predict brain activity more accurately than state-of-the-art models from computer vision. NSD also includes substantial resting-state and diffusion data, enabling network neuroscience perspectives to constrain and enhance models of perception and memory. Given its unprecedented scale, quality and breadth, NSD opens new avenues of inquiry in cognitive neuroscience and artificial intelligence.
Delia A. Gheorghe; Muriel T. N. Panouillères; Nicholas D. Walsh
In: Cerebellum and Ataxias, vol. 8, no. 1, pp. 1–11, 2021.
Background: Transcranial Direct Current Stimulation (tDCS) over the prefrontal cortex has been shown to modulate subjective, neuronal and neuroendocrine responses, particularly in the context of stress processing. However, it is currently unknown whether tDCS stimulation over other brain regions, such as the cerebellum, can similarly affect the stress response. Despite increasing evidence linking the cerebellum to stress-related processing, no studies have investigated the hormonal and behavioural effects of cerebellar tDCS. Methods: This study tested the hypothesis of a cerebellar tDCS effect on mood, behaviour and cortisol. To do this we employed a single-blind, sham-controlled design to measure performance on a cerebellar-dependent saccadic adaptation task, together with changes in cortisol output and mood, during online anodal and cathodal stimulation. Forty-five participants were included in the analysis. Stimulation groups were matched on demographic variables, potential confounding factors known to affect cortisol levels, mood and a number of personality characteristics. Results: Results showed that tDCS polarity did not affect cortisol levels or subjective mood, but did affect behaviour. Participants receiving anodal stimulation showed an 8.4% increase in saccadic adaptation, which was significantly larger compared to the cathodal group (1.6%). Conclusion: The stimulation effect on saccadic adaptation contributes to the current body of literature examining the mechanisms of cerebellar stimulation on associated function. We conclude that further studies are needed to understand whether and how cerebellar tDCS may modulate stress reactivity under challenge conditions.
Sarah Chabal; Sayuri Hayakawa; Viorica Marian
In: Cognitive Research: Principles and Implications, vol. 6, no. 2, pp. 1–10, 2021.
Over the course of our lifetimes, we accumulate extensive experience associating the things that we see with the words we have learned to describe them. As a result, adults engaged in a visual search task will often look at items with labels that share phonological features with the target object, demonstrating that language can become activated even in non-linguistic contexts. This highly interactive cognitive system is the culmination of our linguistic and visual experiences—and yet, our understanding of how the relationship between language and vision develops remains limited. The present study explores the developmental trajectory of language-mediated visual search by examining whether children can be distracted by linguistic competitors during a non-linguistic visual search task. Though less robust compared to what has been previously observed with adults, we find evidence of phonological competition in children as young as 8 years old. Furthermore, the extent of language activation is predicted by individual differences in linguistic, visual, and domain-general cognitive abilities, with the greatest phonological competition observed among children with strong language abilities combined with weaker visual memory and inhibitory control. We propose that linguistic expertise is fundamental to the development of language-mediated visual search, but that the rate and degree of automatic language activation depends on interactions among a broader network of cognitive abilities.
Jasmine R. Aziz; Samantha R. Good; Raymond M. Klein; Gail A. Eskes
In: Cortex, vol. 136, pp. 28–40, 2021.
Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18–35 yrs) and older (n = 48; aged 55–78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan.
Aaron Veldre; Roslyn Wong; Sally Andrews
In: Attention, Perception, and Psychophysics, vol. 83, no. 1, pp. 18–26, 2021.
The gaze-contingent moving-window paradigm was used to assess the size and symmetry of the perceptual span in older readers. The eye movements of 49 cognitively intact older adults (60–88 years of age) were recorded as they read sentences varying in difficulty, and the availability of letter information to the right and left of fixation was manipulated. To reconcile discrepancies in previous estimates of the perceptual span in older readers, individual differences in written language proficiency were assessed with tests of vocabulary, reading comprehension, reading speed, spelling ability, and print exposure. The results revealed that higher proficiency older adults extracted information up to 15 letter spaces to the right of fixation, while lower proficiency readers showed no additional benefit beyond 9 letters to the right. However, all readers showed improvements to reading with the availability of up to 9 letters to the left—confirming previous evidence of reduced perceptual span asymmetry in older readers. The findings raise questions about whether the source of age-related changes in parafoveal processing lies in the adoption of a risky reading strategy involving an increased propensity to both guess upcoming words and make corrective regressions.
Mikael Rubin; Michael J. Telch
In: Journal of Traumatic Stress, vol. 34, no. 1, pp. 182–189, 2021.
Posttraumatic stress disorder (PTSD) is related to dysfunctional emotional processing, thus motivating the search for physiological indices that can elucidate this process. Toward this aim, we compared pupillary response patterns in response to angry and fearful auditory stimuli among 99 adults, some with PTSD (n = 14), some trauma-exposed without PTSD (TE; n = 53), and some with no history of trauma exposure (CON; n = 32). We hypothesized that individuals with PTSD would show more pupillary response to angry and fearful auditory stimuli compared to those in the TE and CON groups. Among participants who had experienced a traumatic event, we explored the association between PTSD symptoms and pupillary response; contrary to our prediction, individuals with PTSD displayed the least pupillary response to fearful auditory stimuli compared with those in the TE
From cooking a meal to finding a route to a destination, many real-life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with 10^7 latent states), participants were able to plan efficiently. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying human ability to reason over decision hierarchies.
Inbal Ziv; Yoram S. Bonneh
In: Journal of Vision, vol. 21, no. 2, pp. 1–20, 2021.
Our eyes are never still, but tend to "freeze" in response to stimulus onset. This effect is termed "oculomotor inhibition" (OMI); its magnitude and time course depend on the stimulus parameters, attention, and expectation. We previously showed that the time course and duration of microsaccade and spontaneous eye-blink inhibition provide an involuntary measure of low-level visual properties such as contrast sensitivity during fixation. We investigated whether this stimulus-dependent inhibition also occurs during smooth pursuit, for both the catch-up saccades and the pursuit itself. Observers followed a target with continuous back-and-forth horizontal motion while a Gabor patch was briefly flashed centrally with varied spatial frequency and contrast. Catch-up saccades of the size of microsaccades had a similar pattern of inhibition as microsaccades during fixation, with stronger inhibition onset and faster inhibition release for more salient stimuli. Moreover, a similar stimulus dependency of inhibition was shown for pursuit latencies and peak velocity. Additionally, microsaccade latencies at inhibition release, peak pursuit velocities, and latencies at minimum pursuit velocity were correlated with contrast sensitivity. We demonstrated the generality of OMI to smooth pursuit for both microsaccades and the pursuit itself and its close relation to the low-level processes that define saliency, such as contrast sensitivity.
Kristin Marie Zimmermann; Kirsten Daniela Schmidt; Franziska Gronow; Jens Sommer; Frank Leweke; Andreas Jansen
In: NeuroImage, vol. 238, pp. 1–14, 2021.
Studies on social cognition often use complex visual stimuli to assess neural processes attributed to abilities like “mentalizing” or “Theory of Mind” (ToM). During the processing of these stimuli, eye gaze, however, shapes neural signal patterns. Individual differences in neural operations on social cognition may therefore be obscured if individuals' gaze behavior differs systematically. These obstacles can be overcome by the combined analysis of neural signal and natural viewing behavior. Here, we combined functional magnetic resonance imaging (fMRI) with eye-tracking to examine effects of unconstrained gaze on neural ToM processes in healthy individuals with differing levels of emotional awareness, i.e. alexithymia. First, as previously described for emotional tasks, people with higher alexithymia levels look less at eyes in both ToM and task-free viewing contexts. Further, we find that neural ToM processes are not affected by individual differences in alexithymia per se. Instead, depending on alexithymia levels, gaze on critical stimulus aspects reversely shapes the signal in medial prefrontal cortex (MPFC) and anterior temporoparietal junction (TPJ) as distinct nodes of the ToM system. These results emphasize that natural selective attention affects fMRI patterns well beyond the visual system. Our study implies that, whenever using a task with multiple degrees of freedom in scan paths, ignoring the latter might obscure important conclusions.
Yijing Zhuang; Li Gu; Jingchang Chen; Zixuan Xu; Lily Y. L. Chan; Lei Feng; Qingqing Ye; Shenglan Zhang; Jin Yuan; Jinrong Li
In: Frontiers in Neuroscience, vol. 15, pp. 710578, 2021.
Contrast sensitivity (CS) is important when assessing functional vision. However, current techniques for assessing CS are not suitable for young children or non-verbal individuals because they require reliable, subjective perceptual reports. This study explored the feasibility of applying eye tracking technology to quantify CS as a first step toward developing a testing paradigm that will not rely on observers' behavioral or language abilities. Using a within-subject design, 27 healthy young adults completed CS measures for three spatial frequencies with best-corrected vision and lens-induced optical blur. Monocular CS was estimated using a five-alternative, forced-choice grating detection task. Thresholds were measured using eye movement responses and conventional key-press responses. CS measured using eye movements compared well with results obtained using key-press responses (Pearson's r(best-corrected) = 0.966, P < 0.001). Good test–retest variability was evident for the eye-movement-based measures (Pearson's r = 0.916, P < 0.001) with a coefficient of repeatability of 0.377 log CS across different days. This study provides a proof of concept that eye tracking can be used to automatically record eye gaze positions and accurately quantify human spatial vision. Future work will update this paradigm by incorporating the preferential looking technique into the eye tracking methods, optimizing the CS sampling algorithm and adapting the methodology to broaden its use to infants and non-verbal individuals.
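As a rough illustration of the test-retest statistics reported in this abstract, the sketch below computes a Pearson correlation and a Bland-Altman-style coefficient of repeatability, taken here as 1.96 times the standard deviation of the test-retest differences (an assumption about the exact formula the authors used). The log CS thresholds are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coefficient_of_repeatability(test, retest):
    """Bland-Altman coefficient of repeatability: 1.96 * sample SD
    (n - 1 denominator) of the test-retest differences."""
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return 1.96 * sd

# Hypothetical log CS thresholds from two sessions
session1 = [1.65, 1.40, 1.85, 1.20, 1.55]
session2 = [1.60, 1.45, 1.80, 1.25, 1.50]
print(pearson_r(session1, session2))
print(coefficient_of_repeatability(session1, session2))
```

A smaller coefficient of repeatability means the two sessions agree more closely in absolute log CS units, which is complementary to the correlation (a measure can correlate highly across sessions yet still drift).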
Ran Zhuang; Yanyan Tu; Xiangzhen Wang; Yanju Ren; Richard A. Abrams
In: Experimental Brain Research, vol. 239, no. 11, pp. 3381–3395, 2021.
It is known that movements of visual attention are influenced by features in a scene, such as colors, that are associated with value or with loss. The present study examined the detailed nature of these attentional effects by employing the gap paradigm—a technique that has been used to separately reveal changes in attentional capture and shifting, and changes in attentional disengagement. In four experiments, participants either looked toward or away from stimuli with colors that had been associated either with gains or with losses. We found that participants were faster to look to colors associated with gains and slower to look away from them, revealing effects of gains on both attentional capture and attentional disengagement. On the other hand, participants were both slower to look to features associated with loss, and faster to look away from such features. The pattern of results suggested, however, that the latter finding was not due to more rapid disengagement from loss-associated colors, but instead to more rapid shifting of attention away from such colors. Taken together, the results reveal a complex pattern of effects of gains and losses on the disengagement, capture, and shifting of visual attention, revealing a remarkable flexibility of the attention system.
Qian Zhuang; Xiaoxiao Zheng; Benjamin Becker; Wei Lei; Xiaolei Xu; Keith M. Kendrick
In: Psychoneuroendocrinology, vol. 133, pp. 105412, 2021.
The respective roles of the neuropeptides arginine vasopressin (AVP) and oxytocin (OXT) in modulating social cognition and for therapeutic intervention in autism spectrum disorder have not been fully established. In particular, while numerous studies have demonstrated effects of oxytocin in promoting social attention the role of AVP has not been examined. The present study employed a randomized, double-blind, placebo (PLC)-controlled between-subject design to explore the social- and emotion-specific effects of AVP on both bottom-up and top-down attention processing with a validated emotional anti-saccade eye-tracking paradigm in 80 healthy male subjects (PLC = 40
Yikang Zhu; Lihua Xu; Wenzheng Wang; Qian Guo; Shan Chen; Caidi Zhang; Tianhong Zhang; Xiaochen Hu; Paul Enck; Chunbo Li; Jianhua Sheng; Jijun Wang
In: Asian Journal of Psychiatry, vol. 66, pp. 1–6, 2021.
Interpersonal communication is a specific scenario in which patients with psychiatric symptoms may manifest different behavioral patterns due to psychopathology. This was a pilot study using eye-tracking technology to investigate attentional bias during social information processing in schizophrenia. We enrolled 39 patients with schizophrenia from Shanghai Mental Health Center and 42 age-, gender- and education-matched healthy controls. The experiment was a free-viewing task, in which pictures with three degrees of interpersonal communication were shown. We used two measures: 1) initial fixation duration, 2) total gaze duration. The Positive and Negative Syndrome Scale (PANSS) was used to determine symptom severity. The ratio of first fixation duration for pictures of communicating vs. non-communicating persons was significantly lower in patients than in controls (Mann-Whitney U = 512
Shengnan Zhu; Yang Zhang; Junli Dong; Lihong Chen; Wenbo Luo
In: Journal of Vision, vol. 21, no. 4, pp. 1–9, 2021.
The role of different spatial frequency bands in threat detection has been explored extensively. However, most studies use manual responses and the results are mixed. Here, we aimed to investigate the contribution of spatial frequency information to threat detection by using three response types, including manual responses, eye movements, and reaching movements, together with a priming paradigm. The results showed that both saccade and reaching responses were significantly faster to threatening stimuli than to nonthreatening stimuli when primed by low-spatial-frequency gratings rather than by high-spatial-frequency gratings. However, the manual response times to threatening stimuli were comparable to nonthreatening stimuli, irrespective of the spatial frequency content of the primes. The findings provide clear evidence that low-spatial-frequency information can facilitate threat detection in a response-specific manner, possibly through the subcortical magnocellular pathway dedicated to processing threat-related signals, which is automatically prioritized in the oculomotor system and biases behavior.
Ruomeng Zhu; Mateo Obregón; Hamutal Kreiner; Richard Shillcock
In: Attention, Perception, and Psychophysics, vol. 83, no. 7, pp. 3035–3045, 2021.
We investigated small temporal nonalignments between the two eyes' fixations in the reading of English and Chinese. We define nine different patterns of asynchrony and report their spatial distribution across the screen of text. We interpret them in terms of their implications for ocular prevalence—prioritizing the input from one eye over the input from the other eye in higher perception/cognition, even when binocular fusion has occurred. The data are strikingly similar across the two very different orthographies. Asynchronies, in which one eye begins the fixation earlier and/or ends it later, occur most frequently in the hemifield corresponding to that eye. We propose that such small asynchronies cue higher processing to prioritize the input from that eye, during and after binocular fusion.
Mengyan Zhu; Xiangling Zhuang; Guojie Ma
In: Reading and Writing, vol. 34, no. 3, pp. 773–790, 2021.
In Chinese reading, the possibility and mechanism of semantic parafoveal processing has been debated for a long time. To advance the topic, “semantic preview benefit” in Chinese reading was reexamined, with a specific focus on how it is affected by the semantic relatedness between preview and target words at the two-character word level. Eighty critical two-character words were selected as target words. Reading tasks with gaze-contingent boundary paradigms were used to study whether different semantic-relatedness preview conditions influenced parafoveal processing. The data showed that synonyms (the most closely related preview) produced significant preview benefit compared with the semantic-related (non-synonyms) condition, even when plausibility was controlled. This result indicates that the larger extent of semantic preview benefit is mainly caused by the larger semantic relatedness between preview and target words. Moreover, plausibility is not the only cause of semantic preview benefit in Chinese reading. These findings improve the current understanding of the mechanism of parafoveal processing in Chinese reading and the implications on modeling eye movement control are discussed.
Ying Joey Zhou; Luca Iemi; Jan-Mathijs Schoffelen; Floris P. Lange; Saskia Haegens
In: Journal of Neuroscience, vol. 41, no. 46, pp. 1–43, 2021.
Alpha activity (8–14 Hz) is the dominant rhythm in the awake brain and is thought to play an important role in setting the internal state of the brain. Previous work has associated states of decreased alpha power with enhanced neural excitability. However, evidence is mixed on whether and how such excitability enhancement modulates sensory signals of interest versus noise differently, and what, if any, are the consequences for subsequent perception. Here, human subjects (male and female) performed a visual detection task in which we manipulated their decision criteria in a blockwise manner. Although our manipulation led to substantial criterion shifts, these shifts were not reflected in prestimulus alpha band changes. Rather, lower prestimulus alpha power in occipital-parietal areas improved perceptual sensitivity and enhanced information content decodable from neural activity patterns. Additionally, oscillatory alpha phase immediately before stimulus presentation modulated accuracy. Together, our results suggest that alpha band dynamics modulate sensory signals of interest more strongly than noise.
Yang Zhou; Matthew C. Rosen; Sruthi K. Swaminathan; Nicolas Y. Masse; Ou Zhu; David J. Freedman
In: eLife, vol. 10, pp. 1–30, 2021.
Comparing sequential stimuli is crucial for guiding complex behaviors. To understand mechanisms underlying sequential decisions, we compared neuronal responses in the prefrontal cortex (PFC), the lateral intraparietal (LIP), and medial intraparietal (MIP) areas in monkeys trained to decide whether sequentially presented stimuli were from matching (M) or nonmatching (NM) categories. We found that PFC leads M/NM decisions, whereas LIP and MIP appear more involved in stimulus evaluation and motor planning, respectively. Compared to LIP, PFC showed greater nonlinear integration of currently visible and remembered stimuli, which correlated with the monkeys' M/NM decisions. Furthermore, multi-module recurrent networks trained on the same task exhibited key features of PFC and LIP encoding, including nonlinear integration in the PFC-like module, which was causally involved in the networks' decisions. Network analysis found that nonlinear units have stronger and more widespread connections with input, output, and within-area units, indicating putative circuit-level mechanisms for sequential decisions.
Yan Bang Zhou; Qiang Li; Hong Zhi Liu
Visual attention and time preference reversals Journal Article
In: Judgment and Decision Making, vol. 16, no. 4, pp. 1010–1038, 2021.
Time preference reversal refers to systematic inconsistencies between preferences and bids for intertemporal options. From the two eye-tracking studies (N1 = 60
Xiaomei Zhou; Shruti Vyas; Jinbiao Ning; Margaret C. Moulson
Naturalistic face learning in infants and adults Journal Article
In: Psychological Science, pp. 1–17, 2021.
Everyday face recognition presents a difficult challenge because faces vary naturally in appearance as a result of changes in lighting, expression, viewing angle, and hairstyle. We know little about how humans develop the ability to learn faces despite natural facial variability. In the current study, we provide the first examination of attentional mechanisms underlying adults' and infants' learning of naturally varying faces. Adults (n = 48) and 6- to 12-month-old infants (n = 48) viewed videos of models reading a storybook; the facial appearance of these models was either high or low in variability. Participants then viewed the learned face paired with a novel face. Infants showed adultlike prioritization of face over nonface regions; both age groups fixated the face region more in the high- than low-variability condition. Overall, however, infants showed less ability to resist contextual distractions during learning, which potentially contributed to their lack of discrimination between the learned and novel faces. Mechanisms underlying face learning across natural variability are discussed.
Wei Zhou; Aiping Wang; Ming Yan
In: Vision Research, vol. 182, pp. 20–26, 2021.
In the present study, we explored the perceptual span of skilled Uighur readers during their natural reading of sentences. The Uighur script is based on Arabic letters and it runs horizontally from right to left, offering a test to understand the effect of text direction. We utilized the gaze contingent moving window paradigm, in which legible text was provided only within a window that moved in synchrony with readers' eyes while all other letters were masked. The size of the window was manipulated systematically to determine the smallest size that allowed readers to show normal reading behaviors. Comparisons of window conditions with the baseline condition showed that the Uighur readers reached asymptotic performance in reading speed and gaze duration when windows revealed at least five letters to the right and twelve letters to the left of the currently fixated one. The present study is the first to document the size of the perceptual span in a horizontally leftwards running script. Cross-script comparisons with prior findings suggest that the size of the perceptual span for a certain writing system is likely influenced by its reading direction and visual complexity.
Shou Han Zhou; Gerard Loughnane; Redmond O'Connell; Mark A. Bellgrove; Trevor T. J. Chong
In: Journal of Cognitive Neuroscience, vol. 33, no. 6, pp. 1020–1031, 2021.
Current models of perceptual decision-making assume that choices are made after evidence in favor of an alternative accumulates to a given threshold. This process has recently been revealed in human EEG recordings, but an unresolved issue is how these neural mechanisms are modulated by competing, yet task-irrelevant, stimuli. In this study, we tested 20 healthy participants on a motion direction discrimination task. Participants monitored two patches of random dot motion simultaneously presented on either side of fixation for periodic changes in an upward or downward motion, which could occur equiprobably in either patch. On a random 50% of trials, these periods of coherent vertical motion were accompanied by simultaneous task-irrelevant, horizontal motion in the contralateral patch. Our data showed that these distractors selectively increased the amplitude of early target selection responses over scalp sites contralateral to the distractor stimulus, without impacting on responses ipsilateral to the distractor. Importantly, this modulation mediated a decrement in the subsequent buildup rate of a neural signature of evidence accumulation and accounted for a slowing of RTs. These data offer new insights into the functional interactions between target selection and evidence accumulation signals, and their susceptibility to task-irrelevant distractors. More broadly, these data neurally inform future models of perceptual decision-making by highlighting the influence of early processing of competing stimuli on the accumulation of perceptual evidence.
Peng Zhou; Jiawei Shi; Likan Zhan
In: Applied Psycholinguistics, vol. 42, no. 1, pp. 181–205, 2021.
The present study investigated whether 4- and 5-year-old Mandarin-speaking children are able to process garden-path constructions in real time when the working memory burden associated with revision and reanalysis is kept to minimum. In total, 25 4-year-olds, 25 5-year-olds, and 30 adults were tested using the visual-world paradigm of eye tracking. The obtained eye gaze patterns reflect that the 4- and 5-year-olds, like the adults, committed to an initial misinterpretation and later successfully revised their initial interpretation. The findings show that preschool children are able to revise and reanalyze their initial commitment and then arrive at the correct interpretation using the later-encountered linguistic information when processing the garden-path constructions in the current study. The findings also suggest that although the 4-year-olds successfully processed the garden-path constructions in real time, they were not as effective as the 5-year-olds and the adults in revising and reanalyzing their initial mistaken interpretation when later encountering the critical linguistic cue. Taken together, our findings call for a fine-grained model of child sentence processing.
In: Frontiers in Psychology, vol. 12, pp. 711420, 2021.
Although the relationship between cognitive processes and saccadic eye movements has been outlined, the relationship between specific cognitive processes underlying saccadic eye movements and skill level of soccer players remains unclear. The present study used the prosaccade task as a tool to investigate the difference in saccadic eye movements between skilled and less skilled Chinese female adolescent soccer players. Fifty-six healthy female adolescent soccer players (range: 14–18 years, mean age: 16.5 years) from Fujian Youth Football Training Base (Fujian Province, China) took part in the experiment. In the prosaccade task, participants were instructed to fixate on the cross at the center of the screen until the target appeared peripherally. They were told to saccade to the target as quickly and accurately as possible once it appeared. The results indicated that skilled soccer players exhibited shorter saccade latency (p = 0.031), decreased variability of saccade latency (p = 0.013), and higher spatial accuracy of saccades (p = 0.032) than their less skilled counterparts. The shorter saccade latency and decreased variability of saccade latency may imply that the attentional system of skilled soccer players is superior, leading to smaller attention fluctuations and fewer attentional lapses. Additionally, higher spatial accuracy of saccades may imply potential structural differences in the brain underlying saccadic eye movements between skilled and less skilled soccer players. More importantly, the results of the present study demonstrated that soccer players' cognitive capacities vary as a function of their skill levels. The limitations of the present study and future directions of research are discussed.
Hong Zhou; Xia Wang; Di Ma; Yanyan Jiang; Fan Li; Yunchuang Sun; Jing Chen; Wei Sun; Elmar H. Pinkhardt; Bernhard Landwehrmeyer; Albert Ludolph; Lin Zhang; Guiping Zhao; Zhaoxia Wang
In: Brain and Behavior, vol. 11, no. 7, pp. 1–10, 2021.
Introduction: Clinical diagnosis of Parkinsonism is still challenging, and the diagnostic biomarkers of Multiple System Atrophy (MSA) are scarce. This study aimed to investigate the diagnostic value of the combined eye movement tests in patients with Parkinson's disease (PD) and those with MSA. Methods: We enrolled 96 PD patients, 33 MSA patients (18 with MSA-P and 15 with MSA-C), and 40 healthy controls who had their horizontal ocular movements measured. The multiple-step pattern of memory-guided saccade (MGS), the hypometria/hypermetria of the reflexive saccade, the abnormal saccade in smooth pursuit movement (SPM), gaze-evoked nystagmus, and square-wave jerks in the gaze-holding test were qualitatively analyzed. The reflexive saccadic parameters and gain of SPM were also quantitatively analyzed. Results: The MGS test showed that patients with either diagnosis had a significantly higher incidence of the multiple-step pattern compared with controls (68.6% and 65.2% vs. 2.5% in PD, MSA, and controls, respectively; p < .05). The reflexive saccade test showed that MSA patients had a prominently higher incidence of abnormal saccades (63.6%; both hypometria and hypermetria) than PD patients and controls (33.3% and 7.5%, respectively; hypometria only) (p < .05). The SPM test showed that PD patients had mildly decreased gain, with 28.1% presenting “saccade intrusions,” and that MSA patients had significantly decreased gain, with 51.5% presenting “catch-up saccades” (p < .05). Only MSA patients showed gaze-evoked nystagmus (24.2%) and square-wave jerks (6.1%) in the gaze-holding test (p < .05). Conclusions: A panel of eye movement tests may help to differentiate PD from MSA. The combined presence of hypometria and hypermetria in saccadic eye movement, the impaired gain of smooth pursuit movement with “catch-up saccades,” gaze-evoked nystagmus, square-wave jerks in the gaze-holding test, and the multiple-step pattern in MGS may provide clues to the diagnosis of MSA.
Feng Zhou; X. Jessie Yang; Joost C. F. de Winter
In: IEEE Transactions on Intelligent Transportation Systems, pp. 1–12, 2021.
Situation awareness (SA) is critical to improving takeover performance during the transition period from automated driving to manual driving. Although many studies measured SA during or after the driving task, few studies have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree ensemble machine learning model, named LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, in order to understand what factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, which further improved the model performance of LightGBM through feature selection. We standardized SA between 0 and 1 by aggregating three performance measures (i.e., placement, distance, and speed estimation of vehicles with regard to the ego-vehicle) of SA in recreating simulated driving scenarios, after 33 participants viewed 32 videos with six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed other selected machine learning models, with a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a correlation coefficient of 0.719 between the predicted SA and the ground truth. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our proposed model has important implications for how to monitor and predict SA in real time in automated driving using eye-tracking data.
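The three evaluation measures reported above (RMSE, MAE, and the correlation between predicted and ground-truth SA) are standard regression metrics. A stdlib-only illustrative sketch with toy numbers, not the authors' code (which is at the GitHub link above):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-squared error between predictions and ground truth."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: SA scores standardized to [0, 1], as in the study
truth = [0.2, 0.5, 0.8, 0.4]
pred = [0.25, 0.45, 0.7, 0.5]
print(rmse(truth, pred), mae(truth, pred), pearson_r(truth, pred))
```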
Alexander Zhigalov; Katharina Duecker; Ole Jensen
In: PLoS Computational Biology, vol. 17, no. 6, pp. 1–24, 2021.
The aim of this study is to uncover the network dynamics of the human visual cortex by driving it with a broadband random visual flicker. We here applied a broadband flicker (1–720 Hz) while measuring the MEG and then estimated the temporal response function (TRF) between the visual input and the MEG response. This TRF revealed an early response in the 40–60 Hz gamma range as well as in the 8–12 Hz alpha band. While the gamma band response is novel, the latter has been termed the alpha band perceptual echo. The gamma echo preceded the alpha perceptual echo. The dominant frequency of the gamma echo was subject-specific thereby reflecting the individual dynamical properties of the early visual cortex. To understand the neuronal mechanisms generating the gamma echo, we implemented a pyramidal-interneuron gamma (PING) model that produces gamma oscillations in the presence of constant input currents. Applying a broadband input current mimicking the visual stimulation allowed us to estimate TRF between the input current and the population response (akin to the local field potentials). The TRF revealed a gamma echo that was similar to the one we observed in the MEG data. Our results suggest that the visual gamma echo can be explained by the dynamics of the PING model even in the absence of sustained gamma oscillations.
Junming Zheng; Muhammad Waqqas Khan Tarin; Denghui Jiang; Min Li; Jing Ye; Lingyan Chen; Tianyou He; Yushan Zheng
In: Urban Forestry and Urban Greening, vol. 61, pp. 127101, 2021.
Plant structure and architecture have a significant influence on how people interpret them. Bamboo plants have highly ornamental attributes, but the traits that attract people the most are still unknown. Therefore, to assess people's preferences for ornamental features of bamboo plants, eye-tracking measures (fixation count, percent of dwell time, pupil size, and saccade amplitude) and a questionnaire survey about subjective preference were completed by ninety college students. The results showed that subjective ratings of stem color, leaf stripes, and stem stripes had a significant positive correlation with fixation count. The pupil size and saccade amplitude for different ornamental features were not correlated with the subjective ratings. According to a random forest model, fixation count was the most influential factor affecting subjective ratings. Based on the integrated eye-tracking measures and subjective ratings, we conclude that people prefer ornamental features such as a green stem, a green stem with irregular yellow stripes or a yellow stem with narrow green stripes, leaves with fewer stripes, a normal stem, and tree. In addition, people prefer natural traits, for instance, a green stem, a normal stem, and tree, related to latent conscious belief and evolutionary adaptation. Abnormal traits, such as leaf stripes and stem stripes, attract people's visual attention and interest, increasing the fixation count and the percentage of dwell time. This study has significant implications for landscape experts in the design and maintenance of ornamental bamboo plantations in China as well as in other areas of the world.
Haiyan Zheng; Xiaoxiao Ying; Xianghang He; Jia Qu; Fang Hou
In: Investigative Ophthalmology & Visual Science, vol. 62, no. 9, pp. 1–11, 2021.
PURPOSE. To investigate the temporal characteristics of visual processing at the fovea and the periphery in high myopia. METHODS. Eighteen low (LM, ≤ −0.50 and > −6.00 D) and 18 high myopic (HM, ≤ −6.00 D) participants took part in this study. The contrast thresholds in an orientation discrimination task under various stimulus onset asynchrony (SOA) masking conditions were measured at the fovea and a more peripheral area (7°) for the two groups. An elaborated perceptual template model (ePTM) was fit to the behavioral data for each participant. RESULTS. An analysis of variance with three factors (SOA, degree of myopia and eccentricity) was performed on the threshold data. The interaction between SOA and degree of myopia in the fovea was significant (F (4, 128) = 2.66
Annie Zheng; Jessica A. Church
In: Child Development, vol. 92, no. 4, pp. 1652–1672, 2021.
Children perform worse than adults on tests of cognitive flexibility, which is a component of executive function. To assess what aspects of a cognitive flexibility task (cued switching) children have difficulty with, investigators tested where eye gaze diverged over age. Eye-tracking was used as a proxy for attention during the preparatory period of each trial in 48 children ages 8–16 years and 51 adults ages 18–27 years. Children fixated more often and longer on the cued rule, and made more saccades between rule and response options. Behavioral performance correlated with gaze location and saccades. Mid-adolescents were similar to adults, supporting the slow maturation of cognitive flexibility. Lower preparatory control and associated lower cognitive flexibility task performance in development may particularly relate to rule processing.
Sainan Zhao; Lin Li; Min Chang; Jingxin Wang; Kevin B Paterson
In: Quarterly Journal of Experimental Psychology, vol. 74, no. 1, pp. 68–78, 2021.
Older adults are thought to compensate for slower lexical processing by making greater use of contextual knowledge, relative to young adults, to predict words in sentences. Accordingly, compared to young adults, older adults should produce larger contextual predictability effects in reading times and skipping rates for words. Empirical support for this account is nevertheless scarce. Perhaps the clearest evidence to date comes from a recent Chinese study showing larger word predictability effects for older adults in reading times but not skipping rates for two-character words. However, one possibility is that the absence of a word-skipping effect in this experiment was due to the older readers skipping words infrequently because of difficulty processing two-character words parafoveally. We therefore took a further look at this issue, using one-character target words to boost word-skipping. Young (18–30 years) and older (65+ years) adults read sentences containing a target word that was either highly predictable or less predictable from the prior sentence context. Our results replicate the finding that older adults produce larger word predictability effects in reading times but not word-skipping, despite high skipping rates. We discuss these findings in relation to ageing effects on reading in different writing systems.
Yi Zhang; Ke Xu; Zhongling Pi; Jiumin Yang
In: Behaviour and Information Technology, pp. 1–10, 2021.
Although more and more online courses use video lectures that feature an instructor and slides, there are few specific guidelines for designing these video lectures. This experiment tested whether the instructor should appear on the screen and whether her position on the screen (left, middle, right of the content on the slides) influenced students. Students were randomly assigned to watch one of four video lectures on the topic of sleep. The results showed that the video lectures with an instructor's presence (regardless of position) motivated students more than the video lecture without an instructor presence did. Learning performance and satisfaction were highest when the instructor appeared on the right side of the screen. Furthermore, eye movement data showed that compared to students in all other conditions, students in the middle condition paid more attention to the instructor and less attention to the learning content, and switched more between instructor and learning content. The findings highlight the positive effects of the instructor appearing on the right side of the screen in video lectures with slides.
Yan-Bo Zhang; Peng-Chong Wang; Yun Ma; Xiang-Yun Yang; Fan-Qiang Meng; Simon A Broadley; Jing Sun; Zhan-Jiang Li
In: World Journal of Psychiatry, vol. 11, no. 3, pp. 73–86, 2021.
BACKGROUND: Illness anxiety disorder (IAD) is a common, distressing, and debilitating condition whose key feature is a persistent conviction of the possibility of having one or more serious or progressive physical disorders. Because eye movements are guided by visual-spatial attention, eye-tracking technology provides a comparatively direct, continuous measure of the direction and speed of attention when stimuli are oriented. Researchers have tried to identify selective visual attention biases by tracking eye movements within the dot-probe paradigm, because the dot-probe paradigm can distinguish these attentional biases more clearly. AIM: To examine the association between IAD and biased processing of illness-related information. METHODS: A case-control study design was used to record the eye movements of individuals with IAD and healthy controls while participants viewed a set of pictures from four categories (illness-related, socially threatening, positive, and neutral images). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze initially fixated on the picture per image category. RESULTS: The eye movements of participants in the IAD group were characterized by an avoidance bias in initial orienting to illness-related pictures. There was no evidence of individuals with IAD spending significantly more time viewing illness-related images compared with other images. Patients with IAD had an attention bias at the early stage and overall attentional avoidance. In addition, this study found that patients with significant anxiety symptoms showed an attention bias in the late stages of attention processing. CONCLUSION: Illness-related information processing biases appear to be a robust feature of IAD and may have an important role in explaining the etiology and maintenance of the disorder.
Xinyuan Zhang; Mario Dalmaso; Luigi Castelli; Shimin Fu; Giovanni Galfano
In: Scientific Reports, vol. 11, pp. 1–11, 2021.
The averted gaze of others triggers reflexive attentional orienting in the corresponding direction. This phenomenon can be modulated by many social factors. Here, we used an eye-tracking technique to investigate the role of ethnic membership in a cross-cultural oculomotor interference study. Chinese and Italian participants were required to perform a saccade whose direction might be either congruent or incongruent with the averted-gaze of task-irrelevant faces belonging to Asian and White individuals. The results showed that, for Chinese participants, White faces elicited a larger oculomotor interference than Asian faces. By contrast, Italian participants exhibited a similar oculomotor interference effect for both Asian and White faces. Hence, Chinese participants found it more difficult to suppress eye-gaze processing of White rather than Asian faces. The findings provide converging evidence that social attention can be modulated by social factors characterizing both the face stimulus and the participants. The data are discussed with reference to possible cross-cultural differences in perceived social status.
Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu
In: British Journal of Educational Technology, vol. 52, no. 2, pp. 606–618, 2021.
Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to complete a creative task with a peer using online software. The peer was actually a fake participant programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility, and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students induced to have high intrinsic motivation with those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation can enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, the findings suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups.
Practitioner Notes. What is already known about this topic: The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds: Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort rather than through interaction among members. Implications for practice and/or policy: Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.
Xiaoli Zhang; Julie D. Golomb
In: eNeuro, vol. 8, no. 2, pp. 1–19, 2021.
We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location (“hold attention”) or shifted attention to another location midway through the trial (“shift attention”). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the “retinotopic attention” condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the “spatiotopic attention” condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention “hold” and “shift” signals across different regions.
TianHong Zhang; YingYu Yang; LiHua Xu; XiaoChen Tang; YeGang Hu; Xin Xiong; YanYan Wei; HuiRu Cui; YingYing Tang; HaiChun Liu; Tao Chen; Zhi Liu; Li Hui; ChunBo Li; XiaoLi Guo; JiJun Wang
In: The World Journal of Biological Psychiatry, pp. 1–13, 2021.
Objectives: We used eye-tracking to evaluate multiple facial context processing and event-related potentials (ERP) to evaluate multiple facial recognition in individuals at clinical high risk (CHR) for psychosis. Methods: In total, 173 subjects (83 CHRs and 90 healthy controls [HCs]) were included and their emotion perception performance was assessed. A total of 40 CHRs and 40 well-matched HCs completed an eye-tracking task in which they viewed pictures depicting a person in the foreground, presented as context-free, context-compatible, and context-incompatible. During the two-year follow-up, 26 CHRs developed psychosis, including 17 individuals who developed first-episode schizophrenia (FES). Eighteen well-matched HCs completed the face number detection ERP task with image stimuli of one, two, or three faces. Results: Compared to the HC group, the CHR group showed reduced visual attention to contextual processing when viewing multiple faces. With increasing complexity of the contextual faces, the differences in eye-tracking characteristics also increased. In the ERP task, the N170 amplitude decreased with higher face number in FES patients, while it increased with higher face number in HCs. Conclusions: Individuals in the very early phase of psychosis showed facial processing deficits, with supporting evidence of different scan paths during context processing and disruption of the N170 during multiple facial recognition.
Luming Zhang; Xiaoqin Zhang; Mingliang Xu; Ling Shao
In: IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2021.
Categorizing aerial photographs with varied weather/lighting conditions and sophisticated geomorphic factors is a key module in autonomous navigation, environmental evaluation, and so on. Previous image recognizers cannot fulfill this task due to three challenges: 1) localizing visually/semantically salient regions within each aerial photograph in a weakly annotated context, given the unaffordable human resources required for pixel-level annotation; 2) aerial photographs generally carry multiple informative attributes (e.g., clarity and reflectivity), which must be encoded for better aerial photograph modeling; and 3) designing a cross-domain knowledge transferal module to enhance aerial photograph perception, since multiresolution aerial photographs are taken asynchronously and are mutually complementary. To handle the above problems, we propose to optimize aerial photograph feature learning by leveraging the low-resolution spatial composition to enhance the deep learning of high-resolution perceptual features. More specifically, we first extract many BING-based object patches (Cheng et al., 2014) from each aerial photograph. A weakly supervised ranking algorithm selects a few semantically salient ones by seamlessly incorporating multiple aerial photograph attributes. Toward an interpretable aerial photograph recognizer indicative of human visual perception, we construct a gaze shifting path (GSP) by linking the top-ranking object patches and, subsequently, derive the deep GSP feature. Finally, a cross-domain multilabel SVM is formulated to categorize each aerial photograph. It leverages the global feature from low-resolution counterparts to optimize the deep GSP feature from a high-resolution aerial photograph. Comparative results on our compiled million-scale aerial photograph set have demonstrated the competitiveness of our approach.
In addition, the eye-tracking experiment has shown that our ranking-based GSPs are over 92% consistent with real human gaze-shifting sequences.
Luming Zhang; Zhigeng Pan; Ling Shao
In: IEEE Transactions on Image Processing, vol. 30, pp. 7803–7814, 2021.
Intelligently understanding the sophisticated topological structures of aerial photographs is a useful technique in aerial image analysis. Conventional methods cannot fulfill this task due to the following challenges: 1) the topology number of an aerial photo increases exponentially with the topology size, which requires a fine-grained visual descriptor to discriminatively represent each topology; 2) identifying visually/semantically salient topologies within each aerial photo in a weakly-labeled context, owing to the unaffordable human resources required for pixel-level annotation; and 3) designing a cross-domain knowledge transferal module to augment aerial photo perception, since multi-resolution aerial photos are taken asynchronously in practice. To handle the above problems, we propose a unified framework to understand aerial photo topologies, focusing on representing each aerial photo by a set of visually/semantically salient topologies based on human visual perception and further employing them for visual categorization. Specifically, we first extract multiple atomic regions from each aerial photo, and thereby graphlets are built to capture each aerial photo topologically. Then, a weakly-supervised ranking algorithm selects a few semantically salient graphlets by seamlessly encoding multiple image-level attributes. Toward a visualizable and perception-aware framework, we construct a gaze shifting path (GSP) by linking the top-ranking graphlets. Finally, we derive the deep GSP representation, and formulate a semi-supervised and cross-domain SVM to partition each aerial photo into multiple categories. The SVM utilizes the global composition from low-resolution counterparts to enhance the deep GSP features from high-resolution aerial photos, which are partially annotated. Extensive visualization results and categorization performance comparisons have demonstrated the competitiveness of our approach.
Li Zhang; Guoli Yan; Valerie Benson
In: PLoS ONE, vol. 16, no. 5, pp. 1–14, 2021.
The current study examined how emotional faces impact on attentional control at both involuntary and voluntary levels in children with and without autism spectrum disorder (ASD). A non-face single target was either presented in isolation or synchronously with emotional face distractors namely angry, happy and neutral faces. ASD and typically developing children made more erroneous saccades towards emotional distractors relative to neutral distractors in parafoveal and peripheral conditions. Remote distractor effects were observed on saccade latency in both groups regardless of distractor type, whereby time taken to initiate an eye movement to the target was longest in central distractor conditions, followed by parafoveal and peripheral distractor conditions. The remote distractor effect was greater for angry faces compared to happy faces in the ASD group. Proportions of failed disengagement trials from central distractors, for the first saccade, were higher in the angry distractor condition compared with the other two distractor conditions in ASD, and this effect was absent for the typical group. Eye movement results suggest difficulties in disengaging from fixated angry faces in ASD. Atypical disengagement from angry faces at the voluntary level could have consequences for the development of higher-level socio-communicative skills in ASD.
Guangyao Zhang; Binke Yuan; Huimin Hua; Ya Lou; Nan Lin; Xingshan Li
In: Brain and Language, vol. 213, pp. 1–10, 2021.
Although there are considerable individual differences in eye movements during text reading, their neural correlates remain unclear. In this study, we investigated the relationship between the first-pass fixation duration (FPFD) in natural reading and resting-state functional connectivity (RSFC) in the brain. We defined the brain regions associated with early visual processing, word identification, attention shifts, and oculomotor control as seed regions. The results showed that individual FPFDs were positively correlated with individual RSFCs between the early visual network, visual word form area, and eye movement control/dorsal attention network. Our findings provide new evidence on the neural correlates of eye movements in text reading and indicate that individual differences in fixation time may shape the RSFC differences in the brain through the time-on-task effect and the mechanism of Hebbian learning.
Fan Zhang; Zhicheng Lin; Yang Zhang; Ming Zhang
In: Journal of Experimental Psychology: General, vol. 150, no. 9, pp. 1–12, 2021.
Animal physiological and human neuroimaging studies have established a link between attention and γ-band (30–90 Hz) oscillations and synchronizations. However, establishing a behavioral link between entrained γ-band oscillations and attention has been fraught with technical challenges. In particular, while entrainment at mid-γ band (40–70 Hz) has been claimed to be privileged in evoking attentional modulations without awareness, the effect may be attributed to display artifacts. Here, by exploiting isoluminant chromatic flicker without luminance modulation and not subject to these artifacts, we tested attentional attraction by chromatic flicker too fast to perceive. Awareness of flicker was subjectively and objectively tested with a high-powered design and evaluated with traditional and Bayesian statistics. Across 2 experiments in human participants, we observed—and also replicated—that 30-Hz chromatic flicker outside mid-γ band attracted attention, resulting in a facilitation effect at a 50 ms interstimulus interval (ISI) and an inhibition effect at a 500 ms ISI. The attention test was confirmed to be more sensitive to the cue than the direct cue-localization task was. We further showed that these attention effects were absent for 50-Hz chromatic flicker. These results provide strong direct evidence against a privileged role of mid-γ band in unconscious attention, but are consistent with known cortical responses to chromatic flicker in early visual cortex. Taken together, our findings provide behavioral evidence that entrained synchronization may serve as a mechanism for bottom-up attention selection and that chromatic flicker
Beizhen Zhang; Janis Ying Ying Kan; Mingpo Yang; Xiaochun Wang; Jiahao Tu; Michael Christopher Dorris
In: Nature Communications, vol. 12, no. 1, pp. 3410, 2021.
Value-based decision making involves choosing from multiple options with different values. Despite extensive studies on value representation in various brain regions, the neural mechanism by which multiple value options are converted to motor actions remains unclear. To study this, we developed an eye-movement-based multi-value foraging task with a varying menu of items in non-human primates that dissociates value and choice, and conducted electrophysiological recordings in the midbrain superior colliculus (SC). SC neurons encoded “absolute” value, independent of available options, during late fixation. In addition, SC neurons also represented a value threshold, modulated by available options, distinct from the conventional motor threshold. Electrical stimulation of SC neurons biased choices in a manner predicted by the difference between the value representation and the value threshold. These results reveal a neural mechanism directly transforming absolute values into categorical choices within the SC, supporting the highly efficient value-based decision making critical for real-world economic behaviors.
Paul Zerr; Surya Gayet; Floris Esschert; Mitchel Kappen; Zoril Olah; Stefan Van der Stigchel
In: Memory and Cognition, vol. 49, no. 5, pp. 1036–1049, 2021.
Accessing the contents of visual short-term memory (VSTM) is compromised by information bottlenecks and visual interference between memorization and recall. Retro-cues, displayed after the offset of a memory stimulus and prior to the onset of a probe stimulus, indicate the test item and improve performance in VSTM tasks. It has been proposed that retro-cues aid recall by transferring information from a high-capacity memory store into visual working memory (multiple-store hypothesis). Alternatively, retro-cues could aid recall by redistributing memory resources within the same (low-capacity) working memory store (single-store hypothesis). If retro-cues provide access to a memory store with a capacity exceeding the set size, then, given sufficient training in the use of the retro-cue, near-ceiling performance should be observed. To test this prediction, 10 observers each performed 12 hours of testing across 8 sessions in a retro-cue change-detection task (40,000+ trials total). The results provided clear support for the single-store hypothesis: retro-cue benefits (the difference between conditions with and without retro-cues) emerged after a few hundred trials and then remained constant throughout the testing sessions, consistently improving performance by two items rather than reaching ceiling performance. Surprisingly, we also observed a general increase in performance throughout the experiment in conditions with and without retro-cues, calling into question the generalizability of change-detection tasks in assessing working memory capacity as a stable trait of an observer (data and materials are available at osf.io/9xr82 and github.com/paulzerr/retrocues). In summary, the present findings suggest that retro-cues increase capacity estimates by redistributing memory resources across memoranda within a low-capacity working memory store.
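Change-detection performance of the kind measured above is commonly summarized with Cowan's K, K = N × (hit rate − false-alarm rate), where N is the set size. A minimal sketch of this standard formula (not necessarily the authors' exact analysis; the rates below are hypothetical):

```python
def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (H - FA), i.e., the number of items effectively held in memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical rates: a two-item retro-cue benefit would appear as a
# difference of 2 between the K estimates of the two conditions.
k_no_cue = cowans_k(8, hit_rate=0.65, false_alarm_rate=0.15)
k_retro = cowans_k(8, hit_rate=0.90, false_alarm_rate=0.15)
print(k_no_cue, k_retro)
```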
Tao Zeng; Yating Mu; Taoyan Zhu
In: Cognitive Processing, vol. 22, no. 2, pp. 185–207, 2021.
This article explores the domain generality of hierarchical representation between linguistic and mathematical cognition by adopting the structural priming paradigm in an eye-tracking reading experiment. The experiment investigated whether simple arithmetic equations with high-attachment (e.g., (7 + 2) × 3 + 1) or low-attachment (e.g., 7 + 2 × 3 + 1) structure influence language users' interpretation of Chinese ambiguous structures (NP1 + He + NP2 + De + NP3; Quantifier + NP1 + De + NP2; NP1 + Kan/WangZhe + NP2 + AP). On the one hand, behavioral results showed that high-attachment primes led to more high-attachment interpretations, while low-attachment primes led to more low-attachment interpretations. On the other hand, the eye movement data indicated that structural priming greatly reduced dwell time on the ambiguous structure. There were structural priming effects from simple arithmetic to three different structures in Chinese, providing new evidence of cross-domain priming from simple arithmetic to language. Besides the attachment priming effect at the global level, online sentence integration at the local level was found to be structure-dependent, as reflected in differences in eye movement measures. Our results provide some evidence for the Representational Account.
Tao Zeng; Wen Mao; Yarong Gao
In: Journal of Psycholinguistic Research, pp. 1–26, 2021.
The present study explored abstract priming effects from mathematical equations to the Mandarin Chinese structure NP1 + You + NP2 + Hen + AP in an online comprehension task, with the aim of identifying the mechanism underlying these effects. The results revealed that, compared with baseline priming conditions, participants chose more high-attachment options in high-attachment priming conditions and more low-attachment options in low-attachment priming conditions. This difference was statistically significant, providing evidence for shared structural representation across the mathematical and linguistic domains. Additionally, the fixation sequences during arithmetic calculations indicated that the equations were processed hierarchically and could be extracted in parallel rather than being scanned sequentially from left to right. Our results provide some evidence for the Representational Account.
Alessandra Zarcone; Vera Demberg
In: Discourse Processes, vol. 58, no. 9, pp. 804–819, 2021.
There is now a well-established literature showing that people anticipate upcoming concepts and words during language processing. Commonsense knowledge about typical event sequences and verbal selectional preferences can contribute to anticipating what will be mentioned next. We here investigate how temporal discourse connectives (before, after), which signal event ordering along a temporal dimension, modulate predictions for upcoming discourse referents. Our study analyses anticipatory gaze in the visual world and supports the idea that script knowledge, temporal connectives (before eating → menu, appetizer), and the verb's selectional preferences (order → appetizer) jointly contribute to shaping rapid prediction of event participants.
Chuanli Zang; Ying Fu; Xuejun Bai; Guoli Yan; Simon P. Liversedge
In: Journal of Memory and Language, vol. 119, pp. 1–15, 2021.
Chinese idioms are likely to be represented and processed as Multi-Constituent Units (MCUs, a multi-word unit with a single lexical representation, see Zang, 2019). Chinese idioms with a 1-character verb and 2-character noun structure are processed foveally, but not parafoveally, as a single lexical unit (Yu et al., 2016), probably because the verb only loosely constrains noun identity. By contrast, Chinese idioms with modifier-noun structure are more likely MCU candidates due to significant modifier constraint over the subsequent noun. We investigated whether idioms of this type are parafoveally and foveally processed as MCUs during natural reading. In Experiment 1, we manipulated phrase type (idiom or matched phrase) and preview of the noun (identity, unrelated character, or pseudocharacter) using the boundary paradigm (Rayner, 1975). A larger preview effect occurred for idioms on the modifier, with shorter fixations for identical than for unrelated and pseudocharacter previews. This suggests idioms are parafoveally processed to a greater extent than matched phrases. In Experiment 2, preview of the modifier and noun of idioms and phrases (identity or pseudocharacter) was orthogonally manipulated (cf. Cutter, Drieghe & Liversedge, 2014). For identity modifiers, a greater noun preview effect occurred for idioms relative to phrases, providing further evidence that modifier-noun idioms are lexicalised MCUs and processed parafoveally as single, unified representations.
Tania S. Zamuner; Theresa Rabideau; Margarethe Mcdonald; H. Henny Yeung
In: Journal of Child Language, pp. 1–25, 2021.
This study investigates how children aged two to eight years (N = 129) and adults (N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between apparent successes of visual speech processing in young children in visual-looking tasks, with apparent difficulties of speech processing in older children from explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only) or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier on /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood.
Mengxi Yun; Masafumi Nejime; Masayuki Matsumoto
Single-unit recording in awake behaving non-human primates
In: Bio-protocol, vol. 11, no. 8, pp. 1–16, 2021.
Non-human primates (NHPs) have been widely used as a species model in studies to understand higher brain functions in health and disease. These studies employ specifically designed behavioral tasks in which animal behavior is well-controlled, and record neuronal activity at high spatial and temporal resolutions while animals are performing the tasks. Here, we present a detailed procedure for conducting single-unit recording at high spatial and temporal resolutions while macaque monkeys (i.e., widely used NHPs) perform behavioral tasks in a well-controlled manner. This procedure was used in our previous study to investigate the dynamics of neuronal activity during economic decision-making by the monkeys. Monkeys' behavior was quantified by eye-position tracking and button press/release detection. By inserting a microelectrode into the brain using a grid system referenced to magnetic resonance imaging, we recorded from precisely targeted brain regions. Our experimental system permits rigorous investigation of the link between neuronal activity and behavior.
Nicole H. Yuen; Fred Tam; Nathan W. Churchill; Tom A. Schweizer; Simon J. Graham
In: Frontiers in Human Neuroscience, vol. 15, pp. 1–20, 2021.
Introduction: Driving motor vehicles is a complex task that depends heavily on how visual stimuli are received and subsequently processed by the brain. The potential impact of distraction on driving performance is well known and poses a safety concern – especially for individuals with cognitive impairments who may be clinically unfit to drive. The present study is the first to combine functional magnetic resonance imaging (fMRI) and eye-tracking during simulated driving with distraction, providing oculomotor metrics to enhance scientific understanding of the brain activity that supports driving performance. Materials and Methods: As initial work, twelve healthy young, right-handed participants performed turns ranging in complexity, including simple right and left turns without oncoming traffic, and left turns with oncoming traffic. Distraction was introduced as an auditory task during straight driving, and during left turns with oncoming traffic. Eye-tracking data were recorded during fMRI to characterize fixations, saccades, pupil diameter and blink rate. Results: Brain activation maps for right turns, left turns without oncoming traffic, left turns with oncoming traffic, and the distraction conditions were largely consistent with previous literature reporting the neural correlates of simulated driving. When the effects of distraction were evaluated for left turns with oncoming traffic, increased activation was observed in areas involved in executive function (e.g., middle and inferior frontal gyri) as well as decreased activation in the posterior brain (e.g., middle and superior occipital gyri). 
Whereas driving performance remained mostly unchanged (e.g., turn speed, time to turn, collisions), the oculomotor measures showed that distraction resulted in more consistent gaze at oncoming traffic in a small area of the visual scene; less time spent gazing at off-road targets (e.g., speedometer, rear-view mirror); more time spent performing saccadic eye movements; and decreased blink rate. Conclusion: Oculomotor behavior modulated with driving task complexity and distraction in a manner consistent with the brain activation features revealed by fMRI. The results suggest that eye-tracking technology should be included in future fMRI studies of simulated driving behavior in targeted populations, such as the elderly and individuals with cognitive complaints – ultimately toward developing better technology to assess and enhance fitness to drive.
Xinger Yu; Timothy D. Hanks; Joy J. Geng
In: Psychological Science, pp. 1–16, 2021.
When searching for a target object, we engage in a continuous “look-identify” cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.
Lili Yu; Yanping Liu; Erik D. Reichle
In: Journal of Experimental Psychology: General, vol. 150, no. 8, pp. 1612–1641, 2021.
Chinese words consist of a variable number of characters that are normally written in continuous lines, without the blank spaces that are used to separate words in most alphabetic writing systems. These conventions raise questions about the relative roles of character versus whole-word processing in word identification, and how words are segmented from strings of characters for the purpose of their identification and saccade targeting. The present article attempts to address these questions by reporting an eye-movement experiment in which 60 participants read a corpus of sentences containing two-character target words that varied in terms of their overall frequency and the frequency of their initial characters. We examine participants' eye movements using both corpus-based statistical models and more standard analyses of our target words. In addition to documenting how key lexical variables influence eye movements and highlighting a few discrepancies between the results obtained using our two statistical approaches, our experiment shows that high-frequency initial characters can actually slow word identification. We discuss the theoretical significance of this finding and others for current models of Chinese reading, and then describe a new computational model of eye-movement control during the reading of Chinese. Finally, we report simulations showing that this model can account for our findings.
Seng Bum Michael Yoo; Jiaxin Cindy Tu; Benjamin Yost Hayden
In: Nature Communications, vol. 12, pp. 1–14, 2021.
Successful pursuit and evasion require rapid and precise coordination of navigation with adaptive motor control. We hypothesize that the dorsal anterior cingulate cortex (dACC), which communicates bidirectionally with both the hippocampal complex and premotor/motor areas, would serve a mapping role in this process. We recorded responses of dACC ensembles in two macaques performing a joystick-controlled continuous pursuit/evasion task. We find that dACC carries two sets of signals, (1) world-centric variables that together form a representation of the position and velocity of all relevant agents (self, prey, and predator) in the virtual world, and (2) avatar-centric variables, i.e. self-prey distance and angle. Both sets of variables are multiplexed within an overlapping set of neurons. Our results suggest that dACC may contribute to pursuit and evasion by computing and continuously updating a multicentric representation of the unfolding task state, and support the hypothesis that it plays a high-level abstract role in the control of behavior.
Kyung Yoo; Jeongyeol Ahn; Sang-Hun Lee
In: PLoS ONE, vol. 16, no. 12, pp. 1–32, 2021.
Pupillometry, thanks to its strong relationship with cognitive factors and recent advancements in measuring techniques, has become popular among cognitive or neural scientists as a tool for studying the physiological processes involved in mental or neural processes. Despite this growing popularity of pupillometry, the methodological understanding of pupillometry is limited, especially regarding potential factors that may threaten pupillary measurements' validity. Eye blinking can be a factor because it frequently occurs in a manner dependent on many cognitive components and induces a pulse-like pupillary change consisting of constriction and dilation with substantive magnitude and length. We set out to characterize the basic properties of this “blink-locked pupillary response (BPR),” including the shape and magnitude of BPR and their variability across subjects and blinks, as the first step of studying the confounding nature of eye blinking. Then, we demonstrated how the dependency of eye blinking on cognitive factors could confound, via BPR, the pupillary responses that are supposed to reflect the cognitive states of interest. By building a statistical model of how the confounding effects of eye blinking occur, we proposed a probabilistic-inference algorithm of de-confounding raw pupillary measurements and showed that the proposed algorithm selectively removed BPR and enhanced the statistical power of pupillometry experiments. Our findings call for attention to the presence and confounding nature of BPR in pupillometry. The algorithm we developed here can be used as an effective remedy for the confounding effects of BPR on pupillometry.
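The probabilistic de-confounding algorithm is specific to this paper, but the conventional pupillometry preprocessing baseline it improves on — linear interpolation across detected blink windows — can be sketched as follows (the function name and padding parameter are illustrative, not from the study):

```python
import numpy as np

def interpolate_blinks(pupil, blink_mask, pad=2):
    """Linearly interpolate pupil samples flagged as blinks.

    pupil: 1-D array of pupil-diameter samples
    blink_mask: boolean array, True where a blink was detected
    pad: extra samples removed on each side of a blink, to cover
         partial-occlusion samples at blink onset/offset
    """
    mask = blink_mask.copy()
    # Widen each blink window by `pad` samples on both sides
    for shift in range(1, pad + 1):
        mask[shift:] |= blink_mask[:-shift]
        mask[:-shift] |= blink_mask[shift:]
    clean = pupil.astype(float).copy()
    idx = np.arange(len(pupil))
    # Replace masked samples with values interpolated from clean neighbors
    clean[mask] = np.interp(idx[mask], idx[~mask], pupil[~mask])
    return clean
```

Note that simple interpolation only bridges the gap in the trace; the blink-locked pupillary response described in the abstract extends beyond the blink window itself, which is precisely why the authors argue a model-based removal is needed.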
Panpan Yao; Adrian Staub; Xingshan Li
In: Psychonomic Bulletin & Review, pp. 1–10, 2021.
Previous research has demonstrated effects of both orthographic neighborhood size and neighbor frequency in word recognition in Chinese. A large neighborhood—where neighborhood size is defined by the number of words that differ from a target word by a single character—appears to facilitate word recognition, while the presence of a higher-frequency neighbor has an inhibitory effect. The present study investigated modulation of these effects by a word's predictability in context. In two eye-movement experiments, the predictability of a target word in each sentence was manipulated. Target words differed in their neighborhood size (Experiment 1) and in whether they had a higher-frequency neighbor (Experiment 2). The study replicated the previously observed effects of neighborhood size and neighbor frequency when the target word was unpredictable, but in both experiments neighborhood effects were absent when the target was predictable. These results suggest that when a word is preactivated by context, the activation of its neighbors may be diminished to such an extent that these neighbors do not effectively compete for selection.
Panpan Yao; Timothy J. Slattery; Xingshan Li
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–40, 2021.
In the current study, we conducted two eye-tracking reading experiments to explore whether sentence context can influence neighbor effects in word recognition during Chinese reading. Chinese readers read sentences in which the targets' orthographic neighbors were either plausible or implausible with the pre-target context. The results revealed that the neighbor effect was influenced by context: the context in the biased condition (where only targets but not neighbors can fit in the pre-target context) evoked a significantly weaker inhibitory neighbor effect than in the neutral condition (where both targets and neighbors can fit in the pre-target context). These results indicate that contextual information can be used to modulate neighbor effects during on-line sentence reading in Chinese.
Panpan Yao; Reem Alkhammash; Xingshan Li
In: Scientific Studies of Reading, pp. 1–19, 2021.
We aimed to address the time course of the plausibility effect in the online processing of Chinese nouns in temporarily ambiguous structures, and whether second-language (L2) learners can immediately use plausibility information generated from classifier-noun associations when analyzing ambiguous structures. Two eye-tracking experiments explored how native Chinese speakers (Experiment 1) and high-proficiency Dutch-Chinese learners (Experiment 2) process 4-character novel noun-noun combinations in Chinese online. In each pair of nominal phrases (Numeral + Classifier + Noun1 + Noun2), the plausibility of the Classifier-Noun1 association varied (plausible vs. implausible) while the whole nominal phrase was always plausible. Results showed that the plausibility of Classifier-Noun1 associations had an immediate effect on Noun1 and a reversed effect on Noun2 for both groups of participants. These findings indicate that plausibility plays an immediate role in incremental semantic integration during the online processing of Chinese. Like native Chinese speakers, high-proficiency L2 learners can also use the plausibility information of classifier-noun associations in syntactic reanalysis.
Bo Yao; Jason R. Taylor; Briony Banks; Sonja A. Kotz
In: NeuroImage, vol. 239, pp. 118313, 2021.
Growing evidence shows that theta-band (4–7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: “This dress is lovely!”) elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250–500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
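Phase synchrony over trials of the kind measured here is commonly quantified as inter-trial phase coherence (ITPC): the magnitude of the mean unit phase vector across trials. A generic sketch of that standard measure, not the authors' exact analysis pipeline:

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: |mean over trials of e^{i*phase}|.

    phases: array of shape (n_trials, n_times), instantaneous phase
            in radians (e.g., from a wavelet or Hilbert transform)
    Returns ITPC per time point, from 0 (random phases) to 1
    (perfect phase locking across trials).
    """
    return np.abs(np.mean(np.exp(1j * np.asarray(phases)), axis=0))
```

Because ITPC depends only on phase, not amplitude, an increase for direct-quote reading indicates that theta oscillations reset to a consistent phase at reading onset across trials, as the abstract describes.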
Beier Yao; Martin Rolfs; Christopher McLaughlin; Emily L. Isenstein; Sylvia B. Guillory; Hannah Grosman; Deborah A. Kashy; Jennifer H. Foss-Feig; Katharine N. Thakkar
In: Journal of Vision, vol. 21, no. 8, pp. 1–20, 2021.
Corollary discharge (CD) signals are “copies” of motor signals sent to sensory regions that allow animals to adjust sensory consequences of self-generated actions. Autism spectrum disorder (ASD) is characterized by sensory and motor deficits, which may be underpinned by altered CD signaling. We evaluated oculomotor CD using the blanking task, which measures the influence of saccades on visual perception, in 30 children with ASD and 35 typically developing (TD) children. Participants were instructed to make a saccade to a visual target. Upon saccade initiation, the presaccadic target disappeared and reappeared to the left or right of the original position. Participants indicated the direction of
Jiumin Yang; Yi Zhang; Zhongling Pi; Yaohui Xie
In: Learning and Individual Differences, vol. 91, pp. 1–9, 2021.
The study tested achievement motivation as a moderator of the relationship between pre-interpolated questions and learning from video lectures. Participants were 63 university students who were selected from a group of 123 volunteers, based on having high (n = 31) or low (n = 32) scores on the Achievement Motivation Scale. The students in each group were randomly assigned to view an instructional video with or without interpolated pre-questions. Visual attention was assessed by the eye-tracking measures of fixation duration and first time to fixation, and learning performance was assessed by tests of retention and transfer. The results of ANCOVAs showed that after controlling for prior knowledge, students with high achievement motivation benefitted more from the pre-questions than students with low achievement motivation. Among students with high achievement motivation, there was longer fixation duration on the learning materials and better transfer in the pre-questions condition than in the no-questions condition, but these differences based on video type were not apparent among students with low achievement motivation. The findings have practical implications: interpolated pre-questions in video learning appear to be helpful for highly motivated students, and the benefit is seen in transfer rather than retention.