EyeLink Cognitive Publications
All EyeLink cognition and perception research publications, up until 2023 (with some early 2024s), are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we have missed any EyeLink cognition or perception article, please email us!
2024 |
Yordanka Zafirova; Anna Bognár; Rufin Vogels Configuration-sensitive face-body interactions in primate visual cortex Journal Article In: Progress in Neurobiology, vol. 232, pp. 1–16, 2024. @article{Zafirova2024, Traditionally, the neural processing of faces and bodies is studied separately, although they are encountered together, as parts of an agent. Despite its social importance, it is poorly understood how faces and bodies interact, particularly at the single-neuron level. Here, we examined the interaction between faces and bodies in the macaque inferior temporal (IT) cortex, targeting an fMRI-defined patch. We recorded responses of neurons to monkey images in which the face was in its natural location (natural face-body configuration), or in which the face was mislocated with respect to the upper body (unnatural face-body configuration). On average, the neurons did not respond stronger to the natural face-body configurations compared to the summed responses to their faces and bodies, presented in isolation. However, the neurons responded stronger to the natural compared to the unnatural face-body configurations. This configuration effect was present for face- and monkey-centered images, did not depend on local feature differences between configurations, and was present when the face was replaced by a small object. The face-body interaction rules differed between natural and unnatural configurations. In sum, we show for the first time that single IT neurons process faces and bodies in a configuration-specific manner, preferring natural face-body configurations. |
Lei Yuan; Miriam Novack; David Uttal; Steven Franconeri Language systematizes attention: How relational language enhances relational representation by guiding attention Journal Article In: Cognition, vol. 243, pp. 1–14, 2024. @article{Yuan2024, Language can affect cognition, but through what mechanism? Substantial past research has focused on how labeling can elicit categorical representation during online processing. We focus here on a particularly powerful type of language, relational language, and show that relational language can enhance relational representation in children through an embodied attention mechanism. Four-year-old children were given a color-location conjunction task, in which they were asked to encode a two-color square, split either vertically or horizontally (e.g., red on the left, blue on the right), and later recall the same configuration from its mirror reflection. During the encoding phase, children in the experimental condition heard relational language (e.g., "Red is on the left of blue"), while those in the control condition heard generic non-relational language (e.g., "Look at this one, look at it closely"). At recall, children in the experimental condition were more successful at choosing the correct relational representation between the two colors compared to the control group. Moreover, they exhibited different attention patterns as predicted by the attention shift account of relational representation (Franconeri et al., 2012). To test the sustained effect of language and the role of attention, during the second half of the study, the experimental condition was given generic non-relational language. There was a sustained advantage in the experimental condition for both behavioral accuracies and signature attention patterns. Overall, our findings suggest that relational language enhances relational representation by guiding learners' attention, and this facilitative effect persists over time even in the absence of language. Implications for the mechanism of how relational language can enhance the learning of relational systems (e.g., mathematics, spatial cognition) by guiding attention will be discussed. |
Lei Wang; Xufeng Zhou; Jie Yang; Fu Zeng; Shuzhen Zuo; Makoto Kusunoki; Huimin Wang; Yong-di Zhou; Aihua Chen; Sze Chai Kwok Mixed coding of content-temporal detail by dorsomedial posterior parietal neurons Journal Article In: Journal of Neuroscience, vol. 44, no. 3, pp. 1–16, 2024. @article{Wang2024, The dorsomedial posterior parietal cortex (dmPPC) is part of a higher-cognition network implicated in elaborate processes underpinning memory formation, recollection, episode reconstruction, and temporal information processing. Neural coding for complex episodic processing is however under-documented. Here, we recorded extracellular neural activities from three male rhesus macaques (Macaca mulatta) and revealed a set of neural codes of "neuroethogram" in the primate parietal cortex. Analyzing neural responses in macaque dmPPC to naturalistic videos, we discovered several groups of neurons that are sensitive to different categories of ethogram items, low-level sensory features, and saccadic eye movement. We also discovered that the processing of category and feature information by these neurons is sustained by the accumulation of temporal information over a long timescale of up to 30 s, corroborating its reported long temporal receptive windows. We performed an additional behavioral experiment with two additional male rhesus macaques and found that saccade-related activities could not account for the mixed neuronal responses elicited by the video stimuli. We further observed that monkeys' scan paths and gaze consistency are modulated by video content. Taken altogether, these neural findings explain how dmPPC weaves fabrics of ongoing experiences together in real time. The high dimensionality of neural representations should motivate us to shift the focus of attention from pure selectivity neurons to mixed selectivity neurons, especially in increasingly complex naturalistic task designs. |
Inês S. Veríssimo; Zachary Nudelman; Christian N. L. Olivers Does crowding predict conjunction search? An individual differences approach Journal Article In: Vision Research, vol. 216, pp. 1–13, 2024. @article{Verissimo2024, Searching for objects in the visual environment is an integral part of human behavior. Most of the information used during such visual search comes from the periphery of our vision, and understanding the basic mechanisms of search therefore requires taking into account the inherent limitations of peripheral vision. Our previous work using an individual differences approach has shown that one of the major factors limiting peripheral vision (crowding) is predictive of single feature search, as reflected in response time and eye movement measures. Here we extended this work, by testing the relationship between crowding and visual search in a conjunction-search paradigm. Given that conjunction search involves more fine-grained discrimination and more serial behavior, we predicted it would be strongly affected by crowding. We tested sixty participants with regard to their sensitivity to both orientation and color-based crowding (as measured by critical spacing) and their efficiency in searching for a color/orientation conjunction (as indicated by manual response times and eye movements). While the correlations between the different crowding tasks were high, the correlations between the different crowding measures and search performance were relatively modest, and no higher than those previously observed for single-feature search. Instead, observers showed very strong color selectivity during search. The results suggest that conjunction search behavior relies more on top-down guidance (here by color) and is therefore relatively less determined by individual differences in sensory limitations as caused by crowding. |
Jacob C. Tanner; Joshua Faskowitz; Lisa Byrge; Daniel P. Kennedy; Olaf Sporns; Richard F. Betzel Synchronous high-amplitude co-fluctuations of functional brain networks during movie-watching Journal Article In: Imaging Neuroscience, vol. 1, pp. 1–21, 2024. @article{Tanner2024, Recent studies have shown that functional connectivity can be decomposed into its exact frame-wise contributions, revealing short-lived, infrequent, and high-amplitude time points referred to as "events." Events contribute disproportionately to the time-averaged connectivity pattern, improve identifiability and brain-behavior associations, and differences in their expression have been linked to endogenous hormonal fluctuations and autism. Here, we explore the characteristics of events while subjects watch movies. Using two independently acquired imaging datasets in which participants passively watched movies, we find that events synchronize across individuals and based on the level of synchronization, can be categorized into three distinct classes: those that synchronize at the boundaries between movies, those that synchronize during movies, and those that do not synchronize at all. We find that boundary events, compared to the other categories, exhibit greater amplitude, distinct co-fluctuation patterns, and temporal propagation. We show that underlying boundary events is a specific mode of co-fluctuation involving the activation of control and salience systems alongside the deactivation of visual systems. Events that synchronize during the movie, on the other hand, display a pattern of co-fluctuation that is time-locked to the movie stimulus. Finally, we found that subjects' time-varying brain networks are most similar to one another during these synchronous events. |
Teresa Sousa; Alexandre Sayal; João V. Duarte; Gabriel N. Costa; Miguel Castelo-Branco A human cortical adaptive mutual inhibition circuit underlying competition for perceptual decision and repetition suppression reversal Journal Article In: NeuroImage, vol. 285, pp. 1–10, 2024. @article{Sousa2024, A model based on inhibitory coupling has been proposed to explain perceptual oscillations. This 'adapting reciprocal inhibition' model postulates that it is the strength of inhibitory coupling that determines the fate of competition between percepts. Here, we used an fMRI-based adaptation technique to reveal the influence of neighboring neuronal populations, such as reciprocal inhibition, in motion-selective hMT+/V5. If reciprocal inhibition exists in this region, the following predictions should hold: 1. stimulus-driven response would not simply decrease, as predicted by simple repetition-suppression of neuronal populations, but instead, increase due to the activity from adjacent populations; 2. perceptual decision involving competing representations, should reflect decreased reciprocal inhibition by adaptation; 3. neural activity for the competing percept should also later on increase upon adaptation. Our results confirm these three predictions, showing that a model of perceptual decision based on adapting reciprocal inhibition holds true. Finally, they also show that the net effect of the well-known repetition suppression phenomenon can be reversed by this mechanism. |
Claudio M. Privitera; Sean Noah; Thom Carney; Stanley A. Klein; Agatha Lenartowicz; Stephen P. Hinshaw; James T. McCracken; Joel T. Nigg; Sarah L. Karalunas; Rory C. Reid; Mercedes T. Oliva; Samantha S. Betts; Gregory V. Simpson Pupillary dilations in a Target/Distractor visual task paradigm and attention deficit hyperactivity disorder (ADHD) Journal Article In: Neuroscience Letters, vol. 818, pp. 1–6, 2024. @article{Privitera2024, ADHD is a neurocognitive disorder characterized by attention difficulties, hyperactivity, and impulsivity, often persisting into adulthood with substantial personal and societal consequences. Despite the importance of neurophysiological assessment and treatment monitoring tests, their availability outside of research settings remains limited. Cognitive neuroscience investigations have identified distinct components associated with ADHD, including deficits in sustained attention, inefficient enhancement of attended Targets, and altered suppression of ignored Distractors. In this study, we examined pupil activity in control and ADHD subjects during a sustained visual attention task specifically designed to evaluate the mechanisms underlying Target enhancement and Distractor suppression. Our findings revealed some distinguishing factors between the two groups which we discuss in light of their neurobiological implications. |
Juan D. Guevara Pinto; Megan H. Papesh High target prevalence may reduce the spread of attention during search tasks Journal Article In: Attention, Perception, & Psychophysics, vol. 86, no. 1, pp. 62–83, 2024. @article{Pinto2024, Target prevalence influences many cognitive processes during visual search, including target detection, search efficiency, and item processing. The present research investigated whether target prevalence may also impact the spread of attention during search. Relative to low-prevalence searches, high-prevalence searches typically yield higher fixation counts, particularly during target-absent trials. This may emerge because the attention spread around each fixation may be smaller for high than low prevalence searches. To test this, observers searched for targets within object arrays in Experiments 1 (free-viewing) and 2 (gaze-contingent viewing). In Experiment 3, observers searched for targets in a Rapid Serial Visual Presentation (RSVP) stream at the center of the display while simultaneously processing occasional peripheral objects. Experiment 1 used fixation patterns to estimate attentional spread, and revealed that attention was narrowed during high, relative to low, prevalence searches. This effect was weakened during gaze-contingent search (Experiment 2) but emerged again when eye movements were unnecessary in RSVP search (Experiment 3). These results suggest that, although task demands impact how attention is allocated across displays, attention may also narrow when searching for frequent targets. |
Arthur Pabst; Zoé Bollen; Nicolas Masson; Mado Gautier; Christophe Geus; Pierre Maurage Altered attentional processing of facial expression features in severe alcohol use disorder: An eye-tracking study. Journal Article In: Journal of Psychopathology and Clinical Science, vol. 133, no. 1, pp. 103–114, 2024. @article{Pabst2024, Social cognition impairments, and notably emotional facial expression (EFE) recognition difficulties, as well as their functional and clinical correlates, are increasingly documented in severe alcohol use disorder (SAUD). However, insights into their underlying mechanisms are lacking. Here, we tested if SAUD was associated with alterations in the attentional processing of EFEs. In a preregistered study, 40 patients with SAUD and 40 healthy controls (HCs) had to identify the emotional expression conveyed by faces while having their gaze recorded by an eye-tracker. We assessed indices of initial (first fixation locations) and later (number of fixations and dwell-time) attention with reference to regions of interest corresponding to the eyes, mouth, and nose, which carry key information for EFE recognition. We centrally found that patients had less first fixations to key facial features in general, as well as less fixations and dwell time to the eyes specifically, relative to the rest of the face, compared to controls. These effects were invariant across emotional expressions. Additional exploratory analyses revealed that patients with SAUD had a less structured viewing pattern than controls. These results offer novel, direct evidence that patients with SAUD's socioaffective difficulties already emerge at the facial attentional processing stage, along with precisions regarding the nature and generalizability of the effects. Potential implications for the mechanistic conceptualization and treatment of social cognition difficulties in SAUD are discussed. |
Siobhan M. McAteer; Anthony McGregor; Daniel T. Smith Precision in spatial working memory examined with mouse pointing Journal Article In: Vision Research, vol. 215, pp. 1–10, 2024. @article{McAteer2024, The capacity of visuospatial working memory (VSWM) is limited. However, there is continued debate surrounding the nature of this capacity limitation. The resource model (Bays et al., 2009) proposes that VSWM capacity is limited by the precision with which visuospatial features can be retained. In one of the few studies of spatial working memory, Schneegans and Bays (2016) report that memory guided pointing responses show a monotonic decrease in precision as set size increases, consistent with resource models. Here we report two conceptual replications of this study that use mouse responses rather than pointing responses. Overall results are consistent with the resource model, as there was an exponential increase in localisation error and monotonic increases in the probability of misbinding and guessing with increases in set size. However, an unexpected result of Experiment One was that, unlike Schneegans and Bays (2016), imprecision did not increase between set sizes of 2 and 8. Experiment Two replicated this effect and ruled out the possibility that the invariance of imprecision at set sizes greater than 2 was a product of oculomotor strategies during recall. We speculate that differences in imprecision are related to additional visuomotor transformations required for memory-guided mouse localisation compared to memory-guided manual pointing localisation. These data demonstrate the importance of considering the nature of the response modality when interpreting VSWM data. |
Xu Liu; Yu Li; Lihua Xu; Tianhong Zhang; Huiru Cui; Yanyan Wei; Mengqing Xia; Wenjun Su; Yingying Tang; Xiaochen Tang; Dan Zhang; Lothar Spillmann; Ian Max Andolina; Niall McLoughlin; Wei Wang; Jijun Wang Spatial and temporal abnormalities of spontaneous fixational saccades and their correlates with positive and cognitive symptoms in schizophrenia Journal Article In: Schizophrenia Bulletin, vol. 50, no. 1, pp. 78–88, 2024. @article{Liu2024, BACKGROUND AND HYPOTHESIS: Visual fixation is a dynamic process, with the spontaneous occurrence of microsaccades and macrosaccades. These fixational saccades are sensitive to the structural and functional alterations of the cortical-subcortical-cerebellar circuit. Given that dysfunctional cortical-subcortical-cerebellar circuit contributes to cognitive and behavioral impairments in schizophrenia, we hypothesized that patients with schizophrenia would exhibit abnormal fixational saccades and these abnormalities would be associated with the clinical manifestations. STUDY DESIGN: Saccades were recorded from 140 drug-naïve patients with first-episode schizophrenia and 160 age-matched healthy controls during ten separate trials of 6-second steady fixations. Positive and negative symptoms were assessed using the Positive and Negative Syndrome Scale (PANSS). Cognition was assessed using the Measurement and Treatment Research to Improve Cognition in Schizophrenia Consensus Cognitive Battery (MCCB). STUDY RESULTS: Patients with schizophrenia exhibited fixational saccades more vertically than controls, which was reflected in more vertical saccades with angles around 90° and a greater vertical shift of horizontal saccades with angles around 0° in patients. The fixational saccades, especially horizontal saccades, showed longer durations, faster peak velocities, and larger amplitudes in patients. Furthermore, the greater vertical shift of horizontal saccades was associated with higher PANSS total and positive symptom scores in patients, and the longer duration of horizontal saccades was associated with lower MCCB neurocognitive composite, attention/vigilance, and speed of processing scores. Finally, based solely on these fixational eye movements, a K-nearest neighbors model classified patients with an accuracy of 85%. Conclusions: Our results reveal spatial and temporal abnormalities of fixational saccades and suggest fixational saccades as a promising biomarker for cognitive and positive symptoms and for diagnosis of schizophrenia. |
Marianna Kyriacou Not batting an eye: Figurative meanings of L2 idioms do not interfere with literal uses Journal Article In: Languages, vol. 9, no. 32, pp. 1–15, 2024. @article{Kyriacou2024, Encountering idioms (hit the sack = "go to bed") in a second language (L2) often results in a literal-first understanding ("literally hit a sack"). The figurative meaning is retrieved later, subject to idiom familiarity and L2 proficiency, and typically at a processing cost. Intriguingly, recent findings report the overextension of idiom use in inappropriate contexts by advanced L2 users, with greater L2 proficiency somewhat mitigating this effect. In this study, we tested the tenability of this finding by comparing eye-movement patterns for idioms used literally, vs. literal control phrases (hit the dirt) in an eye-tracking-while-reading paradigm. We hypothesised that if idiom overextension holds, processing delays should be observed for idioms, as the (over)activated but contextually irrelevant figurative meanings would cause interference. In contrast, unambiguous control phrases should be faster to process. The results demonstrated undifferentiated processing for idioms used literally and control phrases across measures, with L2 proficiency affecting both similarly. Therefore, the findings do not support the hypothesis that advanced L2 users overextend idiom use in inappropriate contexts, nor that L2 proficiency modulates this tendency. The results are also discussed in light of potential pitfalls pertaining to idiom priming under typical experimental settings. |
Kristina Krasich; Kevin O'Neill; Samuel Murray; James R. Brockmole; Felipe De Brigard; Antje Nuthmann A computational modeling approach to investigating mind wandering-related adjustments to gaze behavior during scene viewing Journal Article In: Cognition, vol. 242, pp. 1–10, 2024. @article{Krasich2024, Research on gaze control has long shown that increased visual-cognitive processing demands in scene viewing are associated with longer fixation durations. More recently, though, longer durations have also been linked to mind wandering, a perceptually decoupled state of attention marked by decreased visual-cognitive processing. Toward better understanding the relationship between fixation durations and visual-cognitive processing, we ran simulations using an established random-walk model for saccade timing and programming and assessed which model parameters best predicted modulations in fixation durations associated with mind wandering compared to attentive viewing. Mind wandering-related fixation durations were best described as an increase in the variability of the fixation-generating process, leading to more variable—sometimes very long—durations. In contrast, past research showed that increased processing demands increased the mean duration of the fixation-generating process. The findings thus illustrate that mind wandering and processing demands modulate fixation durations through different mechanisms in scene viewing. This suggests that processing demands cannot be inferred from changes in fixation durations without understanding the underlying mechanism by which these changes were generated. |
Ziva Korda; Sonja Walcher; Christof Körner; Mathias Benedek Decoupling of the pupillary light response during internal attention: The modulating effect of luminance intensity Journal Article In: Acta Psychologica, vol. 242, pp. 1–11, 2024. @article{Korda2024, In a world full of sensory stimuli, attention guides us between the external environment and our internal thoughts. While external attention involves processing sensory stimuli, internal attention is devoted to self-generated representations such as planning or spontaneous mind wandering. They both draw from common cognitive resources, thus simultaneous engagement in both often leads to interference between processes. In order to maintain internal focus, an attentional mechanism known as perceptual decoupling takes effect. This mechanism supports internal cognition by decoupling attention from the perception of sensory information. Two previous studies of our lab investigated to what extent perceptual decoupling is evident in voluntary eye movements. Findings showed that the effect is mediated by the internal task modality and workload (visuospatial > arithmetic and high > low, respectively). However, it remains unclear whether it extends to involuntary eye behavior, which may not share cognitive resources with internal activities. Therefore, the present experiment aimed to further elucidate attentional dynamics by examining whether internal attention affects the pupillary light response (PLR). Specifically, we consistently observed that workload and task modality of the internal task reduced the PLR to luminance changes of medium intensity. However, the PLR to strong luminance changes was less or not at all affected by the internal task. These results suggest that perceptual decoupling effects may be less consistent in involuntary eye behavior, particularly in the context of a salient visual stimulus. |
Damian Koevoet; Marnix Naber; Stefan Stigchel The intensity of internal and external attention assessed with pupillometry Journal Article In: Journal of Cognition, vol. 7, pp. 1–10, 2024. @article{Koevoet2024, Not only is visual attention shifted to objects in the external world, attention can also be directed to objects in memory. We have recently shown that pupil size indexes how strongly items are attended externally, which was reflected in more precise encoding into visual working memory. Using a retro-cuing paradigm, we here replicated this finding by showing that stronger pupil constrictions during encoding were reflective of the depth of encoding. Importantly, we extend this previous work by showing that pupil size also revealed the intensity of internal attention toward content stored in visual working memory. Specifically, pupil dilation during the prioritization of one among multiple internally stored representations predicted the precision of the prioritized item. Furthermore, the dynamics of the pupillary responses revealed that the intensity of internal and external attention independently determined the precision of internalized visual representations. Our results show that both internal and external attention are not all-or-none processes, but should rather be thought of as continuous resources that can be deployed at varying intensities. The employed pupillometric approach allows to unravel the intricate interplay between internal and external attention and their effects on visual working memory. |
Janina Hüer; Pankhuri Saxena; Stefan Treue Pathway-selective optogenetics reveals the functional anatomy of top–down attentional modulation in the macaque visual cortex Journal Article In: Proceedings of the National Academy of Sciences, vol. 121, no. 3, pp. 1–9, 2024. @article{Hueer2024, Spatial attention represents a powerful top–down influence on sensory responses in primate visual cortical areas. The frontal eye field (FEF) has emerged as a key candidate area for the source of this modulation. However, it is unclear whether the FEF exerts its effects via its direct axonal projections to visual areas or indirectly through other brain areas and whether the FEF affects both the enhancement of attended and the suppression of unattended sensory responses. We used pathway-selective optogenetics in rhesus macaques performing a spatial attention task to inhibit the direct input from the FEF to area MT, an area along the dorsal visual pathway specialized for the processing of visual motion information. Our results show that the optogenetic inhibition of the FEF input specifically reduces attentional modulation in MT by about a third without affecting the neurons' sensory response component. We find that the direct FEF-to-MT pathway contributes to both the enhanced processing of target stimuli and the suppression of distractors. The FEF, thus, selectively modulates firing rates in visual area MT, and it does so via its direct axonal projections. |
Nimrod Hertz-Palmor; Yam Yosef; Hadar Hallel; Inbar Bernat; Amit Lazarov Exploring the ‘mood congruency' hypothesis of attention allocation – An eye-tracking study Journal Article In: Journal of Affective Disorders, vol. 347, pp. 619–629, 2024. @article{HertzPalmor2024, Background: The ‘mood-congruency' hypothesis of attention allocation postulates that individuals' current emotional states affect their attention allocation, such that mood-congruent stimuli take precedence over non-congruent ones. This hypothesis has been further suggested as an underlying mechanism of biased attention allocation in depression. Methods: The present research explored the mood-congruency hypothesis using a novel video-based mood elicitation procedure (MEP) and an established eye-tracking attention allocation assessment task, elaborating prior research in the field. Specifically, in Study 1 (n = 91), a video-based MEP was developed and rigorously validated. In study 2 (n = 60), participants' attention allocation to sad and happy face stimuli, each presented separately alongside neutral faces, was assessed before and after the video-based MEP, with happiness induced in one group (n = 30) while inducing sadness in the other (n = 30). Results: In Study 1, the MEP yielded the intended modification of participants' current mood states (eliciting either sadness or happiness). Study 2 showed that while the MEP modified mood in the intended direction in both groups, replicating the results of Study 1, corresponding changes in attention allocation did not ensue in either group. A Bayesian analysis of pre-to-post mood elicitation changes in attention allocation supported this null finding. Moreover, results revealed an attention bias to happy faces across both groups and assessment points, suggestive of a trait-like positive bias in attention allocation among non-selected participants. Conclusion: Current results provide no evidence supporting the mood-congruency hypothesis, which suggests that (biased) attention allocation may be better conceptualized as a depressive trait, rather than a mood-congruent state. |
Yawen Guo; Jon D. Elhai; Christian Montag; Yang Wang; Haibo Yang Problematic mobile gamers have attention bias toward game social information Journal Article In: Computers in Human Behavior, vol. 152, pp. 1–13, 2024. @article{Guo2024, Attention bias towards game information influences players' problematic mobile game usage (PMGU). Social experience is an important part of games. This study aimed to explore attention bias mechanisms of problematic mobile gamers for game social information. Experiments 1 and 2 recruited 68 participants (19.82 ± 1.38 years), and used the dot-probe task to investigate attention bias among problematic mobile gamers. Results showed that reaction time and trial-level bias scores (TL-BS) of socially anxious problematic mobile gamers toward game social information were not significantly different from those toward game non-social information. Experiment 3 recruited 35 participants (19.71 ± 1.18 years), and combined eye-tracking technology with the dot-probe task to investigate problematic mobile gamers' attention bias and dynamic visual processing. Results of this last experiment showed that socially anxious problematic mobile gamers' first fixation latency for game social information was significantly shorter than for game non-social information, and their gaze duration and total fixation duration were significantly longer for social than game non-social information. In summary, the eye tracking experiments give support for the idea that socially anxious problematic mobile gamers show attention bias towards game social information, which is presented as the vigilance-maintenance pattern. |
M. Ghorbani; F. S. Izadi; S. S. Roshan; R. Ebrahimpour Assessing prospective teachers' geometric transformations thinking: A Van Hiele Theory-based analysis with eye tracking cognitive science method Journal Article In: Technology of Education Journal, vol. 18, no. 1, pp. 67–88, 2024. @article{Ghorbani2024, Background and Objectives: Geometric transformations have played a crucial role throughout history in various aspects of human life. Symmetry is one of the important concepts in school mathematics. Students' academic performance is intricately connected to the knowledge and skills of their educators. Recognizing the importance of prospective teachers (PTs) as future educators, in the initial stage, the aim of this research is to assess and analyze the levels of geometric thinking among prospective elementary teachers (PETs) utilizing Van Hiele's theory. Subsequently, the research seeks to delve into the thinking process and gaze patterns of prospective mathematics education teachers (PMETs) using the cognitive science method of eye tracking. Materials and Methods: This study focuses on investigating and evaluating the thinking of geometric transformations and problem-solving skills among prospective teachers (PTs). The research method employed a combined survey method, encompassing two distinct tests conducted on two groups of PTs. The accessible statistical sample includes 50 participating PETs and 21 participating PMETs from Iran. The PETs of Farhangian University of Isfahan were divided into two groups: 42 students who had not learned the concept of geometric transformations in their undergraduate program (NPGT), and 8 students who had learned this concept in their undergraduate program (PGT). To assess the level of geometric thinking among participants, a self-made geometric test based on Van Hiele's theory was utilized. The test reliability was assessed using Cronbach's alpha coefficient, which yielded a value of 0.68. Additionally, the validity of the test has been confirmed by some professors. In evaluating geometric thinking, a cognitive science method was employed. This method involved designing a psychophysical experiment and recording eye movements of the PMETs. The psychophysical experiment was conducted in the computer laboratory of Shahid Rajaee Teacher Training University, Tehran, using an EyeLink device and MATLAB software with student teachers of mathematics education at this university. Findings: The results of the research show that students recognize the shape with symmetry as a symmetrical shape, but they perform poorly in determining the type of symmetry of symmetrical shapes, especially when a shape has rotational symmetry or oblique axial symmetry or a combination of several types of symmetry. In the first stage, the evaluation of PETs responses showed that 34% of them were in the first level and 18% in the second level of Van Hiele. The cognitive findings revealed that PMETs demonstrated superior performance in recognizing symmetries characterized by a single type of symmetry, in contrast to shapes involving combinations of various symmetries. Examining the recorded eye-tracking images of the students revealed a difference in gaze patterns between the groups that gave correct and incorrect answers. In addition, this difference is also evident among images with different symmetries (reflection, central, rotational). Conclusions: The current research confirms the weakness of students in identifying the type of symmetry in symmetrical shapes. It also emphasizes the need to pay more attention to the training of PTs during their academic years. To address this, it is suggested to revise the curriculum concerning geometric transformations in the university courses for PT training. Additionally, the utilization of software such as Augmented Reality (AR) and GeoGebra can contribute to enhancing the cognitive and visual abilities of PTs in comprehending the concept of symmetry. |
Nora Geiser; Brigitte Charlotte Kaufmann; Samuel Elia Johannes Knobel; Dario Cazzoli; Tobias Nef; Thomas Nyffeler Comparison of uni- and multimodal motion stimulation on visual neglect: A proof-of-concept study Journal Article In: Cortex, vol. 171, pp. 194–203, 2024. @article{Geiser2024, Spatial neglect is characterized by the failure to attend stimuli presented in the contralesional space. Typically, the visual modality is more severely impaired than the auditory one. This dissociation offers the possibility of cross-modal interactions, whereby auditory stimuli may have beneficial effects on the visual modality. A new auditory motion stimulation method with music dynamically moving from the right to the left hemispace has recently been shown to improve visual neglect. The aim of the present study was twofold: a) to compare the effects of unimodal auditory against visual motion stimulation, i.e., smooth pursuit training, which is an established therapeutical approach in neglect therapy and b) to explore whether a combination of auditory + visual motion stimulation, i.e., multimodal motion stimulation, would be more effective than unimodal auditory or visual motion stimulation. 28 patients with left-sided neglect due to a first-ever, right-hemispheric subacute stroke were included. Patients either received auditory, visual, or multimodal motion stimulation. The between-group effect of each motion stimulation condition as well as a control group without motion stimulation was investigated by means of a one-way ANOVA with the patient's visual exploration behaviour as an outcome variable. Our results showed that unimodal auditory motion stimulation is equally effective as unimodal visual motion stimulation: both interventions significantly improved neglect compared to the control group. Multimodal motion stimulation also significantly improved neglect, however, did not show greater improvement than unimodal auditory or visual motion stimulation alone. Besides the established visual motion stimulation, this proof-of-concept study suggests that auditory motion stimulation seems to be an alternative promising therapeutic approach to improve visual attention in neglect patients. Multimodal motion stimulation does not lead to any additional therapeutic gain. In neurorehabilitation, the implementation of either auditory or visual motion stimulation seems therefore reasonable. |
Beatriz García-Carrión; Francisco Muñoz-Leiva; Salvador Del Barrio-García; Lucia Porcu The effect of online message congruence, destination-positioning, and emojis on users' cognitive effort and affective evaluation Journal Article In: Journal of Destination Marketing and Management, vol. 31, pp. 1–13, 2024. @article{GarciaCarrion2024, In today's digital world, it is crucial that Destination Management Organizations (DMOs) understand how tourists process and assimilate the information they receive through social media, whether this is posted online by the destination itself or by other users. When it comes to understanding the effectiveness of DMOs' integrated marketing communication (IMC) strategies, it is important to examine the extent to which the congruence between those online messages posted by the destination and those posted by other users (electronic word-of-mouth) influences the effectiveness of the communication. Similarly, it is also of value to understand the degree to which the use of emojis in social media messages may enhance the effect of congruence on IMC effectiveness. The scientific literature has found that tourists' responses to the information published online by the destination will depend on the type of positioning it adopts on its social media. The novelty of the present study lies in addressing these issues from a neuroscientific perspective, using eye-tracking technology, to study (i) the user's cognitive effort (based on ocular indicators) when processing social media content and (ii) their affective evaluation of that content. A factorial experiment is conducted on a sample of 58 Facebook users. The results point to the important role played by the level of message congruence in users' information-processing and demonstrate the contextualizing effect exerted by emojis. Additionally, this study highlights the need for further research into the cognitive processing of tourism messages relative to different positioning strategies. |
Eunice G. Fernandes; Benjamin W. Tatler; Gillian Slessor; Louise H. Phillips Age differences in gaze following: Older adults follow gaze more than younger adults when free-viewing scenes Journal Article In: Experimental Aging Research, vol. 50, no. 1, pp. 84–101, 2024. @article{Fernandes2024, Previous research investigated age differences in gaze following with an attentional cueing paradigm where participants view a face with averted gaze, and then respond to a target appearing in a location congruent or incongruent with the gaze cue. However, this paradigm is far removed from the way we use gaze cues in everyday settings. Here we recorded the eye movements of younger and older adults while they freely viewed naturalistic scenes where a person looked at an object or location. Older adults were more likely to fixate and made more fixations to the gazed-at location, compared to younger adults. Our findings suggest that, contrary to what was observed in the traditional gaze-cueing paradigm, in a non-constrained task that uses contextualized stimuli older adults follow gaze as much as or even more than younger adults. |
Cynthia Faurite; Louise Kauffmann; Benoit R. Cottereau Interaction between central and peripheral vision: Influence of distance and spatial frequencies Journal Article In: Journal of Vision, vol. 1, no. 3, pp. 1–22, 2024. @article{Faurite2024, Visual scene perception is based on reciprocal interactions between central and peripheral information. Such interactions are commonly investigated through the semantic congruence effect, which usually reveals a congruence effect of central vision on peripheral vision as strong as the reverse. The aim of the present study was to further investigate the mechanisms underlying central-peripheral visual interactions using a central-peripheral congruence paradigm through three behavioral experiments. We simultaneously presented a central and a peripheral stimulus that could be either semantically congruent or incongruent. To assess the congruence effect of central vision on peripheral vision, participants had to categorize the peripheral target stimulus while ignoring the central distractor stimulus. To assess the congruence effect of the peripheral vision on central vision, they had to categorize the central target stimulus while ignoring the peripheral distractor stimulus. Experiment 1 revealed that the physical distance between central and peripheral stimuli influences central-peripheral visual interactions: Congruence effect of central vision is stronger when the distance between the target and the distractor is the shortest. Experiments 2 and 3 revealed that the spatial frequency content of distractors also influences central-peripheral interactions: Congruence effect of central vision is observed only when the distractor contained high spatial frequencies while congruence effect of peripheral vision is observed only when the distractor contained low spatial frequencies. These results raise the question of how these influences are exerted (bottom-up vs. top-down) and are discussed based on the retinocortical properties of the visual system and the predictive brain hypothesis. |
Camille Fakche; Laura Dugué Perceptual cycles travel across retinotopic space Journal Article In: Journal of Cognitive Neuroscience, vol. 36, no. 1, pp. 200–216, 2024. @article{Fakche2024, Visual perception waxes and wanes periodically over time at low frequencies (theta: 4–7 Hz; alpha: 8–13 Hz), creating "perceptual cycles." These perceptual cycles can be induced when stimulating the brain with a flickering visual stimulus at the theta or alpha frequency. Here, we took advantage of the well-known organization of the visual system into retinotopic maps (topographic correspondence between visual and cortical spaces) to assess the spatial organization of induced perceptual cycles. Specifically, we tested the hypothesis that they can propagate across the retinotopic space. A disk oscillating in luminance (inducer) at 4, 6, 8, or 10 Hz was presented in the periphery of the visual field to induce perceptual cycles at specific frequencies. EEG recordings verified that the brain responded at the corresponding inducer frequencies and their first harmonics. Perceptual cycles were assessed with a concurrent detection task—target stimuli were displayed at threshold contrast (50% detection) at random times during the inducer. Behavioral results confirmed that perceptual performance was modulated periodically by the inducer at each frequency. We additionally manipulated the distance between the target and the inducer (three possible positions) and showed that the optimal phase, that is, moment of highest target detection, shifted across target distance to the inducer, specifically when its flicker frequency was in the alpha range (8 and 10 Hz). These results demonstrate that induced alpha perceptual cycles travel across the retinotopic space in humans at a propagation speed of 0.3–0.5 m/sec, consistent with the speed of unmyelinated horizontal connections in the visual cortex. |
Eeva Eskola; Eeva-Leena Kataja; Jukka Hyönä; Hetti Hakanen; Saara Nolvi; Tuomo Häikiö; Juho Pelto; Hasse Karlsson; Linnea Karlsson; Riikka Korja Lower maternal emotional availability is related to increased attention toward fearful faces during infancy Journal Article In: Infant Behavior and Development, vol. 74, pp. 1–12, 2024. @article{Eskola2024, It has been suggested that infants' age-typical attention biases for faces and facial expressions have an inherent connection with the parent–infant interaction. However, only a few previous studies have addressed this topic. To investigate the association between maternal caregiving behaviors and an infant's attention for emotional faces, 149 mother–infant dyads were assessed when the infants were 8 months. Caregiving behaviors were observed during free-play interactions and coded using the Emotional Availability Scales. The composite score of four parental dimensions, that are sensitivity, structuring, non-intrusiveness, and non-hostility, was used in the analyses. Attention disengagement from faces was measured using eye tracking and face-distractor paradigm with neutral, happy, and fearful faces and scrambled-face control pictures as stimuli. The main finding was that lower maternal emotional availability was related to an infant's higher attention to fearful faces (p = .042), when infant sex and maternal age, education, and concurrent depressive and anxiety symptoms were controlled. This finding indicates that low maternal emotional availability may sensitize infants' emotion processing system for the signals of fear at least during this specific age around 8 months. The significance of the increased attention toward fearful faces during infancy is an important topic for future research. |
H. Ershaid; M. Lizarazu; D. J. McLaughlin; M. Cooke; O. Simantiraki; M. Koutsogiannaki; M. Lallier Contributions of listening effort and intelligibility to cortical tracking of speech in adverse listening conditions Journal Article In: Cortex, vol. 172, pp. 54–71, 2024. @article{Ershaid2024, Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening effort, defined as the cognitive resources necessary for speech comprehension, and reported to have a strong negative correlation with speech intelligibility. Yet, no studies have examined the relationship between speech intelligibility, listening effort, and cortical tracking of speech. The aim of the present study was thus to examine these factors in quiet and distinct adverse listening conditions. Forty-nine normal hearing adults listened to sentences produced casually, presented in quiet and two adverse listening conditions: cafeteria noise and reverberant speech. Electrophysiological responses were registered with electroencephalogram, and listening effort was estimated subjectively using self-reported scores and objectively using pupillometry. Results indicated varying impacts of adverse conditions on intelligibility, listening effort, and cortical tracking of speech, depending on the preservation of the speech temporal envelope. The more distorted envelope in the reverberant condition led to higher listening effort, as reflected in higher subjective scores, increased pupil diameter, and stronger cortical tracking of speech in the delta band. These findings suggest that using measures of listening effort in addition to those of intelligibility is useful for interpreting cortical tracking of speech results. Moreover, reading and phonological skills of participants were positively correlated with listening effort in the cafeteria condition, suggesting a special role of expert language skills in processing speech in this noisy condition. Implications for future research and theories linking atypical cortical tracking of speech and reading disorders are further discussed. |
Yufei Du; Haibo Yang The influence of subjective value on mobile payment security warnings: An eye movement study Journal Article In: Displays, vol. 82, pp. 1–11, 2024. @article{Du2024, Payment security has become a vital issue with the popularization of mobile payments among people and in various fields. Warnings are designed to alert users to potential risks but are only effective if users understand them. The current study aims to investigate whether the subjective value of colour formed by experiences influences the effectiveness of mobile payment security warnings. Using eye-tracking techniques, Experiment 1 compared the difference in user behaviour between the high-risk condition (red warnings) and the low-risk condition (green warnings). Experiment 2 detected whether the amounts transferred impacted users' behaviour that was affected by the subjective value of colour. The results showed that compared to a warning with a low-risk condition, warnings with a high-risk condition could capture the attention of participants more quickly, leading to more payment rejection. The results also showed that when making macro payments, the amounts may be prioritized over the subjective value of colour to drive attention and make the payment decision. This study shows the influence of users' characteristics on the interaction process and provides data to support interaction interface design and user behaviour research. |
Carola Dolci; Einat Rashal; Elisa Santandrea; Suliann Ben; Leonardo Chelazzi; Emiliano Macaluso; C. Nico Boehler The dynamics of statistical learning in visual search and its interaction with salience processing: An EEG study Journal Article In: NeuroImage, vol. 286, pp. 1–12, 2024. @article{Dolci2024, Visual attention can be guided by statistical regularities in the environment, that people implicitly learn from past experiences (statistical learning, SL). Moreover, a perceptually salient element can automatically capture attention, gaining processing priority through a bottom-up attentional control mechanism. The aim of our study was to investigate the dynamics of SL and if it shapes attentional target selection additively with salience processing, or whether these mechanisms interact, e.g. one gates the other. In a visual search task, we therefore manipulated target frequency (high vs. low) across locations while, in some trials, the target was salient in terms of colour. Additionally, halfway through the experiment, the high-frequency location changed to the opposite hemifield. EEG activity was simultaneously recorded, with a specific interest in two markers related to target selection and post-selection processing, respectively: N2pc and SPCN. Our results revealed that both SL and saliency significantly enhanced behavioural performance, but also interacted with each other, with an attenuated saliency effect at the high-frequency target location, and a smaller SL effect for salient targets. Concerning processing dynamics, the benefit of salience processing was more evident during the early stage of target selection and processing, as indexed by a larger N2pc and early-SPCN, whereas SL modulated the underlying neural activity particularly later on, as revealed by larger late-SPCN. Furthermore, we showed that SL was rapidly acquired and adjusted when the spatial imbalance changed. Overall, our findings suggest that SL is flexible to changes and, combined with salience processing, jointly contributes to establishing attentional priority. |
Vaibhav A. Diwadkar; Deborah Kashy; Jacqueline Bao; Katharine N. Thakkar Abnormal oculomotor corollary discharge signaling as a trans-diagnostic mechanism of psychosis Journal Article In: Schizophrenia Bulletin, pp. 1–11, 2024. @article{Diwadkar2024, Background and Hypothesis: Corollary discharge (CD) signals are “copies” of motor signals sent to sensory areas to predict the corresponding input. They are a posited mechanism enabling one to distinguish actions generated by oneself vs external forces. Consequently, altered CD is a hypothesized mechanism for agency disturbances in psychosis. Previous studies have shown a decreased influence of CD signals on visual perception in individuals with schizophrenia—particularly in those with more severe positive symptoms. We therefore hypothesized that altered CD may be a trans-diagnostic mechanism of psychosis. Study Design: We examined oculomotor CD (using the blanking task) in 49 participants with schizophrenia or schizoaffective disorder (SZ), 36 bipolar participants with psychosis (BPP), and 40 healthy controls (HC). Participants made a saccade to a visual target. Upon saccade initiation, the target disappeared and reappeared at a horizontally displaced position. Participants indicated the direction of displacement. With intact CD, participants can make accurate perceptual judgements. Otherwise, participants may use saccade landing site as a proxy of pre-saccadic target to inform perception. Thus, multi-level modeling was used to examine the influence of target displacement and saccade landing site on displacement judgements. Study Results: SZ and BPP were equally less sensitive to target displacement than HC. Moreover, regardless of diagnosis, SZ and BPP with more severe positive symptoms were more likely to rely on saccade landing site. Conclusions: These results suggest that altered CD may be a trans-diagnostic mechanism of psychosis. |
Larisa-Maria Dinu; Alexandra-Livia Georgescu; Samriddhi N. Singh; Nicola C. Byrom; G. Overton; Bryan F. Singer; Eleanor J. Dommett Sign-tracking and goal-tracking in humans: Utilising eye-tracking in clinical and non-clinical populations Journal Article In: Behavioural Brain Research, vol. 461, pp. 1–10, 2024. @article{Dinu2024, Background: In Pavlovian conditioning, learned behaviour varies according to the perceived value of environmental cues. For goal-trackers (GT), the cue merely predicts a reward, whilst for sign-trackers (ST), the cue holds incentive value. The sign-tracking/goal-tracking model is well-validated in animals, but translational work is lacking. Despite the model's relevance to several conditions, including attention deficit hyperactivity disorder (ADHD), we are unaware of any studies that have examined the model in clinical populations. Methods: The current study used an eye-tracking Pavlovian conditioning paradigm to identify ST and GT in non-clinical (N = 54) and ADHD (N = 57) participants. Eye movements were recorded whilst performing the task. Dwell time was measured for two areas of interest: sign (i.e., cue) and goal (i.e., reward), and an eye-gaze index (EGI) was computed based on the dwell time sign-to-goal ratio. Higher EGI values indicate sign-tracking behaviour. ST and GT were determined using median and tertiary split approaches in both samples. Results: Despite greater propensity for sign-tracking in those with ADHD, there was no significant difference between groups. The oculomotor conditioned response was reward-specific (CS+) and present, at least partly, from the start of the task indicating dispositional and learned components. There were no differences in externalising behaviours between ST and GT for either sample. Conclusions: Sign-tracking is associated with CS+ trials only. There may be both dispositional and learned components to sign-tracking, potentially more common in those with ADHD. This holds translational potential for understanding individual differences in reward-learning. |
Sarah C. Creel; Conor I. Frye Minimal gains for minimal pairs: Difficulty in learning similar-sounding words continues into preschool Journal Article In: Journal of Experimental Child Psychology, vol. 240, pp. 1–27, 2024. @article{Creel2024, A critical indicator of spoken language knowledge is the ability to discern the finest possible distinctions that exist between words in a language—minimal pairs, for example, the distinction between the novel words beesh and peesh. Infants differentiate similar-sounding novel labels like “bih” and “dih” by 17 months of age or earlier in the context of word learning. Adult word learners readily distinguish similar-sounding words. What is unclear is the shape of learning between infancy and adulthood: Is there a nonlinear increase early in development, or is there protracted improvement as experience with spoken language amasses? Three experiments tested monolingual English-speaking children aged 3 to 6 years and young adults. Children underperformed when learning minimal-pair words compared with adults (Experiment 1), compared with learning dissimilar words even when speech materials were optimized for young children (Experiment 2), and when the number of word instances during learning was quadrupled (Experiment 3). Nonetheless, the youngest group readily recognized familiar minimal pairs (Experiment 3). Results are consistent with a lengthy trajectory for detailed sound pattern learning in one's native language(s), although other interpretations are possible. Suggestions for research on developmental trajectories across various age ranges are made. |
Mariya V. Cherkasova; Luke Clark; Jason J. S. Barton; A. Jon Stoessl; A. Winstanley Risk-promoting effects of reward-paired cues in human sign- and goal-trackers Journal Article In: Behavioural Brain Research, vol. 461, pp. 1–13, 2024. @article{Cherkasova2024, Animal research suggests trait-like individual variation in the degree of incentive salience attribution to reward-predictive cues, defined phenotypically as sign-tracking (high) and goal-tracking (low incentive salience attribution). While these phenotypes have been linked to addiction features in rodents, their translational validity is less clear. Here, we examined whether sign- and goal-tracking in healthy human volunteers modulates the effects of reward-paired cues on decision making. Sign-tracking was measured in a Pavlovian conditioning paradigm as the amount of eye gaze fixation on the reward-predictive cue versus the location of impending reward delivery. In Study 1 (Cherkasova et al., 2018), participants were randomly assigned to perform a binary choice task in which rewards were either accompanied (cued |
Maya Campbell; Nicole Oppenheimer; Alex L. White Severe processing capacity limits for sub-lexical features of letter strings Journal Article In: Attention, Perception, & Psychophysics, pp. 1–10, 2024. @article{Campbell2024, When reading, the visual system is confronted with many words simultaneously. How much of that information can a reader process at once? Previous studies demonstrated that low-level visual features of multiple words are processed in parallel, but lexical attributes are processed serially, for one word at a time. This implies that an internal bottleneck lies somewhere between early visual and lexical analysis. We used a dual-task behavioral paradigm to investigate whether this bottleneck lies at the stage of letter recognition or phonological decoding. On each trial, two letter strings were flashed briefly, one above and one below fixation, and then masked. In the letter identification experiment, participants indicated whether a vowel was present in a particular letter string. In the phonological decoding experiment, participants indicated whether the letter string was pronounceable. We compared accuracy in a focused attention condition, in which participants judged only one of the two strings, with accuracy in a divided attention condition, in which participants judged both strings independently. In both experiments, the cost of dividing attention was so large that it supported a serial model: participants were able to process only one letter string per trial. Furthermore, we found a stimulus processing trade-off that is characteristic of serial processing: When participants judged one string correctly, they were less likely to judge the other string correctly. Therefore, the bottleneck that constrains word recognition under these conditions arises at a sub-lexical level, perhaps due to a limit on the efficiency of letter recognition. |
Zoé Bollen; Arthur Pabst; Nicolas Masson; Reinout W. Wiers; Matt Field; Pierre Maurage Craving modulates attentional bias towards alcohol in severe alcohol use disorder: An eye-tracking study Journal Article In: Addiction, vol. 119, no. 1, pp. 1–11, 2024. @article{Bollen2024, Background and aims: Competing models disagree on three theoretical questions regarding alcohol-related attentional bias (AB), a key process in severe alcohol use disorder (SAUD): (1) is AB more of a trait (fixed, associated with alcohol use severity) or state (fluid, associated with momentary craving states) characteristic of SAUD; (2) does AB purely reflect the over-activation of the reflexive/reward system or is it also influenced by the activity of the reflective/control system and (3) does AB rely upon early or later processing stages? We addressed these issues by investigating the time-course of AB and its modulation by subjective craving and cognitive load in SAUD. Design: A free-viewing eye-tracking task, presenting pictures of alcoholic and non-alcoholic beverages, combined with a concurrent cognitive task with three difficulty levels. Setting: A laboratory setting in the detoxification units of three Belgian hospitals. Participants: We included 30 patients with SAUD self-reporting craving at testing time, 30 patients with SAUD reporting a total absence of craving and 30 controls matched on sex and age. All participants from SAUD groups met the DSM-5 criteria for SAUD. Measurements: We assessed AB through early and late eye-tracking indices. We evaluated the modulation of AB by craving (comparison between patients with/without craving) and cognitive load (variation of AB with the difficulty level of the concurrent task). Findings: Dwell time measure indicated that SAUD patients with craving allocated more attention towards alcohol-related stimuli than patients without craving (P < 0.001 |
Jacek Bielas; Damian Przybycień; Łukasz Michalczyk Temperament affected visuospatial orienting on discrimination tasks Journal Article In: Perceptual and Motor Skills, vol. 0, no. 0, pp. 1–15, 2024. @article{Bielas2024, In the Posner cueing paradigm, the early attentional capture and subsequent inhibition of return (IOR) of attention to the same location, although they are microscale phenomena measured in milliseconds, seem to encapsulate the interaction between two fundamental dimensions of behavior - engaging in and sustaining activity versus withdrawing from and inhibiting activity. In the field of differential psychology, the dynamics of reciprocal relations between these behavioral dimensions have been thought to be determined by central nervous system properties that constitute an individual's temperament. Yet the research on any differential effects of temperament on visuospatial orienting is rather sparse and has produced ambiguous results. Here, we used saccadic responses to measure whether individual differences in reactivity as a temperamental trait might affect orienting of visuospatial attention on discrimination cueing tasks. Our results suggested that, in individuals with lower reactivity, attentional capture took place at a short stimulus onset asynchrony (SOA), producing a facilitatory cueing effect, which was not the case in those who were higher in reactivity. We explain and discuss these results with the Regulative Theory of Temperament. |
Omer Azriel; Gal Arad; Daniel S. Pine; Amit Lazarov; Yair Bar-Haim Attention bias vs. attention control modification for social anxiety disorder: A randomized controlled trial Journal Article In: Journal of Anxiety Disorders, vol. 101, pp. 1–10, 2024. @article{Azriel2024, Gaze-Contingent Music Reward Therapy (GC-MRT) is an eye-tracking-based attention bias modification protocol for social anxiety disorder (SAD) with established clinical efficacy. However, it remains unclear if improvement following GC-MRT hinges on modification of threat-related attention or on more general enhancement of attention control. Here, 50 patients with SAD were randomly allocated to GC-MRT using either threat faces or shapes. Results indicate comparable reductions in social anxiety and co-morbid depression symptoms in the two conditions. Patients in the shapes condition showed a significant increase in attention control and a reduction in attention to both the trained shapes and threat faces, whereas patients in the faces condition showed a reduction in attention to threat faces only. These findings suggest that enhancement of attention control, independent of valence-specific attention modification, may facilitate reduction in SAD symptoms. Alternative interpretations and clinical implications of the current findings are discussed. |
Victoria I. Nicholls; Jan Wiener; Andrew Isaac Meso; Sebastien Miellet The impact of perceptual complexity on road crossing decisions in younger and older adults Journal Article In: Scientific Reports, vol. 14, no. 479, pp. 1–14, 2024. @article{Nicholls2024, Cognitive abilities decline with healthy ageing, which can have a critical impact on day-to-day activities. One example is road crossing, where older adults (OAs) disproportionally fall victim to pedestrian accidents. The current research comprised two virtual reality experiments that investigated how the complexity of the road crossing situation impacts OAs (N = 19, ages 65–85) and younger adults (YAs |
Yunyun Mu; Anna Schubö; Jan Tünnermann Adapting attentional control settings in a shape-changing environment Journal Article In: Attention, Perception, & Psychophysics, pp. 1–18, 2024. @article{Mu2024, In rich visual environments, humans have to adjust their attentional control settings in various ways, depending on the task. Especially if the environment changes dynamically, it remains unclear how observers adapt to these changes. In two experiments (online and lab-based versions of the same task), we investigated how observers adapt their target choices while searching for color singletons among shape distractor contexts that changed over trials. The two equally colored targets had shapes that differed from each other and matched a varying number of distractors. Participants were free to select either target. The results show that participants adjusted target choices to the shape ratio of distractors: even though the task could be finished by focusing on color only, participants showed a tendency to choose targets matching with fewer distractors in shape. The time course of this adaptation showed that the regularities in the changing environment were taken into account. A Bayesian modeling approach was used to provide a fine-grained picture of how observers adapted their behavior to the changing shape ratio with three parameters: the strength of adaptation, its delay relative to the objective distractor shape ratio, and a general bias toward specific shapes. Overall, our findings highlight that systematic changes in shape, even when it is not a target-defining feature, influence how searchers adjust their attentional control settings. Furthermore, our comparison between lab-based and online assessments with this paradigm suggests that shape is a good choice as a feature dimension in adaptive choice online experiments. |
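The Mu et al. abstract above names three model parameters: adaptation strength, a delay relative to the objective distractor shape ratio, and a general shape bias. The toy sketch below only illustrates how such parameters could enter a choice rule; it is not the authors' Bayesian model, and every function name, parameter value, and the logistic form are assumptions.

```python
# Toy illustration (not the authors' model) of a choice rule with three
# parameters of the kind named in the abstract: adaptation strength, a delay
# (in trials) relative to the objective distractor shape ratio, and a shape bias.
import math

def p_choose_shape_a(trial: int, shape_a_ratio: list, strength: float,
                     delay: int, bias: float) -> float:
    """Probability of choosing the target whose shape matches the type-A distractors.

    shape_a_ratio[t] is the proportion of type-A distractors on trial t (0..1);
    a higher proportion should make the A-shaped target less attractive.
    """
    t = max(0, trial - delay)                       # adaptation lags the objective ratio
    drive = strength * (0.5 - shape_a_ratio[t])     # fewer A distractors -> favour the A-shaped target
    return 1.0 / (1.0 + math.exp(-(drive + bias)))  # logistic choice rule

# Hypothetical ratios drifting across five trials
ratios = [0.2, 0.35, 0.5, 0.65, 0.8]
print([round(p_choose_shape_a(t, ratios, strength=4.0, delay=1, bias=0.0), 2) for t in range(5)])
```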
Alma Sophia Merscher; Matthias Gamer Fear lies in the eyes of the beholder—Robust evidence for reduced gaze dispersion upon avoidable threat Journal Article In: Psychophysiology, vol. 61, no. 1, pp. 1–16, 2024. @article{Merscher2024, A rapid detection and processing of relevant information in our environment is crucial for survival. The human eyes are drawn to social or threatening stimuli as they may carry essential information on how to behave appropriately in a given context. Recent studies further showed a centralization of gaze reminiscent of freezing behaviors in rodents. Probably constituting a component of an adaptive defense mode, centralized eye movements predicted the speed of motor actions. Here we conducted two experiments to examine if and how these presumably survival-relevant gaze patterns interact. Subjects viewed images including social, that is, faces (Experiment 1 |
2023 |
Chuanli Zang; Zhichao Zhang; Manman Zhang; Federica Degno; Simon P. Liversedge Examining semantic parafoveal-on-foveal effects using a Stroop boundary paradigm Journal Article In: Journal of Memory and Language, vol. 128, pp. 1–14, 2023. @article{Zang2023, The issue of whether lexical processing occurs serially or in parallel has been a central and contentious issue in respect of models of eye movement control in reading for well over a decade. A critical question in this regard concerns whether lexical parafoveal-on-foveal effects exist in reading. Because Chinese is an unspaced and densely packed language, readers may process parafoveal words to a greater extent than they do in spaced alphabetic languages. In two experiments using a novel Stroop boundary paradigm (Rayner, 1975), participants read sentences containing a single-character color-word whose preview was manipulated (identity or pseudocharacter, printed in black [no-color], or in a color congruent or incongruent with the character meaning). Two boundaries were used, one positioned two characters before the target and one immediately to the left of the target. The previews changed from black to color and then back to black as the eyes crossed the first and then the second boundary respectively. In Experiment 1 four color-words (red, green, yellow and blue) were used and in Experiment 2 only red and green color-words were used as targets. Both experiments showed very similar patterns such that reading times were increased for colored compared to no-color previews indicating a parafoveal visual interference effect. Most importantly, however, there were no robust interactive effects. Preview effects were comparable for congruent and incongruent color previews at the pretarget region when the data were combined from both experiments. These results favour serial processing accounts and indicate that even under very favourable experimental conditions, lexical semantic parafoveal-on-foveal effects are minimal. |
Xibo Zuo; Ying Ling; Todd Jackson Testing links between pain-related biases in visual attention and recognition memory: An eye-tracking study based on an impending pain paradigm Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 5, pp. 1057–1071, 2023. @article{Zuo2023, Although separate lines of research have evaluated pain-related biases in attention or memory, laboratory studies examining links between attention and memory for pain-related information have received little consideration. In this eye-tracking experiment, we assessed relations between pain-related attention biases (ABs) and recognition memory biases (MBs) among 122 pain-free adults randomly assigned to impending pain (n = 59) versus impending touch (n = 63) conditions, wherein offsets of trials that included pain images were followed by subsequent possibly painful and non-painful somatosensory stimulation, respectively. Gaze biases of participants were assessed during presentations of pain-neutral (P-N) and happy-neutral (H-N) face image pairs within these conditions. Subsequently, condition differences in recognition accuracy for previously viewed versus novel pained and happy face images were examined. Overall gaze durations were significantly longer for pain (vs. neutral) faces that signalled impending pain than impending non-painful touch, particularly among the less resilient in the former condition. Impending pain cohorts also exhibited comparatively better recognition accuracy for both pained and happy face images. Finally, longer gaze durations on pain faces that signalled potential pain, but not potential touch, were related to more accurate recognition of previously viewed pain faces. In sum, pain cues that signal potential personal discomfort maintain visual attention more fully and are subsequently recognised more accurately than are pain cues that signal non-painful touch stimulation. |
Eirini Zormpa; Antje S. Meyer; Laurel E. Brehm In conversation, answers are remembered better than the questions themselves Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 49, no. 12, pp. 1971–1988, 2023. @article{Zormpa2023, Language is used in communicative contexts to identify and successfully transmit new information that should be later remembered. In three studies, we used question–answer pairs, a naturalistic device for focusing information, to examine how properties of conversations inform later item memory. In Experiment 1, participants viewed three pictures while listening to a recorded question–answer exchange between two people about the locations of two of the displayed pictures. In a memory recognition test conducted online a day later, participants recognized the names of pictures that served as answers more accurately than the names of pictures that appeared as questions. This suggests that this type of focus indeed boosts memory. In Experiment 2, participants listened to the same items embedded in declarative sentences. There was a reduced memory benefit for the second item, confirming the role of linguistic focus on later memory beyond a simple serial-position effect. In Experiment 3, two participants asked and answered the same questions about objects in a dialogue. Here, answers continued to receive a memory benefit, and this focus effect was accentuated by language production such that information-seekers remembered the answers to their questions better than information-givers remembered the questions they had been asked. Combined, these studies show how people's memory for conversation is modulated by the referential status of the items mentioned and by the roles of the conversation participants. |
Feriel Zoghlami; Matteo Toscani Foveal to peripheral extrapolation of facial emotion Journal Article In: Perception, vol. 52, no. 7, pp. 514–523, 2023. @article{Zoghlami2023, Peripheral vision is characterized by poor resolution. Recent evidence from brightness perception suggests that missing information is filled out with information at fixation. Here we show a novel filling-out mechanism: when participants are presented with a crowd of faces, the perceived emotion of faces in peripheral vision is biased towards the emotion of the face at fixation. This mechanism is particularly important in social situations where people often need to perceive the overall mood of a crowd. Some faces in the crowd are more likely to catch people's attention and be looked at directly, while others are only seen peripherally. Our findings suggest that the perceived emotion of these peripheral faces, and the overall perceived mood of the crowd, is biased by the emotions of the faces that people look at directly. |
Artyom Zinchenko; Markus Conci; Hermann J. Müller; Thomas Geyer Environmental regularities mitigate attentional misguidance in contextual cueing of visual search Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–13, 2023. @article{Zinchenko2023, Visual search is faster when a fixed target location is paired with a spatially invariant (vs. randomly changing) distractor configuration, thus indicating that repeated contexts are learned, thereby guiding attention to the target (contextual cueing [CC]). Evidence for memory-guided attention has also been revealed with electrophysiological (electroencephalographic [EEG]) recordings, starting with an enhanced early posterior negativity (N1pc), which signals a preattentive bias toward the target, and, subsequently, attentional and postselective components, such as the posterior contralateral negativity (PCN) and contralateral delay activity (CDA), respectively. Despite effective learning, relearning of previously acquired contexts is inflexible: The CC benefits disappear when the target is relocated to a new position within an otherwise invariant context and corresponding EEG correlates are diminished. The present study tested whether global statistical properties that induce predictions going beyond the immediate invariant layout can facilitate contextual relearning. Global statistical regularities were implemented by presenting repeated and nonrepeated displays in separate streaks (mini blocks) of trials in the relocation phase, with individual displays being presented in a fixed and thus predictable order. Our results revealed a significant CC effect (and an associated modulation of the N1pc, PCN, and CDA components) during initial learning. Critically, the global statistical regularities in the relocation phase also resulted in a reliable CC effect, thus revealing effective relearning with predictive streaks. Moreover, this relearning was reflected in an enhanced PCN amplitude for repeated relative to nonrepeated contexts. Temporally ordered contexts may thus adapt memory-based guidance of attention, particularly the allocation of covert attention in the visual display. |
Laoura Ziaka; Athanassios Protopapas Cognitive control beyond single-item tasks: insights from pupillometry, gaze, and behavioral measures Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 49, no. 7, pp. 968–988, 2023. @article{Ziaka2023, Cognitive control has been typically examined using single-item tasks. This has implications for the generalizability of theories of control implementation. Previous studies have revealed that different control demands are posed by tasks depending on whether they present stimuli individually (i.e., single-item) or simultaneously in array format (i.e., multi-item). In the present study, we tracked within-task performance in single-item and multi-item Stroop tasks using simultaneous pupillometry, gaze, and behavioral response measures, aiming to explore the implications of format differences for cognitive control. The results indicated within-task performance decline in the multi-item version of the Stroop task, accompanied by pupil constriction and dwell time increase, in both the incongruent and the neutral condition. In contrast, no performance decline or dwell time increase was observed in the course of the single-item version of the task. We interpret these findings in terms of capacity constraints on cognitive control, with implications for cognitive control research, and highlight the need for better understanding of the cognitive demands of multi-item tasks. |
Xi Zhu; Amit Lazarov; Sarah Dolan; Yair Bar-Haim; Daniel G. Dillon; Diego A. Pizzagalli; Franklin Schneier Resting state connectivity predictors of symptom change during gaze-contingent music reward therapy of social anxiety disorder Journal Article In: Psychological Medicine, vol. 53, no. 7, pp. 3115–3123, 2023. @article{Zhu2023a, Background Social anxiety disorder (SAD) is common, first-line treatments are often only partially effective, and reliable predictors of treatment response are lacking. Here, we assessed resting state functional connectivity (rsFC) at pre-treatment and during early treatment as a potential predictor of response to a novel attention bias modification procedure, gaze-contingent music reward therapy (GC-MRT). Methods Thirty-two adults with SAD were treated with GC-MRT. rsFC was assessed with multi-voxel pattern analysis of fMRI at pre-treatment and after 2-3 weeks. For comparison, 20 healthy control (HC) participants without treatment were assessed twice for rsFC over the same time period. All SAD participants underwent clinical evaluation at pre-treatment, early-treatment (week 2-3), and post-treatment. Results SAD and depressive symptoms improved significantly from pre-treatment to post-treatment. After 2-3 weeks of treatment, decreased connectivity between the executive control network (ECN) and salience network (SN), and increased connectivity within the ECN predicted improvement in SAD and depressive symptoms at week 8. Increased connectivity between the ECN and default mode network (DMN) predicted greater improvement in SAD but not depressive symptoms at week 8. Connectivity within the DMN decreased significantly after 2-3 weeks of treatment in the SAD group, while no changes were found in HC over the same time interval. Conclusion We identified early changes in rsFC during a course of GC-MRT for SAD that predicted symptom change. Connectivity changes within the ECN, ECN-DMN, and ECN-SN may be related to mechanisms underlying the clinical effects of GC-MRT and warrant further study in controlled trials. |
Dandan Zhu; Xuan Shao; Qiangqiang Zhou; Xiongkuo Min; Guangtao Zhai; Xiaokang Yang A novel lightweight audio-visual saliency model for videos Journal Article In: ACM Transactions on Multimedia Computing, Communications and Applications, vol. 19, no. 4, pp. 1–22, 2023. @article{Zhu2023, Audio information has not been considered an important factor in visual attention models, despite many psychological studies showing the importance of audio information in the human visual perception system. Because existing visual attention models utilize only visual information, their performance is limited, and they also incur high computational complexity given the limited information available. To overcome these problems, we propose a lightweight audio-visual saliency (LAVS) model for video sequences. To the best of our knowledge, this article is the first attempt to utilize audio cues in an efficient deep-learning model for video saliency estimation. First, spatial-temporal visual features are extracted by the lightweight receptive field block (RFB) with the bidirectional ConvLSTM units. Then, audio features are extracted by using an improved lightweight environment sound classification model. Subsequently, deep canonical correlation analysis (DCCA) aims at capturing the correspondence between audio and spatial-temporal visual features, thus obtaining a spatial-temporal auditory saliency. Lastly, the spatial-temporal visual and auditory saliency are fused to obtain the audio-visual saliency map. Extensive comparative experiments and ablation studies validate the performance of the LAVS model in terms of effectiveness and complexity. |
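The final step of the LAVS pipeline described above fuses visual and auditory saliency into one audio-visual map. As a minimal sketch only (the fusion rule and weight here are assumptions, not the authors' implementation), a per-frame fusion could look like this:

```python
# Minimal sketch (not the LAVS implementation): fuse per-frame visual and
# auditory saliency maps by min-max normalising each and taking a weighted sum.
import numpy as np

def normalise(s: np.ndarray) -> np.ndarray:
    s = s - s.min()
    return s / s.max() if s.max() > 0 else s

def fuse_saliency(visual: np.ndarray, auditory: np.ndarray, w_audio: float = 0.5) -> np.ndarray:
    """Weighted fusion of two H x W saliency maps into an audio-visual map."""
    return normalise((1.0 - w_audio) * normalise(visual) + w_audio * normalise(auditory))

# Hypothetical 36 x 64 saliency maps for one video frame
frame_visual = np.random.rand(36, 64)
frame_audio = np.random.rand(36, 64)
av_map = fuse_saliency(frame_visual, frame_audio, w_audio=0.4)
print(av_map.shape, round(float(av_map.max()), 2))  # (36, 64) 1.0
```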
Ying Joey Zhou; Aarti Ramchandran; Saskia Haegens Alpha oscillations protect working memory against distracters in a modality-specific way Journal Article In: NeuroImage, vol. 278, pp. 1–9, 2023. @article{Zhou2023d, Alpha oscillations are thought to be involved in suppressing distracting input in working-memory tasks. Yet, the spatial-temporal dynamics of such suppression remain unclear. Key questions are whether such suppression reflects a domain-general inattentiveness mechanism, or occurs in a stimulus- or modality-specific manner within cortical areas most responsive to the distracters; and whether the suppression is proactive (i.e., preparatory) or reactive. Here, we addressed these questions using a working-memory task where participants had to memorize an array of visually presented digits and reproduce one of them upon being probed. We manipulated the presence of distracters and the sensory modality in which distracters were presented during memory maintenance. Our results show that sensory areas most responsive to visual and auditory distracters exhibited stronger alpha power increase after visual and auditory distracter presentation respectively. These results suggest that alpha oscillations underlie distracter suppression in a reactive, modality-specific manner. |
Yang Zhou; Ou Zhu; David J. Freedman Posterior parietal cortex plays a causal role in abstract memory-based visual categorical decisions Journal Article In: Journal of Neuroscience, vol. 43, no. 23, pp. 4315–4328, 2023. @article{Zhou2023c, Neural activity in the lateral intraparietal cortex (LIP) correlates with both sensory evaluation and motor planning underlying visuomotor decisions. We previously showed that LIP plays a causal role in visually-based perceptual and categorical decisions, and preferentially contributes to evaluating sensory stimuli over motor planning. In that study, however, monkeys reported their decisions with a saccade to a colored target associated with the correct motion category or direction. Since LIP is known to play a role in saccade planning, it remains unclear whether LIP's causal role in such decisions extend to decision-making tasks which do not involve saccades. Here, we employed reversible pharmacological inactivation of LIP neural activity while two male monkeys performed delayed match to category (DMC) and delayed match to sample (DMS) tasks. In both tasks, monkeys needed to maintain gaze fixation throughout the trial and report whether a test stimulus was a categorical match or nonmatch to the previous sample stimulus by releasing a touch bar. LIP inactivation impaired monkeys' behavioral performance in both tasks, with deficits in both accuracy and reaction time (RT). Furthermore, we recorded LIP neural activity in the DMC task targeting the same cortical locations as in the inactivation experiments. We found significant neural encoding of the sample category, which was correlated with monkeys' categorical decisions in the DMC task. Taken together, our results demonstrate that LIP plays a generalized role in visual categorical decisions independent of the task-structure and motor response modality. |
Xing Zhou; Yuxiang Hao; Shuangxing Xu; Qi Zhang Statistical learning of target location and distractor location rely on different mechanisms during visual search Journal Article In: Attention, Perception, & Psychophysics, vol. 85, no. 2, pp. 342–365, 2023. @article{Zhou2023g, A growing number of studies have demonstrated that people have the capacity to learn and make use of environmental regularities. This capacity is known as statistical learning (SL). Despite rich empirical findings, it is not clear how the two forms of SL (SL of target location and SL of distractor location) influence visual search and whether they rely on a shared cognitive mechanism. In Experiment 1 and Experiment 2, we manipulated the probability of target location and the probability of distractor location, respectively. The results suggest that attentional guidance (referring here to overt attention) may mainly contribute to the SL effect of the target location and the distractor location, which is in line with the notion of priority mapping. To a small extent, facilitation of response selection may also contribute to the SL effect of the target location but does not contribute to the SL effect of the distractor location. However, the main difference between the two kinds of SL occurred in the early stage (involving covert attention). Together, our findings indicate that the two forms of SL reflect partly shared and partly independent cognitive mechanisms. |
Peng Zhou; Huimin Ma; Bochao Zou; Xiaowen Zhang; Shuyan Zhao; Yuxin Lin; Yidong Wang; Lei Feng; Gang Wang A conceptual framework of cognitive-affective theory of mind: Towards a precision identification of mental disorders Journal Article In: npj Mental Health Research, vol. 2, no. 1, pp. 1–11, 2023. @article{Zhou2023a, To explore the minds of others, which is traditionally referred to as Theory of Mind (ToM), is perhaps the most fundamental ability of humans as social beings. Impairments in ToM could lead to difficulties or even deficits in social interaction. The present study focuses on two core components of ToM, the ability to infer others' beliefs and the ability to infer others' emotions, which we refer to as cognitive and affective ToM respectively. Charting both typical and atypical trajectories underlying the cognitive-affective ToM promises to shed light on the precision identification of mental disorders, such as depressive disorders (DD) and autism spectrum disorder (ASD). However, most prior studies failed to capture the underlying processes involved in the cognitive-affective ToM in a fine-grained manner. To address this problem, we propose an innovative conceptual framework, referred to as visual theory of mind (V-ToM), by constructing visual scenes with emotional and cognitive meanings and by depicting explicitly a four-stage process of how humans make inferences about the beliefs and emotions of others. Through recording individuals' eye movements while looking at the visual scenes, our model enables us to accurately measure each stage involved in the computation of cognitive-affective ToM, thereby allowing us to infer about potential difficulties that might occur in each stage. Our model is based on a large sample size ( n > 700) and a novel audio-visual paradigm using visual scenes containing cognitive-emotional meanings. Here we report the obtained differential features among healthy controls, DD and ASD individuals that overcome the subjectivity of conventional questionnaire-based assessment, and therefore could serve as valuable references for mental health applications based on AI-aided digital medicine. |
Junyi Zhou; Wenjie Zhuang Physically active undergraduates perform better on executive-related oculomotor control: Evidence from the antisaccade task and pupillometry Journal Article In: PsyCh Journal, vol. 12, no. 1, pp. 17–24, 2023. @article{Zhou2023e, Previous studies have shown that exercise can improve executive function in young and older adults. However, it remains controversial whether a sufficient amount of physical activity leads to higher-level executive function. To examine the effect of physical activity on executive function, we used eye-tracking technology and the antisaccade task in 41 young undergraduates with various levels of physical activity. Moreover, we also investigated their differences in cognitive ability by examining their pupil size during the antisaccade task. Eye-tracking results showed that physically active individuals showed shorter saccade latency and higher accuracy in the antisaccade task than their physically inactive counterparts. Furthermore, the former showed larger pupil size during the preparatory period of antisaccade. These findings suggest that individuals with higher-level physical activity have higher-level executive function. The larger pupil sizes of physically active individuals may imply that their locus coeruleus-norepinephrine system and executive-related prefrontal cortex are more active, which contributes to their higher-level cognitive ability. |
Junyi Zhou; Zhanshuang Bai In: Frontiers in Psychology, vol. 14, pp. 1–8, 2023. @article{Zhou2023, Introduction: Previous studies have shown that brief moderate-intensity aerobic exercise can improve the executive function of healthy adults. The present study sought to examine and compare the effects of brief moderate-intensity aerobic exercise on the executive functions of undergraduates with and without mobile phone addiction. Method: Thirty-two healthy undergraduates with mobile phone addiction were recruited and randomly assigned to either an exercise or control group. Likewise, 32 healthy undergraduates without mobile phone addiction were recruited and randomly assigned to either an exercise or control group. Participants were asked to perform moderate-intensity aerobic exercise for 15 minutes for the exercise groups. The executive functions of all participants were assessed via the antisaccade task twice (i.e., pre-test and post-test). Results: The results showed that the saccade latency, variability of saccade latency, and error rate decreased significantly from pre-test to post-test for all participants. More importantly, after the 15-min moderate-intensity aerobic exercise intervention, participants in the exercise groups showed significantly shorter saccade latency than their counterparts in the control groups, regardless of whether they are with mobile phone addiction. Discussion: This result is consistent with previous studies demonstrating that brief moderate-intensity aerobic exercise can improve one's executive function. Furthermore, the absence of significant interaction among Time, Group, and Intervention implies that the effects of brief moderate-intensity aerobic exercise on executive function are comparable between participants with and without mobile phone addiction. The present study supports the previous conclusion that brief moderate-intensity aerobic exercise can improve one's executive function effectively, and extends it to the population with mobile phone addiction. In summary, the present study has some implications for understanding of the relationship between exercise, executive function, and mobile phone addiction. |
Alexander Zhigalov; Ole Jensen Perceptual echoes as travelling waves may arise from two discrete neuronal sources Journal Article In: NeuroImage, vol. 272, pp. 1–9, 2023. @article{Zhigalov2023, Growing evidence suggests that travelling waves are functionally relevant for cognitive operations in the brain. Several electroencephalography (EEG) studies report on a perceptual alpha-echo, representing the brain response to a random visual flicker, propagating as a travelling wave across the cortical surface. In this study, we ask if the propagating activity of the alpha-echo is best explained by a set of discrete sources mixing at the sensor level rather than a cortical travelling wave. To this end, we presented participants with gratings modulated by random noise and simultaneously acquired the ongoing MEG. The perceptual alpha-echo was estimated using the temporal response function linking the visual input to the brain response. At the group level, we observed a spatial decay of the amplitude of the alpha-echo with respect to the sensor where the alpha-echo was the largest. Importantly, the propagation latencies consistently increased with the distance. Interestingly, the propagation of the alpha-echoes was predominantly centro-lateral, while EEG studies reported mainly posterior-frontal propagation. Moreover, the propagation speed of the alpha-echoes derived from the MEG data was around 10 m/s, which is higher compared to the 2 m/s reported in EEG studies. Using source modelling, we found an early component in the primary visual cortex and a phase-lagged late component in the parietal cortex, which may underlie the travelling alpha-echoes at the sensor level. We then simulated the alpha-echoes using realistic EEG and MEG forward models by placing two sources in the parietal and occipital cortices in accordance with our empirical findings. The two-source model could account for both the direction and speed of the observed alpha-echoes in the EEG and MEG data. Our results demonstrate that the propagation of the perceptual echoes observed in EEG and MEG data can be explained by two sources mixing at the scalp level equally well as by a cortical travelling wave. Importantly, these findings should not be directly extrapolated to intracortical recordings, where travelling waves gradually propagate at a sub-millimetre scale. |
Yueyuan Zheng; Janet H. Hsiao Differential audiovisual information processing in emotion recognition: An eye-tracking study Journal Article In: Emotion, vol. 23, no. 4, pp. 1028–1039, 2023. @article{Zheng2023a, Recent research has suggested that dynamic emotion recognition involves strong audiovisual association; that is, facial or vocal information alone automatically induces perceptual processes in the other modality. We hypothesized that different emotions may differ in the automaticity of audiovisual association, resulting in differential audiovisual information processing. Participants judged the emotion of a talking-head video under audiovisual, video-only (with no sound), and audio-only (with a static neutral face) conditions. Among the six basic emotions, disgust had the largest audiovisual advantage over the unimodal conditions in recognition accuracy. In addition, in the recognition of all the emotions except for disgust, participants' eye-movement patterns did not change significantly across the three conditions, suggesting mandatory audiovisual information processing. In contrast, in disgust recognition, participants' eye movements in the audiovisual condition were less eyes-focused than the video-only condition and more eyes-focused than the audio-only condition, suggesting that audio information in the audiovisual condition interfered with eye-movement planning for important features (eyes) for disgust. In addition, those whose eye-movement pattern was affected less by concurrent disgusted voice information benefited more in recognition accuracy. Disgust recognition is learned later in life and thus may involve a reduced amount of audiovisual associative learning. Consequently, audiovisual association in disgust recognition is less automatic and demands more attentional resources than other emotions. Thus, audiovisual information processing in emotion recognition depends on the automaticity of audiovisual association of the emotion resulting from associative learning. This finding has important implications for real-life emotion recognition and multimodal learning. |
Wei Zheng; Xiaolu Wang Humor experience facilitates ongoing cognitive tasks: Evidence from pun comprehension Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–9, 2023. @article{Zheng2023, Empirical findings on embodied cognition have shown that bodily states (e.g., bodily postures and affective states) can influence how people appreciate humor. A case in point is that participants were reported to read pleasant sentences faster than the unpleasant controls when their muscles responsible for smiling were activated. However, little research has examined whether the feeling of amusement derived from humor processing like pun comprehension can exert a backward influence on ongoing cognitive tasks. In the present study, the participants' eye movements were tracked while they rated the comprehensibility of humorous sentences (homophone puns) and two types of unfunny control sentences (congruent and incongruent). Fixation measures showed an advantage in the critical homophone region for the congruent controls relative to the homophone puns; however, this pattern was reversed in terms of total sentence reading time. In addition, the humor rating scores acquired after the eye-tracking experiment were found to be negatively correlated with the overall sentence reading time, suggesting that the greater the amusement participants experienced, the faster they finished the rating task. Taken together, the current results indicate that the positive affect derived from humor can in turn provide immediate feedback to the cognitive system, which enhances text comprehension. As a result, the current finding provides more empirical evidence for the exploration of the interaction between the body and cognition. |
Ziyue Zhao; Wei Su; Juan Hou The influence of resource-gaining capacity on mate preferences: An eye tracking study Journal Article In: BMC Psychology, vol. 11, no. 1, pp. 1–13, 2023. @article{Zhao2023b, To investigate whether resource-gaining capacity influences mate preferences, explicit (self-report data) and implicit tasks (eye tracking data) were used to explore whether individuals' resource-gaining capacity influences mate preferences and whether there are sex differences in mate preferences under two different conditions (short-term and long-term strategies). A total of 59 college students completed a questionnaire collecting basic demographic information, the Resource-Gaining Capacity Scale and the two above tasks. The results showed that (1) in the short-term mating, individuals with higher resource-gaining capacity paid more attention to “good parent” than those with lower resource-gaining capacity, while individuals with lower resource-gaining capacity preferred “good provider” than those with higher resource-gaining capacity. (2) In the long-term mating, women valued “good provider” traits more than men, and they paid more attention to “good parent” traits than men in the short-term. In addition, no matter in the short-term or the long-term mating, men placed more value on “good genes” traits than women. (3) Compared with long-term mating, individuals of both sexes had preferences based on “good genes” in short-term mating, while they had preferences based on “good parent” and “good provider” in long-term mating compared with short-term mating. (4) Regarding explicit mate selection, “good parent” traits were most preferred by the participants, while the implicit eye tracking data indicated that participants preferred partners who were “good providers” and had “good genes”. |
Ziyao Zhang; Nancy B. Carlisle Assessing recoding accounts of negative attentional templates using behavior and eye tracking Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 49, no. 4, pp. 509–532, 2023. @article{Zhang2023h, Can we use attentional control to ignore known distractor features? Providing cues before a visual search trial about an upcoming distractor color (negative cue) can lead to reaction time benefits compared with no cue trials. This suggests top-down control may use negative templates to actively suppress distractor features, a notion that challenges the mechanisms of top-down control provided in many theories of attention. However, there is currently mixed support for this mechanism in the literature. Alternative explanations have been proposed, which do not require suppression within top-down control but instead involve recoding the negative cue into a positive template based on color or spatial layouts. In three experiments, we contrasted the predictions of active suppression and the recoding strategies. Across experiments, we found consistent evidence against a color recoding account. We also found evidence of accuracy, reaction time, and eye movement benefits when location recoding was not possible. These results suggest that prior benefits from negative cues cannot be explained exclusively by spatial or color recoding. The results indicate that active suppression likely plays a role in the attentional benefits following negative cues. |
Yuyang Zhang; Jing Yang; Zhisheng Edward Wen Learners with low working memory capacity benefit more from the presence of an instructor's face in video lectures Journal Article In: Journal of Intelligence, vol. 11, no. 5, pp. 1–14, 2023. @article{Zhang2023, This current study explores the influence of learners' working memory capacity (WMC) on the facilitation effect of an instructor's presence during video lectures. Sixty-four undergraduates were classified into high and low WMC groups based on their performance in an operation span task. They watched three types of video lectures on unfamiliar topics in a random order: video lectures with an instructor's voiceover but without presence (VN), video lectures with the instructor's face picture (VP), and video lectures with the same instructor talking (VV). We collected their eye movement data during the video lectures and their learning performance in the comprehension tests following each video. Two-way ANOVA and post-hoc analyses showed that the instructor's presence significantly improved comprehension performance in only the low WMC group. They allocated more attention to the instructor's face picture and talking head than the high WMC group. Our results highlight the value of the instructor's presence as a social cue in video lectures, which is particularly beneficial for learners with a low WMC. |
Yili Zhang; Tengfei Wang; Menglei Chen; Hai Lou; Jiangchuan Ye; Jiahui Shi; Xu Wen Effects of moderate-intensity aerobic exercise on cognitive fatigue relief: A randomised self-controlled study Journal Article In: International Journal of Sport and Exercise Psychology, pp. 1–19, 2023. @article{Zhang2023g, Although it has been reported that both rest and physical activity can alleviate cognitive fatigue to some extent, there is no direct scientific evidence determining which approach is more effective. This study aimed to investigate the effect of moderate-intensity aerobic exercise on the alleviation of cognitive fatigue. A 30-min TloadDback task was used to induce cognitive fatigue in 20 healthy adults, and 12-min quiet rest and moderate-intensity aerobic exercise were performed in random order. During the cognitive task, the standard deviation of the NN interval (SDNN), total frequency (TP, 0–0.4 Hz) and very low frequency (VLF, 0–0.15 Hz) of heart rate variability increased significantly. The blink duration and number, fixation number, saccade amplitude and number increased significantly with time, while fixation duration and pupil size decreased significantly. After the 12-min intervention protocols, the participants' feeling of fatigue, vigour and boredom recovered significantly. The recovery of fixation duration was better after quiet rest, while the pupil size was significantly larger after aerobic exercise. It was found that both quiet rest and aerobic exercise can alleviate cognitive fatigue, but aerobic exercise may be more effective in the recovery of arousal levels. |
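For reference, the SDNN measure in the Zhang et al. abstract above is simply the standard deviation of the normal-to-normal (NN) inter-beat intervals. A minimal computation (with made-up interval values, not the authors' data) is sketched below.

```python
# Minimal sketch: SDNN, the standard deviation of NN (normal-to-normal)
# inter-beat intervals in milliseconds, a common time-domain HRV measure.
from statistics import stdev

def sdnn(nn_intervals_ms):
    """Sample standard deviation of NN intervals (ms); needs at least two intervals."""
    return stdev(nn_intervals_ms)

# Hypothetical NN intervals (ms) from a short recording
print(round(sdnn([812, 790, 845, 801, 822, 798]), 1))  # ~19.9
```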
Yi Zhang; Ke Xu; Yun Pan; Zhongling Pi; Jiumin Yang The effects of segmentation design and drawing on video learning: A mediation analysis Journal Article In: Active Learning in Higher Education, pp. 1–21, 2023. @article{Zhang2023f, The current study investigated the effects of segmentation design and drawing on college students' video learning. Participants were 158 college students randomly assigned to view either a segmented or continuous video lecture (video type: segmented vs continuous) and who either received instructions to draw while learning or no instructions at all (learning strategy: drawing vs passive viewing). Participants' eye movements were recorded as they viewed the video, and data were collected regarding their learning satisfaction, cognitive load, both immediate and 7-day delayed learning outcomes, and their perceptions regarding the instructional efficiency of the lectures. The results showed that the drawing activity moderated the segmentation effect in that students did not benefit from the segmented video design when viewing passively, but did when required to draw while viewing. Furthermore, the positive effect of segmentation was mediated by drawing accuracy. |
Yi Zhang; Caixia Liu; Yana Xing; Zhongling Pi; Jiumin Yang How does drawing influence the effectiveness of oral self-explanation versus instructional explanation in video learning? Journal Article In: British Journal of Educational Technology, pp. 1–20, 2023. @article{Zhang2023e, This study investigated the effects of two types of oral explanations (ie, self-explanation vs. instructional explanation) and drawing activity (no drawing vs. drawing) on video learning outcomes. These outcomes were measured by visual attention to the video (indexed by fixation time on text and diagram areas), explanation quality (indexed by personal references, concepts, and elaborations), drawing quality, behaviour patterns and overall learning performance gain. A total of 116 undergraduate and graduate students watched a 4-min video on the human body's respiratory system. They were randomly assigned to one of four conditions (explanation generation: self-explanation vs. instructional explanation × drawing activity: no drawing vs. drawing). Results indicated that without a drawing requirement, students in the self-explanation condition displayed fewer personal references and exhibited a lower learning performance gain than those in the instructional explanation condition. Conversely, when drawing was required, self-explanation students demonstrated higher drawing quality and better learning performance gain. Additionally, students in the drawing condition directed more attention to the diagram area than those in the no drawing condition. These findings suggest that in video learning (1) educators should encourage students to produce oral instructional explanations and (2) if the goal is for students to generate self-explanations, they should also be prompted to draw to bolster their self-explanation efforts. |
Songzhu Zhang In: Journal of Psycholinguistic Research, vol. 52, no. 6, pp. 2919–2935, 2023. @article{Zhang2023d, This study is based on an experimental method of eye-tracking to investigate how translators perceive and understand translated literary texts and how different stylistic features influence their perception. This methodology allowed us to observe which parts of the text translators focused on the most, providing valuable data on their reading patterns and cognitive processes. Among English-Chinese translators, 95 out of 120 participants (79%) showed a tendency to prioritize faithfully conveying the source text's meaning over crafting a target text that aligns with Chinese stylistically. In the specific context of Chinese-English translation out of the 120 instances examined, the translators exhibited a reduced fixation duration on words in the source language, accounting for 34 instances (28%). This suggests a greater concern for preserving the source text's meaning rather than adapting it to the target culture. This research can assist translators and linguists in translating the stylistic features of English and Chinese literary texts more effectively. Future studies can explore other language stylistic features that may impact translation and compare translation styles across various literary genres and language pairs. |
Qiong Zhang; Weifeng Sun; Kailing Huang; Li Qin; Shirui Wen; Xiaoyan Long; Quan Wang; Li Feng Frontal lobe epilepsy: An eye tracking study of memory and attention Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–11, 2023. @article{Zhang2023c, Objective: To explore the characteristics and mechanisms of working memory impairment in patients with frontal lobe epilepsy (FLE) through a memory game paradigm combined with eye tracking technology. Method: We included 44 patients with FLE and 50 healthy controls (HC). All participants completed a series of neuropsychological scale assessments and a short-term memory game on an automated computer-based memory evaluation platform with an eye tracker. Results: Memory scale scores of FLE patients including digit span (U = 747.50 |
Ling Zhang; Naiqing Song; Guowei Wu; Jinfa Cai Understanding the cognitive processes of mathematical problem posing: Evidence from eye movements Journal Article In: Educational Studies in Mathematics, pp. 1–30, 2023. @article{Zhang2023b, This study concerns the cognitive process of mathematical problem posing, conceptualized in three stages: understanding the task, constructing the problem, and expressing the problem. We used the eye tracker and think-aloud methods to deeply explore students' behavior in these three stages of problem posing, especially focusing on investigating the influence of task situation format and mathematical maturity on students' thinking. The study was conducted using a 2 × 2 mixed design: task situation format (with or without specific numerical information) × subject category (master's students or sixth graders). Regarding the task situation format, students' performance on tasks with numbers was found to be significantly better than that on tasks without numbers, which was reflected in the metrics of how well they understood the task and the complexity and clarity of the posed problems. In particular, students spent more fixation duration on understanding and processing the information in tasks without numbers; they had a longer fixation duration on parts involving presenting uncertain numerical information; in addition, the task situation format with or without numbers had an effect on students' selection and processing of information related to the numbers, elements, and relationships rather than information regarding the context presented in the task. Regarding the subject category, we found that mathematical maturity did not predict the quantity of problems posed on either type of task. There was no significant main group difference found in the eye-movement metrics. |
Andrea M. Zawoyski; Scott P. Ardoin; Katherine S. Binder The impact of test-taking strategies on eye movements of elementary students during reading comprehension assessment Journal Article In: School Psychology, vol. 38, no. 1, pp. 59–66, 2023. @article{Zawoyski2023, Teachers often encourage students to use test-taking strategies during reading comprehension assessments, but these strategies are not always evidence-based. One common strategy involves teaching students to read the questions before reading an associated passage. Research findings comparing the passage-first (PF) and questions-first (QF) strategies are mixed. The present study employed eye-tracking technology to record 84 third- and fourth-grade participants' eye movements (EMs) as they read a passage and responded to multiple-choice (MC) questions using PF and QF strategies in a within-subject design. Although there were no significant differences between groups in accuracy on MC questions, EM measures revealed that the PF condition was superior to the QF condition for elementary readers in terms of efficiency in reading and responding to questions. These findings suggest that the PF strategy supports a more comprehensive understanding of the text. Ultimately, within the PF condition, students required less time to obtain the same accuracy outcomes they attained when reading in the QF condition. School psychologists can improve reading comprehension instruction by emphasizing the importance of teaching children to gain meaning from the text rather than searching the passage for answers to MC questions. |
Alessandro Zanini; Audrey Dureux; Janahan Selvanayagam; Stefan Everling Ultra-high field fMRI identifies an action-observation network in the common marmoset Journal Article In: Communications Biology, vol. 6, no. 1, pp. 1–11, 2023. @article{Zanini2023, The observation of others' actions activates a network of temporal, parietal and premotor/prefrontal areas in macaque monkeys and humans. This action-observation network (AON) has been shown to play important roles in social action monitoring, learning by imitation, and social cognition in both species. It is unclear whether a similar network exists in New-World primates, which separated from Old-World primates ~35 million years ago. Here we used ultra-high field fMRI at 9.4 T in awake common marmosets (Callithrix jacchus) while they watched videos depicting goal-directed (grasping food) or non-goal-directed actions. The observation of goal-directed actions activated a temporo-parieto-frontal network, including areas 6 and 45 in premotor/prefrontal cortices, areas PGa-IPa, FST and TE in the occipito-temporal region, and areas V6A, MIP, LIP and PG in the occipito-parietal cortex. These results show overlap with the human and macaque AON, demonstrating the existence of an evolutionarily conserved network that likely predates the separation of Old- and New-World primates. |
Tania S. Zamuner; Theresa Rabideau; Margarethe McDonald; H. Henny Yeung Developmental change in children's speech processing of auditory and visual cues: An eyetracking study Journal Article In: Journal of Child Language, vol. 50, pp. 27–51, 2023. @article{Zamuner2023, This study investigates how children aged two to eight years (N = 129) and adults (N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between apparent successes of visual speech processing in young children in visual-looking tasks, with apparent difficulties of speech processing in older children from explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only) or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier on /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood. |
Tom Zalmenson; Omer Azriel; Yair Bar-Haim Enhanced recognition of disgusted expressions occurs in spite of attentional avoidance at encoding Journal Article In: Frontiers in Psychology, vol. 13, pp. 1–8, 2023. @article{Zalmenson2023, Introduction: Negative emotional content is prioritized in memory. Prioritized attention to negative stimuli has been suggested to mediate this valence-memory association. However, research suggests only a limited role for attention in this observed memory advantage. We tested the role of attention in memory for disgusted facial expressions, a powerful social–emotional stimulus. Methods: We measured attention using an incidental, free-viewing encoding task and memory using a surprise memory test for the viewed expressions. Results and Discussion: Replicating prior studies, we found increased attentional dwell-time for neutral over disgusted expressions at encoding. However, contrary to the attention-memory link hypothesis, disgusted faces were better remembered than neutral faces. Although dwell-time was found to partially mediate the association between valence and memory, this effect was much weaker than the opposite direct effect. These findings point to independence of memory for disgusted faces from attention during encoding. |
Mengxi Yun; Masafumi Nejime; Takashi Kawai; Jun Kunimatsu; Hiroshi Yamada; Hyung Goo R. Kim; Masayuki Matsumoto Distinct roles of the orbitofrontal cortex, ventral striatum, and dopamine neurons in counterfactual thinking of decision outcomes Journal Article In: Science Advances, vol. 9, no. 32, pp. 1–14, 2023. @article{Yun2023, Individuals often assess past decisions by comparing what was gained with what would have been gained had they acted differently. Thoughts of past alternatives that counter what actually happened are called “counterfactuals.” Recent theories emphasize the role of the prefrontal cortex in processing counterfactual outcomes in decision-making, although how subcortical regions contribute to this process remains to be elucidated. Here we report a clear distinction among the roles of the orbitofrontal cortex, ventral striatum and midbrain dopamine neurons in processing counterfactual outcomes in monkeys. Our findings suggest that actually gained and counterfactual outcome signals are both processed in the cortico-subcortical network constituted by these regions but in distinct manners and integrated only in the orbitofrontal cortex in a way to compare these outcomes. This study extends the prefrontal theory of counterfactual thinking and provides key insights regarding how the prefrontal cortex cooperates with subcortical regions to make decisions using counterfactual information. |
Wenwen Yu; Yiwei Li; Xueying Cao; Licheng Mo; Yuming Chen; Dandan Zhang The role of ventrolateral prefrontal cortex on voluntary emotion regulation of social pain Journal Article In: Human Brain Mapping, vol. 44, no. 13, pp. 4710–4721, 2023. @article{Yu2023b, The right ventrolateral prefrontal cortex (rVLPFC) is highly engaged in emotion regulation of social pain. However, there is still a lack of both inhibitory and excitatory evidence establishing a causal relationship between this brain region and voluntary emotion regulation. This study used high-frequency (10 Hz) and low-frequency (1 Hz) repetitive transcranial magnetic stimulation (rTMS) to separately activate or inhibit the rVLPFC in two groups of participants. We recorded participants' emotion ratings as well as their social attitude and prosocial behaviors following emotion regulation. We also used an eye tracker to record changes in pupil diameter as an objective measure of emotional feelings. A total of 108 healthy participants were randomly assigned to the activated, inhibitory or sham rTMS groups. They were required to accomplish three sequential tasks: the emotion regulation (cognitive reappraisal) task, the favorability rating task, and the donation task. Results show that the rVLPFC-inhibitory group reported more negative emotion and showed larger pupil diameter, while the rVLPFC-activated group reported less negative emotion and showed reduced pupil diameter during emotion regulation (both compared with the sham rTMS group). In addition, the activated group gave more positive social evaluations to peers and donated more money to a public welfare activity than the rVLPFC-inhibitory group, with the change in social attitude mediated by the regulated emotion. Taken together, these findings reveal that the rVLPFC plays a causal role in voluntary emotion regulation of social pain and can be a potential brain target in treating deficits of emotion regulation in psychiatric disorders. |
Qiuchen Yu; Jiangfeng Gou; Yan Li; Zhongling Pi; Jiumin Yang Introducing support for learner control: Temporal and organizational cues in instructional videos Journal Article In: British Journal of Educational Technology, pp. 1–24, 2023. @article{Yu2023a, Instructional videos risk overloading learners' limited working memory resources due to the transient information effect. Learner control is one way to mitigate this concern, but has shown almost zero overall effect and considerable heterogeneity. Consequently, it is essential to identify when learner control is most beneficial. The present study examined the influence of cues on learners' behaviour, cognitive processes, metacognition and learning performance in an interactive learning environment. Employing a 2 (temporal cues: without vs. with) × 2 (organizational cues: without vs. with) between-subject design, 117 participants were randomly assigned to one of four conditions: no cues, temporal cues, organizational cues and temporal cues + organizational cues. Among these, temporal cues (ie, progress bar) serve as time-related signals designed to regulate pacing, and organizational cues (ie, table of contents) provide a structural framework for the content. Significant cueing effects were observed for both cue types on germane cognitive load and transfer. Notably, our results indicate that organizational cues effectively guide learners' attention towards the underlying structure, thus promoting cognitive processing. These unique benefits are evident in improved topic recall, retention and monitoring accuracy. Importantly, combined temporal cues and organizational cues were seen to not only allow learners to exhibit more engagement behaviours (ie, skimming) but also assist learners in accurately judging their learning. These findings strongly support the recommendation to use cues to enhance the effectiveness of learner control. Practitioner notes: What is already known about this topic: instructional videos may overload limited working memory resources due to the transient information effect; the overall effect of including learner control within educational technology was almost zero (g = 0.05) but showed considerable heterogeneity; it is unclear whether embedding various cues in an instructional video improves the effectiveness of learner control. What this paper adds: both temporal and organizational cues aided in increasing learners' germane cognitive load and enhancing their transfer; organizational cues helped learners understand the underlying structure, thus facilitating deeper cognitive processing, improving metacognition and ultimately boosting learning performance; combined temporal and organizational cues led to more engagement behaviours and accurate self-monitoring. Implications for practice and/or policy: providing instructional support is important in assisting learners with the complexities of learner-controlled instruction; embedding cues helps learners process the content deeply when they are given control over the instructional video. |
Yang Yiling; Katharine Shapcott; Alina Peter; Johanna Klon-Lipok; Huang Xuhui; Andreea Lazar; Wolf Singer Robust encoding of natural stimuli by neuronal response sequences in monkey visual cortex Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–18, 2023. @article{Yiling2023a, Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes. |
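As an aside for readers curious about what decoding from response rank order (rather than firing rates) might look like in practice, the toy sketch below matches a trial's latency rank order against stimulus-specific templates using a Spearman correlation. It is a generic illustration under assumptions of our own (template matching, single-trial latency vectors, the synthetic data), not the analysis pipeline used by Yiling and colleagues.

```python
import numpy as np
from scipy.stats import spearmanr

def decode_by_rank_order(trial_latencies, templates):
    """Assign a trial to the stimulus whose template latency order
    correlates best (Spearman) with the trial's latency order.

    trial_latencies : 1D array, response latency of each neuron on one trial
    templates       : dict {stimulus_id: 1D array of template latencies}
    """
    scores = {}
    for stim, template in templates.items():
        rho, _ = spearmanr(trial_latencies, template)  # Spearman works on ranks
        scores[stim] = rho
    return max(scores, key=scores.get)

# toy usage: two stimuli, eight neurons, one noisy trial
rng = np.random.default_rng(0)
templates = {"scene_A": rng.permutation(8).astype(float),
             "scene_B": rng.permutation(8).astype(float)}
trial = templates["scene_A"] + rng.normal(0.0, 0.5, 8)
print(decode_by_rank_order(trial, templates))  # most likely prints "scene_A"
```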
Yang Yiling; Johanna Klon-Lipok; Wolf Singer Joint encoding of stimulus and decision in monkey primary visual cortex Journal Article In: Cerebral Cortex, pp. 1–6, 2023. @article{Yiling2023, We investigated whether neurons in monkey primary visual cortex (V1) exhibit mixed selectivity for sensory input and behavioral choice. Parallel multisite spiking activity was recorded from area V1 of awake monkeys performing a delayed match-to-sample task. The monkeys had to make a forced-choice decision of whether the test stimulus matched the preceding sample stimulus. The population responses evoked by the test stimulus contained information about both the identity of the stimulus and, with some delay but before the onset of the motor response, the forthcoming choice. The results of subspace identification analysis indicate that stimulus-specific and decision-related information coexists in separate subspaces of the high-dimensional population activity, and latency considerations suggest that the decision-related information is conveyed by top-down projections. |
Jacob L. Yates; Shanna H. Coop; Gabriel H. Sarch; Ruei Jr Wu; Daniel A. Butts; Michele Rucci; Jude F. Mitchell Detailed characterization of neural selectivity in free viewing primates Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–11, 2023. @article{Yates2023, Fixation constraints in visual tasks are ubiquitous in visual and cognitive neuroscience. Despite its widespread use, fixation requires trained subjects, is limited by the accuracy of fixational eye movements, and ignores the role of eye movements in shaping visual input. To overcome these limitations, we developed a suite of hardware and software tools to study vision during natural behavior in untrained subjects. We measured visual receptive fields and tuning properties from multiple cortical areas of marmoset monkeys who freely viewed full-field noise stimuli. The resulting receptive fields and tuning curves from primary visual cortex (V1) and area MT match reported selectivity from the literature which was measured using conventional approaches. We then combined free viewing with high-resolution eye tracking to make the first detailed 2D spatiotemporal measurements of foveal receptive fields in V1. These findings demonstrate the power of free viewing to characterize neural responses in untrained animals while simultaneously studying the dynamics of natural behavior. |
Yao Yao; Katrina Connell; Stephen Politzer-Ahles Hearing emotion in two languages: A pupillometry study of Cantonese–Mandarin bilinguals' perception of affective cognates in L1 and L2 Journal Article In: Bilingualism: Language and Cognition, vol. 26, no. 4, pp. 795–808, 2023. @article{Yao2023d, Differential affective processing has been widely documented for bilinguals: L1 affective words elicit higher levels of arousal and stronger emotionality ratings than L2 affective words (Pavlenko, 2012). In this study, we focus on two closely related Chinese languages, Mandarin and Cantonese, whose affective lexicons are highly overlapping, with shared lexical items that only differ in pronunciation across languages. We recorded L1 Cantonese – L2 Mandarin bilinguals' pupil responses to auditory tokens of Cantonese and Mandarin affective words. Our results showed that Cantonese–Mandarin bilinguals had stronger pupil responses when the affective words were pronounced in Cantonese (L1) than when the same words were pronounced in Mandarin (L2). The effect was most evident in taboo words and among bilinguals with lower L2 proficiency. We discuss the theoretical implications of the findings in the frameworks of exemplar theory and models of the bilingual lexicon. |
Panpan Yao; David Hall; Hagit Borer; Linnaea Stockall Dutch–Mandarin learners' online use of syntactic cues to anticipate mass vs. count interpretations Journal Article In: Second Language Research, pp. 1–38, 2023. @article{Yao2023b, It remains unclear whether late second language learners (L2ers) can acquire sufficient knowledge about unique-to-L2 constructions through implicit learning to build anticipations during real-time processing. To tackle this question, we conducted a visual world paradigm experiment to investigate high-proficiency late first-language Dutch second-language Mandarin Chinese learners' online processing of syntactic cues to count vs. mass interpretations in Chinese which are unique-to-L2 and never explicitly taught. The results showed that late Dutch–Mandarin learners were sensitive to a mass-biased syntactic cue in real-time processing, and exhibited some native-like anticipatory behaviour. These findings indicate that late L2ers can acquire unique-to-L2 constructions through implicit learning, and can automatically use this knowledge to make predictions. |
Mengna Yao; Bincheng Wen; Mingpo Yang; Jiebin Guo; Haozhou Jiang; Chao Feng; Yilei Cao; Huiguang He; Le Chang High-dimensional topographic organization of visual features in the primate temporal lobe Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–23, 2023. @article{Yao2023a, The inferotemporal cortex supports our supreme object recognition ability. Numerous studies have been conducted to elucidate the functional organization of this brain area, but there are still important questions that remain unanswered, including how this organization differs between humans and non-human primates. Here, we use deep neural networks trained on object categorization to construct a 25-dimensional space of visual features, and systematically measure the spatial organization of feature preference in both male monkey brains and human brains using fMRI. These feature maps allow us to predict the selectivity of a previously unknown region in monkey brains, which is corroborated by additional fMRI and electrophysiology experiments. These maps also enable quantitative analyses of the topographic organization of the temporal lobe, demonstrating the existence of a pair of orthogonal gradients that differ in spatial scale and revealing significant differences in the functional organization of high-level visual areas between monkey and human brains. |
Xiaozhi Yang; Ian Krajbich A dynamic computational model of gaze and choice in multi-attribute decisions Journal Article In: Psychological Review, vol. 130, no. 1, pp. 52–70, 2023. @article{Yang2023e, When making decisions, how people allocate their attention influences their choices. One empirical finding is that people are more likely to choose the option that they have looked at more. This relation has been formalized with the attentional drift-diffusion model (aDDM; Krajbich et al., 2010). However, options often have multiple attributes, and attention is also thought to govern the relative weighting of those attributes (Roe et al., 2001). Little is known about how these two distinct features of the choice process interact; we still lack a model (and tests of that model) that incorporate both option- and attribute-wise attention. Here, we propose a multi-attribute attentional drift-diffusion model (maaDDM) to account for attentional discount factors on both options and attributes. We then use five eye-tracking datasets (two-alternative, two-attribute preferential tasks) from different choice domains to test the model. We find very stable option-level and attribute-level attentional discount factors across datasets, though nonfixated options are consistently discounted more than nonfixated attributes. Additionally, we find that people generally discount the nonfixated attribute of the nonfixated option in a multiplicative way, and so that feature is consistently discounted the most. Finally, we also find that gaze allocation reflects attribute weights, with more gaze to higher-weighted attributes. In summary, our work uncovers an intricate interplay between attribute weights, gaze processes, and preferential choice. |
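For readers unfamiliar with attentional drift-diffusion models, the sketch below simulates a single trial of the option- and attribute-level discounting idea described in this abstract. It is only an illustrative reading of the maaDDM under assumptions of our own (the parameter names theta and phi, the random fixation schedule, and all numeric values), not code or parameter estimates from the paper.

```python
import numpy as np

def simulate_maaddm_trial(values, weights, theta=0.5, phi=0.7,
                          d=0.002, sigma=0.02, bound=1.0, rng=None):
    """Illustrative single-trial simulation of a multi-attribute aDDM.

    values  : 2x2 array, values[option, attribute]
    weights : attribute weights (summing to 1)
    theta   : discount applied to the non-fixated option
    phi     : discount applied to the non-fixated attribute
    The non-fixated attribute of the non-fixated option is discounted
    multiplicatively (theta * phi), i.e. the most strongly.
    """
    rng = rng or np.random.default_rng()
    rdv, t = 0.0, 0  # relative decision value (option 0 minus option 1), time in ms
    while abs(rdv) < bound:
        fix_opt = rng.integers(2)          # fixated option
        fix_att = rng.integers(2)          # fixated attribute
        duration = rng.integers(300, 600)  # fixation duration in ms (assumption)
        # attention-weighted subjective value of each option during this fixation
        sv = np.zeros(2)
        for opt in range(2):
            for att in range(2):
                discount = (theta if opt != fix_opt else 1.0) * \
                           (phi if att != fix_att else 1.0)
                sv[opt] += weights[att] * discount * values[opt, att]
        for _ in range(duration):          # accumulate noisy evidence
            rdv += d * (sv[0] - sv[1]) + rng.normal(0.0, sigma)
            t += 1
            if abs(rdv) >= bound:
                break
    return (0 if rdv > 0 else 1), t        # chosen option and RT (ms)

choice, rt = simulate_maaddm_trial(np.array([[6.0, 4.0], [5.0, 5.0]]),
                                   np.array([0.6, 0.4]))
```

Fitting such a model to data would additionally require the empirical fixation sequences and a likelihood-based or simulation-based estimation procedure, which this sketch omits.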
Xiaomeng Yang; Fuxing Wang; Richard E. Mayer; Xiangen Hu; Chuanhua Gu Ocular foundations of the spatial contiguity principle: Designing multimedia materials for parafoveal vision Journal Article In: Journal of Educational Psychology, vol. 115, no. 8, pp. 1125–1140, 2023. @article{Yang2023d, The spatial contiguity principle is that people learn and perform better when corresponding printed text and graphics are placed near rather than far from each other on the screen or page. This is a well-established design principle in multimedia learning. However, there is insufficient research to establish the appropriate distance between text and graphics that is conducive to integrative processing. The current study examines a new objective indicator of spatial contiguity based on the characteristics of human visual processing, and hypothesizes that corresponding text and graphic information presented within parafoveal vision promotes integrative processing better than information in peripheral vision. Experiments 1 and 2 asked participants to judge the similarities of two text–picture cards and found that presenting the two cards within parafoveal vision (rather than peripheral vision) led to faster comparison (in both experiments) and higher scores (only in Experiment 2) for a simple version of the comparison task, but did not lower cognitive load. Experiment 3 found that students who viewed an onscreen multimedia lesson that presented corresponding text–picture information in parafoveal vision (rather than peripheral vision) scored higher on retention and application tests and experienced lower cognitive load as measured by a secondary task. Across all three experiments, eye-tracking results showed that presenting corresponding text–picture information in parafoveal vision yielded more integrative saccades and longer fixation time on text, indicating that spatial contiguity encourages integrative processing. This study replicates and extends the spatial contiguity effect, and offers a new quantifiable indicator of spatial contiguity for future research. |
Tianqi Yang; Yang He; Lin Wu; Hui Wang; Xiuchao Wang; Yahong Li; Yaning Guo; Shengjun Wu; Xufeng Liu The effects of object size on spatial orientation: An eye movement study Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–8, 2023. @article{Yang2023c, Introduction: The processing of visual information in the human brain is divided into two streams, namely the dorsal and ventral streams: object identification is related to the ventral stream, and motion processing is related to the dorsal stream. Object identification is interconnected with motion processing, and object size has been found to affect the processing of motion characteristics in uniform linear motion. However, whether object size affects spatial orientation is still unknown. Methods: Thirty-eight college students were recruited to participate in an experiment based on the spatial visualization dynamic test. An EyeLink 1000 Plus was used to collect eye movement data. The final direction difference (the difference between the final moving direction of the target and the final direction of the moving target pointing to the destination point), the rotation angle (the rotation angle of the knob from the start of the target movement to the moment of key pressing) and eye movement indices were compared across conditions with different object sizes and motion velocities. Results: The final direction difference and rotation angle under the condition of a 2.29°-diameter moving target and a 0.76°-diameter destination point were significantly smaller than those under the other conditions (a 0.76°-diameter moving target and a 0.76°-diameter destination point; a 0.76°-diameter moving target and a 2.29°-diameter destination point). The average pupil size under the condition of a 2.29°-diameter moving target and a 0.76°-diameter destination point was significantly larger than that under the other conditions. Discussion: A relatively large moving target can resist the landmark attraction effect in spatial orientation, and the influence of object size on spatial orientation may originate from differences in cognitive resource consumption. The present study enriches the interaction theory of the processing of object characteristics and motion characteristics and provides new ideas for the application of eye movement technology in the examination of spatial orientation ability. |
Tianqi Yang; Yaning Guo; Xianyang Wang; Shengjun Wu; Xiuchao Wang; Hui Wang; Xufeng Liu The influence of representational gravity on spatial orientation: An eye movement study Journal Article In: Current Psychology, pp. 1–9, 2023. @article{Yang2023b, Spatial orientation is a fundamental subject in aviation psychology. The influence of representational gravity can lead to systematic errors during uniform linear motion. However, it remains unclear whether representational gravity during motion can affect spatial orientation. In this study, college students from Xi'an, China were recruited to participate in an experiment based on the Spatial Visualization Dynamic Test. We compared the accuracy of spatial orientation estimation and eye movement indices when the main direction of spatial orientation was in the lower right versus when it was in the upper right. The results revealed that individuals were prone to overestimate the adjustment angle when the main direction of spatial orientation was in the lower right, and underestimate the adjustment angle when the main direction of spatial orientation was in the upper right; the average pupil size was significantly larger when the main direction of spatial orientation was in the lower right than that when the main direction of spatial orientation was in the upper right. In conclusion, spatial orientation in motion was influenced by representational gravity, and when representational gravity aligned with the main direction of spatial orientation, it led to increased cognitive resource consumption. |
Ruyi Yang; Peng Zhao; Liyang Wang; Chenli Feng; Chen Peng; Zhexuan Wang; Yingying Zhang; Minqian Shen; Kaiwen Shi; Shijun Weng; Chunqiong Dong; Fu Zeng; Tianyun Zhang; Xingdong Chen; Shuiyuan Wang; Yiheng Wang; Yuanyuan Luo; Qingyuan Chen; Yuqing Chen; Chengyong Jiang; Shanshan Jia; Zhaofei Yu; Jian Liu; Fei Wang; Su Jiang; Wendong Xu; Liang Li; Gang Wang; Xiaofen Mo; Gengfeng Zheng; Aihua Chen; Xingtao Zhou; Chunhui Jiang; Yuanzhi Yuan; Biao Yan; Jiayi Zhang Assessment of visual function in blind mice and monkeys with subretinally implanted nanowire arrays as artificial photoreceptors Journal Article In: Nature Biomedical Engineering, pp. 1–37, 2023. @article{Yang2023a, Retinal prostheses could restore image-forming vision in conditions of photoreceptor degeneration. However, contrast sensitivity and visual acuity are often insufficient. Here we report the performance, in mice and monkeys with induced photoreceptor degeneration, of subretinally implanted gold-nanoparticle-coated titania nanowire arrays providing a spatial resolution of 77.5 μm and a temporal resolution of 3.92 Hz in ex vivo retinas (as determined by patch-clamp recording of retinal ganglion cells). In blind mice, the arrays allowed for the detection of drifting gratings and flashing objects at light-intensity thresholds of 15.70–18.09 μW mm–2, and offered visual acuities of 0.3–0.4 cycles per degree, as determined by recordings of visually evoked potentials and optomotor-response tests. In monkeys, the arrays were stable for 54 weeks, allowed for the detection of a 10-μW mm–2 beam of light (0.5° in beam angle) in visually guided saccade experiments, and induced plastic changes in the primary visual cortex, as indicated by long-term in vivo calcium imaging. Nanomaterials as artificial photoreceptors may ameliorate visual deficits in patients with photoreceptor degeneration. |
Chaoqing Yang; Linlin He; Yucheng Liu; Ziyang Lin; Lizhu Luo; Shan Gao Anti-saccades reveal impaired attention control over negative social evaluation in individuals with depressive symptoms Journal Article In: Journal of Psychiatric Research, vol. 165, pp. 64–69, 2023. @article{Yang2023, Depressed individuals are excessively sensitive to negative information but blunted to positive information, which has been considered a vulnerability factor for depression. Here, we focused on inhibitory control over attentional bias on social evaluation in individuals with depression. We engaged individuals with and without depressive symptoms (categorized by the Beck Depression Inventory-II) in a novel attention control task using positive and negative evaluative adjectives as self-referential feedback given by social others. Participants were instructed to look at sudden-onset feedback targets (pro-saccade) or the mirror location of the targets (anti-saccade) while correct saccade latencies and saccade errors were recorded. The two indices showed that, while both groups displayed longer latencies and more errors for anti-saccade relative to pro-saccade responses, depressed individuals spent more time reacting correctly and made more errors than non-depressed individuals in the anti-saccade trials; such group differences were not observed in the pro-saccade trials. Although group differences in correct anti-saccade latencies were found for both positive and negative stimuli, depressed individuals spent more time making correct anti-saccade responses to negative social feedback than to positive ones, whereas non-depressed individuals featured longer correct anti-saccade latencies for positive relative to negative evaluations. Our results suggest that depressed individuals have an impaired ability to control attention for self-referential evaluations, notably those of negative valence, shedding new light on depression-distorted self-schema and corresponding social dysfunctions. |
Zhihao Yan; Zeyang Yang; Mark D. Griffiths “Danmu” preference, problematic online video watching, loneliness and personality: An eye-tracking study and survey study Journal Article In: BMC Psychiatry, vol. 23, no. 1, pp. 1–13, 2023. @article{Yan2023b, ‘Danmu' (i.e., comments that scroll across online videos), has become popular on several Asian online video platforms. Two studies were conducted to investigate the relationships between Danmu preference, problematic online video watching, loneliness and personality. Study 1 collected self-report data on the study variables from 316 participants. Study 2 collected eye-tracking data of Danmu fixation (duration, count, and the percentages) from 87 participants who watched videos. Results show that fixation on Danmu was significantly correlated with problematic online video watching, loneliness, and neuroticism. Self-reported Danmu preference was positively associated with extraversion, openness, problematic online video watching, and loneliness. The studies indicate the potential negative effects of Danmu preference (e.g., problematic watching and loneliness) during online video watching. The study is one of the first empirical investigations of Danmu and problematic online video watching using eye-tracking software. Online video platforms could consider adding more responsible use messaging relating to Danmu in videos. Such messages may help users to develop healthier online video watching habits. |
Zhenjie Xu; Jie Hu; Yingying Wang Bilateral eye movements disrupt the involuntary perceptual representation of trauma-related memories Journal Article In: Behaviour Research and Therapy, vol. 165, pp. 1–10, 2023. @article{Xu2023c, Bilateral eye movement (EM) is a critical component in eye movement desensitization and reprocessing (EMDR), an effective treatment for post-traumatic stress disorder. However, the role of bilateral EM in alleviating trauma-related symptoms is unclear. Here we hypothesize that bilateral EM selectively disrupts the perceptual representation of traumatic memories. We used the trauma film paradigm as an analog for trauma experience. Nonclinical participants viewed trauma films followed by a bilateral EM intervention or a static Fixation period as a control. Perceptual and semantic memories for the film were assessed with different measures. Results showed a significant decrease in perceptual memory recognition shortly after the EM intervention and subsequently in the frequency and vividness of film-related memory intrusions across one week, relative to the Fixation condition. The EM intervention did not affect the explicit recognition of semantic memories, suggesting a dissociation between perceptual and semantic memory disruption. Furthermore, the EM intervention effectively reduced psychophysiological affective responses, including the skin conductance response and pupil size, to film scenes and subjective affective ratings of film-related intrusions. Together, bilateral EMs effectively reduce the perceptual representation and affective response of trauma-related memories. Further theoretical developments are needed to elucidate the mechanism of bilateral EMs in trauma treatment. |
Ying Xu; Jia-Qiong Xie; Fu-Xing Wang; Rebecca L. Monk; James Gaskin; Jin-Liang Wang The impact of Weibo features on user's information comprehension: The mediating role of cognitive load Journal Article In: Social Science Computer Review, vol. 41, no. 6, pp. 2010–2028, 2023. @article{Xu2023d, Social media, such as Microblogs, have become an important source for people to obtain information. However, we know little about how this would influence our comprehension over online information. Based on the cognitive load theory, this research explores whether and how two important features of Weibo, which are the feedback function and information fragmentation, would increase cognitive load and may in turn hinder users' information comprehension in Weibo. A 2 (feedback or non-feedback) × 2 (strong-interference or weak-interference information) between-participants experimental design was conducted. Our results revealed that the Weibo feedback function and interference information exerted a negative impact over information comprehension via inducing increased cognitive load. Specifically, these results deepened our understanding regarding the impact of Weibo features on online information comprehension and suggest the mechanism by which this occurs. This finding has implications for how to minimize the potential cost of using Weibo and maximize the adaptive development of social media. |
Mengran Xu; Katelyn Rowe; Christine Purdon To approach or to avoid: The role of ambivalent motivation towards high calorie food images in restrained eaters Journal Article In: Cognitive Therapy and Research, vol. 47, no. 4, pp. 669–680, 2023. @article{Xu2023b, Background: Individuals who engage in restrained eating are often torn between eating enjoyment and weight control. Recent research found that visual attention to threat varied according to motivation, and that people with ambivalent motivation about threat showed greater anxiety. Methods: A total of 225 individuals high in restrained eating completed a passive viewing task in which they were presented with image pairs of high-calorie food and neutral objects while their eye movements were tracked. Participants also rated their motivation to look towards and away from food images and completed measures of mood and thought-shape fusion. Results: Two-thirds of participants reported strong motivation to look at food images, and the rest were highly motivated to avoid, were indifferent, or were ambivalent. Visual attention to food images varied according to motivation. Ambivalent individuals had higher thought-shape fusion scores and were more restrained in their eating than engagers and indifferent individuals. Conclusions: These findings suggest that motivations to attend to and to avoid food images are important factors to study, as they are associated with attentional biases and eating pathology. Clinical implications are also discussed. |
Luzi Xu; Zhong Yang; Huichao Ji; Wei Chen; Zhuomian Lin; Yushang Huang; Xiaowei Ding Direct evidence for proactive suppression of salient-but-irrelevant emotional information inputs Journal Article In: Emotion, vol. 23, no. 7, pp. 2039–2058, 2023. @article{Xu2023a, It has long been debated whether emotional information inherently captures attention. The mainstream view suggests that the attentional processing of emotional information is automatic and difficult to control. Here, we provide direct evidence that salient-but-irrelevant emotional information inputs can be proactively suppressed. First, we demonstrated that both negative and positive emotional distractors (fearful and happy faces) induced attentional capture effects (i.e., more attention allocated to emotional distractors than neutral distractors) in the singleton-detection mode (Experiment 1), but attentional suppression effects (i.e., less attention allocated to emotional distractors than neutral distractors) in the feature-search mode that strengthened task motivation (Experiment 2). The suppression effects in the feature-search mode disappeared when emotional information was disrupted through face inversion, showing that the suppression effects were driven by emotional information rather than low-level visual factors (Experiment 3). Furthermore, the suppression effects also disappeared when the identity of emotional faces became unpredictable (Experiment 4), suggesting that the suppression was highly dependent on the predictability of emotional distractors. Importantly, we reproduced the suppression effects using eye-tracking methods and found that there was no attentional capture by emotional distractors before the appearance of the attentional suppression effects (Experiment 5). These findings suggest that irrelevant emotional stimuli that have the potential to cause distraction can be proactively suppressed by the attention system. |
Zedong Xie; Meng Zhang; Zunping Ma The impact of mental simulation on subsequent tourist experience–dual evidence from eye tracking and self-reported measurement Journal Article In: Current Issues in Tourism, vol. 26, no. 18, pp. 2915–2930, 2023. @article{Xie2023c, Tourism research has always sought to find ways to improve tourists' experience evaluation and create added value for them. However, the academic community has focused on the on-site and post-travel stages of tourists, and neglected the pre-travel stage. This study examines the influence of guided mental simulation of an upcoming tourist experience on subsequent on-site tourist experience and experience evaluation. The research simulated real-world experience with tour videos shot from the first-person perspective, and measured the variables using both eye movements and self-reporting. Multivariate ANOVA and multigroup analysis were then performed on the data. The results showed that a process simulation of tourists having an engagement experience and an outcome simulation of tourists having a sight-seeing experience resulted in a higher engagement level and higher emotional response during the on-site experience, higher evaluation of the experience, and a greater impact of engagement level on their evaluation. This study expands the research on tourists' psychological experience in the pre-travel stage. Results indicate that the period from the moment consumers book or purchase the tourist product to the moment they actually embark on the tourist experience is a valuable marketing window. |
Weizhen Xie; Weiwei Zhang Pupillary evidence reveals the influence of conceptual association on brightness perception Journal Article In: Psychonomic Bulletin & Review, vol. 30, no. 4, pp. 1388–1395, 2023. @article{Xie2023a, Our visual experience often varies based on momentary thoughts and feelings. For example, when positive concepts are invoked, visual objects may appear brighter (e.g., a “brighter” smile). However, it remains unclear whether this phenomenological experience is driven by a genuine top-down modulation of brightness perception or by a mere response bias. To investigate this issue, we use pupillometry as a more objective measure of perceived brightness. We asked participants to judge the brightness level of an isoluminant gray color patch after evaluating the valence of a positive or negative word. We found that the gray color patch elicited greater pupillary light reflex and more frequent “brighter” responses after observers had evaluated the valence of a positive word. As pupillary light reflex is unlikely driven by voluntary control, these results suggest that the conceptual association between affect and luminance can modulate brightness perception. |
Pei Xie; Han-Bin Sang; Chao-Zheng Huang; Ai-Bao Zhou Effect of body-related information on food attentional bias in women with body weight dissatisfaction Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–10, 2023. @article{Xie2023, Women with body weight dissatisfaction (BWD) have long-term negative assessments of their body weight, which are often associated with poor eating behavior. In this study, we investigated the effect of body-related information on the food cue processing and attention of women with BWD. Sixty-eight women were recruited and assigned to either a BWD group (NPSS-F > 2; n = 32) or a no body weight dissatisfaction (NBWD) group (NPSS-F < 1; n = 36). We measured attentional bias to food cues (high- and low-calorie) with a food probe task after exposure to body-related information and recorded eye-tracking data. Body-related images were presented prior to a pair of stimulus images (food–neutral or neutral–neutral). Body-related information and food type were repeated-measures factors in our study. Our results showed that the first fixation duration bias for high-calorie foods was significantly longer than that for low-calorie foods after exposure to overweight cues in the BWD group. Compared with the NBWD group, the BWD group showed a longer first fixation duration bias for high-calorie foods after exposure to overweight cues. The direction bias toward high-calorie foods occurred significantly more often than that toward low-calorie foods in the BWD group after exposure to body-related information. Our findings suggest that, compared to women with NBWD, women with BWD may be more susceptible to body-related information, resulting in increased attention to high-calorie foods. |
Mo Xiaohong; Xie Zhihao; Luh Ding-Bang A hybrid macro and micro method for consumer emotion and behavior research Journal Article In: IEEE Access, vol. 11, pp. 83430–83445, 2023. @article{Xiaohong2023, To investigate the impacts of intelligent and fashion factors of sports bras on consumers' emotions, decision-making and behavior, a quantitative analysis method combining macro affective computing and micro emotion data was proposed. The context in which a consumer purchases sports bras was first simulated. In this process, an eye tracker and a multi-channel physiological recorder were used to collect physiological signal data from participants in an experimental setting. Then, big data and machine learning were adopted to macroscopically perform data pre-processing, build a computational model, carry out prediction and evaluation, analyze correlations among physiological data features, and explore the potential value in the data. Furthermore, highly correlated data features were extracted to investigate micro causalities and to identify why consumer behavior and decision-making were supported by emotional physiology data. The proposed method may provide reliable data support for designers, product service providers, and other practitioners. As an innovative and universal integration approach, it has the potential to be applied in medical science, psychology, management science and other fields. |
Xue-Zhen Xiao; Gaoding Jia; Aiping Wang Semantic preview benefit of Tibetan-Chinese bilinguals during Chinese reading Journal Article In: Language Learning and Development, vol. 19, no. 1, pp. 1–15, 2023. @article{Xiao2023a, When reading Chinese, skilled native readers regularly gain a preview benefit (PB) when the parafoveal word is orthographically or semantically related to the target word. Evidence shows that non-native, beginning Chinese readers can obtain an orthographic PB during Chinese reading, which indicates the parafoveal processing of low-level visual information. However, whether non-native Chinese readers who are more proficient in Chinese can make use of high-level parafoveal information remains unknown. Therefore, this study examined parafoveal processing during Chinese reading among Tibetan-Chinese bilinguals with high Chinese proficiency and compared their PB effects with those from native Chinese readers. Tibetan-Chinese bilinguals demonstrated both orthographic and semantic PB but did not show phonological PB and only differed from native Chinese in the identical PB when preview characters were identical to the targets. These findings demonstrate that non-native Chinese readers can extract semantic information from parafoveal preview during Chinese reading and highlight the modulation of parafoveal processing efficiency by reading proficiency. The results are in line with the direct route to access the mental lexicon of visual Chinese characters among non-native Chinese speakers. |
Naiqi G. Xiao; Lauren L. Emberson Visual perception is highly flexible and context dependent in young infants: A case of top-down-modulated motion perception Journal Article In: Psychological Science, vol. 34, no. 8, pp. 875–886, 2023. @article{Xiao2023, Top-down modulation is an essential cognitive component in human perception. Despite mounting evidence of top-down perceptual modulation in adults, it is largely unknown whether infants can engage in this cognitive function. Here, we examined top-down modulation of motion perception in 6- to 8-month-old infants (recruited in North America) via their smooth-pursuit eye movements. In four experiments, we demonstrated that infants' perception of motion direction can be flexibly shaped by briefly learned predictive cues when no coherent motion is available. The current findings present a novel insight into infant perception and its development: Infant perceptual systems respond to predictive signals engendered from higher-level learning systems, leading to a flexible and context-dependent modulation of perception. This work also suggests that the infant brain is sophisticated, interconnected, and active when placed in a context in which it can learn and predict. |
Yanfang Xia; Jelena Wehrli; Samuel Gerster; Marijn Kroes; Maxime Houtekamer; Dominik R. Bach Measuring human context fear conditioning and retention after consolidation Journal Article In: Learning and Memory, vol. 30, no. 7, pp. 139–150, 2023. @article{Xia2023a, Fear conditioning is a laboratory paradigm commonly used to investigate aversive learning and memory. In context fear conditioning, a configuration of elemental cues (conditioned stimulus [CTX]) predicts an aversive event (unconditioned stimulus [US]). To quantify context fear acquisition in humans, previous work has used startle eyeblink responses (SEBRs), skin conductance responses (SCRs), and verbal reports, but different quantification methods have rarely been compared. Moreover, preclinical intervention studies mandate recall tests several days after acquisition, and it is unclear how to induce and measure context fear memory retention over such a time interval. First, we used a semi-immersive virtual reality paradigm. In two experiments (N = 23 and N = 28), we found successful declarative learning and memory retention over 7 d but no evidence of other conditioned responses. Next, we used a configural fear conditioning paradigm with five static room images as CTXs in two experiments (N = 29 and N = 24). Besides successful declarative learning and memory retention after 7 d, SCR and pupil dilation in response to CTX onset differentiated CTX+/CTX− during acquisition training, and SEBR and pupil dilation differentiated CTX+/CTX− during the recall test, with medium to large effect sizes for the most sensitive indices (SEBR: Hedge's g = 0.56 and g = 0.69; pupil dilation: Hedge's g = 0.99 and g = 0.88). Our results demonstrate that with a configural learning paradigm, context fear memory retention can be demonstrated over 7 d, and we provide robust and replicable measurement methods to this end. |
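For reference, the effect sizes quoted in this abstract (Hedges' g) are bias-corrected standardized mean differences. The snippet below shows the standard textbook computation for two independent samples; it is a generic formula, not code from the study, and the paper's within-subject CTX+/CTX− comparison would call for a paired-design variant.

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g for two independent samples: Cohen's d scaled by a
    small-sample bias correction."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n1, n2 = len(x), len(y)
    pooled_sd = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1))
                        / (n1 + n2 - 2))
    cohens_d = (x.mean() - y.mean()) / pooled_sd
    correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # Hedges' small-sample correction
    return cohens_d * correction
```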
Tiansheng Xia; Yingqi Yan; Jiayue Guo Color in web-advertising: The effect of color hue contrast on web satisfaction and advertising memory Journal Article In: Current Psychology, pp. 1–14, 2023. @article{Xia2023, There has been a growth in e-commerce, presenting consumers with varied forms of advertising. A key goal of web advertising is to leave a lasting impression on the user, and web satisfaction is an important measure of the quality and usability of a web page after an ad is placed on it. This experiment manipulated participants' purpose in web browsing (free browsing versus goal oriented) and the color combination of the web background and the vertical-ad background (high or low hue contrast) to predict users' satisfaction with the web page and the degree of ad recall. The psychological mechanisms of this effect were also explored using an eye-tracking device to record and analyze eye movements. The participants were 120 university students, 64.2% of whom were female and 35.8% of whom were male. During free browsing, participants could simulate the daily use of a browser to browse the web and were given 120 s to do so, and in the task-oriented browsing condition, participants were told in advance that they had to summarize the headlines of each news item one at a time within 120 s. The results showed that, in the free-viewing task, the hue contrast between the ad–web background colors negatively affected web satisfaction and ad memory whereas there was no significant difference in this effect in the goal-oriented task. Furthermore, in the free-viewing task, the level of attentional intrusion mediated the effect of ad–web hue contrast on the degree of ad recall; color harmony mediated the effect of hue contrast on the user's evaluation of web satisfaction. These results can act as a new reference for web design research and marketing practice. |
Nicholas J. Wyche; Mark Edwards; Stephanie C. Goodhew An updating-based working memory load alters the dynamics of eye movements but not their spatial extent during free viewing of natural scenes Journal Article In: Attention, Perception, & Psychophysics, pp. 1–22, 2023. @article{Wyche2023, The relationship between spatial deployments of attention and working memory load is an important topic of study, with clear implications for real-world tasks such as driving. Previous research has generally shown that attentional breadth broadens under higher load, while exploratory eye-movement behaviour also appears to change with increasing load. However, relatively little research has compared the effects of working memory load on different kinds of spatial deployment, especially in conditions that require updating of the contents of working memory rather than simple retrieval. The present study undertook such a comparison by measuring participants' attentional breadth (via an undirected Navon task) and their exploratory eye-movement behaviour (a free-viewing recall task) under low and high updating working memory loads. While spatial aspects of task performance (attentional breadth, and peripheral extent of image exploration in the free-viewing task) were unaffected by the load manipulation, the exploratory dynamics of the free-viewing task (including fixation durations and scan-path lengths) changed under increasing load. These findings suggest that temporal dynamics, rather than the spatial extent of exploration, are the primary mechanism affected by working memory load during the spatial deployment of attention. Further, individual differences in exploratory behaviour were observed on the free-viewing task: all metrics were highly correlated across working memory load blocks. These findings suggest a need for further investigation of individual differences in eye-movement behaviour; potential factors associated with these individual differences, including working memory capacity and persistence versus flexibility orientations, are discussed. |
Junru Wu; Min Li; Wenbo Ma; Zhihao Zhang; Mingsha Zhang; Xuemei Li In: Gerontology, vol. 69, no. 3, pp. 321–335, 2023. @article{Wu2023d, Background: Among the elderly, dementia is a common and disabling disorder with primary manifestations of cognitive impairments. Diagnosis and intervention in its early stages is the key to effective treatment. Nowadays, the test of cognitive function relies mainly on neuropsychological tests, such as the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). However, they have noticeable shortcomings, e.g., the biases of subjective judgments from physicians and the cost of the labor of these well-trained physicians. Thus, advanced and objective methods are urgently needed to evaluate cognitive functions. Methods: We developed a cognitive assessment system through measuring the saccadic eye movements in three tasks. The cognitive functions were evaluated by both our system and the neuropsychological tests in 310 subjects, and the evaluating results were directly compared. Results: In general, most saccadic parameters correlate well with the MMSE and MoCA scores. Moreover, some subjects with high MMSE and MoCA scores have high error rates in performing these three saccadic tasks due to various errors. The primary error types vary among tasks, indicating that different tasks assess certain specific brain functions preferentially. Thus, to improve the accuracy of evaluation through saccadic tasks, we built a weighted model to combine the saccadic parameters of the three saccadic tasks, and our model showed a good diagnosis performance in detecting patients with cognitive impairment. Conclusion: The comprehensive analysis of saccadic parameters in multiple tasks could be a reliable, objective, and sensitive method to evaluate cognitive function and thus to help diagnose cognitive impairments. |
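The abstract does not specify how the "weighted model" combining saccadic parameters was built. Purely as an illustration of one common way to weight and combine such features for a diagnostic readout, the sketch below uses a cross-validated logistic regression; the feature layout, placeholder data, and scikit-learn pipeline are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# One row per subject; columns are saccadic parameters pooled across the
# three tasks (e.g. latency, error rate, gain). Placeholder data only.
rng = np.random.default_rng(0)
X = rng.normal(size=(310, 9))        # 310 subjects x (3 tasks x 3 parameters)
y = rng.integers(0, 2, size=310)     # 1 = cognitive impairment (placeholder labels)

model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=1.0, max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```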
Hao Wu; Zhentao Zuo; Zejian Yuan; Tiangang Zhou; Yan Zhuo; Nanning Zheng; Badong Chen Neural representation of gestalt grouping and attention effect in human visual cortex Journal Article In: Journal of Neuroscience Methods, vol. 399, no. 28, pp. 1–11, 2023. @article{Wu2023c, Background: The brain aggregates meaningless local sensory elements to form meaningful global patterns in a process called perceptual grouping. Current brain imaging studies have found that neural activities in V1 are modulated during visual grouping. However, how grouping is represented in each of the early visual areas, and how attention alters these representations, is still unknown. New method: We adopted MVPA to decode the specific content of perceptual grouping by comparing neural activity patterns between gratings and dot lattice stimuli which can be grouped with proximity law. Furthermore, we quantified the grouping effect by defining the strength of grouping, and assessed the effect of attention on grouping. Results: We found that activity patterns to proximity grouped stimuli in early visual areas resemble these to grating stimuli with the same orientations. This similarity exists even when there is no attention focused on the stimuli. The results also showed a progressive increase of representational strength of grouping from V1 to V3, and attention modulation to grouping is only significant in V3 among all the visual areas. Comparison with existing methods: Most of the previous work on perceptual grouping has focused on how activity amplitudes are modulated by grouping. Using MVPA, the present work successfully decoded the contents of neural activity patterns corresponding to proximity grouping stimuli, thus shed light on the availability of content-decoding approach in the research on perceptual grouping. Conclusions: Our work found that the content of the neural activity patterns during perceptual grouping can be decoded in the early visual areas under both attended and unattended task, and provide novel evidence that there is a cascade processing for proximity grouping through V1 to V3. The strength of grouping was larger in V3 than in any other visual areas, and the attention modulation to the strength of grouping was only significant in V3 among all the visual areas, implying that V3 plays an important role in proximity grouping. |
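As a closing illustration of the cross-decoding logic this abstract describes (training an orientation classifier on grating patterns and testing it on proximity-grouped dot lattices), here is a generic MVPA sketch. The array shapes, the linear SVM, and the synthetic data are assumptions for demonstration, not details of the authors' analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def cross_decode(grating_patterns, grating_labels,
                 lattice_patterns, lattice_labels):
    """Train on grating response patterns, test on dot-lattice patterns.
    Above-chance accuracy suggests the grouped lattices evoke
    orientation-like activity patterns in that visual area.

    *_patterns : (n_trials, n_voxels) response patterns from one ROI
    *_labels   : orientation label for each trial
    """
    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
    clf.fit(grating_patterns, grating_labels)
    return clf.score(lattice_patterns, lattice_labels)

# toy usage with synthetic data for a single area (e.g. V1)
rng = np.random.default_rng(1)
accuracy = cross_decode(rng.normal(size=(80, 200)), rng.integers(0, 2, 80),
                        rng.normal(size=(80, 200)), rng.integers(0, 2, 80))
```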