All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up to 2024 (with early 2025s) are listed below by year. You can search the publication library using keywords such as visual search, smooth pursuit, Parkinson's, etc. You can also search individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2024 |
Chao Jung Wu; Chia Yu Liu An eye-tracking study of college students' infographic-reading processes Journal Article In: Journalism and Mass Communication Quarterly, pp. 1–31, 2024. @article{Wu2024, We know little about how readers, especially readers with various characteristics, incorporate materials with highly synthesized words and graphs like infographics. We collected eye movements from 95 college students as they read infographics and categorized them into high-/low-score groups based on comprehension scores. Participants initially inspected the word areas that corresponded to the graph areas with the highest perceptual salience. The high-score group showed greater total fixation duration (TFD), TFD ratios of graphs, and transition numbers between words and graphs, indicating more processing of infographics. The low-score group showed greater TFD ratios of words and saccade amplitudes, indicating information-searching behavior. |
Jaeger Wongtrakun; Shou-Han Zhou; Mark A. Bellgrove; Trevor T. J. Chong; James P. Coxon The effect of congruent versus incongruent distractor positioning on electrophysiological signals during perceptual decision-making Journal Article In: The Journal of Neuroscience, vol. 44, no. 45, pp. 1–9, 2024. @article{Wongtrakun2024, Key event-related potentials (ERPs) of perceptual decision-making such as centroparietal positivity (CPP) elucidate how evidence is accumulated toward a given choice. Furthermore, this accumulation can be impacted by visual target selection signals such as the N2 contralateral (N2c). How these underlying neural mechanisms of perceptual decision-making are influenced by the spatial congruence of distractors relative to target stimuli remains unclear. Here, we used electroencephalography (EEG) in humans of both sexes to investigate the effect of distractor spatial congruency (same vs different hemifield relative to targets) on perceptual decision-making. We confirmed that responses for perceptual decisions were slower for spatially incongruent versus congruent distractors of high salience. Similarly, markers of target selection (N2c peak amplitude) and evidence accumulation (CPP slope) were found to be lower when distractors were spatially incongruent versus congruent. To evaluate the effects of congruency further, we applied drift diffusion modeling to participant responses, which showed that larger amplitudes of both ERPs were correlated with shorter nondecision times when considering the effect of congruency. The modeling also suggested that congruency's effect on behavior occurred prior to and during evidence accumulation when considering the effects of the N2c peak and CPP slope. 
These findings point to spatially incongruent distractors, relative to congruent distractors, influencing decisions as early as the initial sensory processing phase and then continuing to exert an effect as evidence is accumulated throughout the decision-making process. Overall, our findings highlight how key electrophysiological signals of perceptual decision-making are influenced by the spatial congruence of target and distractor. |
Roslyn Wong; Aaron Veldre; Sally Andrews Are there independent effects of constraint and predictability on eye movements during reading? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 2, pp. 331–345, 2024. @article{Wong2024b, Evidence of processing costs for unexpected words presented in place of a more expected completion remains elusive in the eye-movement literature. The current study investigated whether such prediction error costs depend on the source of constraint violation provided by the prior context. Participants' eye movements were recorded as they read predictable words and unpredictable alternatives that were either semantically related or unrelated in three-sentence passages. The passages differed in whether the source of constraint originated solely from the global context provided by the first two semantically rich sentences of the passage, from the local context provided by the final sentence of the passage, from both the global and local context, or from none of the three sentences of the passage. The results revealed the expected processing advantage for predictable completions in any constraining context, although the relative contributions of the different sources of constraint varied across the time course of word processing. Unpredictable completions, however, did not yield any processing costs when the context constrained toward a different word, instead producing immediate processing benefits in the presence of any constraining context. Moreover, the initial processing of related unpredictable completions was enhanced further by the provision of a supportive global context. Predictability effects therefore do not appear to be determined by cloze probability alone but also by the nature of the prior contextual constraint especially when they encourage the construction of higher-level discourse representations. 
The implications of these findings for understanding existing theoretical models of predictive processing are discussed. |
Roslyn Wong; Aaron Veldre; Sally Andrews Looking for immediate and downstream evidence of lexical prediction in eye movements during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 77, no. 10, pp. 2040–2064, 2024. @article{Wong2024a, Previous investigations of whether readers make predictions about the full identity of upcoming words have focused on the extent to which there are processing consequences when readers encounter linguistic input that is incompatible with their expectations. To date, eye-movement studies have revealed inconsistent evidence of the processing costs that would be expected to accompany lexical prediction. This study investigated whether readers' lexical predictions were observable during or downstream from their initial point of activation. Three experiments assessed readers' eye movements to predictable and unpredictable words, and then to subsequent downstream words, which probed the lingering activation of previously expected words. The results showed novel evidence of processing costs for unexpected input but only when supported by a plausible linguistic environment, suggesting that readers could strategically modulate their predictive processing. However, there was limited evidence that their lexical predictions affected downstream processing. The implications of these findings for understanding the role of prediction in language processing are discussed. |
Raymond Ka Wong; Janahan Selvanayagam; Kevin Johnston; Stefan Everling Functional specialization and distributed processing across marmoset lateral prefrontal subregions Journal Article In: Cerebral Cortex, vol. 34, no. 10, pp. 1–15, 2024. @article{Wong2024, A prominent aspect of primate lateral prefrontal cortex organization is its division into several cytoarchitecturally distinct subregions. Neurophysiological investigations in macaques have provided evidence for the functional specialization of these subregions, but an understanding of the relative representational topography of sensory, social, and cognitive processes within them remains elusive. One explanatory factor is that evidence for functional specialization has been compiled largely from a patchwork of findings across studies, in many animals, and with considerable variation in stimulus sets and tasks. Here, we addressed this by leveraging the common marmoset (Callithrix jacchus) to carry out large-scale neurophysiological mapping of the lateral prefrontal cortex using high-density microelectrode arrays, and a diverse suite of test stimuli including faces, marmoset calls, and a spatial working memory task. Task-modulated units and units responsive to visual and auditory stimuli were distributed throughout the lateral prefrontal cortex, while those with saccade-related activity or face-selective responses were restricted to areas 8aV, 8aD, 10, 46V, and 47. Neurons with contralateral visual receptive fields were limited to areas 8aV and 8aD. These data reveal a mixed pattern of functional specialization in the lateral prefrontal cortex, in which responses to some stimuli and tasks are distributed broadly across lateral prefrontal cortex subregions, while others are more limited in their representation. |
Matthew B. Winn The effort of repairing a misperceived word can impair perception of following words, especially for listeners with cochlear implants Journal Article In: Ear & Hearing, vol. 45, no. 6, pp. 1527–1541, 2024. @article{Winn2024, Objectives: In clinical and laboratory settings, speech recognition is typically assessed in a way that cannot distinguish accurate auditory perception from misperception that was mentally repaired or inferred from context. Previous work showed that the process of repairing misperceptions elicits greater listening effort, and that this elevated effort lingers well after the sentence is heard. That result suggests that cognitive repair strategies might appear successful when testing a single utterance but fail for everyday continuous conversational speech. The present study tested the hypothesis that the effort of repairing misperceptions has the consequence of carrying over to interfere with perception of later words after the sentence. Design: Stimuli were open-set coherent sentences that were presented intact or with a word early in the sentence replaced with noise, forcing the listener to use later context to mentally repair the missing word. Sentences were immediately followed by digit triplets, which served to probe carryover effort from the sentence. Control conditions allowed for the comparison to intact sentences that did not demand mental repair, as well as to listening conditions that removed the need to attend to the post-sentence stimuli, or removed the post-sentence digits altogether. Intelligibility scores for the sentences and digits were accompanied by time-series measurements of pupil dilation to assess cognitive load during the task, as well as subjective rating of effort. Participants included adults with cochlear implants (CIs), as well as an age-matched group and a younger group of listeners with typical hearing for comparison. 
Results: For the CI group, needing to repair a missing word during a sentence resulted in more errors on the digits after the sentence, especially when the repair process did not result in a coherent sensible perception. Sentences that needed repair also contained more errors on the words that were unmasked. All groups showed substantial increase of pupil dilation when sentences required repair, even when the repair was successful. Younger typical hearing listeners showed clear differences in moment-to-moment allocation of effort in the different conditions, while the other groups did not. Conclusions: For CI listeners, the effort of needing to repair misperceptions in a sentence can last long enough to interfere with words that follow the sentence. This pattern could pose a serious problem for regular communication but would go overlooked in typical testing with single utterances, where a listener has a chance to repair misperceptions before responding. Carryover effort was not predictable from basic intelligibility scores, but can be revealed in behavioral data when sentences are followed immediately by extra probe words such as digits. |
Hanna E. Willis; Bradley Caron; Matthew R. Cavanaugh; Lucy Starling; Sara Ajina; Franco Pestilli; Marco Tamietto; Krystel R. Huxlin; Kate E. Watkins; Holly Bridge Rehabilitating homonymous visual field deficits: White matter markers of recovery — stage 2 registered report Journal Article In: Brain Communications, vol. 6, no. 5, pp. 1–16, 2024. @article{Willis2024, Damage to the primary visual cortex or its afferent white matter tracts results in loss of vision in the contralateral visual field that can present as homonymous visual field deficits. Evidence suggests that visual training in the blind field can partially reverse blindness at trained locations. However, the efficacy of visual training is highly variable across participants, and the reasons for this are poorly understood. It is likely that variance in residual neural circuitry following the insult may underlie the variation among patients. Many stroke survivors with visual field deficits retain residual visual processing in their blind field despite a lack of awareness. Previous research indicates that intact structural and functional connections between the dorsal lateral geniculate nucleus and the human extrastriate visual motion-processing area hMT+ are necessary for blindsight to occur. We therefore hypothesized that changes in this white matter pathway may underlie improvements resulting from motion discrimination training. Eighteen stroke survivors with long-standing, unilateral, homonymous field defects from retro-geniculate brain lesions completed 6 months of visual training at home. This involved performing daily sessions of a motion discrimination task, at two non-overlapping locations in the blind field, at least 5 days per week. Motion discrimination and integration thresholds, Humphrey perimetry and structural and diffusion-weighted MRI were collected pre- and post-training. 
Changes in fractional anisotropy (FA) were analysed in visual tracts connecting the ipsilesional dorsal lateral geniculate nucleus and hMT+, and the ipsilesional dorsal lateral geniculate nucleus and primary visual cortex. The (non-visual) tract connecting the ventral posterior lateral nucleus of the thalamus and the primary somatosensory cortex was analysed as a control. Changes in white matter integrity were correlated with improvements in motion discrimination and Humphrey perimetry. We found that the magnitude of behavioural improvement was not directly related to changes in FA in the pathway between the dorsal lateral geniculate nucleus and hMT+ or dorsal lateral geniculate nucleus and primary visual cortex. Baseline FA in either tract also failed to predict improvements in training. However, an exploratory analysis showed a significant increase in FA in the distal part of the tract connecting the dorsal lateral geniculate nucleus and hMT+, suggesting that 6 months of visual training in chronic, retro-geniculate strokes may enhance white matter microstructural integrity of residual geniculo-extrastriate pathways. |
Jonathon Whitlock; Ryan Hubbard; Huiyu Ding; Lili Sahakyan Trial-level fluctuations in pupil dilation at encoding reflect strength of relational binding Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 2, pp. 212–229, 2024. @article{Whitlock2024, Eye-tracking methodologies have revealed that eye movements and pupil dilations are influenced by our previous experiences. Dynamic fluctuations in pupil size during learning reflect in part the formation of memories for learned information, while viewing behavior during memory testing is influenced by memory retrieval and drawn to previously learned associations. However, no study to date has linked fluctuations in pupil dilation at encoding to the magnitude of viewing behavior at test. The current investigation involved monitoring eye movements both in single item recognition and relational recognition tasks. In the item task, all faces were presented with the same background scene and memory for faces was subsequently tested, whereas in the relational task each face was presented with its own unique background scene and memory for the face-scene association was subsequently tested. Pupil size changes during encoding predicted the magnitude of preferential viewing during test, as well as future recognition accuracy. These effects emerged only in the relational task, but not in the item task, and were replicated in an additional experiment in which stimulus luminance was more tightly controlled. A follow-up experiment and additional analyses ruled out differences in orienting instructions or number of fixations to the encoding display as explanations of the observed effects. 
The results shed light on the links between pupil dilation, memory encoding, and eye movement patterns during recognition and suggest that trial-level fluctuations in pupil dilation during encoding reflect relational binding of items to their context rather than general memory formation or strength. |
Will Whitham; Bradley Karstadt; Nicola C. Anderson; Walter F. Bischof; Steven J. Schapiro; Alan Kingstone; Richard Coss; Elina Birmingham; Jessica L. Yorzinski Predator gaze captures both human and chimpanzee attention Journal Article In: PLoS ONE, vol. 19, no. 11, pp. 1–23, 2024. @article{Whitham2024, Primates can rapidly detect potential predators and modify their behavior based on the level of risk. The gaze direction of predators is one feature that primates can use to assess risk levels: recognition of a predator's direct stare indicates to prey that it has been detected and the level of risk is relatively high. Predation has likely shaped visual attention in primates to quickly assess the level of risk but we know little about the constellation of low-level (e.g., contrast, color) and higher-order (e.g., category membership, perceived threat) visual features that primates use to do so. We therefore presented human and chimpanzee (Pan troglodytes) participants with photographs of potential predators (lions) and prey (impala) while we recorded their overt attention with an eye-tracker. The gaze of the predators and prey was either directed or averted. We found that both humans and chimpanzees visually fixated the eyes of predators more than those of prey. In addition, they directed the most attention toward the eyes of directed (rather than averted) predators. Humans, but not chimpanzees, gazed at the eyes of the predators and prey more than other features. Importantly, low-level visual features of the predators and prey did not provide a good explanation of the observed gaze patterns. |
Kayla M. Whearty; Ivan Ruiz; Anna R. Knippenberg; Gregory P. Strauss In: Neuropsychology, vol. 38, no. 5, pp. 475–485, 2024. @article{Whearty2024, Objective: The present study explored the hypothesis that anhedonia reflects an emotional memory impairment for pleasant stimuli, rather than diminished hedonic capacity in individuals with schizophrenia (SZ). Method: Participants included 30 SZ and 30 healthy control (HC) subjects who completed an eye-tracking emotion-induced memory trade-off task where contextually relevant pleasant, unpleasant, or neutral items were inserted into the foreground of neutral background scenes. Passive viewing and poststimulus elaboration blocks were administered to assess differential encoding mechanisms, and immediate and 1-week recognition testing phases were completed to assess the effects of delay interval. Participants also made self-reports of positive emotion, negative emotion, and arousal in response to the stimuli. Results: Results indicated that SZ experienced stimuli similarly to HCs. Both groups demonstrated the typical emotion-induced memory trade-off during the passive viewing and poststimulus elaboration encoding blocks, as indicated by more hits for emotional than neutral items and fewer hits for backgrounds paired with emotional than neutral items. Eye-tracking data also indicated that both groups were more likely to fixate earlier and have longer dwell time on emotional than neutral items. At the 1-week delay, the emotion-induced memory trade-off was eliminated in both groups, and SZ showed fewer overall hits across valence conditions. Greater severity of anhedonia was specifically associated with impaired recognition for pleasant stimuli at the immediate recognition phase. Conclusions: Findings suggest that anhedonia in SZ is associated with emotional memory impairment, particularly a deficit in encoding positive stimuli. |
Mirjam C. M. Wever; Geert-Jan Will; Lisanne A. E. M. Houtum; Loes H. C. Janssen; Wilma G. M. Wentholt; Iris M. Spruit; Marieke S. Tollenaar; Bernet M. Elzinga Neural and affective responses to prolonged eye contact with parents in depressed and nondepressed adolescents Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 24, no. 3, pp. 567–581, 2024. @article{Wever2024, Eye contact improves mood, facilitates connectedness, and is assumed to strengthen the parent–child bond. Adolescent depression is linked to difficulties in social interactions, the parent–child bond included. Our goal was to elucidate adolescents' affective and neural responses to prolonged eye contact with one's parent in nondepressed adolescents (HC) and how these responses are affected in depressed adolescents. While in the scanner, 59 nondepressed and 19 depressed adolescents were asked to make eye contact with their parent, an unfamiliar peer, an unfamiliar adult, and themselves by using videos of prolonged direct and averted gaze, as an approximation of eye contact. After each trial, adolescents reported on their mood and feelings of connectedness, and eye movements and BOLD-responses were assessed. In HCs, eye contact boosted mood and feelings of connectedness and increased activity in inferior frontal gyrus (IFG), temporal pole, and superior frontal gyrus. Unlike HCs, eye contact did not boost the mood of depressed adolescents. While HCs reported increased mood and feelings of connectedness to the sight of their parent versus others, depressed adolescents did not. Depressed adolescents exhibited blunted overall IFG activity. These findings show that adolescents are particularly sensitive to eye contact and respond strongly to the sight of their parents. This sensitivity seems to be blunted in depressed adolescents. 
For clinical purposes, it is important to gain a better understanding of how the responsivity to eye contact in general and with their parents in particular, can be restored in adolescents with depression. |
Joshua D. Weirick; Jiyeon Lee Syntactic flexibility and lexical encoding in aging sentence production: An eye tracking study Journal Article In: Frontiers in Psychology, vol. 15, pp. 1–16, 2024. @article{Weirick2024, Purpose: Successful sentence production requires encoding lexical items and ordering them into a correct syntactic structure. It remains unclear how different processes involved in sentence production are affected by healthy aging. We investigated (a) if and how aging affects lexical encoding and syntactic formulation during sentence production, using auditory lexical priming and eye tracking-while-speaking paradigms and (b) if and how verbal working memory contributes to age-related changes in sentence production. Methods: Twenty older and 20 younger adults described transitive and dative action pictures following auditory lexical primes, by which the relative ease of encoding the agent or theme nouns (for transitive pictures) and the theme and goal nouns (for dative pictures) was manipulated. The effects of lexical priming on off-line syntactic production and real-time eye fixations to the primed character were measured. Results: In offline production, older adults showed comparable priming effects to younger adults, using the syntactic structure that allows earlier mention of the primed lexical item in both transitive and dative sentences. However, older adults showed longer lexical priming effects on eye fixations to the primed character during the early stages of sentence planning. Preliminary analysis indicated that reduced verbal working memory may in part account for longer lexical encoding, particularly for older adults. Conclusion: These findings indicate that syntactic flexibility for formulating different grammatical structures remains largely robust with aging. However, lexical encoding processes are more susceptible to age-related changes, possibly due to changes in verbal working memory. |
Emily R. Weichart; Layla Unger; Nicole King; Vladimir M. Sloutsky; Brandon M. Turner “The eyes are the window to the representation”: Linking gaze to memory precision and decision weights in object discrimination tasks Journal Article In: Psychological Review, vol. 131, no. 4, pp. 1045–1067, 2024. @article{Weichart2024, Humans selectively attend to task-relevant information in order to make accurate decisions. However, selective attention incurs consequences if the learning environment changes unexpectedly. This trade-off has been underscored by studies that compare learning behaviors between adults and young children: broad sampling during learning comes with a breadth of information in memory, often allowing children to notice details of the environment that are missed by their more selective adult counterparts. The current work extends the exemplar-similarity account of object discrimination to consider both the intentional and consequential aspects of selective attention when predicting choice. In a novel direct input approach, we used trial-level eye-tracking data from training and test to replace the otherwise freely estimated attention dynamics of the model. We demonstrate that only a model imbued with gaze correlates of memory precision in addition to decision weights can accurately predict key behaviors associated with (a) selective attention to a relevant dimension, (b) distributed attention across dimensions, and (c) flexibly shifting strategies between tasks. Although humans engage in selective attention with the intention of being accurate in the moment, our findings suggest that its consequences on memory constrain the information that is available for making decisions in the future. |
Yipu Wei; Yingjia Wan; Michael K. Tanenhaus Spontaneous perspective-taking in real-time language comprehension: Evidence from eye-movements and grain of coordination Journal Article In: Scientific Reports, vol. 14, no. 1, pp. 1–10, 2024. @article{Wei2024a, Linguistic communication requires interlocutors to consider differences in each other's knowledge (perspective-taking). However, perspective-taking might either be spontaneous or strategic. We monitored listeners' eye movements in a referential communication task. A virtual speaker gave temporally ambiguous instructions with scalar adjectives (“big” in “big cubic block”). Scalar adjectives assume a contrasting object (a small cubic block). We manipulated whether the contrasting object (a small triangle) for a competitor object (a big triangle) was in common ground (visible to both speaker and listener) or was occluded so it was in the listener's privileged ground, in which case perspective-taking would allow earlier reference-resolution. We used a complex visual context with multiple objects, making strategic perspective-taking unlikely when all objects are in the listener's referential domain. A turn-taking, puzzle-solving task manipulated whether participants could anticipate a more restricted referential domain. Pieces were either confined to a small area (requiring fine-grained coordination) or distributed across spatially distinct regions (requiring only coarse-grained coordination). Results strongly supported spontaneous perspective-taking: Although comprehension was less time-locked in the coarse-grained condition, participants in both conditions used perspective information to identify the target referent earlier when the competitor contrast was in privileged ground, even when participants believed instructions were computer-generated. |
Yanjun Wei; Yingjuan Tang; Adam John Privitera Functional priority of syntax over semantics in Chinese 'ba' construction: Evidence from eye-tracking during natural reading Journal Article In: Language and Cognition, vol. 16, no. 2, pp. 380–400, 2024. @article{Wei2024b, Studies on sentence processing in inflectional languages support that syntactic structure building functionally precedes semantic processing. Conversely, most EEG studies of Chinese sentence processing do not support the priority of syntax. One possible explanation is that the Chinese language lacks morphological inflections. Another explanation may be that the presentation of separate sentence components on individual screens in EEG studies disrupts syntactic framework construction during sentence reading. The present study investigated this explanation using a self-paced reading experiment mimicking rapid serial visual presentation in EEG studies and an eye-tracking experiment reflecting natural reading. In both experiments, Chinese 'ba' sentences were presented to Chinese young adults in four conditions that differed across the dimensions of syntactic and semantic congruency. Evidence supporting the functional priority of syntax over semantics was limited to only the natural reading context, in which syntactic violations blocked the processing of semantics. Additionally, we observed a later stage of integrating plausible semantics with a failed syntax. Together, our findings extend the functional priority of syntax to the Chinese language and highlight the importance of adopting more ecologically valid methods when investigating sentence reading. |
Wei Wei; Kangning Wang; Shuang Qiu; Huiguang He A MultiModal Vigilance (MMV) dataset during RSVP and SSVEP brain-computer interface tasks Journal Article In: Scientific Data, vol. 11, no. 1, pp. 1–14, 2024. @article{Wei2024, Vigilance represents an ability to sustain prolonged attention and plays a crucial role in ensuring the reliability and optimal performance of various tasks. In this report, we describe a MultiModal Vigilance (MMV) dataset comprising seven physiological signals acquired during two Brain-Computer Interface (BCI) tasks. The BCI tasks encompass a rapid serial visual presentation (RSVP)-based target image retrieval task and a steady-state visual evoked potential (SSVEP)-based cursor-control task. The MMV dataset includes four sessions of seven physiological signals for 18 subjects, which encompasses electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), photoplethysmogram (PPG), electrodermal activity (EDA), electromyogram (EMG), and eye movement. The MMV dataset provides data from four stages: 1) raw data, 2) pre-processed data, 3) trial data, and 4) feature data that can be directly used for vigilance estimation. We believe this dataset will support flexible reuse, meet the varied needs of researchers, and contribute substantially to advancing research on physiological signal-based vigilance estimation. |
Jelena M. Wehrli; Yanfang Xia; Aslan Abivardi; Birgit Kleim; Dominik R. Bach The impact of doxycycline on human contextual fear memory Journal Article In: Psychopharmacology, vol. 241, no. 5, pp. 1065–1077, 2024. @article{Wehrli2024, Rationale: Previous work identified an attenuating effect of the matrix metalloproteinase (MMP) inhibitor doxycycline on fear memory consolidation. This may present a new mechanistic approach for the prevention of trauma-related disorders. However, so far, this has only been unambiguously demonstrated in a cued delay fear conditioning paradigm, in which a simple geometric cue predicted a temporally overlapping aversive outcome. This form of learning is mainly amygdala dependent. Psychological trauma often involves the encoding of contextual cues, which putatively necessitates partly different neural circuits including the hippocampus. The role of MMP signalling in the underlying neural pathways in humans is unknown. Methods: Here, we investigated the effect of doxycycline on configural fear conditioning in a double-blind placebo-controlled randomised trial with 100 (50 females) healthy human participants. Results: Our results show that participants successfully learned and retained, after 1 week, the context-shock association in both groups. We find no group difference in fear memory retention in either of our pre-registered outcome measures, startle eye-blink responses and pupil dilation. Contrary to expectations, we identified elevated fear-potentiated startle in the doxycycline group early in the recall test, compared to the placebo group. Conclusion: Our results suggest that doxycycline does not substantially attenuate contextual fear memory. This might limit its potential for clinical application. |
Simon Weber; Thomas Christophel; Kai Görgen; Joram Soch; John-Dylan Haynes Working memory signals in early visual cortex are present in weak and strong imagers Journal Article In: Human Brain Mapping, vol. 45, no. 3, pp. 1–17, 2024. @article{Weber2024, It has been suggested that visual images are memorized across brief periods of time by vividly imagining them as if they were still there. In line with this, the contents of both working memory and visual imagery are known to be encoded already in early visual cortex. If these signals in early visual areas were indeed to reflect a combined imagery and memory code, one would predict them to be weaker for individuals with reduced visual imagery vividness. Here, we systematically investigated this question in two groups of participants. Strong and weak imagers were asked to remember images across brief delay periods. We were able to reliably reconstruct the memorized stimuli from early visual cortex during the delay. Importantly, in contrast to the prediction, the quality of reconstruction was equally accurate for both strong and weak imagers. The decodable information also closely reflected behavioral precision in both groups, suggesting it could contribute to behavioral performance, even in the extreme case of completely aphantasic individuals. Our data thus suggest that working memory signals in early visual cortex can be present even in the (near) absence of phenomenal imagery. |
Aline Wauters; Dimitri M. L. Van Ryckeghem; Melanie Noel; Kendra Mueri; Sabine Soltani; Tine Vervoort Parental narrative style moderates the relation between pain-related attention and memory biases in youth with chronic pain Journal Article In: Pain, vol. 165, pp. 126–137, 2024. @article{Wauters2024, Negatively biased pain memories robustly predict maladaptive pain outcomes in children. Both attention bias to pain and parental narrative style have been linked with the development of these negative biases, with previous studies indicating that how parents talk to their child about the pain might buffer the influence of children's attention bias to pain on the development of such negatively biased pain memories. This study investigated the moderating role of parental narrative style in the relation between pain-related attention and memory biases in a pediatric chronic pain sample who underwent a cold pressor task. Participants were 85 youth-parent dyads who reminisced about youth's painful event. Eye-tracking technology was used to assess youth's attention bias to pain information, whereas youth's pain-related memories were elicited 1 month later through telephone interview. Results indicated that a parental narrative style using less repetitive yes–no questions, more emotion words, and less fear words buffered the influence of high levels of youth's attention bias to pain in the development of negatively biased pain memories. Opposite effects were observed for youth with low levels of attention bias to pain. Current findings corroborate earlier results on parental reminiscing in the context of pain (memories) but stress the importance of matching narrative style with child characteristics, such as child attention bias to pain, in the development of negatively biased pain memories. Future avenues for parent–child reminiscing and clinical implications for pediatric chronic pain are discussed. |
Annie Warman; Allan Clark; George L. Malcolm; Maximillian Havekost; Stéphanie Rossit Is there a lower visual field advantage for object affordances? A registered report Journal Article In: Quarterly Journal of Experimental Psychology, vol. 77, no. 11, pp. 2151–2164, 2024. @article{Warman2024, It's been repeatedly shown that pictures of graspable objects can facilitate visual processing, even in the absence of reach-to-grasp actions, an effect often attributed to the concept of affordances. A classic demonstration of this is the handle compatibility effect, characterised by faster reaction times when the orientation of a graspable object's handle is compatible with the hand used to respond, even when the handle orientation is task-irrelevant. Nevertheless, it is debated whether the speeded reaction times are a result of affordances or spatial compatibility. First, we investigated whether we could replicate the handle compatibility effect while controlling for spatial compatibility. Participants (N = 68) responded with left or right-handed keypresses to whether the object was upright or inverted and, in separate blocks, whether the object was red or green. We failed to replicate the handle compatibility effect, with no significant difference between compatible and incompatible conditions, in both tasks. Second, we investigated whether there is a lower visual field (VF) advantage for the handle compatibility effect in line with what has been found for hand actions. A further 68 participants responded to object orientation presented either in the upper or lower VF. A significant handle compatibility effect was observed in the lower VF, but not the upper VF. This suggests that there is a lower VF advantage for affordances, possibly as the lower VF is where our actions most frequently occur. However, future studies should explore the impact of eye movements on the handle compatibility effect and tool affordances. |
Aengus Ward; Shiyu He Medieval reading in the twenty-first century? Journal Article In: Digital Scholarship in the Humanities, vol. 39, no. 4, pp. 1134–1155, 2024. @article{Ward2024, Reading practices in medieval manuscripts have often been the subject of critical analysis in the past. Recent technological developments have extended the range of analytical possibilities; one such development is that of eye tracking. In the present article, we outline the results of an experiment using eye-tracking technology which was carried out recently in Spain. The analysis points to particular trends in the ways in which modern readers interact with medieval textual forms, and we use this analysis to point to future possibilities in the use of eye tracking to broaden and deepen our understanding of the workings of the medieval page. |
Zhiyun Wang; Qingfang Zhang Ageing of grammatical advance planning in spoken sentence production: An eye movement study Journal Article In: Psychological Research, vol. 88, pp. 652–669, 2024. @article{Wang2024n, This study used an image-description paradigm with concurrent eye movement recordings to investigate differences in grammatical advance planning between young and older speakers in spoken sentence production. Participants were asked to produce sentences with simple or complex initial phrase structures (IPS) in Experiment 1 and to produce individual words in Experiment 2. Young and older speakers showed comparable speaking latencies in the sentence production task, whereas older speakers showed longer latencies than young speakers in the word production task. Eye movement data showed that, compared with young speakers, older speakers had a higher fixation percentage on object 1, a lower percentage of gaze shifts from object 1 to 2, and a lower fixation percentage on object 2 in simple IPS sentences, while they showed a similar fixation percentage on object 1, a similar percentage of gaze shifts from object 1 to 2, and a lower fixation percentage on object 2 in complex IPS sentences, indicating a decline in the scope of grammatical encoding as reflected in eye movement patterns. Meanwhile, speech analysis showed that older speakers produced longer utterance durations, slower speech rates, and longer and more frequent pauses in articulation, indicating a decline in speech articulation in older speakers. Thus, our study suggests that older speakers experience an ageing effect in sentences with complex initial phrases due to limited cognitive resources. |
Zhenni Wang; Chen Zhang; Qihui Guo; Qing Fan; Lihui Wang Concurrent oculomotor hyperactivity and deficient anti-saccade performance in obsessive-compulsive disorder Journal Article In: Journal of Psychiatric Research, vol. 180, pp. 402–410, 2024. @article{Wang2024l, Existing studies mainly focused on the inhibition of the task-interfering response to understand the inhibitory deficits of obsessive-compulsive disorder (OCD). However, recent studies suggested that inhibitory function is broadly involved in response preparation and implementation. It is yet unknown if the inhibition dysfunction in OCD extends beyond the task-interfering response to the general inhibitory function. Here we address this issue based on the multidimensional eye-movement measurements, which can better capture the inhibitory deficits than manual responses. Thirty-one OCD patients and 32 healthy controls (HCs) completed the anti-saccade task where multidimensional eye-movement features were developed. Confirmatory factor analysis (CFA) suggested two components of inhibitory function that negatively correlated with each other: one component of oculomotor hyperactivity in generating oculomotor output which is characterized with early premature saccades, early cross rates and saccade number; the other component of task-specific oculomotor efficiency which is characterized with task accuracy, saccade latency, correction rate, and amplitude gain. Importantly, OCD showed both stronger oculomotor hyperactivity and deficient oculomotor efficiency than HCs, and the machine-learning-based classifications showed that the features of oculomotor hyperactivity had higher prediction accuracy than the features of oculomotor efficiency in distinguishing OCD from HCs. Our results suggested that OCD has concurrent deficits in oculomotor hyperactivity and oculomotor efficiency, which may originate from a common inhibitory dysfunction. |
Zhenni Wang; Radha Nila Meghanathan; Stefan Pollmann; Lihui Wang Common structure of saccades and microsaccades in visual perception Journal Article In: Journal of Vision, vol. 24, no. 4, pp. 1–13, 2024. @article{Wang2024k, We obtain large amounts of external information through our eyes, a process often considered analogous to picture mapping onto a camera lens. However, our eyes are never as still as a camera lens, with saccades occurring between fixations and microsaccades occurring within a fixation. Although saccades are agreed to be functional for information sampling in visual perception, it remains unknown if microsaccades have a similar function when eye movement is restricted. Here, we demonstrated that saccades and microsaccades share common spatiotemporal structures in viewing visual objects. Twenty-seven adults viewed faces and houses in free-viewing and fixation-controlled conditions. Both saccades and microsaccades showed distinctive spatiotemporal patterns between face and house viewing that could be discriminated by pattern classifications. The classifications based on saccades and microsaccades could also be mutually generalized. Importantly, individuals who showed more distinctive saccadic patterns between faces and houses also showed more distinctive microsaccadic patterns. Moreover, saccades and microsaccades showed a higher structure similarity for face viewing than house viewing and a common orienting preference for the eye region over the mouth region. These findings suggested a common oculomotor program that is used to optimize information sampling during visual object perception. |
Yao Wang; Yue Jiang; Zhiming Hu; Constantin Ruhdorfer; Mihai Bâce; Andreas Bulling VisRecall++: Analysing and predicting visualisation recallability from gaze behaviour Journal Article In: Proceedings of the ACM on Human-Computer Interaction, vol. 8, pp. 1–18, 2024. @article{Wang2024j, Question answering has recently been proposed as a promising means to assess the recallability of information visualisations. However, prior works are yet to study the link between visually encoding a visualisation in memory and recall performance. To fill this gap, we propose VisRecall++ – a novel 40-participant recallability dataset that contains gaze data on 200 visualisations and 1,000 questions, including identifying the title and retrieving values. We measured recallability by asking participants questions after they observed the visualisation for 10 seconds. Our analyses reveal several insights, such as saccade amplitude, number of fixations, and fixation duration significantly differing between high and low recallability groups. Finally, we propose GazeRecallNet – a novel computational method to predict recallability from gaze behaviour that outperforms the state-of-the-art model RecallNet and three other baselines on this task. Taken together, our results shed light on assessing recallability from gaze behaviour and inform future work on recallability-based visualisation optimisation. |
Yang Wang; Jon D. Elhai; Christian Montag; Lei Zhang; Haibo Yang Attentional bias to social media stimuli is moderated by fear of missing out among problematic social media users Journal Article In: Journal of Behavioral Addictions, vol. 3, pp. 807–822, 2024. @article{Wang2024i, Background and aims: Previous evidence has indicated that problematic social media use (PSMU) is characterized by an attentional bias to social media icons (such as Facebook icons), but not to social webpages (such as Facebook webpages). These findings suggest that there may be other factors influencing attentional bias, such as fear of missing out (FoMO). However, it remains unclear how FoMO moderates attentional bias in PSMU. This study aims to investigate whether individuals with PSMU show attentional bias for stimuli associated with social media, and how FoMO moderates attentional bias among individuals with PSMU, through experimental methods. Methods: Based on the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, this study explored mechanisms of attentional bias to social media icons (such as WeChat) related to PSMU and further examined the role of FoMO in this relationship. Specifically, attentional bias patterns to social media icons of 62 participants (31 PSMU and 31 control group) were explored during a dot-probe paradigm combined with eye-tracking in Experiment 1, and attentional bias patterns to social media icons of another 61 individuals with PSMU with different FoMO levels were explored during a dot-probe paradigm combined with eye-tracking in Experiment 2. Results: Results revealed that individuals with PSMU had an attentional bias toward social media icons, demonstrated by attentional maintenance, and such bias was negatively moderated by FoMO, demonstrated by attentional vigilance and maintenance in PSMU/high FoMO. Conclusion: These results suggest that attentional bias is a common mechanism associated with PSMU, and FoMO is a key factor in the development of PSMU. |
Xinrui Wang; Hui Jing Lu; Hanran Li; Lei Chang Childhood environmental unpredictability and experimentally primed uncertainty in relation to intuitive versus deliberate visual search Journal Article In: Current Psychology, vol. 43, no. 5, pp. 4737–4750, 2024. @article{Wang2024o, Visual search is an integral part of animal life. Two search strategies, intuitive vs. deliberate search, are adopted by almost all animals including humans to adapt to different extent of environmental uncertainty. In two eye-tracking experiments involving simple visual search (Study 1) and complex information search (Study 2), we used the evolutionary life history (LH) approach to investigate the interaction between childhood environmental unpredictability and primed concurrent uncertainty in enabling these two search strategies. The results indicate that when individuals with greater childhood unpredictability were exposed to uncertainty cues, they exhibited intuitive rather than deliberate visual search (i.e., fewer fixations, reduced dwell time, a larger saccade size, and fewer repetitive inspections relative to individuals with lower childhood unpredictability). We conclude that childhood environment is crucial in calibrating LH including visual and cognitive strategies to adaptively respond to current environmental conditions. |
Sinuo Wang; Yang He; Jie Hu; Jianan Xia; Ke Fang; Junna Yu; Yingying Wang Eye movement intervention facilitates concurrent perception and memory processing Journal Article In: Cerebral Cortex, vol. 34, no. 5, pp. 1–13, 2024. @article{Wang2024h, A widely used psychotherapeutic treatment for post-traumatic stress disorder (PTSD) involves performing bilateral eye movement (EM) during trauma memory retrieval. However, how this treatment - described as eye movement desensitization and reprocessing (EMDR) - alleviates trauma-related symptoms is unclear. While conventional theories suggest that bilateral EM interferes with concurrently retrieved trauma memories by taxing the limited working memory resources, here, we propose that bilateral EM actually facilitates information processing. In two EEG experiments, we replicated the bilateral EM procedure of EMDR, having participants engaging in continuous bilateral EM or receiving bilateral sensory stimulation (BS) as a control while retrieving short- or long-term memory. During EM or BS, we presented bystander images or memory cues to probe neural representations of perceptual and memory information. Multivariate pattern analysis of the EEG signals revealed that bilateral EM enhanced neural representations of simultaneously processed perceptual and memory information. This enhancement was accompanied by heightened visual responses and increased neural excitability in the occipital region. Furthermore, bilateral EM increased information transmission from the occipital to the frontoparietal region, indicating facilitated information transition from low-level perceptual representation to high-level memory representation. These findings argue for theories that emphasize information facilitation rather than disruption in the EMDR treatment. |
Shengyuan Wang; Yanhua Lin; Xiaowei Ding Unmasking social attention: The key distinction between social and non-social attention emerges in disengagement, not engagement Journal Article In: Cognition, vol. 249, pp. 1–13, 2024. @article{Wang2024g, The debate surrounding whether social and non-social attention share the same mechanism has been contentious. While prior studies predominantly focused on engagement, we examined the potential disparity between social and non-social attention from both perspectives of engagement and disengagement, respectively. We developed a two-stage attention-shifting paradigm to capture both attention engagement and disengagement. Combining results from five eye-tracking experiments, we supported that the disengagement of social attention markedly outpaces that of non-social attention, while no significant discrepancy emerges in engagement. We uncovered that the faster disengagement of social attention came from its social nature by eliminating alternative explanations including broader fixation distribution width, reduced directional salience in the peripheral visual field, decreased cue-object categorical consistency, reduced perceived validity, and faster processing time. Our study supported that the distinction between social and non-social attention is rooted in attention disengagement, not engagement. |
Pengchao Wang; Wei Mu; Gege Zhan; Aiping Wang; Zuoting Song; Tao Fang; Xueze Zhang; Junkongshuai Wang; Lan Niu; Jianxiong Bin; Lihua Zhang; Jie Jia; Xiaoyang Kang Preference detection of the humanoid robot face based on EEG and eye movement Journal Article In: Neural Computing and Applications, vol. 36, no. 19, pp. 11603–11621, 2024. @article{Wang2024f, The face of a humanoid robot can affect the user experience, and the detection of face preference is particularly important. Preference detection belongs to a branch of emotion recognition that has received much attention from researchers. Most previous preference detection studies have been conducted based on a single modality. In this paper, we detect face preferences of humanoid robots based on electroencephalogram (EEG) signals and eye movement signals for single modality, canonical correlation analysis fusion modality, and bimodal deep autoencoder (BDAE) fusion modality, respectively. We validated the theory of frontal asymmetry by analyzing the preference patterns of EEG and found that participants had higher alpha wave energy for preferred faces. In addition, hidden preferences extracted from EEG signals were better classified than preferences from participants' subjective feedback, and the classification performance of eye movement data was also improved. Finally, experimental results showed that BDAE multimodal fusion using frontal alpha and beta power spectral densities and eye movement information as features performed best, with the highest average accuracy of 83.13% for the SVM and 71.09% for the KNN. |
Mengsi Wang; Donna E. Gill; Jeannie Judge; Chuanli Zang; Xuejun Bai; Simon P. Liversedge Column setting and text justification influence return-sweep eye movement behavior during Chinese multi-line reading Journal Article In: Cognitive Research: Principles and Implications, vol. 9, no. 1, pp. 1–18, 2024. @article{Wang2024e, People regularly read multi-line texts in different formats, and publishers internationally must decide how to present text to make reading most effective and efficient. Relatively few studies have examined multi-line reading, and fewer still Chinese multi-line reading. Here, we examined whether texts presented in single or double columns, and either left-justified or fully-justified, affect Chinese reading. Text format had minimal influence on overall reading time; however, it significantly impacted return-sweeps (large saccades moving the eyes from the end of one line of text to the beginning of the next). Return-sweeps were launched and landed further away from margins and involved more corrective saccades in single- than double-column format. For left- compared to fully-justified format, return-sweeps were launched and landed closer to margins, and more corrective saccades occurred. Our results showed more efficient return-sweep behavior for fully- than left-justified text. Moreover, there were clear trade-off effects such that formats requiring increased numbers of shorter return-sweeps produced more accurate targeting and reduced numbers of corrective fixations, whereas formats requiring reduced numbers of longer return-sweeps caused less accurate targeting and an increased rate of corrective fixations. Overall, our results demonstrate that text formats substantially affect return-sweep eye movement behavior during Chinese reading without affecting efficiency and effectiveness, that is, the overall time it takes to read and understand the text. |
Lei Wang; Xufeng Zhou; Jie Yang; Fu Zeng; Shuzhen Zuo; Makoto Kusunoki; Huimin Wang; Yong-di Zhou; Aihua Chen; Sze Chai Kwok Mixed coding of content-temporal detail by dorsomedial posterior parietal neurons Journal Article In: Journal of Neuroscience, vol. 44, no. 3, pp. 1–16, 2024. @article{Wang2024, The dorsomedial posterior parietal cortex (dmPPC) is part of a higher-cognition network implicated in elaborate processes underpinning memory formation, recollection, episode reconstruction, and temporal information processing. Neural coding for complex episodic processing is however under-documented. Here, we recorded extracellular neural activities from three male rhesus macaques (Macaca mulatta) and revealed a set of neural codes of "neuroethogram" in the primate parietal cortex. Analyzing neural responses in macaque dmPPC to naturalistic videos, we discovered several groups of neurons that are sensitive to different categories of ethogram items, low-level sensory features, and saccadic eye movement. We also discovered that the processing of category and feature information by these neurons is sustained by the accumulation of temporal information over a long timescale of up to 30 s, corroborating its reported long temporal receptive windows. We performed an additional behavioral experiment with two additional male rhesus macaques and found that saccade-related activities could not account for the mixed neuronal responses elicited by the video stimuli. We further observed that monkeys' scan paths and gaze consistency are modulated by video content. Taken altogether, these neural findings explain how dmPPC weaves fabrics of ongoing experiences together in real time. The high dimensionality of neural representations should motivate us to shift the focus of attention from pure selectivity neurons to mixed selectivity neurons, especially in increasingly complex naturalistic task designs. |
Kangning Wang; Wei Wei; Weibo Yi; Shuang Qiu; Huiguang He; Minpeng Xu; Dong Ming Contrastive fine-grained domain adaptation network for EEG-based vigilance estimation Journal Article In: Neural Networks, vol. 179, pp. 1–18, 2024. @article{Wang2024d, Vigilance state is crucial for the effective performance of users in brain-computer interface (BCI) systems. Most vigilance estimation methods rely on a large amount of labeled data to train a satisfactory model for the specific subject, which limits the practical application of the methods. This study aimed to build a reliable vigilance estimation method using a small amount of unlabeled calibration data. We conducted a vigilance experiment in the designed BCI-based cursor-control task. Electroencephalogram (EEG) signals of eighteen participants were recorded in two sessions on two different days. We then proposed a contrastive fine-grained domain adaptation network (CFGDAN) for vigilance estimation. Here, an adaptive graph convolution network (GCN) was built to project the EEG data of different domains into a common space. The fine-grained feature alignment mechanism was designed to weight and align the feature distributions across domains at the EEG channel level, and the contrastive information preservation module was developed to preserve the useful target-specific information during the feature alignment. The experimental results show that the proposed CFGDAN outperforms the compared methods on our BCI vigilance dataset and the SEED-VIG dataset. Moreover, the visualization results demonstrate the efficacy of the designed feature alignment mechanisms. These results indicate the effectiveness of our method for vigilance estimation. Our study is helpful for reducing calibration efforts and promoting the practical application potential of vigilance estimation methods. |
Jiahui Wang Does working memory capacity influence learning from video and attentional processing of the instructor's visuals? Journal Article In: Behaviour & Information Technology, vol. 43, no. 1, pp. 95–109, 2024. @article{Wang2024c, Existing evidence suggested learners with differences in attention and cognition might respond to the same media in differential ways. The current study focused on one format of video design, instructor visibility, and explored the moderating effects of working memory capacity on learning from such video design, as well as whether learners with high and low working memory capacity attended to the instructor's visuals differently. Participants watched a video either with or without the instructor's visuals on the screen, while their visual attention was recorded simultaneously. After the video, participants responded to a learning test that measured retention and transfer. Although the results did not show working memory capacity moderated the instructor visibility effects on learning or influenced learners' visual attention to the instructor's visuals, the findings did indicate working memory capacity was a positive predictor of retention performance regardless of the video design. Discussions and implications of the findings were provided. |
Jiahui Wang Mind wandering in videos that integrate instructor's visuals: An eye tracking study Journal Article In: Innovations in Education and Teaching International, vol. 61, no. 5, pp. 972–987, 2024. @article{Wang2024m, With an increasing number of videos integrating instructor's visuals on screen, we know little about the impacts of this design on mind wandering. The study aims to investigate a) how instructor visibility impacts mind wandering; b) the relationship between mind wandering and retention performance; c) how visual behaviour during video-watching influences mind wandering. Each participant watched a video with or without instructor visibility, while their visual behaviour was recorded by an eye tracker. Retention performance was measured at the completion of the video. Mind wandering was inferred via global self-report measure and objective eye tracking measure. Both measures of mind wandering indicated the instructor visible video resulted in less mind wandering. Findings suggested mind wandering impaired retention performance. Additionally, visual attention to the instructor was associated with less mind wandering. |
Danhui Wang; Dingyi Niu; Tianzhi Li; Xiaolei Gao The effect of visual word segmentation cues in Tibetan reading Journal Article In: Brain Sciences, vol. 14, no. 10, pp. 1–20, 2024. @article{Wang2024b, Background/Objectives: In languages with within-word segmentation cues, the removal or replacement of these cues in a text hinders reading and lexical recognition, and adversely affects saccade target selection during reading. However, the outcome of artificially introducing visual word segmentation cues into a language that lacks them is unknown. Tibetan exemplifies a language that does not provide visual cues for word segmentation, relying solely on visual cues for morpheme segmentation. Moreover, previous studies have not examined word segmentation in the Tibetan language. Therefore, this study investigated the effects of artificially incorporated visual word segmentation cues and basic units of information processing in Tibetan reading. Methods: We used eye-tracking technology and conducted two experiments with Tibetan sentences that artificially incorporated interword spaces and color alternation markings as visual segmentation cues. Conclusions: The results indicated that interword spaces facilitate reading and lexical recognition and aid in saccade target selection during reading. Color alternation markings facilitate reading and vocabulary recognition but do not affect saccade selection. Words are more likely to be the basic units of information processing and exhibit greater psychological reality than morphemes. These findings shed light on the nature and rules of Tibetan reading and provide fundamental data to improve eye movement control models for reading alphabetic writing systems. Furthermore, our results may offer practical guidance and a scientific basis for improving the efficiency of reading, information processing, and word segmentation in Tibetan reading. |
Andi Wang; Ana Pellicer-Sánchez Exploring L2 learners' processing of unknown words during subtitled viewing through self-reports Journal Article In: International Review of Applied Linguistics in Language Teaching, no. 2, pp. 1–30, 2024. @article{Wang2024a, Studies have shown the benefits of subtitled viewing for incidental vocabulary learning, but the effects of different subtitling types varied across studies. The effectiveness of different types of subtitled viewing could be related to how unknown vocabulary is processed during viewing. However, no studies have investigated L2 learners' processing of unknown words in viewing beyond exploring learners' attention allocation. The present research followed a qualitative approach to explore L2 learners' processing of unknown words during subtitled viewing under three conditions (i.e., captions, L1 subtitles, and bilingual subtitles) by tapping into learners' reported awareness of the unknown words and the vocabulary processing strategies used to engage with unknown words. According to stimulated recall data (elicited by eye-tracking data) from 45 intermediate-to-advanced-level Chinese learners of English, captions led to increased awareness of the unknown words. Moreover, the types of strategies learners used to cope with unknown vocabulary were determined by subtitling type. |
Kerri Walter; Michelle Freeman; Peter Bex Quantifying task-related gaze Journal Article In: Attention, Perception, & Psychophysics, vol. 86, no. 4, pp. 1318–1329, 2024. @article{Walter2024, Competing theories attempt to explain what guides eye movements when exploring natural scenes: bottom-up image salience and top-down semantic salience. In one study, we apply language-based analyses to quantify the well-known observation that task influences gaze in natural scenes. Subjects viewed ten scenes as if they were performing one of two tasks. We found that the semantic similarity between the task and the labels of objects in the scenes captured the task-dependence of gaze (t(39) = 13.083; p < 0.001). In another study, we examined whether image salience or semantic salience better predicts gaze during a search task, and if viewing strategies are affected by searching for targets of high or low semantic relevance to the scene. Subjects searched 100 scenes for a high- or low-relevance object. We found that image salience becomes a worse predictor of gaze across successive fixations, while semantic salience remains a consistent predictor (X2(1 |
Daniel Walper; Alexandra Bendixen; Sabine Grimm; Anna Schubö; Wolfgang Einhäuser Attention deployment in natural scenes: Higher-order scene statistics rather than semantics modulate the N2pc component Journal Article In: Journal of vision, vol. 24, no. 6, pp. 1–28, 2024. @article{Walper2024, Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. 
Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this results from higher-order statistics rather than from semantics or layout. |
Sonja Walcher; Živa Korda; Christof Körner; Mathias Benedek How workload and availability of spatial reference shape eye movement coupling in visuospatial working memory Journal Article In: Cognition, vol. 249, pp. 1–16, 2024. @article{Walcher2024, Eyes are active in memory recall and visual imagination, yet our grasp of the underlying qualities and factors of these internally coupled eye movements is limited. To explore this, we studied 50 participants, examining how workload, spatial reference availability, and imagined movement direction influence internal coupling of eye movements. We designed a visuospatial working memory task in which participants mentally moved a black patch along a path within a matrix and each trial involved one step along this path (presented via speakers: up, down, left, or right). We varied workload by adjusting matrix size (3 × 3 vs. 5 × 5), manipulated availability of a spatial frame of reference by presenting either a blank screen (requiring participants to rely solely on their mental representation of the matrix) or spatial reference in the form of an empty matrix, and contrasted active task performance to two control conditions involving only active or passive listening. Our findings show that eye movements consistently matched the imagined movement of the patch in the matrix, not driven solely by auditory or semantic cues. While workload influenced pupil diameter, perceived demand, and performance, it had no observable impact on internal coupling. The availability of spatial reference enhanced coupling of eye movements, leading to more frequent and more precise saccades that were resilient against noise and bias. The absence of workload effects on coupled saccades in our study, in combination with the relatively high degree of coupling observed even in the invisible matrix condition, indicates that eye movements align with shifts in attention across both visually and internally represented information. 
This suggests that coupled eye movements are not merely strategic efforts to reduce workload, but rather a natural response to where attention is directed. |
Klaus Heusinger; Frederike Weeber; Jet Hoek; Andreas Brocher Informativity, information status and the accessibility of indefinite noun phrases Journal Article In: Glossa: a journal of general linguistics, vol. 9, no. 1, pp. 1–20, 2024. @article{Heusinger2024, In discourse processing, speakers collaborate toward a shared mental model by introducing discourse referents and picking them up with the adequate linguistic forms. Discourse referents compete with each other with respect to their prominence and their accessibility for pronouns. This study focuses on transitive sentences with proper names as subjects and indefinite noun phrases as second arguments, typically direct objects. An ambiguous pronoun in the subsequent sentence may access either referent of the first sentence. Various factors have been shown to influence pronoun resolution, including informativity (how informative is the phrase in which the referent is introduced? E.g., the waiter vs. the waiter at the entrance) and information status (is the referent given or new in the context?). While both factors have been independently shown to increase referent accessibility, our visual-world eye-tracking experiment shows an original and quite unexpected effect: informativity and information status interact when it comes to the accessibility of indefinite noun phrases: a higher degree of informativity increases accessibility when a referent is brand-new, but surprisingly decreases accessibility when a referent is inferred. We discuss a potential explanation for this surprising pattern in terms of a mismatch between the denotational type of the indefinite and the type required by the modification. We conclude that indefinites strongly interact with additional semantic, contextual and communicative parameters in establishing their referents. |
Luc Virlet; Laurent Sparrow; Jose Barela; Patrick Berquin; Cedrick Bonnet Proprioceptive intervention improves reading performance in developmental dyslexia: An eye-tracking study Journal Article In: Research in Developmental Disabilities, vol. 153, pp. 1–10, 2024. @article{Virlet2024, Developmental dyslexia is characterized by difficulties in learning to read, affecting cognition and causing failure at school. Interventions for children with developmental dyslexia have focused on improving linguistic capabilities (phonics, orthographic and morphological instructions), but developmental dyslexia is accompanied by a wide variety of sensorimotor impairments. The goal of this study was to examine the effects of a proprioceptive intervention on reading performance and eye movement in children with developmental dyslexia. Nineteen children diagnosed with developmental dyslexia were randomly assigned to a regular Speech Therapy (ST) or to a Proprioceptive and Speech Intervention (PSI), in which they received both the usual speech therapy and a proprioceptive intervention aimed at correcting their sensorimotor impairments (prism glasses, oral neurostimulation, insoles and breathing instructions). Silent reading performance and eye movements were measured pre- and post-intervention (after nine months). In the PSI group, reading performance improved and eye movements were smoother and faster, reaching values similar to those of children with typical reading performance. The recognition of written words also improved, indicating better lexical access. These results show that PSI might constitute a valuable tool for reading improvement in children with developmental dyslexia. |
Ana Vilotijević; Sebastiaan Mathôt Non-image-forming vision as measured through ipRGC-mediated pupil constriction is not modulated by covert visual attention Journal Article In: Cerebral Cortex, vol. 34, no. 3, pp. 1–9, 2024. @article{Vilotijevic2024, In brightness, the pupil constricts, while in darkness, the pupil dilates; this is known as the pupillary light response (PLR). The PLR is driven by all photoreceptors: rods and cones, which contribute to image-forming vision, and intrinsically photosensitive retinal ganglion cells (ipRGCs), which mainly contribute to non-image-forming vision. Rods and cones cause immediate pupil constriction upon light exposure, whereas ipRGCs cause sustained constriction throughout light exposure. Recent studies have shown that covert attention modulated the initial PLR; however, it remains unclear whether the same holds for the sustained PLR. We tested this by leveraging ipRGCs' responsiveness to blue light, causing the most prominent sustained constriction. While replicating previous studies by showing that pupils constricted more when either directly looking at, or covertly attending to, bright as compared to dim stimuli (with the same color), we also found that the pupil constricted more when directly looking at blue as compared to red stimuli (with the same luminosity). Crucially, however, in two high-powered studies (n = 60), we did not find any pupil-size difference when covertly attending to blue as compared to red stimuli. This suggests that ipRGC-mediated pupil constriction, and possibly non-image-forming vision more generally, is not modulated by covert attention. |
Pamela Villavicencio; Cristina Malla; Joan López-Moliner Prediction of time to contact under perceptual and contextual uncertainties Journal Article In: Journal of Vision, vol. 24, no. 6, pp. 1–18, 2024. @article{Villavicencio2024, Accurately estimating time to contact (TTC) is crucial for successful interactions with moving objects, yet it is challenging under conditions of sensory and contextual uncertainty, such as occlusion. In this study, participants engaged in a prediction motion task, monitoring a target that moved rightward and an occluder. The participants' task was to press a key when they predicted the target would be aligned with the occluder's right edge. We manipulated sensory uncertainty by varying the visible and occluded periods of the target, thereby modulating the time available to integrate sensory information and the duration over which motion must be extrapolated. Additionally, contextual uncertainty was manipulated by having a predictable and unpredictable condition, meaning the occluder either reliably indicated where the moving target would disappear or provided no such indication. Results showed differences in accuracy between the predictable and unpredictable occluder conditions, with different eye movement patterns in each case. Importantly, the ratio of the time the target was visible, which allows for the integration of sensory information, to the occlusion time, which determines perceptual uncertainty, was a key factor in determining performance. This ratio is central to our proposed model, which provides a robust framework for understanding and predicting human performance in dynamic environments with varying degrees of uncertainty. |
Simone Viganò; Rena Bayramova; Christian F. Doeller; Roberto Bottini Spontaneous eye movements reflect the representational geometries of conceptual spaces Journal Article In: Proceedings of the National Academy of Sciences, vol. 121, no. 17, pp. 1–10, 2024. @article{Vigano2024, Functional neuroimaging studies indicate that the human brain can represent concepts and their relational structure in memory using coding schemes typical of spatial navigation. However, whether we can read out the internal representational geometries of conceptual spaces solely from human behavior remains unclear. Here, we report that the relational structure between concepts in memory might be reflected in spontaneous eye movements during verbal fluency tasks: When we asked participants to randomly generate numbers, their eye movements correlated with distances along the left-to-right one-dimensional geometry of the number space (mental number line), while they scaled with distance along the ring-like two-dimensional geometry of the color space (color wheel) when they randomly generated color names. Moreover, when participants randomly produced animal names, eye movements correlated with low-dimensional similarity in word frequencies. These results suggest that the representational geometries used to internally organize conceptual spaces might be read out from gaze behavior. |
Pedro G. Vieira; Matthew R. Krause; Christopher C. Pack Temporal interference stimulation disrupts spike timing in the primate brain Journal Article In: Nature Communications, vol. 15, no. 1, pp. 11–17, 2024. @article{Vieira2024a, Electrical stimulation can regulate brain activity, producing clear clinical benefits, but focal and effective neuromodulation often requires surgically implanted electrodes. Recent studies argue that temporal interference (TI) stimulation may provide similar outcomes non-invasively. During TI, scalp electrodes generate multiple electrical fields in the brain, modulating neural activity only at their intersection. Despite considerable enthusiasm for this approach, little empirical evidence demonstrates its effectiveness, especially under conditions suitable for human use. Here, using single-neuron recordings in non-human primates, we establish that TI reliably alters the timing, but not the rate, of spiking activity. However, we show that TI requires strategies—high carrier frequencies, multiple electrodes, and amplitude-modulated waveforms—that also limit its effectiveness. Combined, these factors make TI 80% weaker than other forms of non-invasive brain stimulation. Although unlikely to cause widespread neuronal entrainment, TI may be ideal for disrupting pathological oscillatory activity, a hallmark of many neurological disorders. |
João Vieira; Elisângela Teixeira; Erica Rodrigues; Hayward J. Godwin; Denis Drieghe When function words carry content Journal Article In: Quarterly Journal of Experimental Psychology, pp. 1–14, 2024. @article{Vieira2024, Studies on eye movements during reading have primarily focussed on the processing of content words (CWs), such as verbs and nouns. Those few studies that have analysed eye movements on function words (FWs), such as articles and prepositions, have reported that FWs are typically skipped more often and, when fixated, receive fewer and shorter fixations than CWs. However, those studies were often conducted in languages where FWs contain comparatively little information (e.g., the in English). In Brazilian Portuguese (BP), FWs can carry gender and number marking. In the present study, we analysed data from the RASTROS corpus of natural reading in BP and examined the effects of word length, predictability, frequency and word class on eye movements. Very limited differences between FWs and CWs were observed, mostly restricted to the skipping rates of short words, such that FWs were skipped more often than CWs. For fixation times, differences were either nonexistent or restricted to atypical FWs, such as low-frequency FWs, warranting further research. As such, our results are more compatible with studies showing limited or no differences in processing speed between FWs and CWs when influences of word length, frequency and predictability are taken into account. |
Inês S. Veríssimo; Zachary Nudelman; Christian N. L. Olivers Does crowding predict conjunction search? An individual differences approach Journal Article In: Vision Research, vol. 216, pp. 1–13, 2024. @article{Verissimo2024, Searching for objects in the visual environment is an integral part of human behavior. Most of the information used during such visual search comes from the periphery of our vision, and understanding the basic mechanisms of search therefore requires taking into account the inherent limitations of peripheral vision. Our previous work using an individual differences approach has shown that one of the major factors limiting peripheral vision (crowding) is predictive of single feature search, as reflected in response time and eye movement measures. Here we extended this work, by testing the relationship between crowding and visual search in a conjunction-search paradigm. Given that conjunction search involves more fine-grained discrimination and more serial behavior, we predicted it would be strongly affected by crowding. We tested sixty participants with regard to their sensitivity to both orientation and color-based crowding (as measured by critical spacing) and their efficiency in searching for a color/orientation conjunction (as indicated by manual response times and eye movements). While the correlations between the different crowding tasks were high, the correlations between the different crowding measures and search performance were relatively modest, and no higher than those previously observed for single-feature search. Instead, observers showed very strong color selectivity during search. The results suggest that conjunction search behavior relies more on top-down guidance (here by color) and is therefore relatively less determined by individual differences in sensory limitations as caused by crowding. |
Anca Velisar; Natela M. Shanidze Noise estimation for head-mounted 3D binocular eye tracking using Pupil Core eye-tracking goggles Journal Article In: Behavior Research Methods, vol. 56, no. 1, pp. 53–79, 2024. @article{Velisar2024, Head-mounted, video-based eye tracking is becoming increasingly common and has promise in a range of applications. Here, we provide a practical and systematic assessment of the sources of measurement uncertainty for one such device – the Pupil Core – in three eye-tracking domains: (1) the 2D scene camera image; (2) the physical rotation of the eye relative to the scene camera 3D space; and (3) the external projection of the estimated gaze point location onto the target plane or in relation to world coordinates. We also assess eye camera motion during active tasks relative to the eye and the scene camera, an important consideration as the rigid arrangement of eye and scene camera is essential for proper alignment of the detected gaze. We find that eye camera motion, improper gaze point depth estimation, and erroneous eye models can all lead to added noise that must be considered in the experimental design. Further, while calibration accuracy and precision estimates can help assess data quality in the scene camera image, they may not be reflective of errors and variability in gaze point estimation. These findings support the importance of eye model constancy for comparisons across experimental conditions and suggest additional assessments of data reliability may be warranted for experiments that require the gaze point or measure eye movements relative to the external world. |
Jennifer A. Veitch; Naomi J. Miller Effects of temporal light modulation on individuals sensitive to pattern glare Journal Article In: Leukos, vol. 20, no. 3, pp. 310–346, 2024. @article{Veitch2024, Solid-state lighting systems can vary widely in the degree of temporal light modulation (TLM) of their light output. TLM is known to have visual, cognitive, and behavioral effects, but there are few recommendations for limits on acceptable TLM in everyday lighting systems, and there is little information concerning individual differences in sensitivity. This paper is a re-analysis of previously presented data, focusing on two subgroups in a larger sample: those scoring low or high on the Wilkins Pattern Glare Sensitivity (PGS) test, a validated test that identifies people at high risk of visual stress. The results show that the PGS groups differed in their sensitivity to TLM conditions, despite short exposures and a restricted field of view. |
Janne M. Veerbeek; Henrik Rühe; Beatrice Ottiger; Stephan Bohlhalter; Thomas Nyffeler; Dario Cazzoli Impact of neglect on the relationship between upper limb motor function and upper limb performance in the (hyper)acute poststroke phase Journal Article In: Neurorehabilitation and Neural Repair, vol. 39, no. 2, pp. 138–41, 2024. @article{Veerbeek2024, Visuospatial neglect (VSN) is a negative, strong, and independent predictor of poor outcome after stroke, and is associated with poorer upper limb (UL) motor recovery in terms of function or capacity (ie, in standardized, lab-based testing). Although the main aim of stroke rehabilitation is to re-establish optimal functioning in daily life, the impact of VSN on UL performance (ie, in unstructured, everyday environments) is largely unknown. In this proof of principle study, the impact of VSN on the strength of the association between UL motor function (Jamar Hand Dynamometer) and UL performance (Upper Limb Lucerne ICF-based Multidisciplinary Observation Scale) was investigated in 65 (hyper)acute first-ever stroke patients. In a moderator analysis, the interaction term was negative and significant, showing that VSN suppresses the use of UL motor function in daily life (ie, performance). This finding suggests that, when considering UL performance in the (hyper)acute phase after stroke, interventions aimed to reduce deficits in both UL motor function and visuospatial function should already be started in the acute stroke unit setting. |
Monica Vanoncini; Stefanie Hoehl; Birgit Elsner; Sebastian Wallot; Natalie Boll-Avetisyan; Ezgi Kayhan Mother-infant social gaze dynamics relate to infant brain activity and word segmentation Journal Article In: Developmental Cognitive Neuroscience, vol. 65, pp. 1–8, 2024. @article{Vanoncini2024, The 'social brain', consisting of areas sensitive to social information, supposedly gates the mechanisms involved in human language learning. Early preverbal interactions are guided by ostensive signals, such as gaze patterns, which are coordinated across body, brain, and environment. However, little is known about how the infant brain processes social gaze in naturalistic interactions and how this relates to infant language development. During free-play of 9-month-olds with their mothers, we recorded hemodynamic cortical activity of 'social brain' areas (prefrontal cortex, temporo-parietal junctions) via fNIRS, and micro-coded mother's and infant's social gaze. Infants' speech processing was assessed with a word segmentation task. Using joint recurrence quantification analysis, we examined the connection between infants' 'social brain' activity and the temporal dynamics of social gaze at intrapersonal (i.e., infant's coordination, maternal coordination) and interpersonal (i.e., dyadic coupling) levels. Regression modeling revealed that intrapersonal dynamics in maternal social gaze (but not infant's coordination or dyadic coupling) coordinated significantly with infant's cortical activity. Moreover, recurrence quantification analysis revealed that intrapersonal maternal social gaze dynamics (in terms of entropy) were the best predictor of infants' word segmentation. The findings support the importance of social interaction in language development, particularly highlighting maternal social gaze dynamics. |
Ondřej Vaníček; Lucie Krejčová; Martin Hůla; Kateřina Potyszová; Kateřina Klapilová; Klára Bártová Eye-tracking does not reveal early attention processing of sexual copulatory movement in heterosexual men and women Journal Article In: Scientific Reports, vol. 14, no. 1, pp. 1–8, 2024. @article{Vanicek2024, Men and women respond differently when presented with sexual stimuli. Men's reaction is gender-specific, and women's reaction is gender-nonspecific. This might be a result of differential cognitive processing of sexual cues, namely copulatory movement (CM), which is present in almost every dynamic erotic stimulus. A novel eye-tracking procedure was developed to assess the saliency of short film clips containing CM or non-CM sexual activities. Results from 29 gynephilic men and 31 androphilic women showed only small and non-significant effects in attention bias and no effects in attentional capture. Our results suggest that CM is not processed differently in men and women and, therefore, is not the reason behind gender-nonspecific sexual responses in women. |
Nele Vanbilsen; Valentina Pergher; Marc M. Van Hulle Effects of task-specific strategy on attentional control game training: Preliminary data from healthy adults Journal Article In: Current Psychology, vol. 43, no. 2, pp. 1864–1878, 2024. @article{Vanbilsen2024, Although recent studies showed the beneficial effect of video game training, it is still unclear whether the strategy used plays an important role in enhancing performance in the trained cognitive ability and in promoting transfer to other cognitive domains. We investigated behaviourally the effect of strategy on the outcomes of visual attentional control game training, and, both behaviourally and in terms of EEG-based event-related potentials (ERPs), its effect on other cognitive domains. We recruited 21 healthy adults, divided into three groups: a strategy-training group (STG) instructed to use a specific strategy, a non-strategy training group (NSTG) that self-developed their strategy, and a passive control group (PCG) that underwent only pre- and post-tests. Our results showed that the use of a specific strategy made the STG participants respond faster on the trained contrast level task, but not on the contour exercises task. Furthermore, both STG and NSTG showed transfer effects from pre- to post-test; however, no significant differences were found between the groups, for either behaviour or ERP responses. In conclusion, we believe these preliminary results provide evidence for the importance of strategy choice in cognitive training protocols. |
Elle Heusden; Christian N. L. Olivers; Mieke Donk The effects of eccentricity on attentional capture Journal Article In: Attention, Perception, & Psychophysics, vol. 86, no. 2, pp. 422–438, 2024. @article{Heusden2024, Visual attention may be captured by an irrelevant yet salient distractor, thereby slowing search for a relevant target. This phenomenon has been widely studied using the additional singleton paradigm in which search items are typically all presented at one and the same eccentricity. Yet, differences in eccentricity may well bias the competition between target and distractor. Here we investigate how attentional capture is affected by the relative eccentricities of a target and a distractor. Participants searched for a shape-defined target in a grid of homogeneous nontargets of the same color. On 75% of trials, one of the nontarget items was replaced by a salient color-defined distractor. Crucially, target and distractor eccentricities were independently manipulated across three levels of eccentricity (i.e., near, middle, and far). Replicating previous work, we show that the presence of a distractor slows down search. Interestingly, capture as measured by manual reaction times was not affected by target and distractor eccentricity, whereas capture as measured by the eyes was: items close to fixation were more likely to be selected than items presented further away. Furthermore, the effects of target and distractor eccentricity were largely additive, suggesting that the competition between saliency- and relevance-driven selection was modulated by an independent eccentricity-based spatial component. Implications of the dissociation between manual and oculomotor responses are also discussed. |
Anouk Heide; Maaike Wessel; Danae Papadopetraki; Dirk E. M. Geurts; Teije H. Prooije; Frank Gommans; Bastiaan R. Bloem; Michiel F. Dirkx; Rick C. Helmich Propranolol reduces Parkinson's tremor and inhibits tremor-related activity in the motor cortex: A placebo-controlled crossover trial Journal Article In: Annals of Neurology, pp. 1–12, 2024. @article{Heide2024, Objective: Parkinson's disease (PD) resting tremor is thought to be initiated in the basal ganglia and amplified in the cerebello-thalamo-cortical circuit. Because stress worsens tremor, the noradrenergic system may play a role in amplifying tremor. We tested if and how propranolol, a non-selective beta-adrenergic receptor antagonist, reduces PD tremor and whether or not this effect is specific to stressful conditions. Methods: In a cross-over, double-blind intervention study, participants with PD resting tremor received propranolol (40 mg, single dose) or placebo (counter-balanced) on 2 different days. During both days, we assessed tremor severity (with accelerometry) and tremor-related brain activity (with functional magnetic resonance imaging), as well as heart rate and pupil diameter, while subjects performed a stressful cognitive load task that has been linked to the noradrenergic system. We tested for effects of drug (propranolol vs placebo) and stress (cognitive load vs rest) on tremor power and tremor-related brain activity. Results: We included 27 PD patients with prominent resting tremor. Tremor power significantly increased during cognitive load versus rest (F[1,19] = 13.8; p = 0.001; ηp² = 0.42) and decreased by propranolol versus placebo (F[1,19] = 6.4; p = 0.02; ηp² = 0.25), but there was no interaction. We observed task-related brain activity in a stress-sensitive cognitive control network and tremor power-related activity in the cerebello-thalamo-cortical circuit. 
Propranolol significantly reduced tremor-related activity in the motor cortex compared to placebo (F[1,21] = 5.3; p = 0.03; ηp² = 0.20), irrespective of cognitive load. Interpretation: Our findings indicate that propranolol has a general, context-independent, tremor-reducing effect that may be implemented at the level of the primary motor cortex. |
Ine Van der Cruyssen; Gershon Ben-Shakhar; Yoni Pertzov; Nitzan Guy; Quinn Cabooter; Lukas J. Gunschera; Bruno Verschuere The validation of online webcam-based eye-tracking: The replication of the cascade effect, the novelty preference, and the visual world paradigm Journal Article In: Behavior Research Methods, vol. 56, no. 5, pp. 4836–4849, 2024. @article{VanderCruyssen2024, The many benefits of online research and the recent emergence of open-source eye-tracking libraries have sparked an interest in transferring time-consuming and expensive eye-tracking studies from the lab to the web. In the current study, we validate online webcam-based eye-tracking by conceptually replicating three robust eye-tracking studies (the cascade effect, the novelty preference, and the visual world paradigm). |
A. Van Den Kerchove; H. Si-Mohammed; M. M. Van Hulle; F. Cabestaing Correcting for ERP latency jitter improves gaze-independent BCI decoding Journal Article In: Journal of Neural Engineering, vol. 21, no. 4, pp. 1–15, 2024. @article{VanDenKerchove2024, Objective. Patients suffering from severe paralysis or locked-in syndrome can regain communication using a Brain-Computer Interface (BCI). Visual event-related potential (ERP) based BCI paradigms exploit visuospatial attention (VSA) to targets laid out on a screen. However, performance drops if the user does not direct their eye gaze at the intended target, harming the utility of this class of BCIs for patients suffering from eye motor deficits. We aim to create an ERP decoder that is less dependent on eye gaze. Approach. ERP component latency jitter plays a role in covert VSA decoding. We introduce a novel decoder that compensates for these latency effects, termed Woody Classifier-based Latency Estimation (WCBLE). We carried out a BCI experiment recording ERP data in overt and covert VSA, and introduce a novel special case of covert VSA termed split VSA, simulating the experience of patients with severely impaired eye motor control. We evaluate WCBLE on this dataset and the BNCI2014-009 dataset, within and across VSA conditions, to study the dependency on eye gaze and its variation during the experiment. Main results. WCBLE outperforms state-of-the-art methods in gaze-independent decoding in the VSA conditions of interest, without reducing overt VSA performance. Results from across-condition evaluation show that WCBLE is more robust to varying VSA conditions throughout a BCI operation session. Significance. Together, these results point towards a pathway to achieving gaze independence through suited ERP decoding. Our proposed gaze-independent solution enhances decoding performance in those cases where performing overt VSA is not possible. |
Willem S. Boxtel; Michael Linge; Rylee Manning; Lily N. Haven; Jiyeon Lee Online eye tracking for aphasia: A feasibility study comparing web and lab tracking and implications for clinical use Journal Article In: Brain and Behavior, vol. 14, no. 11, pp. 1–19, 2024. @article{Boxtel2024, Background & Aims: Studies using eye-tracking methodology have made important contributions to the study of language disorders such as aphasia. Nevertheless, in clinical groups especially, eye-tracking studies often include small sample sizes, limiting the generalizability of reported findings. Online, webcam-based tracking offers a potential solution to this issue, but web-based tracking has not been compared with in-lab tracking in past studies and has never been attempted in groups with language impairments. Materials & Methods: Patients with post-stroke aphasia (n = 16) and age-matched controls (n = 16) completed identical sentence-picture matching tasks in the lab (using an EyeLink system) and on the web (using WebGazer.js), with the order of sessions counterbalanced. We examined whether web-based eye tracking is as sensitive as in-lab eye tracking in detecting group differences in sentence processing. Results: Patients were less accurate and slower to respond to all sentence types than controls. Proportions of gazes to the target and foil picture were computed in 100 ms increments, which showed that the two modes of tracking were comparably sensitive to overall group differences across different sentence types. In most analyses, web tracking showed fluctuations in gaze proportions to target pictures comparable to those of lab tracking, although web data were delayed by approximately 500–800 ms relative to lab data. Discussion & Conclusions: Web-based eye tracking is feasible to study impaired language processing in aphasia and is sensitive enough to detect most group differences between controls and patients. 
Given that validations of webcam-based tracking are in their infancy and how transformative this method could be to several disciplines, much more testing is warranted. |
Alessandra Valentini; Rachel E. Pye; Carmel Houston-Price; Jessie Ricketts; Julie A. Kirkby Online processing shows advantages of bimodal listening-while-reading for vocabulary learning: An eye-tracking study Journal Article In: Reading Research Quarterly, vol. 59, no. 1, pp. 79–101, 2024. @article{Valentini2024, Children can learn words incidentally from stories. This kind of learning is enhanced when stories are presented both aurally and in written format, compared to just a written presentation. However, we do not know why this bimodal presentation is beneficial. This study explores two possible explanations: whether the bimodal advantage manifests online during story exposure, or later, at word retrieval. We collected eye-movement data from 34 8- to 9-year-old children exposed to two stories, one presented in written format (reading condition), and the second presented aurally and written at the same time (bimodal condition). Each story included six unfamiliar words (non-words) that were repeated three times, as well as definitions and clues to their meaning. Following exposure, the learning of the new words' meanings was assessed. Results showed that, during story presentation, children spent less time fixating the new words in the bimodal condition, compared to the reading condition, indicating that the bimodal advantage occurs online. Learning was greater in the bimodal condition than the reading condition, which may reflect either an online bimodal advantage during story presentation or an advantage at retrieval. The results also suggest that the bimodal condition was more conducive to learning than the reading condition when children looked at the new words for a shorter amount of time. This is in line with an online advantage of the bimodal condition, as it suggests that less effort is required to learn words in this condition. These results support educational strategies that routinely present new vocabulary in two modalities simultaneously. |
Roman Vakhrushev; Arezoo Pooresmaeili Interaction of spatial attention and the associated reward value of audiovisual objects Journal Article In: Cortex, vol. 179, pp. 271–285, 2024. @article{Vakhrushev2024, Reward value and selective attention both enhance the representation of sensory stimuli at the earliest stages of processing. It is still debated whether and how reward-driven and attentional mechanisms interact to influence perception. Here we ask whether the interaction between reward value and selective attention depends on the sensory modality through which the reward information is conveyed. Human participants first learned the reward value of uni-modal visual and auditory stimuli during a conditioning phase. Subsequently, they performed a target detection task on bimodal stimuli containing a previously rewarded stimulus in one, both, or neither of the modalities. Additionally, participants were required to focus their attention on one side and only report targets on the attended side. Our results showed a strong modulation of visual and auditory event-related potentials (ERPs) by spatial attention. We found no main effect of reward value but importantly we found an interaction effect as the strength of attentional modulation of the ERPs was significantly affected by the reward value. When reward effects were examined separately with respect to each modality, auditory value-driven modulation of attention was found to dominate the ERP effects whereas visual reward value on its own led to no effect, likely due to its interference with the target processing. These results inspire a two-stage model where first the salience of a high reward stimulus is enhanced on a local priority map specific to each sensory modality, and at a second stage reward value and top-down attentional mechanisms are integrated across sensory modalities to affect perception. |
Hariklia Vagias; Michelle L. Byrne; Lyn Millist; Owen White; Meaghan Clough; Joanne Fielding Visuo-cognitive phenotypes in early multiple sclerosis: A multisystem model of visual processing Journal Article In: Journal of Clinical Medicine, vol. 13, no. 3, pp. 1–19, 2024. @article{Vagias2024, Background: Cognitive impairment can emerge in the earliest stages of multiple sclerosis (MS), with heterogeneity in cognitive deficits often hindering symptom identification and management. Sensory–motor dysfunction, such as visual processing impairment, is also common in early disease and can impact neuropsychological task performance in MS. However, cognitive phenotype research in MS does not currently consider the relationship between early cognitive changes and visual processing impairment. Objectives: This study explored the relationship between cognition and visual processing in early MS by adopting a three-system model of afferent sensory, central cognitive and efferent ocular motor visual processing to identify distinct visuo-cognitive phenotypes. Methods: Patients with clinically isolated syndrome and relapsing–remitting MS underwent neuro-ophthalmic, ocular motor and neuropsychological evaluation to assess each visual processing system. The factor structure of ocular motor variables was examined using exploratory factor analysis, and phenotypes were identified using latent profile analysis. Results: Analyses revealed three ocular-motor constructs (cognitive control, cognitive processing speed and basic visual processing) and four visuo-cognitive phenotypes (early visual changes, efferent-cognitive, cognitive control and afferent-processing speed). While the efferent-cognitive phenotype was present in significantly older patients than was the early visual changes phenotype, there were no other demographic differences between phenotypes. 
The efferent-cognitive and cognitive control phenotypes had poorer performance on the Symbol Digit Modalities Test compared to that of other phenotypes; however, no other differences in performance were detected. Conclusion: Our findings suggest that distinct visual processing deficits in early MS may differentially impact cognition, which is not captured using standard neuropsychological evaluation. Further research may facilitate improved symptom identification and intervention in early disease. |
Maiko Uesaki; Arnab Biswas; Hiroshi Ashida; Gerrit Maus Blue-yellow combination enhances perceived motion in Rotating Snakes illusion Journal Article In: i-Perception, vol. 15, no. 2, pp. 1–9, 2024. @article{Uesaki2024, The Rotating Snakes illusion is a visual illusion where a stationary image elicits a compelling sense of anomalous motion. There have been recurring albeit anecdotal claims that the perception of illusory motion is more salient when the image consists of patterns with the combination of blue and yellow; however, there is limited empirical evidence that supports those claims. In the present study, we aimed to assess whether the Rotating Snakes illusion is more salient in its blue-yellow variation, compared to red-green and greyscale variations, when the luminance of corresponding elements within the patterns was equated. Using the cancellation method, we found that the velocity required to establish perceptual stationarity was indeed greater for the stimulus composed of patterns with a blue-yellow combination than for the other two variants. Our findings provide, for the first time, empirical evidence that the presence of colour affects the magnitude of the Rotating Snakes illusion. |
Motoaki Uchimura; Hironori Kumano; Shigeru Kitazawa Neural transformation from retinotopic to background-centric coordinates in the macaque precuneus Journal Article In: The Journal of Neuroscience, vol. 44, no. 48, pp. 1–19, 2024. @article{Uchimura2024, Visual information is initially represented in retinotopic coordinates and later in craniotopic coordinates. Psychophysical evidence suggests that visual information is further represented in more general coordinates related to the external world; however, the neural basis of nonegocentric coordinates remains elusive. This study investigates the automatic transformation from egocentric to nonegocentric coordinates in the macaque precuneus (two males, one female), identified by a functional imaging study as a key area for nonegocentric representation. We found that 6.2% of neurons in the precuneus had receptive fields (RFs) anchored to the background rather than to the retina or the head, while 16% had traditional retinotopic RFs. Notably, these two types were not exclusive: many background-centric neurons initially encode a stimulus' position in retinotopic coordinates (up to ∼90 ms from the stimulus onset) but later shift to background coordinates, peaking at ∼150 ms. Regarding retinotopic information, the stimulus dominated the initial period, whereas the background dominated the later period. In the absence of a background, there is a dramatic surge in retinotopic information about the stimulus during the later phase, clearly delineating two distinct periods of retinotopic encoding: one focusing on the figure to be attended and another on the background. These findings suggest that the initial retinotopic information of the stimulus is combined with the background retinotopic information in a subsequent stage, yielding a more stable representation of the stimulus relative to the background through time-division multiplexing. |
Sandra Tyralla; Eckart Zimmermann Serial dependencies in motor targeting as a function of target appearance Journal Article In: Journal of Vision, vol. 24, no. 13, pp. 1–13, 2024. @article{Tyralla2024, In order to bring stimuli of interest into our central field of vision, we perform saccadic eye movements. After every saccade, the error between the predicted and actual landing position is monitored. In the laboratory, artificial post-saccadic errors are created by displacing the target during saccade execution. Previous research found that even a single post-saccadic error induces immediate amplitude changes to minimize that error. The saccadic amplitude adjustment could result from a recalibration of the saccade target representation. We asked if recalibration follows an integration scheme in which the impact magnitude of the previous post-saccadic target location depends on the certainty of the current target. We asked subjects to perform saccades to Gaussian blobs as targets, the visuospatial certainty of which we manipulated by changing its spatial constant. In separate sessions, either the pre-saccadic or post-saccadic target was uncertain. Additionally, we manipulated the contrast to further decrease certainty, changing the spatial constant mid-saccade. We found saccade-by-saccade amplitude reductions only with a currently uncertain target, a previously certain one, and a constant target contrast. We conclude that the features of the pre-saccadic target (i.e., size and contrast) determine the extent to which post-saccadic error shapes upcoming saccade amplitudes. |
Massimo Turatto; Matteo De Tommaso; Leonardo Chelazzi Learning to ignore visual onset distractors hinges on a configuration-dependent coordinates system Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 50, no. 10, pp. 971–988, 2024. @article{Turatto2024, Decrement of attentional capture elicited by visual onset distractors, consistent with habituation, has been extensively characterized over the past several years. However, the type of spatial frame of reference according to which such decrement occurs in the brain remains unknown. Here, five related experiments are reported to shed light on this issue. Observers were asked to discriminate the orientation of a tilted line while ignoring a salient but task-irrelevant visual onset that occurred on some trials. The experiments all involved an initial habituation phase, during which capture elicited by the onset distractor progressively decreased, as in prior studies. Importantly, in all experiments, the location of the target and the distractor remained fixed during this phase. After habituation was established, in a final test phase of the various experiments, the spatial arrangement of the target and the distractor was changed to test for the relative contribution to habituation of retinotopic, spatiotopic, and configuration-dependent visual representations. Experiment 1 indicated that spatiotopic representations contribute little, if at all, to the observed decrement in attentional capture. The results from Experiment 2 were compatible with the notion that such capture reduction occurs in either retinotopic- or configuration-specific representations. However, Experiment 3 ruled out the contribution of retinotopic representations, leaving configuration-specific representation as the sole viable interpretation. This conclusion was confirmed by the results of Experiments 4 and 5. 
In conclusion, visual onset distractors appear to be rejected at a level of the visual hierarchy where visual events are encoded in a configuration-specific or context-dependent manner. |
Zhanhan Tu; Christopher Degg; Michael Bach; Rebecca McLean; Viral Sheth; Mervyn G. Thomas; Shangqing Yang; Irene Gottlob; Frank A. Proudlock ERG responses in albinism, idiopathic infantile nystagmus, and controls Journal Article In: Investigative Ophthalmology & Visual Science, vol. 65, no. 4, pp. 1–8, 2024. @article{Tu2024, Purpose: Our primary aim was to compare adult full-field ERG (ffERG) responses in albinism, idiopathic infantile nystagmus (IIN), and controls. A secondary aim was to investigate the effect of within-subject changes in nystagmus eye movements on ffERG responses. Methods: Dilated Ganzfeld flash ffERG responses were recorded using DTL electrodes under conditions of dark (standard and dim flash) and light adaptation in 68 participants with albinism, 43 with IIN, and 24 controls. For the primary aim, the effect of group and age on ffERG responses was investigated. For the secondary aim, null region characteristics were determined using eye movements recorded prior to ffERG recordings. ffERG responses were recorded near and away from the null regions of 18 participants, also measuring the success rate of recordings. Results: For the primary aim, age-adjusted photopic a- and b-wave amplitudes were consistently smaller in IIN compared with controls (P < 0.0001), with responses in both groups decreasing with age. In contrast, photopic a-wave amplitudes increased with age in albinism (P = 0.0035). For the secondary aim, more intense nystagmus significantly reduced the success rate of measurable responses. Within-subject changes in nystagmus intensity generated small, borderline significant differences in photopic b-wave peak times and a- and b-wave amplitudes under scotopic conditions with standard flash. Conclusions: Age-adjusted photopic ffERG responses are significantly reduced in IIN, adding to the growing body of evidence of retinal abnormalities in IIN. Differences between photopic responses in albinism and controls depend on age. 
Success at obtaining ffERG responses could be improved by recording responses at the null region. |
Marius Tröndle; Nicolas Langer Decomposing neurophysiological underpinnings of age-related decline in visual working memory Journal Article In: Neurobiology of Aging, vol. 139, pp. 30–43, 2024. @article{Troendle2024, Exploring the neural basis of age-related decline in working memory is vital in our aging society. Previous electroencephalographic studies suggested that the contralateral delay activity (CDA) may be insensitive to age-related decline in lateralized visual working memory (VWM) performance. Instead, recent evidence indicated that task-induced alpha power lateralization decreases in older age. However, the relationship between alpha power lateralization and age-related decline of VWM performance remains unknown, and recent studies have questioned the validity of these findings due to confounding factors of the aperiodic signal. Using a sample of 134 participants, we replicated the age-related decrease of alpha power lateralization after adjusting for the aperiodic signal. Critically, the link between task performance and alpha power lateralization was found only when correcting for aperiodic signal biases. Functionally, these findings suggest that age-related declines in VWM performance may be related to the decreased ability to prioritize relevant over irrelevant information. Conversely, CDA amplitudes were stable across age groups, suggesting a distinct neural mechanism possibly related to preserved VWM encoding or early maintenance. |
Ana María Triana; Juha Salmi; Nicholas Mark Edward Alexander Hayward; Jari Saramäki; Enrico Glerean 2024. @book{Triana2024, Our behavior and mental states are constantly shaped by our environment and experiences. However, little is known about the response of brain functional connectivity to environmental, physiological, and behavioral changes on different timescales, from days to months. This gives rise to an urgent need for longitudinal studies that collect high-frequency data. To this end, for a single subject, we collected 133 days of behavioral data with smartphones and wearables and performed 30 functional magnetic resonance imaging (fMRI) scans measuring attention, memory, resting state, and the effects of naturalistic stimuli. We find traces of past behavior and physiology in brain connectivity that extend up to 15 days. While sleep and physical activity relate to brain connectivity during cognitively demanding tasks, heart rate variability and respiration rate are more relevant for resting-state connectivity and movie-watching. This unique data set is openly accessible, offering an exceptional opportunity for further discoveries. Our results demonstrate that we should not study brain connectivity in isolation, but rather acknowledge its interdependence with the dynamics of the environment, changes in lifestyle, and short-term fluctuations such as transient illnesses or restless sleep. These results reflect a prolonged and sustained relationship between external factors and neural processes. Overall, precision mapping designs such as the one employed here can help to better understand intraindividual variability, which may explain some of the observed heterogeneity in fMRI findings. The integration of brain connectivity, physiological data, and environmental cues will propel future environmental neuroscience research and support precision healthcare. |
Gabriel Trevino; John J. Lee; Joshua S. Shimony; Patrick H. Luckett; Eric C. Leuthardt Complexity organization of resting-state functional-MRI networks Journal Article In: Human Brain Mapping, vol. 45, no. 12, pp. 1–15, 2024. @article{Trevino2024, Entropy measures are increasingly being used to analyze the structure of neural activity observed by functional magnetic resonance imaging (fMRI), with resting-state networks (RSNs) being of interest for their reproducible descriptions of the brain's functional architecture. Temporal correlations have shown a dichotomy among these networks: those that engage with the environment, known as extrinsic, which include the visual and sensorimotor networks; and those associated with executive control and self-referencing, known as intrinsic, which include the default mode network and the frontoparietal control network. While these inter-voxel temporal correlations enable the assessment of synchrony among the components of individual networks, entropic measures introduce an intra-voxel assessment that quantifies signal features encoded within each blood oxygen level-dependent (BOLD) time series. As a result, this framework offers insights into comprehending the representation and processing of information within fMRI signals. Multiscale entropy (MSE) has been proposed as a useful measure for characterizing the entropy of neural activity across different temporal scales. This measure of temporal entropy in BOLD data is dependent on the length of the time series; thus, high-quality data with fine-grained temporal resolution and a sufficient number of time frames is needed to improve entropy precision. We apply MSE to the Midnight Scan Club, a highly sampled and well-characterized publicly available dataset, to analyze the entropy distribution of RSNs and evaluate its ability to distinguish between different functional networks. Entropy profiles are compared across temporal scales and RSNs. 
Our results have shown that the spatial distribution of entropy at infra-slow frequencies (0.005–0.1 Hz) reproduces known parcellations of RSNs. We found a complexity hierarchy between intrinsic and extrinsic RSNs, with intrinsic networks robustly exhibiting higher entropy than extrinsic networks. Finally, we found new evidence that the posterior cerebellum exhibits high levels of entropy comparable to those of intrinsic RSNs. |
Michael P. Trevarrow; Miranda J. Munoz; Yessenia M. Rivera; Rishabh Arora; Quentin H. Drane; Gian D. Pal; Leonard Verhagen Metman; Lisa C. Goelz; Daniel M. Corcos; Fabian J. David Medication improves velocity, reaction time, and movement time but not amplitude or error during memory-guided reaching in Parkinson's disease Journal Article In: Physiological Reports, vol. 12, no. 17, pp. 1–14, 2024. @article{Trevarrow2024, The motor impairments experienced by people with Parkinson's disease (PD) are exacerbated during memory-guided movements. Despite this, the effect of antiparkinson medication on memory-guided movements has not been elucidated. We evaluated the effect of antiparkinson medication on motor control during a memory-guided reaching task with short and long retention delays in participants with PD and compared performance to age-matched healthy control (HC) participants. Thirty-two participants with PD completed the motor section of the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS III) and performed a memory-guided reaching task with two retention delays (0.5 s and 5 s) while on and off medication. Thirteen HC participants completed the MDS-UPDRS III and performed the memory-guided reaching task. In the task, medication increased movement velocity, decreased movement time, and decreased reaction time toward the values seen in the HC participants. However, movement amplitude and reaching error were unaffected by medication. Shorter retention delays increased movement velocity and amplitude, decreased movement time, and decreased error, but increased reaction times in the participants with PD and HC. Together, these results imply that antiparkinson medication is more effective at altering the neurophysiological mechanisms controlling movement velocity and reaction time compared with other aspects of movement control. |
Caterina Trentin; Giulia Rinaldi; Magdalena A. Chorzcepa; Michaela A. Imhof; Heleen A. Slagter; Christian N. L. Olivers A certain future strengthens the past: knowing ahead how to act on an object prioritizes its visual working memory representation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–15, 2024. @article{Trentin2024a, Findings from recent studies indicate that planning an action toward an object strengthens its visual working memory (VWM) representation, emphasizing the importance of sensorimotor links in VWM. In the present study, we investigated to what extent such sensorimotor links are modulated by how well-defined an action plan is. In three eye-tracking experiments, we asked participants to memorize a visual stimulus for a subsequent memory test, whereby they performed a specific hand movement toward memory-matching probes. We manipulated action uncertainty so that in the defined action condition, participants knew before the memory delay what specific action they would have to perform at the memory test, while in the undefined action condition, they were informed about the specific action on the object in VWM only after the delay. Importantly, during the delay, participants were presented with a visual detection task, designed to measure any attentional biases toward the memorized object. Across the three experiments, we found moderate evidence that knowing in advance how to act on an object prioritized its mnemonic representation, as expressed in an increased attentional bias toward it. Our results support the idea that knowing what action to perform on an object strengthens its representation in VWM, and further highlight the importance of considering action in the study of VWM. |
Caterina Trentin; Chris Olivers; Heleen A. Slagter Action planning renders objects in working memory more attentionally salient Journal Article In: Journal of Cognitive Neuroscience, vol. 36, no. 10, pp. 2166–2183, 2024. @article{Trentin2024, A rapidly growing body of work suggests that visual working memory (VWM) is fundamentally action oriented. Consistent with this, we recently showed that attention is more strongly biased by VWM representations of objects when we plan to act on those objects in the future. Using EEG and eye tracking, here, we investigated neurophysiological correlates of the interactions between VWM and action. Participants (n = 36) memorized a shape for a subsequent VWM test. At test, a probe was presented along with a secondary object. In the action condition, participants gripped the actual probe if it matched the memorized shape, whereas in the control condition, they gripped the secondary object. Crucially, during the VWM delay, participants engaged in a visual selection task, in which they located a target as fast as possible. The memorized shape could either encircle the target (congruent trials) or a distractor (incongruent trials). Replicating previous findings, we found that eye gaze was biased toward the VWM-matching shape and, importantly, more so when the shape was directly associated with an action plan. 
Moreover, the ERP results revealed that during the selection task, future action-relevant VWM-matching shapes elicited (1) a stronger Ppc (posterior positivity contralateral), signaling greater attentional saliency; (2) an earlier PD (distractor positivity) component, suggesting faster suppression; (3) a larger inverse (i.e., positive) sustained posterior contralateral negativity in incongruent trials, consistent with stronger suppression of action-associated distractors; and (4) an enhanced response-locked positivity over left motor regions, possibly indicating enhanced inhibition of the response associated with the memorized item during the interim task. Overall, these results suggest that action planning renders objects in VWM more attentionally salient, supporting the notion of selection-for-action in working memory. |
Vít Třebický; Petr Tureček; Jitka Třebická Fialová; Žaneta Pátková; Dominika Grygarová; Jan Havlíček In: Evolution and Human Behavior, vol. 45, no. 6, pp. 1–11, 2024. @article{Trebicky2024, Facial and bodily features represent salient visual stimuli upon which people spontaneously attribute various fitness-relevant characteristics such as attractiveness or formidability. While existing evidence predominantly relies on sequential stimuli presentation tasks, real-world social comparisons often involve assessing two or multiple individuals. In studies using two-alternative forced-choice tasks, participants usually perform above chance at selecting the expected option. However, these tasks use dichotomized and artificially manipulated stimuli that lack generalizability in situations where the differences between individuals are less likely to be 'clear-cut'. We tested whether the probability of selection would proportionally increase with increasing degrees of difference between the stimuli or whether there is a discrimination threshold if the stimuli are perceived as too similar. In two registered studies comprising online (N = 446) and onsite (N = 56) participants, we explored the influence of the degree of difference in attractiveness and formidability ratings between stimuli pairs on both the probability of selection and selection speed. Participants were presented with randomly selected pairs of men (30 pairs of faces, 30 pairs of bodies) and tasked with choosing the more attractive or formidable target. Applying Bayesian inference, our findings reveal a systematic impact of the degree of difference on both the selection probability and speed. As differences in attractiveness or formidability increased, both men and women exhibited a heightened propensity and speed in selecting the higher-scoring stimuli. 
Our study demonstrates that people discriminate even slight differences in attractiveness and formidability, indicating that cognitive processes underlying the perception of these characteristics had undergone natural selection for a high level of discrimination. |
Tobiasz Trawiński; Chuanli Zang; Simon P. Liversedge; Yao Ge; Ying Fu; Nick Donnelly The influence of culture on the viewing of Western and East Asian paintings Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, vol. 18, no. 2, pp. 121–142, 2024. @article{Trawinski2024, The influence of British and Chinese culture on the viewing of paintings from Western and East Asian traditions was explored in an old/new discrimination task. Accuracy data were considered alongside signal detection measures of sensitivity and bias. The results showed participant culture and painting tradition interacted but only with respect to response bias and not sensitivity. Eye movements were also recorded during encoding and discrimination. Paintings were split into regions of interest defined by faces, or the theme and context to analyze the eye movement data. With respect to the eye movement data, the results showed that a match between participant culture and painting tradition increased the viewing of faces in paintings at the expense of the viewing of other locations, an effect interpreted as a manifestation of the Other Race Effect on the viewing of paintings. There was, however, no evidence of broader influence of culture on the eye movements made to paintings as might be expected if culture influenced the allocation of attention more generally. Taken together, these findings suggest culture influences the viewing of paintings but only in response to challenges to the encoding of faces. |
Alessandro Toso; Annika P. Wermuth; Ayelet Arazi; Anke Braun; Tineke Grent-‘t Jong; Peter J. Uhlhaas; Tobias H. Donner 40 Hz steady-state response in human auditory cortex is shaped by GABAergic neuronal inhibition Journal Article In: The Journal of Neuroscience, vol. 44, no. 24, pp. 1–10, 2024. @article{Toso2024, The 40 Hz auditory steady-state response (ASSR), an oscillatory brain response to periodically modulated auditory stimuli, is a promising, noninvasive physiological biomarker for schizophrenia and related neuropsychiatric disorders. The 40 Hz ASSR might be amplified by synaptic interactions in cortical circuits, which are, in turn, disturbed in neuropsychiatric disorders. Here, we tested whether the 40 Hz ASSR in the human auditory cortex depends on two key synaptic components of neuronal interactions within cortical circuits: excitation via N-methyl-D-aspartate (NMDA) glutamate receptors and inhibition via gamma-aminobutyric acid (GABA) receptors. We combined magnetoencephalography (MEG) recordings with placebo-controlled, low-dose pharmacological interventions in the same healthy human participants (13 males, 7 females). All participants exhibited a robust 40 Hz ASSR in auditory cortices, especially in the right hemisphere, under placebo. The GABAA receptor agonist lorazepam increased the amplitude of the 40 Hz ASSR, while no effect was detectable under the NMDA blocker memantine. Our findings indicate that the 40 Hz ASSR in the auditory cortex involves synaptic (and likely intracortical) inhibition via the GABAA receptor, thus highlighting its utility as a mechanistic signature of cortical circuit dysfunctions involving GABAergic inhibition. |
Christof Elias Topfstedt; Luca Wollenberg; Thomas Schenk Training enables substantial decoupling of visual attention and saccade preparation Journal Article In: Vision Research, vol. 221, pp. 1–13, 2024. @article{Topfstedt2024, Visual attention is typically shifted toward the targets of upcoming saccadic eye movements. This observation is commonly interpreted in terms of an obligatory coupling between attentional selection and oculomotor programming. Here, we investigated whether this coupling is facilitated by a habitual expectation of spatial congruence between visual and motor targets. To this end, we conducted a dual-task (i.e., concurrent saccade task and visual discrimination task) experiment in which male and female participants were trained to either anticipate spatial congruence or incongruence between a saccade target and an attention probe stimulus. To assess training-induced effects of expectation on premotor attention allocation, participants subsequently completed a test phase in which the attention probe position was randomized. Results revealed that discrimination performance was systematically biased toward the expected attention probe position, irrespective of whether this position matched the saccade target or not. Overall, our findings demonstrate that visual attention can be substantially decoupled from ongoing oculomotor programming and suggest an important role of habitual expectations in the attention-action coupling. |
Ivan Tomić; Paul M. Bays A dynamic neural resource model bridges sensory and working memory Journal Article In: eLife, vol. 12, pp. 1–38, 2024. @article{Tomic2024a, Probing memory of a complex visual image within a few hundred milliseconds after its disappearance reveals significantly greater fidelity of recall than if the probe is delayed by as little as a second. Classically interpreted, the former taps into a detailed but rapidly decaying visual sensory or ‘iconic' memory (IM), while the latter relies on capacity-limited but comparatively stable visual working memory (VWM). While iconic decay and VWM capacity have been extensively studied independently, currently no single framework quantitatively accounts for the dynamics of memory fidelity over these time scales. Here, we extend a stationary neural population model of VWM with a temporal dimension, incorporating rapid sensory-driven accumulation of activity encoding each visual feature in memory, and a slower accumulation of internal error that causes memorized features to randomly drift over time. Instead of facilitating read-out from an independent sensory store, an early cue benefits recall by lifting the effective limit on VWM signal strength imposed when multiple items compete for representation, allowing memory for the cued item to be supplemented with information from the decaying sensory trace. Empirical measurements of human recall dynamics validate these predictions while excluding alternative model architectures. A key conclusion is that differences in capacity classically thought to distinguish IM and VWM are in fact contingent upon a single resource-limited WM store. |
Ivan Tomić; Dagmar Adamcová; Máté Fehér; Paul M. Bays Dissecting the components of error in analogue report tasks Journal Article In: Behavior Research Methods, vol. 56, pp. 8196–8213, 2024. @article{Tomic2024, Over the last two decades, the analogue report task has become a standard method for measuring the fidelity of visual representations across research domains including perception, attention, and memory. Despite its widespread use, there has been no methodical investigation of the different task parameters that might contribute to response variability. To address this gap, we conducted two experiments manipulating components of a typical analogue report test of memory for colour hue. We found that human response errors were independently affected by changes in storage and maintenance requirements of the task, demonstrated by a strong effect of set size even in the absence of a memory delay. In contrast, response variability remained unaffected by the physical size of the colour wheel, implying a negligible contribution of motor noise to task performance, or by its chroma radius, highlighting non-uniformity of the standard colour space. Comparing analogue report to a matched forced-choice task, we found variation in adjustment criterion made a limited contribution to analogue report variability, becoming meaningful only with low representational noise. Our findings validate the analogue report task as a robust measure of representational fidelity for most purposes, while also quantifying non-representational sources of noise that would limit its reliability in specialized settings. |
Daniel Toledano; Mor Sasi; Shlomit Yuval-Greenberg; Dominique Lamy On the timing of overt attention deployment: Eye-movement evidence for the priority accumulation framework Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 50, no. 5, pp. 431–450, 2024. @article{Toledano2024, Most visual-search theories assume that our attention is automatically allocated to the location with the highest priority at any given moment. The Priority Accumulation Framework (PAF) challenges this assumption. It suggests that the priority weight at each location accumulates across sequential events and that evidence for the presence of action-relevant information contributes to determining when attention is deployed to the location with the highest accumulated priority. Here, we tested these hypotheses for overt attention by recording first saccades in a free-viewing spatial-cueing task. We manipulated search difficulty (Experiments 1 and 2) and cue salience (Experiment 2). Standard theories posit that when oculomotor capture by the cue occurs, it is initiated before the search display appears; therefore, these theories predict that the cue's impact on the distribution of first saccades should be independent of search difficulty but influenced by the cue's saliency. By contrast, PAF posits that the cue can bias competition later, after processing of the search display has already started, and therefore predicts that such late impact should increase with both search difficulty and cue salience. The results fully supported PAF's predictions. Our account suggests a distinction between attentional capture and attentional-priority bias that resolves enduring inconsistencies in the attentional-capture literature. |
Simon P. Tiffin-Richards In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 50, no. 11, pp. 1844–1861, 2024. @article{TiffinRichards2024a, Readers of different ages and across different languages routinely process information of upcoming words in a sentence, before their eyes move to fixate them directly (parafoveal processing). However, there is inconsistent evidence of similar parafoveal processing in a reader's second language (L2). In this eye movement study, the gaze-contingent boundary paradigm (Rayner, 1975a) was used to test whether parafoveal processing of orthographic information is an integral part of both beginning and proficient L2 reading. The eye movements of beginning L2-learners (n = 53, aged 11–14 years) and highly proficient L2-users (n = 56, aged 19–65 years) were recorded while they read sentences in their first language (L1) German and L2 English. Sentences each contained a cognate target word (e.g., English: tunnel, German: Tunnel). The parafoveal preview of the targets either (a) preserved the spelling and meaning of the target (identity condition), (b) preserved letter identities but transposed the position of two adjacent letters (transposed-letter [TL] condition, e.g., tunenl/Tunenl), or (c) substituted the identity of two adjacent letters (substituted-letter condition, e.g., tunocl/Tunocl). TL previews elicited longer early first-pass reading times than identity previews in both L1 and L2 reading in children and adults, suggesting that letter position was processed parafoveally. Substituted-letter previews resulted in longer reading times than TL previews in children and adults in L1 and L2, suggesting that letter identity information was processed independently of position information. These results suggest that letter position and identity information are extracted from the parafovea during L1 and L2 reading, facilitating word recognition in children and adults. |
Simon P. Tiffin-Richards Cognate facilitation in bilingual reading: The influence of orthographic and phonological similarity on lexical decisions and eye-movements Journal Article In: Bilingualism: Language and Cognition, pp. 1–18, 2024. @article{TiffinRichards2024, A central finding of bilingual research is that cognates – words that share semantic, phonological, and orthographic characteristics across languages – are processed faster than non-cognate words. However, it remains unclear whether cognate facilitation effects are reliant on identical cognates, or whether facilitation simply varies along a continuum of cross-language orthographic and phonological similarity. In two experiments, German–English bilinguals read identical cognates, close cognates, and non-cognates in a lexical decision task and a sentence-reading task while their eye movements were recorded. Participants read the stimuli in their L1 German and L2 English. Converging results found comparable facilitation effects of identical and close cognates vs. non-cognates. Cognate facilitation could be described as a continuous linear effect of cross-language orthographic similarity on lexical decision accuracy and latency, as well as fixation durations. Cross-language phonological similarity modulated the continuous orthographic similarity effect in single word recognition, but not in sentence processing. |
Zhenghe Tian; Jingwen Chen; Cong Zhang; Bin Min; Bo Xu; Liping Wang Mental programming of spatial sequences in working memory in the macaque frontal cortex Journal Article In: Science, vol. 385, no. 1437, pp. 1–1, 2024. @article{Tian2024a, Working memory (WM) refers to our ability to temporarily maintain and manipulate information, which is foundational to the organization of goal-directed behavior. Although the nature of WM maintenance has been the focus of WM research in the past decades, WM manipulation or volitional control is more complex and has received less attention. The control process is what makes WM distinct and sets it apart from short-term memory. Previous human imaging studies have shown that the frontal cortex is highly involved in WM control. However, the neural dynamics and computational mechanisms supporting the control are not understood. We aimed to characterize these neural computations in the frontal cortex of nonhuman primates. |
Yanying Tian; Min Hai; Yongchun Wang; Minmin Yan; Tingkang Zhang; Jingjing Zhao; Yonghui Wang Is the precedence of social re-orienting only inherent to the initiators? Journal Article In: Quarterly Journal of Experimental Psychology, pp. 1–14, 2024. @article{Tian2024, Previous research has revealed that initiators preferentially re-orient their attention towards responders with whom they have established joint attention (JA). However, it remains unclear whether this precedence of social re-orienting is inherent to initiators or applies equally to responders, and whether this social re-orienting is modulated by the social contexts in which JA is achieved. To address these issues, the present study adopted a modified virtual-reality paradigm to manipulate social roles (initiator vs. responder), social behaviours (JA vs. Non-JA), and social contexts (intentional vs. incidental). Results indicated that people, whether as initiators or responders, exhibited a similar prioritisation pattern of social re-orienting, and this was independent of the social contexts in which JA was achieved, revealing that the prioritisation of social re-orienting is an inherent social attentional mechanism in humans. It should be noted, however, that the distinct social cognitive systems engaged when individuals switched roles between initiator and responder were only evident during intentional (Experiment 1) rather than incidental (Experiment 2) JA. These findings provide potential insights for understanding the shared attention system and the integrated framework of attentional and mentalising processes. |
Jessica A. F. Thompson; Hannah Sheahan; Tsvetomira Dumbalska; Julian Sandbrink; Manuela Piazza; Christopher Summerfield Zero-shot counting with a dual-stream neural network model Journal Article In: Neuron, vol. 112, no. 24, pp. 4147–4158, 2024. @article{Thompson2024, To understand a visual scene, observers need to both recognize objects and encode relational structure. For example, a scene comprising three apples requires the observer to encode concepts of "apple" and "three." In the primate brain, these functions rely on dual (ventral and dorsal) processing streams. Object recognition in primates has been successfully modeled with deep neural networks, but how scene structure (including numerosity) is encoded remains poorly understood. Here, we built a deep learning model, based on the dual-stream architecture of the primate brain, which is able to count items "zero-shot"—even if the objects themselves are unfamiliar. Our dual-stream network forms spatial response fields and lognormal number codes that resemble those observed in the macaque posterior parietal cortex. The dual-stream network also makes successful predictions about human counting behavior. Our results provide evidence for an enactive theory of the role of the posterior parietal cortex in visual scene understanding. |
Nikita Thomas; Jennifer H. Acton; Jonathan T. Erichsen; Tony Redmond; Matt J. Dunn Reliability of gaze-contingent perimetry Journal Article In: Behavior Research Methods, vol. 56, no. 5, pp. 4883–4892, 2024. @article{Thomas2024a, Standard automated perimetry, a psychophysical task performed routinely in eyecare clinics, requires observers to maintain fixation for several minutes at a time in order to measure visual field sensitivity. Detection of visual field damage is confounded by eye movements, making the technique unreliable in poorly attentive individuals and those with pathologically unstable fixation, such as nystagmus. Microperimetry, which utilizes ‘partial gaze-contingency' (PGC), aims to counteract eye movements but only corrects for gaze position errors prior to each stimulus onset. Here, we present a novel method of visual field examination in which stimulus position is updated during presentation, which we refer to as ‘continuous gaze-contingency' (CGC). In the first part of this study, we present three case examples that demonstrate the ability of CGC to measure the edges of the physiological blind spot in infantile nystagmus with greater accuracy than PGC and standard ‘no gaze-contingency' (NoGC), as initial proof-of-concept for the utility of the paradigm in measurements of absolute scotomas in these individuals. The second part of this study focused on healthy observers, in which we demonstrate that CGC has the lowest stimulus positional error (gaze-contingent precision: CGC = ± 0.29° |
Elizabeth H. X. Thomas; Susan L. Rossell; Jessica B. Myles; Eric J. Tan; Erica Neill; Sean P. Carruthers; Philip J. Sumner; Kiymet Bozaoglu; Caroline Gurvich The relationship of schizotypy and saccade performance in patients with schizophrenia and non-clinical individuals Journal Article In: Journal of Individual Differences, vol. 45, no. 4, pp. 244–254, 2024. @article{Thomas2024, Deficits in saccade performance (i.e., rapid eye movements) are commonly observed in people with schizophrenia. Investigations of the schizotypy-saccade relationship have been exclusively explored in non-clinical individuals, with mixed findings. Of the three saccadic paradigms, research has predominantly focused on the antisaccade paradigm, while the relationship between schizotypy and prosaccade and memory-guided saccade performance remains underexplored. This study aimed to investigate the relationship between schizotypy and saccade performance across the three saccadic paradigms in both patients and non-clinical individuals. Sixty-two patients with schizophrenia/schizoaffective disorder and 148 non-clinical individuals completed the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) self-report questionnaire as a measure of schizotypy. All participants also completed a prosaccade, memory-guided saccade and antisaccade task. Canonical correlation analyses were conducted to examine the collective, multivariate relationship between the set of schizotypy variables and the sets of prosaccade, memory-guided saccade and antisaccade variables. Differences between patients and non-clinical groups were in line with previous research. In the non-clinical group, Cognitive Disorganisation was the highest contributing variable to prosaccade performance and prosaccade latency was the highest contributing variable to schizotypy. There was no significant relationship between schizotypy and memory-guided or antisaccade performance. 
No significant relationships between schizotypy and saccade performance were observed in the patient group. Our findings suggest a relationship between disorganized schizotypy and basic processing speed in non-clinical individuals. This relationship was not observed in patients, suggesting that sub-clinical saccade performance may not mirror impairments observed in schizophrenia. Our findings in the non-clinical group were inconsistent with previous studies, which used different schizotypy inventories, suggesting that schizotypy measures derived from different conceptual backgrounds may not be comparable. |
Jordy Thielen; Tessa M. van Leeuwen; Simon J. Hazenberg; Anna Z. L. Wester; Floris P. de Lange; Rob van Lier Amodal completion across the brain: The impact of structure and knowledge Journal Article In: Journal of Vision, vol. 24, no. 6, pp. 10, 2024. @article{Thielen2024, This study investigates the phenomenon of amodal completion within the context of naturalistic objects, employing a repetition suppression paradigm to disentangle the influence of structure and knowledge cues on how objects are completed. The research focuses on early visual cortex (EVC) and lateral occipital complex (LOC), shedding light on how these brain regions respond to different completion scenarios. In LOC, we observed suppressed responses to structure- and knowledge-compatible stimuli, providing evidence that both cues influence neural processing in higher-level visual areas. However, in EVC, we did not find evidence for differential responses to completions compatible or incompatible with either structural or knowledge-based expectations. Together, our findings suggest that the interplay between structure and knowledge cues in amodal completion predominantly impacts higher-level visual processing, with less pronounced effects on the early visual cortex. This study contributes to our understanding of the complex mechanisms underlying visual perception and highlights the distinct roles played by different brain regions in amodal completion. |
Maria Theobald; Joseph Colantonio; Igor Bascandziev; Elizabeth Bonawitz; Garvin Brod Do reflection prompts promote children's conflict monitoring and revision of misconceptions? Journal Article In: Child Development, vol. 95, no. 4, pp. e253–e269, 2024. @article{Theobald2024, We tested whether reflection prompts enhance conflict monitoring and facilitate the revision of misconceptions. German children (N = 97 |
Antonia F. Ten Brink; Iris Heiner; H. Chris Dijkerman; Christoph Strauch Pupil dilation reveals the intensity of touch Journal Article In: Psychophysiology, vol. 61, no. 6, pp. 1–13, 2024. @article{TenBrink2024, Touch is important for many aspects of our daily activities. One of the most important tactile characteristics is its perceived intensity. However, quantifying the intensity of perceived tactile stimulation is not always possible using overt responses. Here, we show that pupil responses can objectively index the intensity of tactile stimulation in the absence of overt participant responses. In Experiment 1 (n = 32), we stimulated three reportedly differentially sensitive body locations (finger, forearm, and calf) with a single tap of a tactor while tracking pupil responses. Tactile stimulation resulted in greater pupil dilation than a baseline without stimulation. Furthermore, pupils dilated more for the more sensitive location (finger) than for the less sensitive location (forearm and calf). In Experiment 2 (n = 20) we extended these findings by manipulating the intensity of the stimulation with three different intensities, here a short vibration, always at the little finger. Again, pupils dilated more when being stimulated at higher intensities as compared to lower intensities. In summary, pupils dilated more for more sensitive parts of the body at constant stimulation intensity and for more intense stimulation at constant location. Taken together, the results show that the intensity of perceived tactile stimulation can be objectively measured with pupil responses – and that such responses are a versatile marker for touch research. Our findings may pave the way for previously impossible objective tests of tactile sensitivity, for example in minimally conscious state patients. |
Rebecca Taylor; Antimo Buonocore; Alessio Fracasso Saccadic “inhibition” unveils the late influence of image content on oculomotor programming Journal Article In: Experimental Brain Research, vol. 242, pp. 2281–2294, 2024. @article{Taylor2024b, Image content is prioritized in the visual system. Faces are a paradigmatic example, receiving preferential processing along the visual pathway compared to other visual stimuli. Moreover, face prioritization manifests also in behavior. People tend to look at faces more frequently and for longer periods, and saccadic reaction times can be faster when targeting a face as opposed to a phase-scrambled control. However, it is currently not clear at which stage image content affects oculomotor planning and execution. It can be hypothesized that image content directly influences oculomotor signal generation. Alternatively, the image content could exert its influence on oculomotor planning and execution at a later stage, after the image has been processed. Here we aim to disentangle these two alternative hypotheses by measuring the frequency of saccades toward a visual target when the latter is followed by a visual transient in the central visual field. Behaviorally, this paradigm leads to a reduction in saccade frequency that happens about 90 ms after any visual transient event, also known as saccadic “inhibition”. In two experiments, we measured the occurrence of saccades in a visually guided saccade task, as well as microsaccades during fixation, using face and noise-matched visual stimuli. We observed that while the reduction in saccade occurrence was similar for both stimulus types, face stimuli led to a prolonged reduction in eye movements. Moreover, saccade kinematics were altered by both stimulus types, showing an amplitude reduction without change in peak velocity for the earliest saccades. Taken together, our experiments imply that face stimuli primarily affect the later stages of the behavioral phenomenon of saccadic “inhibition”.
We propose that while some stimulus features are processed at an early stage and can quickly influence eye movements, a delayed signal conveying image content information is necessary to further inhibit or delay the oculomotor activity that triggers eye movements. |
Madison R. Taylor; Marian Berryhill; Dennis Mathew; Nicholas G. Murray Elevated smooth pursuit gain in collegiate athletes with sport-related concussion immediately following injury Journal Article In: Journal of Ophthalmic and Vision Research, vol. 19, no. 2, pp. 227–234, 2024. @article{Taylor2024a, Purpose: Although there is evidence that sport-related concussion (SRC) affects oculomotor function and perceptual ability, experiments are often poorly controlled and are not replicable. This study tested the hypothesis that oculomotor measures are reduced in SRC patients, indicating poorer performance. Methods: Fifteen Division I (DI) athletes presenting with SRC (7 females, 8 males) and 15 student volunteers (CON) (12 females, 3 males) completed a dynamic visual acuity (DVA) task that involved reporting the direction of a moving stimulus (Landolt C) while wearing a head-mounted binocular eye tracker. There were 120 trials in total, with 60 trials presented at 30° per second and 60 presented at 90° per second. Various eye movement measurements, including horizontal smooth pursuit eye movement (SPEM) gain and saccadic peak velocity, were analyzed between groups using univariate ANOVAs. Saccade count in SPEM trials, accuracy, and vision were analyzed using Kruskal-Wallis tests. Results: There was no statistical difference in saccadic peak velocity: SRC = 414.7 ± 42°/s |
Emily D. Taylor; Tobias Feldmann-Wüstefeld Reward-modulated attention deployment is driven by suppression, not attentional capture Journal Article In: NeuroImage, vol. 299, pp. 1–12, 2024. @article{Taylor2024, One driving factor for attention deployment towards a stimulus is its associated value due to previous experience and learning history. Previous visual search studies found that when looking for a target, distractors associated with higher reward produce more interference (e.g., longer response times). The present study investigated the neural mechanism of such value-driven attention deployment. Specifically, we were interested in which of the three attention sub-processes are responsible for the interference that was repeatedly observed behaviorally: enhancement of relevant information, attentional capture by irrelevant information, or suppression of irrelevant information. We replicated earlier findings showing longer response times and lower accuracy when a target competed with a high-reward compared to a low-reward distractor. We also found a spatial gradient of interference: behavioral performance dropped with increasing proximity to the target. This gradient was steeper for high- than low-reward distractors. Event-related potentials of the EEG signal showed the reason for the reward-induced attentional bias: High-reward distractors required more suppression than low-reward distractors as evident in larger Pd components. This effect was only found for distractors near targets, showing the additional filtering needs required for competing stimuli in close proximity. As a result, fewer attentional resources can be distributed to the target when it competes with a high-reward distractor, as evident in a smaller target-N2pc amplitude. The distractor-N2pc, indicative of attentional capture, was neither affected by distance nor reward, showing that attentional capture alone cannot explain interference by stimuli of high value. 
In sum, our results show that the higher need for suppression of high-value stimuli contributes to reward-modulated attention deployment, and that increased suppression can prevent attentional capture by high-value stimuli. |
John M. Tauber; Scott L. Brincat; Emily P. Stephen; Jacob A. Donoghue; Leo Kozachkov; Emery N. Brown; Earl K. Miller Propofol-mediated unconsciousness disrupts progression of sensory signals through the cortical hierarchy Journal Article In: Journal of Cognitive Neuroscience, vol. 36, no. 2, pp. 394–413, 2024. @article{Tauber2024, A critical component of anesthesia is the loss of sensory perception. Propofol is the most widely used drug for general anesthesia, but the neural mechanisms of how and when it disrupts sensory processing are not fully understood. We analyzed local field potential and spiking recorded from Utah arrays in auditory cortex, associative cortex, and cognitive cortex of nonhuman primates before and during propofol-mediated unconsciousness. Sensory stimuli elicited robust and decodable stimulus responses and triggered periods of stimulus-related synchronization between brain areas in the local field potential of Awake animals. By contrast, propofol-mediated unconsciousness eliminated stimulus-related synchrony and drastically weakened stimulus responses and information in all brain areas except for auditory cortex, where responses and information persisted. However, we found stimuli occurring during spiking Up states triggered weaker spiking responses than in Awake animals in auditory cortex, and little or no spiking responses in higher order areas. These results suggest that propofol's effect on sensory processing is not just because of asynchronous Down states. Rather, both Down states and Up states reflect disrupted dynamics. |
Dilce Tanriverdi; Frans W. Cornelissen Rapid assessment of peripheral visual crowding Journal Article In: Frontiers in Neuroscience, vol. 18, pp. 1–14, 2024. @article{Tanriverdi2024, Visual crowding, the phenomenon in which the ability to distinguish objects is hindered in cluttered environments, has critical implications for various ophthalmic and neurological disorders. Traditional methods for assessing crowding involve time-consuming and attention-demanding psychophysical tasks, making routine examination challenging. This study sought to compare trial-based Alternative Forced-Choice (AFC) paradigms using either manual or eye movement responses and a continuous serial search paradigm employing eye movement responses to evaluate their efficiency in rapidly assessing peripheral crowding. In all paradigms, we manipulated the orientation of a central Gabor patch, which could be presented alone or surrounded by six Gabor patches. We measured participants' target orientation discrimination thresholds using adaptive psychophysics to assess crowding magnitude. Depending on the paradigm, participants either made saccadic eye movements to the target location or responded manually by pressing a key or moving a mouse. We compared these paradigms in terms of crowding magnitude, assessment time, and paradigm demand. Our results indicate that employing eye movement-based paradigms for assessing peripheral visual crowding yields results faster compared to paradigms that necessitate manual responses. Furthermore, when considering similar levels of confidence in the threshold measurements, both a novel serial search paradigm and an eye movement-based 6AFC paradigm proved to be the most efficient in assessing crowding magnitude. Additionally, crowding estimates obtained through either the continuous serial search or the 6AFC paradigms were consistently higher than those obtained using the 2AFC paradigms. 
Lastly, participants did not report a clear difference between paradigms in terms of perceived demand. In conclusion, both the continuous serial search and the 6AFC eye movement response paradigms enable fast assessment of visual crowding and could facilitate routine crowding assessment in the future. However, their usability in specific patient populations and for specific purposes remains to be assessed. |
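The adaptive psychophysics mentioned in this abstract typically means a staircase procedure that converges on a discrimination threshold. The sketch below is a generic 3-down/1-up staircase, not the authors' actual procedure; the step size, trial count, and deterministic observer are illustrative assumptions.

```python
# Generic 3-down/1-up adaptive staircase for threshold estimation, of the kind
# used in crowding studies (a minimal sketch; parameters are illustrative).

def run_staircase(psychometric, start=20.0, step=2.0, n_trials=60):
    """Track a discrimination threshold (e.g., orientation offset in degrees).
    `psychometric(level)` returns True when the observer responds correctly."""
    level, correct_streak, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        if psychometric(level):
            correct_streak += 1
            if correct_streak == 3:            # 3 correct in a row -> make harder
                correct_streak = 0
                if last_dir == +1:             # direction flipped: record reversal
                    reversals.append(level)
                level, last_dir = max(level - step, 0.1), -1
        else:                                  # any error -> make easier
            correct_streak = 0
            if last_dir == -1:
                reversals.append(level)
            level, last_dir = level + step, +1
    # Threshold estimate: mean of the last few reversal levels
    tail = reversals[-6:]
    return sum(tail) / max(len(tail), 1)
```

With a deterministic observer who is correct whenever the level is at or above 10, the staircase settles into oscillation around that value, and the reversal average lands between the two bracketing levels.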
Jacob C. Tanner; Joshua Faskowitz; Lisa Byrge; Daniel P. Kennedy; Olaf Sporns; Richard F. Betzel Synchronous high-amplitude co-fluctuations of functional brain networks during movie-watching Journal Article In: Imaging Neuroscience, vol. 1, pp. 1–21, 2024. @article{Tanner2024, Recent studies have shown that functional connectivity can be decomposed into its exact frame-wise contributions, revealing short-lived, infrequent, and high-amplitude time points referred to as “events.” Events contribute disproportionately to the time-averaged connectivity pattern, improve identifiability and brain-behavior associations, and differences in their expression have been linked to endogenous hormonal fluctuations and autism. Here, we explore the characteristics of events while subjects watch movies. Using two independently acquired imaging datasets in which participants passively watched movies, we find that events synchronize across individuals and, based on the level of synchronization, can be categorized into three distinct classes: those that synchronize at the boundaries between movies, those that synchronize during movies, and those that do not synchronize at all. We find that boundary events, compared to the other categories, exhibit greater amplitude, distinct co-fluctuation patterns, and temporal propagation. We show that underlying boundary events is a specific mode of co-fluctuation involving the activation of control and salience systems alongside the deactivation of visual systems. Events that synchronize during the movie, on the other hand, display a pattern of co-fluctuation that is time-locked to the movie stimulus. Finally, we found that subjects' time-varying brain networks are most similar to one another during these synchronous events. |
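The frame-wise decomposition this abstract refers to is the "edge time series" construction: the element-wise product of z-scored regional signals, whose time average recovers the Pearson correlation exactly. A minimal sketch, with an illustrative percentile threshold for flagging "events" (the specific threshold is an assumption, not taken from the paper):

```python
import numpy as np

def edge_time_series(ts):
    """ts: (T, N) array of regional time series.
    Returns a (T, N*(N-1)//2) array of frame-wise edge co-fluctuations."""
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)   # z-score each region over time
    i, j = np.triu_indices(ts.shape[1], k=1)      # unique region pairs (edges)
    return z[:, i] * z[:, j]                      # per-frame co-fluctuation products

def detect_events(ets, pct=95):
    """Flag high-amplitude frames ("events") as those whose root-mean-square
    co-fluctuation exceeds the given percentile (threshold is an assumption)."""
    amp = np.sqrt((ets ** 2).mean(axis=1))        # per-frame co-fluctuation amplitude
    return amp > np.percentile(amp, pct), amp

# Averaging each edge's co-fluctuation over time gives back the Pearson
# correlation of that regional pair, which is the sense in which events
# contribute "exactly" to the time-averaged connectivity pattern.
```

Because the decomposition is exact, a few high-amplitude frames can dominate the time-averaged connectivity matrix, which is what makes event frames disproportionately informative.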
Enze Tang; Hongwei Ding Emotion effects in second language processing: Evidence from eye movements in natural sentence reading Journal Article In: Bilingualism, vol. 27, no. 3, pp. 460–479, 2024. @article{Tang2024, Eye-tracking evidence on how emotional word processing differs between first-language (L1) and second-language (L2) readers remains scarce. This study conducted an eye-tracking experiment to investigate emotion effects in L2 sentence reading and to explore the modulation of L2 proficiency and individual emotional states. Adapted from Knickerbocker et al. (2015), the current study recorded eye movements at both early and late processing stages while late Chinese–English bilinguals read emotion-label and neutral target words in natural L2 sentences. Results indicated that L2 readers did not show facilitation effects of lexical affective connotations during sentence reading, and they even demonstrated processing disadvantages for L2 emotional words. Additionally, the interaction between L2 proficiency and emotion was consistently significant for total reading time on positive words. Measurements of participants' depressive and anxious states were not robustly correlated with eye movement measures. Our findings add new evidence to the sparse eye-tracking literature on L2 emotion processing and lend support to several theoretical frameworks in bilingual research, including the emotional contexts of learning theory, the lexical quality hypothesis, and the revised hierarchical model. |
Reiji Tanaka; Kei Watanabe; Takafumi Suzuki; Kae Nakamura; Masaharu Yasuda; Hiroshi Ban; Ken Okada; Shigeru Kitazawa An easy-to-implement, non-invasive head restraint method for monkey fMRI Journal Article In: NeuroImage, vol. 285, pp. 1–12, 2024. @article{Tanaka2024, Functional magnetic resonance imaging (fMRI) in behaving monkeys has a strong potential to bridge the gap between human neuroimaging and primate neurophysiology. In monkey fMRI, to restrain head movements, researchers usually surgically implant a plastic head-post on the skull. Although proven effective over time, this technique burdens the animals, including a risk of infection and discomfort. Furthermore, the presence of extraneous objects on the skull, such as bone screws and dental cement, adversely affects signals near the cortical surface. These side effects are undesirable in terms of both efficient data collection and the spirit of “refinement” from the 3Rs. Here, we demonstrate that completely non-invasive fMRI scanning in awake monkeys is possible using a plastic head mask made to fit the skull of each individual animal. In all three monkeys tested, longitudinal, quantitative assessment showed that the plastic mask effectively suppressed head movements, and we obtained reliable retinotopic BOLD signals in a standard retinotopic mapping task. The present, easy-to-make plastic mask has a strong potential to simplify fMRI experiments in awake monkeys, while yielding data of quality as good as, or better than, that obtained with the conventional head-post method. |
Hideki Tamura; Shigeki Nakauchi; Tetsuto Minami Glossiness perception and its pupillary response Journal Article In: Vision Research, vol. 219, pp. 1–10, 2024. @article{Tamura2024, Recent studies have revealed that pupillary responses change depending on perceptual factors such as subjective brightness caused by optical illusions and luminance. However, how perceptual factors derived from the glossiness of object surfaces affect the pupillary response remains unclear. We investigated the relationship between glossiness perception and the pupillary response through a glossiness rating experiment that included recording pupil diameter. We prepared general object images (original) and randomized images (shuffled), which comprised the same images with small square regions randomized, as stimuli. The image features were controlled by matching the luminance histograms. Observers rated the perceived glossiness of stimuli presented for 3,000 ms while changes in their pupil diameters were recorded. At the peak constriction of the pupillary response during the stimulus duration, images with higher glossiness ratings constricted the pupil more than those with lower glossiness ratings. A linear mixed-effects model showed that the glossiness rating, image category (original/shuffled), variance of the luminance histogram, and stimulus area were most effective in predicting the pupillary responses. These results suggest that the illusory brightness conveyed by the image regions of high-glossiness objects, such as specular highlights, induces pupil constriction. |
Agnieszka Szarkowska; Valentina Ragni; Sonia Szkriba; Sharon Black; David Orrego-Carmona; Jan Louis Kruger In: PLoS ONE, vol. 19, no. 10, pp. 1–29, 2024. @article{Szarkowska2024a, Every day, millions of viewers worldwide engage with subtitled content, and an increasing number choose to watch without sound. In this mixed-methods study, we examine the impact of sound presence or absence on the viewing experience of both first-language (L1) and second-language (L2) viewers when they watch subtitled videos. We explore this novel phenomenon through comprehension and recall post-tests, self-reported cognitive load, immersion, and enjoyment measures, as well as gaze pattern analysis using eye tracking. We also investigate viewers' motivations for opting for audiovisual content without sound and explore how the absence of sound affects their viewing experience, using in-depth, semi-structured interviews. Our goal is to ascertain whether these effects are consistent among L2 and L1 speakers from different language varieties. To achieve this, we tested L1-British English, L1-Australian English and L2-English (L1-Polish) speakers (n = 168) while they watched English-language audiovisual material with English subtitles, with and without sound. The findings show that when watching videos without sound, viewers experienced increased cognitive load, along with reduced comprehension, immersion and overall enjoyment. Examination of participants' gaze revealed that the absence of sound significantly affected the viewing experience, increasing the need for subtitles and thus the viewers' propensity to process them more thoroughly. The absence of sound emerged as a global constraint that made reading more effortful. Triangulating data from multiple sources made it possible to tap into some of the metacognitive strategies employed by viewers to maintain comprehension in the absence of sound. 
We discuss the implications within the context of the growing trend of watching subtitled videos without sound, emphasising its potential impact on cognitive processes and the viewing experience. |