All EyeLink Publications
All 9000+ peer-reviewed EyeLink research publications up to 2020 (plus some from early 2021) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
Pedro G Vieira; Matthew R Krause; Christopher C Pack
In: PLoS Biology, 18 (10), pp. 1–14, 2020.
Transcranial alternating current stimulation (tACS) modulates brain activity by passing electrical current through electrodes that are attached to the scalp. Because it is safe and noninvasive, tACS holds great promise as a tool for basic research and clinical treatment. However, little is known about how tACS ultimately influences neural activity. One hypothesis is that tACS affects neural responses directly, by producing electrical fields that interact with the brain's endogenous electrical activity. By controlling the shape and location of these electric fields, one could target brain regions associated with particular behaviors or symptoms. However, an alternative hypothesis is that tACS affects neural activity indirectly, via peripheral sensory afferents. In particular, it has often been hypothesized that tACS acts on sensory fibers in the skin, which in turn provide rhythmic input to central neurons. In this case, there would be little possibility of targeted brain stimulation, as the regions modulated by tACS would depend entirely on the somatosensory pathways originating in the skin around the stimulating electrodes. Here, we directly test these competing hypotheses by recording single-unit activity in the hippocampus and visual cortex of alert monkeys receiving tACS. We find that tACS entrains neuronal activity in both regions, so that cells fire synchronously with the stimulation. Blocking somatosensory input with a topical anesthetic does not significantly alter these neural entrainment effects. These data are therefore consistent with the direct stimulation hypothesis and suggest that peripheral somatosensory stimulation is not required for tACS to entrain neurons.
Manuel Vidal; Andrea Desantis; Laurent Madelain
In: PLoS ONE, 15 (2), pp. 1–27, 2020.
Saccadic eye movements bring events of interest to the center of the retina, enabling detailed visual analysis. This study explored whether irrelevant auditory (experiments A, B & F), visual (C & D) or tactile signals (E & F) delivered around the onset of a visual target modulate saccade latency. Participants were instructed to execute a quick saccade toward a target stepping left or right from a fixation position. We observed an interaction between auditory beeps or tactile vibrations and the oculomotor reaction that included two components: a warning effect resulting in faster saccades when the signal and the target were presented simultaneously; and a modulation effect, with shorter latencies when auditory and tactile signals were delivered before the target onset and longer latencies when they were delivered after it. Combining both modalities increased the modulation effect only to a limited extent, pointing to a saturation of the multisensory interaction with motor control. Interestingly, irrelevant visual stimuli (black background or isoluminant noise stripes in peripheral vision, flashed for 10 ms) increased saccade latency whether they were presented just before or after target onset. The lack of latency reduction with visual signals suggests that the modulation observed in the auditory and tactile experiments was not related to priming effects but rather to low-level audio-visual and tactile-visual integration. The increase in saccade latency observed with irrelevant visual stimuli is discussed in relation to saccadic inhibition. Our results demonstrate that signals conveying no information about where and when a visual target will appear modulate saccadic reactivity, much as in multisensory temporal binding, but only when these signals come from a different modality.
Parag Verma; Neelu J Ahuja
Impact of eye movements on the brain cognitive process Journal Article
In: PalArch's Journal Of Archaeology Of Egypt/Egyptology, 17 (6), pp. 7985–8001, 2020.
Eye-movement research is a dynamic and productive area of exploration. The presented work focuses on how eye movements can be thought of as a window on the brain and mind. Specifically, the work examines how processing depends on the selection of focused regions, i.e., fixation locations, and asks how fixation locations are chosen. In particular, the work supports the view that the selection of fixations during visual exploration may depend to a large degree on retinotopic models of image structure. However, these models largely disregard the spatiotemporal structure in the sequence of eye movements. Understanding eye movements within a spatiotemporal structure requires an understanding of the spatiotemporal properties of visual sampling. In light of this, the work examines the availability of external information for inferences about causes in the natural environment. The current exploration looks at eye-movement behavior in a publicly accessible dataset of eye movements on real-world complex images, with a sample size of n = 48. The work reports baseline measures of eye-movement behavior in this sample, including mean fixation duration, saccade amplitude, and initial saccade latency, on the view that visual exploration is a dynamic process in which the balance between exploring and exploiting can be adjusted. For analyses requiring high temporal resolution, the work suggests a further strategy: density estimates that allow precise temporal relations between eye movements and other activities, such as viewing and clicking images, to be computed. The paper ends by suggesting that eye-movement research has reached a mature level and can readily be combined with other investigative techniques to use this window on the brain and mind.
Valentina Vencato; Laurent Madelain
Perception of saccadic reaction time Journal Article
In: Scientific Reports, 10 , pp. 1–11, 2020.
That saccadic reaction times (SRTs) may depend on reinforcement contingencies has been repeatedly demonstrated. It follows that one must be able to discriminate one's latencies to adequately assign credit to one's actions, that is, to connect behaviour to its consequences. To quantify the ability to perceive one's SRTs, we used an adaptive procedure to train sixteen participants in a stepping visual target saccade paradigm. Subsequently, we measured their 75% SRT perceptual threshold in a conventional constant-stimuli procedure. For each trial, observers had to saccade to a stepping target. Then, in a 2-AFC task, they had to choose between two values, one representing the actual SRT, while the other differed proportionally from the actual SRT. The relative difference between the two alternatives was computed by either adding to or subtracting from the actual SRT a percent-difference value randomly chosen from a fixed set. Feedback signalling the correct choice was provided after each response. Overall, our results showed that the 75% SRT perceptual threshold averaged 23% (about 40 ms). The ability to discriminate small SRT differences provides support for the possibility that the credit assignment problem may be solved even for short reaction times.
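The 2-AFC trial construction described in this abstract can be sketched in a few lines: one alternative is the measured SRT, and the foil differs by a percent-difference value drawn from a fixed set and randomly added or subtracted. This is an illustrative reconstruction, not the authors' code; the percent set and the helper name are assumptions.

```python
import random

def make_2afc_pair(actual_srt_ms, percent_set=(10, 23, 40), rng=random):
    """Build the two alternatives for one 2-AFC trial.

    One alternative is the observer's actual saccadic reaction time (SRT);
    the foil differs from it by a percent-difference value randomly chosen
    from a fixed set, randomly added to or subtracted from the actual SRT.
    """
    percent = rng.choice(percent_set)
    sign = rng.choice((-1, 1))
    foil = actual_srt_ms * (1 + sign * percent / 100.0)
    pair = [actual_srt_ms, foil]
    rng.shuffle(pair)  # randomize presentation order of the two values
    return pair, actual_srt_ms

pair, correct = make_2afc_pair(180.0)
```

Feedback after each response would then simply compare the observer's choice against the returned `correct` value.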
Aaron Veldre; Erik D Reichle; Roslyn Wong; Sally Andrews
In: Cognition, 197 , pp. 1–14, 2020.
Recent eye-movement evidence suggests readers are more likely to skip a high-frequency word than a low-frequency word independently of the semantic or syntactic acceptability of the word in the sentence. This has been interpreted as strong support for a serial processing mechanism in which the decision to skip a word is based on the completion of a preliminary stage of lexical processing prior to any assessment of contextual fit. The present large-scale study was designed to reconcile these findings with the plausibility preview effect: higher skipping and reduced first-pass reading times for words that are previewed by contextually plausible, compared to implausible, sentence continuations that are unrelated to the target word. Participants' eye movements were recorded as they read sentences containing a short (3–4 letters) or long (6 letters) target word. The boundary paradigm was used to present parafoveal previews which were either higher or lower frequency than the target, and either plausible or implausible in the sentence context. The results revealed strong, independent effects of all three factors on target skipping and early measures of target fixation duration, while frequency and plausibility interacted on later measures of target fixation duration. Simulations using the E-Z Reader model of eye-movement control in reading demonstrated that plausibility effects on skipping are potentially consistent with the assumption that higher-level contextual information only affects post-lexical integration processes. However, no current model of eye movements in reading provides an explicit account of the information or processes that allow readers to rapidly detect an integration failure.
Awel Vaughan-Evans; Simon P Liversedge; Gemma Fitzsimmons; Manon W Jones
Syntactic co-activation in natural reading Journal Article
In: Visual Cognition, 28 (10), pp. 541–556, 2020.
The extent to which syntactic co-activation occurs during natural reading is currently unknown. Here, we measured the eye movements of Welsh-English bilinguals and English monolinguals as they read English sentences. Target words were manipulated to create nonwords that were consistent or inconsistent with the rules of Welsh soft mutation (a morphosyntactic process that alters the initial consonant of words). Nonwords were visible only in parafoveal preview, and a direct fixation triggered the presentation of the normal English word. Linear mixed-effects analyses revealed a robust parafoveal preview benefit for identity previews (television) compared with mutated (delevision) and aberrant (belevision) previews, and a parafoveal-on-foveal effect in our bilingual sample. Bilingual readers' sentence reanalysis was affected by the implicit Welsh mutation, but only in contexts that would elicit a mutation in Welsh. Our findings suggest that morphosyntactic rules are co-activated during natural reading; however, further investigation must evaluate the robustness of this effect.
Martin R Vasilev; Mark Yates; Ethan Prueitt; Timothy J Slattery
In: Quarterly Journal of Experimental Psychology, pp. 1–23, 2020.
There is a growing understanding that the parafoveal preview effect during reading may represent a combination of preview benefits and preview costs due to interference from parafoveal masks. It has been suggested that visually degrading the parafoveal masks may reduce their costs, but adult readers were later shown to be highly sensitive to degraded display changes. Four experiments examined how preview benefits and preview costs are influenced by the perception of distinct parafoveal degradation at the target word location. Participants read sentences with four preview types (identity, orthographic, phonological, and letter-mask preview) and two levels of visual degradation (0% vs. 20%). The distinctiveness of the target word degradation was either eliminated by degrading all words in the sentence (Experiments 1a–2a) or remained present, as in previous research (Experiments 1b–2b). Degrading the letter masks resulted in a reduction in preview costs, but only when all words in the sentence were degraded. When degradation at the target word location was perceptually distinct, it induced costs of its own, even for orthographically and phonologically related previews. These results confirm previous reports that traditional parafoveal masks introduce preview costs that overestimate the size of the true benefit. However, they also show that parafoveal degradation has the unintended consequence of introducing additional costs when participants are aware of distinct degradation on the target word. Parafoveal degradation appears to be easily perceived and may temporarily orient attention away from the reading task, thus delaying word processing.
Marloes L van Moort; Arnout Koornneef; Paul W van den Broek
In: Discourse Processes, pp. 1–20, 2020.
To build a coherent accurate mental representation of a text, readers routinely validate information they read against the preceding text and their background knowledge. It is clear that both sources affect processing, but when and how they exert their influence remains unclear. To examine the time course and cognitive architecture of text-based and knowledge-based validation processes, we used eye-tracking methodology. Participants read versions of texts that varied systematically in (in)coherence with prior text or background knowledge. Contradictions with respect to prior text and background knowledge both were found to disrupt reading but in different ways: The two types of contradiction led to distinct patterns of processes, and, importantly, these differences were evident already in early processing stages. Moreover, knowledge-based incoherence triggered more pervasive and longer (repair) processes than did text-based incoherence. Finally, processing of text-based and knowledge-based incoherence was not influenced by readers' working memory capacity.
Freek Van Ede; Alexander G Board; Anna C Nobre
In: Proceedings of the National Academy of Sciences, 117 (39), pp. 24590–24598, 2020.
Adaptive behavior relies on the selection of relevant sensory information from both the external environment and internal memory representations. In understanding external selection, a classic distinction is made between voluntary (goal-directed) and involuntary (stimulus-driven) guidance of attention. We have developed a task, the anti-retrocue task, to separate and examine voluntary and involuntary guidance of attention to internal representations in visual working memory. We show that both voluntary and involuntary factors influence memory performance but do so in distinct ways. Moreover, by tracking gaze biases linked to attentional focusing in memory, we provide direct evidence for an involuntary "retro-capture" effect whereby external stimuli involuntarily trigger the selection of feature-matching internal representations. We show that stimulus-driven and goal-directed influences compete for selection in memory, and that the balance of this competition, as reflected in oculomotor signatures of internal attention, predicts the quality of ensuing memory-guided behavior. Thus, goal-directed and stimulus-driven factors together determine the fate not only of perception, but also of internal representations in working memory.
Stefan Van der Stigchel; Martijn J Schut; Jasper Fabius; Nathan Van der Stoep
In: Journal of Vision, 20 (9), pp. 1–12, 2020.
Whenever we move our eyes, some visual information obtained before a saccade is combined with the visual information obtained after a saccade. Interestingly, saccades rarely land exactly on the saccade target, which may pose a problem for transsaccadic perception as it could affect the quality of postsaccadic input. Recently, however, we showed that transsaccadic feature integration is actually unaffected by deviations of saccade landing points. Possibly, transsaccadic integration remains unaffected because the presaccadic shift of attention follows the intended saccade target and not the actual saccade landing point during regular saccades. Here, we investigated whether saccade landing point errors can in fact alter transsaccadic perception when the presaccadic shift of attention follows the saccade landing point deviation. Given that saccadic adaptation not only changes the saccade vector, but also the presaccadic shift of attention, we combined a feature report paradigm with saccadic adaptation. Observers reported the color of the saccade target, which occasionally changed slightly during a saccade to the target. This task was performed before and after saccadic adaptation. The results showed that, after adaptation, presaccadic color information became less precise and transsaccadic perception had a stronger reliance on the postsaccadic color estimate. Therefore, although previous studies have shown that transsaccadic perception is generally unaffected by saccade landing point deviations, our results reveal that this cannot be considered a general property of the visual system. When presaccadic shifts of attention follow altered saccade landing points, transsaccadic perception is affected, suggesting that transsaccadic feature perception might be dependent on visual spatial attention.
Matteo Valsecchi; Carlos Cassanello; Arvid Herwig; Martin Rolfs; Karl R Gegenfurtner
In: Journal of Vision, 20 (4), pp. 1–15, 2020.
Repeated exposure to a consistent trans-saccadic step in the position of the saccadic target reliably produces a change of saccadic gain, a well-established trans-saccadic motor learning phenomenon known as saccadic adaptation. Trans-saccadic changes can also produce perceptual effects. Specifically, a systematic increase or decrease in the size of the object that is being foveated changes the perceptually equivalent size between fovea and periphery. Previous studies have shown that this recalibration of perceived size can be established within a few dozen trials, persists overnight, and generalizes across hemifields. In the current study, we use a novel adjustment paradigm to characterize both temporally and spatially the learning process that subtends this form of recalibration, and directly compare its properties to those of saccadic adaptation. We observed that sinusoidal oscillations in the amplitude of the trans-saccadic change produce sinusoidal oscillations in the reported peripheral size, with a lag of under 10 trials. This is qualitatively similar to what has been observed in the case of saccadic adaptation. We also tested whether learning is generalized to the mirror location on the opposite hemifield for both size recalibration and saccade adaptation. Here the results were markedly different, showing almost complete generalization for recalibration and no generalization for saccadic adaptation. We conclude that perceptual and visuomotor consequences of trans-saccadic changes rely on learning mechanisms that are distinct but develop on similar time scales.
Sofia Vallila-Rohter; Brendan Czupryna
In: Topics in Language Disorders, 40 (1), pp. 110–123, 2020.
Studies have identified deficits in attention in individuals with aphasia in language and nonlanguage tasks. Attention may play a role in the construction and use of language, as well as in learning and the process of rehabilitation, yet the role of attention on rehabilitation is not fully understood. To improve the understanding of attention and learning in aphasia, this study replicated an experiment that utilized category learning to examine attentional allocation. Ten individuals with aphasia subsequent to left hemisphere stroke and 20 age-matched controls completed a computer-based category learning task while eye gaze data were collected using an eye tracker. Stimulus items comprised 4 features that differed in the reliability with which they predicted category membership (referred to as their diagnosticity). In this study, no differences were observed between individuals with aphasia and control participants on behavioral measures of accuracy and response time, though accuracies overall were lower than those of prior studies examining this task in young adults. Eye gaze data demonstrated that over the course of training, controls and individuals with aphasia learned to reduce the number of looks to the feature of lowest diagnosticity, suggestive of optimized attentional allocation. Eye gaze patterns, however, did not show increased looking or look times to all features of highest diagnosticity, which has been seen in young adults. Older adults and individuals with aphasia may benefit from additional processing time or additional trials during category learning to optimize attention and behavioral accuracy. Findings are relevant to consider in clinical settings where visual stimuli are presented as instructional, supporting, and/or compensatory tools.
Andrius Vabalas; Emma Gowen; Ellen Poliakoff; Alexander J Casson
In: Scientific Reports, 10 , pp. 1–13, 2020.
Autism is a developmental condition currently identified by experts using observation, interview, and questionnaire techniques and primarily assessing social and communication deficits. Motor function and movement imitation are also altered in autism and can be measured more objectively. In this study, motion and eye tracking data from a movement imitation task were combined with supervised machine learning methods to classify 22 autistic and 22 non-autistic adults. The focus was on a reliable machine learning application. We have used nested validation to develop models and further tested the models with an independent data sample. Feature selection was aimed at selection stability to assure result interpretability. Our models predicted diagnosis with 73% accuracy from kinematic features, 70% accuracy from eye movement features and 78% accuracy from combined features. We further explored features which were most important for predictions to better understand movement imitation differences in autism. Consistent with the behavioural results, most discriminative features were from the experimental condition in which non-autistic individuals tended to successfully imitate unusual movement kinematics while autistic individuals tended to fail. Machine learning results show promise that future work could aid in the diagnosis process by providing quantitative tests to supplement current qualitative ones.
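The nested-validation scheme this abstract describes, with an inner loop for model selection and an outer loop for an unbiased accuracy estimate, can be sketched generically. The toy threshold classifier, fold counts, and parameter grid below are illustrative assumptions, not the study's actual pipeline.

```python
import random
from statistics import mean

def k_folds(n, k, rng):
    """Split indices 0..n-1 into k shuffled, roughly equal folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def threshold_classifier(train, param):
    """Toy model: predict 1 if the single feature exceeds `param`.
    (It ignores `train`; a real model would be fitted here.)"""
    return lambda x: 1 if x > param else 0

def accuracy(model, data):
    return mean(1 if model(x) == y else 0 for x, y in data)

def nested_cv(data, params, outer_k=5, inner_k=3, seed=0):
    """Outer folds estimate accuracy; inner folds pick the parameter,
    so held-out test data never influence model selection."""
    rng = random.Random(seed)
    outer_scores = []
    for test_idx in k_folds(len(data), outer_k, rng):
        train = [d for i, d in enumerate(data) if i not in set(test_idx)]
        test = [data[i] for i in test_idx]

        def inner_score(p):
            # cross-validate the candidate parameter on training data only
            scores = []
            for val_idx in k_folds(len(train), inner_k, rng):
                fit = [d for i, d in enumerate(train) if i not in set(val_idx)]
                val = [train[i] for i in val_idx]
                scores.append(accuracy(threshold_classifier(fit, p), val))
            return mean(scores)

        best = max(params, key=inner_score)
        outer_scores.append(accuracy(threshold_classifier(train, best), test))
    return mean(outer_scores)

# perfectly separable toy data: label is 1 when the feature exceeds 0.5
data = [(i / 20, 1 if i / 20 > 0.5 else 0) for i in range(20)]
score = nested_cv(data, params=[0.2, 0.5, 0.8])
```

The key design point is that the inner loop never touches the outer test fold, which is what keeps the outer accuracy estimate honest.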
Sandra Utz; Claus Christian Carbon
In: i-Perception, 11 (4), pp. 1–10, 2020.
van Lier and Koning introduced the more-or-less morphing face illusion: The detection of changes in a constantly morphing face-sequence is strongly suppressed by fast eye saccades triggered by a moving fixation dot. Modulators of this intriguing effect were investigated with systematically varied facial stimuli (e.g., human faces from varying morphological groups, emotional states) and fixation location. Results replicated the overall pattern of moving fixations substantially reducing the sensitivity to detect transitions. Importantly, a deviation from real to perceived changes could only be detected when faces were altered in a way not happening in real world—by changing identity. When emotional states of faces were changed, people were capable of perceiving these changes: A situation very similar to everyday life where we might quickly inspect a face by executing fast eye saccades but where we are still aware of transient changes of the emotional state of the very same person.
Franziska Usée; Arthur M Jacobs; Jana Lüdtke
In: Frontiers in Psychology, 11 , pp. 1–21, 2020.
Reading is known to be a highly complex, emotion-inducing process, usually involving connected and cohesive sequences of sentences and paragraphs. However, most empirical results, especially from studies using eye tracking, are either restricted to simple linguistic materials (e.g., isolated words, single sentences) or disregard valence-driven effects. The present study addressed the need for ecologically valid stimuli by examining the emotion potential of, and reading behavior in, emotional vignettes, often used in applied psychological contexts and discourse comprehension. To allow for a cross-domain comparison in the area of emotion induction, negatively and positively valenced vignettes were constructed based on pre-selected emotional pictures from the Nencki Affective Picture System (NAPS; Marchewka et al., 2014). We collected ratings of perceived valence and arousal for both material groups and recorded eye movements of 42 participants during reading and picture viewing. Linear mixed-effects models were performed to analyze effects of valence (i.e., valence category, valence rating) and stimulus domain (i.e., textual, pictorial) on ratings of perceived valence and arousal, eye movements in reading, and eye movements in picture viewing. Results supported the success of our experimental manipulation: emotionally positive stimuli (i.e., vignettes, pictures) were perceived as more positive and less arousing than emotionally negative ones. The cross-domain comparison indicated that vignettes are able to induce stronger valence effects than their pictorial counterparts; no differences between vignettes and pictures regarding effects on perceived arousal were found. Analyses of eye movements in reading replicated results from experiments using isolated words and sentences: perceived positive text valence attracted shorter reading times than perceived negative valence at both the supralexical and lexical level.
In line with previous findings, no emotion effects on eye movements in picture viewing were found. This is the first eye tracking study reporting superior valence effects for vignettes compared to pictures and valence-specific effects on eye movements in reading at the supralexical level.
Aditya Upadhyayula; Jonathan Flombaum
In: Cognition, 205 , pp. 1–13, 2020.
In many settings “keep your eye on the ball” is good advice. People fixate important objects to obtain high quality information. Perhaps equally often, however, we engage with multiple important, moving, and unpredictable objects. Where should we fixate in these situations, and where do we? Do we for example appropriately center fixations to manage spatial non-uniformity in our visual system? And do we fixate empty space strategically to gain as much information as possible about multiple objects of interest? We explored these issues in the context of Multiple Object Tracking (MOT), wherein observers track several moving objects (targets) within a larger set of moving objects (nontargets), all the objects physically indistinguishable from one another. Among the features that make MOT an interesting paradigm is that it cannot be accommodated by continuous gaze to one important object, because there are multiple such objects in a given trial. Instead, it demands sustained processing of inputs from an entire display and iterated inferences about target versus nontarget identities. MOT therefore demands a strategic interaction between eye movements and cognition: the observer should seek fixation locations that minimize the aggregate probability of confusing any target with any nontarget. Individuals who meet this fixation challenge should perform the task better than those who meet the challenge less effectively. Here we describe a probabilistic model that implements the basic computations needed to do MOT, estimating the positions of targets, predicting their future positions, and inferring correspondences between new inputs and represented targets. The quality of the input received by the model depends on its fixation location at a given moment. We simulated a group of fifty participants who all performed the same MOT trials, with the model adopting each observer's fixation locations in the respective simulations. 
The model reliably predicted individual participant tracking performances and their relative rankings within the cohort. The results suggest that an individual's relative capability in this cognitively demanding task is in part determined by his/her utilization of eye fixations to control the quality and relevance of incoming visual input.
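The two core computations this model performs, position estimates that get noisier with distance from fixation, followed by a correspondence inference matching new inputs to represented targets, can be sketched minimally. The linear noise model and the greedy nearest-neighbour matching rule below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def observe(true_pos, fixation, base_sd=0.2, slope=0.05, rng=random):
    """Noisy position estimate: measurement noise grows with
    eccentricity (distance of the object from the current fixation)."""
    ecc = math.dist(true_pos, fixation)
    sd = base_sd + slope * ecc
    return (rng.gauss(true_pos[0], sd), rng.gauss(true_pos[1], sd))

def match_targets(predicted, observations):
    """Greedy correspondence inference: assign each predicted target
    position to the nearest observation not yet claimed."""
    remaining = list(range(len(observations)))
    assignment = {}
    for t, p in enumerate(predicted):
        j = min(remaining, key=lambda i: math.dist(p, observations[i]))
        assignment[t] = j
        remaining.remove(j)
    return assignment
```

Under this sketch, a fixation placed to keep all targets at low eccentricity yields less noisy observations and therefore fewer correspondence errors, which is the intuition behind the model's account of individual differences in fixation strategy.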
Layla Unger; Olivera Savic; Vladimir M Sloutsky
In: Cognition, 198 , pp. 1–17, 2020.
Our knowledge about the world is represented not merely as a collection of concepts, but as an organized lexico-semantic network in which concepts can be linked by relations, such as “taxonomic” relations between members of the same stable category (e.g., cat and sheep), or association between entities that occur together or in the same context (e.g., sock and foot). To date, accounts of the origins of semantic organization have largely overlooked how sensitivity to statistical regularities ubiquitous in the environment may play a powerful role in shaping semantic development. The goal of the present research was to investigate how associations in the form of statistical regularities with which labels for concepts co-occur in language (e.g., sock and foot) and taxonomic relatedness (e.g., sock and pajamas) shape semantic organization of 4–5-year-olds and adults. To examine these aspects of semantic organization across development, we conducted three experiments examining effects of co-occurrence and taxonomic relatedness on cued recall (Experiment 1), word-picture matching (Experiment 2), and looking dynamics in a Visual World paradigm (Experiment 3). Taken together, the results of the three experiments provide evidence that co-occurrence-based links between concepts manifest in semantic organization from early childhood onward, and are increasingly supplemented by taxonomic links. We discuss these findings in relation to theories of semantic development.
Irem Undeger; Renée M Visser; Andreas Olsson
In: Cerebral Cortex, 30 (10), pp. 5410–5419, 2020.
Attributing intentions to others' actions is important for learning to avoid their potentially harmful consequences. Here, we used functional magnetic resonance imaging multivariate pattern analysis to investigate how the brain integrates information about others' intentions with the aversive outcome of their actions. In an interactive aversive learning task, participants (n = 33) were scanned while watching two alleged coparticipants (confederates), one making choices intentionally and the other unintentionally, leading to aversive (a mild shock) or safe (no shock) outcomes to the participant. We assessed the trial-by-trial changes in participants' neural activation patterns related to observing the coparticipants and experiencing the outcome of their choices. Participants reported a higher number of shocks, more discomfort, and more anger to shocks given by the intentional player. Intentionality enhanced responses to aversive actions in the insula, anterior cingulate cortex, inferior frontal gyrus, dorsal medial prefrontal cortex, and the anterior superior temporal sulcus. Our findings indicate that neural pattern similarities index the integration of social and threat information across the cortex.
Anastasia Ulicheva; Hannah Harvey; Mark Aronoff; Kathleen Rastle
In: Cognition, 195 , pp. 103810, 2020.
Substantial research has been undertaken to understand the relationship between spelling and sound, but we know little about the relationship between spelling and meaning in alphabetic writing systems. We present a computational analysis of English writing in which we develop new constructs to describe this relationship. Diagnosticity captures the amount of meaningful information in a given spelling, whereas specificity estimates the degree of dispersion of this meaning across different spellings for a particular sound sequence. Using these two constructs, we demonstrate that particular suffix spellings tend to be reserved for particular meaningful functions. We then show across three paradigms (nonword classification, spelling, and eye tracking during sentence reading) that this form of regularity between spelling and meaning influences the behaviour of skilled readers, and that the degree of this behavioural sensitivity mirrors the strength of spelling-to-meaning regularities in the writing system. We close by arguing that English spelling may have become fractionated such that the high degree of spelling-sound inconsistency maximises the transmission of meaningful information.
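As an illustration only (these are hypothetical toy formulations, not the authors' published measures), constructs of this kind are naturally expressed as concentration indices over label distributions, e.g., one minus normalized Shannon entropy:

```python
import math
from collections import Counter

def concentration(labels):
    """One minus normalized Shannon entropy of the label distribution:
    1.0 when all tokens share one label, 0.0 when labels are uniform."""
    counts = Counter(labels)
    n = sum(counts.values())
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return 1.0 - entropy / max_entropy

def diagnosticity(functions_for_spelling):
    """Toy diagnosticity: how reliably one spelling signals one
    meaningful function across its tokens."""
    return concentration(functions_for_spelling)

def specificity(spellings_for_sound):
    """Toy specificity: how little a sound sequence's meaning is
    dispersed across alternative spellings (higher = less dispersed)."""
    return concentration(spellings_for_sound)
```

A suffix spelling whose tokens nearly always serve one grammatical function would score near 1.0 on this toy diagnosticity index, while a spelling used for many unrelated functions would score near 0.0.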
Liis Uiga; Catherine M Capio; Donghyun Ryu; William R Young; Mark R Wilson; Thomson W L Wong; Andy C Y Tse; Rich S W Masters
In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, 75 (2), pp. 282–292, 2020.
Objectives: The aim of this study was to examine the association between conscious monitoring and control of movements (i.e., movement-specific reinvestment) and visuomotor control during walking by older adults. Method: The Movement-Specific Reinvestment Scale (MSRS) was administered to 92 community-dwelling older adults, aged 65-81 years, who were required to walk along a 4.8-m walkway and step on the middle of a target as accurately as possible. Participants' movement kinematics and gaze behavior were measured during approach to the target and when stepping on it. Results: High scores on the MSRS were associated with prolonged stance and double support times during approach to the stepping target, and less accurate foot placement when stepping on the target. No associations between MSRS and gaze behavior were observed. Discussion: Older adults with a high propensity for movement-specific reinvestment seem to need more time to "plan" future stepping movements, yet show worse stepping accuracy than older adults with a low propensity for movement-specific reinvestment. Future research should examine whether older adults with a higher propensity for reinvestment are more likely to display movement errors that lead to falling.
Alexandra Ţurcan; Hannah Howman; Ruth Filik
In: Journal of Experimental Psychology: Learning Memory and Cognition, 46 (10), pp. 1966–1976, 2020.
This article addresses a current theoretical debate between modular and interactive accounts of sarcasm processing, by investigating the role of context (specifically, knowing that a character has been sarcastic before) in the comprehension of a sarcastic remark. An eye-tracking experiment was conducted in which participants were asked to read texts that introduced a character as being either sarcastic or not and ended in either a literal or an unfamiliar sarcastic remark. The results indicated that when the character was previously literal, a subsequent sarcastic remark was more difficult to process than its literal counterpart. However, when the context was supportive of the sarcastic interpretation (i.e., the character was known to be sarcastic), subsequent sarcastic remarks were as easy to read as literal equivalents, which would support the predictions of interactive accounts. Importantly, this effect was not preceded by a main effect of literality, which constitutes evidence against the predictions of modular accounts.
Leslie Tricoche; Johan Ferrand-Verdejo; Denis Pélisson; Martine Meunier
In: Frontiers in Behavioral Neuroscience, 13 , pp. 1–13, 2020.
“Social facilitation” refers to the enhancement or impairment of performance engendered by the mere presence of others. It has been demonstrated for a diversity of behaviors. This study assessed whether it also concerns attention and eye movements and, if so, which decision-making mechanisms it affects. Human volunteers were tested in three different tasks (saccades, visual search, and continuous performance) either alone or in the presence of a familiar peer. The results failed to reveal any significant peer influence on the visual search and continuous performance tasks. For saccades, by contrast, they showed a negative or positive peer influence depending on the complexity of the testing protocol. Pro- and anti-saccades were both inhibited when pseudorandomly mixed, and both facilitated when performed separately. Peer presence impaired or improved reaction times, i.e., the speed to initiate the saccade, as well as peak velocity, i.e., the driving force moving the eye toward the target. Effect sizes were large, with Cohen's d-values ranging from 0.50 to 0.95 for reaction times (RTs). Analyzing RT distributions using the LATER (Linear Approach to Threshold with Ergodic Rate) model revealed that social inhibition of pro- and anti-saccades in the complex protocol was associated with a significant increase in the rate of rise. The present demonstration that the simple presence of a familiar peer can inhibit or facilitate saccades depending on task difficulty strengthens a growing body of evidence showing social modulations of eye movements and attention processes. The present lack of effect on visual search and continuous performance tasks contrasts with peer presence effects reported earlier using similar tasks, and future studies are needed to determine whether it is due to an intermediate level of difficulty maximizing individual variability.
Together with an earlier study of the social inhibition of anti-saccades also using the LATER model, which showed an increase of the threshold, the present increase of the rate of rise suggests that peer presence can influence both the top-down and bottom-up attention-related processes guiding the decision to move the eyes.
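The LATER model invoked in this abstract has a very compact core: on each trial a decision signal rises linearly from baseline to a fixed threshold at a rate drawn anew from a normal distribution, so RT = threshold / rate. A minimal simulation of that relation is sketched below; all parameter values are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def later_rts(mu, sigma, theta=1.0, n=10_000):
    """Simulate the LATER model: on each trial a decision signal rises
    linearly to threshold `theta` at a rate drawn from a normal
    distribution (mean `mu`, sd `sigma`), giving RT = theta / rate.
    Trials with non-positive rates (no decision) are discarded."""
    rates = rng.normal(mu, sigma, n)
    rates = rates[rates > 0]
    return theta / rates  # reaction times, in seconds

# Raising the mean rate of rise shifts the whole RT distribution earlier;
# raising the threshold would shift it later (illustrative parameters only).
slow = later_rts(mu=4.0, sigma=1.0)
fast = later_rts(mu=6.0, sigma=1.0)
print(np.median(slow), np.median(fast))
```

Fitting the model to observed RT distributions (as the study does) then amounts to asking which parameter, the rate of rise or the threshold, best accounts for a condition difference.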
Thomas Treal; Philip L Jackson; Aurore Meugnot
In: Computers in Human Behavior, 112 , pp. 1–10, 2020.
Computer-generated characters (avatars) are increasingly used to study emotion and social cognition in humans, as they offer a highly controllable, yet potentially interactive, experimental set-up. However, avatars often fall short of conveying credible emotions. This study explored the interaction between body motion and facial expression on the perceived intensity and believability of an avatar's pain expression. Adults were shown videos of an age-matched avatar displaying facial expressions of pain while the body was static or while the trunk was oscillating at varied amplitudes, representing either human dynamic equilibrium (idle motion) or trunk rocking expressing sustained pain (pain behavior). Pupil size was recorded during the task as an objective marker of emotional reaction. Results showed that the avatar's pain was perceived to be more intense and more believable in the presence of both idle motion and trunk rocking than in the static condition. Pupils dilated more when the facial pain expression was combined with trunk rocking than in the static and idle conditions. This work demonstrated the critical role of idle motion when creating dynamic pain-expressing avatars, as well as the potentiating effect of body motion, when combined with facial expression, on the perception of an avatar's pain.
David A Tovar; Jacob A Westerberg; Michele A Cox; Kacie Dougherty; Thomas A Carlson; Mark T Wallace; Alexander Maier
In: Frontiers in Systems Neuroscience, 14 , pp. 1–14, 2020.
Most of the mammalian neocortex shares a highly similar anatomical structure, consisting of a granular cell layer between superficial and deep layers. Even so, different cortical areas process different information. Taken together, this suggests that cortex features a canonical functional microcircuit that supports region-specific information processing. For example, the primate primary visual cortex (V1) combines the two eyes' signals, extracts stimulus orientation, and integrates contextual information such as visual stimulation history. These processes co-occur during the same laminar stimulation sequence that is triggered by the onset of visual stimuli. Yet, we still know little regarding the laminar processing differences that are specific to each of these types of stimulus information. Univariate analysis techniques have provided great insight by examining one electrode at a time or by studying average responses across multiple electrodes. Here we focus on multivariate statistics to examine response patterns across electrodes instead. Specifically, we applied multivariate pattern analysis (MVPA) to linear multielectrode array recordings of laminar spiking responses to decode information regarding the eye-of-origin, stimulus orientation, and stimulus repetition. MVPA differs from conventional univariate approaches in that it examines patterns of neural activity across simultaneously recorded electrode sites. We were curious whether this added dimensionality could reveal neural processes on the population level that are challenging to detect when measuring brain activity without the context of neighboring recording sites. We found that eye-of-origin information was decodable for the entire duration of stimulus presentation, but diminished in the deepest layers of V1. Conversely, orientation information was transient and equally pronounced along all layers.
More importantly, using time-resolved MVPA, we were able to evaluate laminar response properties beyond those yielded by univariate analyses. Specifically, we performed a time generalization analysis by training a classifier at one point of the neural response and testing its performance throughout the remaining period of stimulation. Using this technique, we demonstrate repeating (reverberating) patterns of neural activity that have not previously been observed using standard univariate approaches.
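The time generalization analysis described here, training a classifier at one time point and testing it at every other, can be illustrated on simulated data. The toy dataset and the nearest-class-mean classifier below are stand-ins for exposition, not the authors' recordings or decoder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "laminar" data: trials x channels x time points, two stimulus classes.
n_trials, n_chan, n_time = 100, 16, 20
X = rng.normal(0.0, 1.0, (n_trials, n_chan, n_time))
y = rng.integers(0, 2, n_trials)
pattern = rng.normal(0.0, 1.0, n_chan)
X[y == 1, :, 5:15] += pattern[None, :, None]  # class signal only at times 5-14

train, test = np.arange(50), np.arange(50, 100)

def decode(t_train, t_test):
    """Nearest-class-mean decoding: fit class centroids at `t_train`,
    classify held-out trials at `t_test`."""
    m0 = X[train][y[train] == 0, :, t_train].mean(axis=0)
    m1 = X[train][y[train] == 1, :, t_train].mean(axis=0)
    Z = X[test][:, :, t_test]
    pred = np.linalg.norm(Z - m1, axis=1) < np.linalg.norm(Z - m0, axis=1)
    return (pred == y[test]).mean()

# Temporal generalization matrix: G[i, j] = accuracy when training at time i
# and testing at time j. A square of high accuracy spanning times 5-14 marks
# a stable, generalizing pattern; reverberating activity would instead show
# off-diagonal bands of above-chance accuracy.
G = np.array([[decode(ti, tj) for tj in range(n_time)] for ti in range(n_time)])
```

Because the simulated pattern is static, the classifier trained at one signal time transfers to all other signal times, which is exactly the signature the matrix makes visible.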
Fatemeh Torabi Asr; Vera Demberg
In: Discourse Processes, 57 (4), pp. 376–399, 2020.
Connectives can facilitate the processing of discourse relations by helping comprehenders to infer the intended coherence relation holding between two text spans. Previous experimental studies have focused on pairs of connectives that are very different from one another to be able to compare and formalize the distinguishing effects of these particles in discourse comprehension. In this article, we compare two connectives, but and although, which overlap in terms of the relations they can signal. We demonstrate in a set of carefully controlled studies that while a connective can be a marker of several discourse relations, it can have a specific fine-grained biasing effect on linguistic inferences and that this bias can be derived (or predicted) from the connectives' distribution of relations found in production data. The effects that we find speak to the ambiguity of discourse connectives, in general, and the different functions of but and although, in particular. These effects cannot be explained within the earlier accounts of discourse connectives, which propose that each connective has a core meaning or processing instruction. Instead, we here lay out a probabilistic account of connective meaning and interpretation, which is based on the distribution of connectives in production and is supported by our experimental findings.
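The distributional idea behind this account, that a connective's interpretive bias tracks its relative frequency across coherence relations in production, can be made concrete with a toy calculation. The counts below are invented for illustration; they are not the paper's corpus figures.

```python
# Hypothetical production counts: how often each connective is used to
# mark each coherence relation in a corpus (numbers are invented).
counts = {
    "but":      {"contrast": 60, "concession": 40},
    "although": {"contrast": 20, "concession": 80},
}

def relation_posterior(connective):
    """P(relation | connective), estimated from relative production
    frequencies: the connective's fine-grained biasing effect on
    the comprehender's inference."""
    row = counts[connective]
    total = sum(row.values())
    return {relation: n / total for relation, n in row.items()}

print(relation_posterior("but"))       # biased toward contrast
print(relation_posterior("although"))  # biased toward concession
```

On this view both connectives can signal both relations, but each shifts the comprehender's expectations in proportion to its production distribution.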
Josef Toon; Anuenue Kukona
In: Cognitive Science, 44 (1), pp. 1–22, 2020.
Two visual world experiments investigated the activation of semantically related concepts during the processing of environmental sounds and spoken words. Participants heard environmental sounds such as barking or spoken words such as “puppy” while viewing visual arrays with objects such as a bone (semantically related competitor) and candle (unrelated distractor). In Experiment 1, a puppy (target) was also included in the visual array; in Experiment 2, it was not. During both types of auditory stimuli, competitors were fixated significantly more than distractors, supporting the coactivation of semantically related concepts in both cases; comparisons of the two types of auditory stimuli also revealed significantly larger effects with environmental sounds than spoken words. We discuss implications of these results for theories of semantic knowledge.
Kristen M Tooley
Contrasting mechanistic accounts of the lexical boost
In: Memory and Cognition, 48 (5), pp. 815–838, 2020.
While many recent studies focused on abstract syntactic priming effects have implicated an error-based learning mechanism, there is little consensus on the most likely mechanism underlying the lexical boost. The current study aimed at refining understanding of the mechanism that leads to this priming effect. In two eye-tracking during reading experiments, the nature of the lexical boost was investigated by comparing predictions from competing accounts in terms of decay and the requirement of structural overlap between primes and targets. Experiment 1 revealed facilitation of target structure processing for shorter relative to longer primes, when there were fewer intervening words between prime and target verbs. In Experiment 2, significant lexically boosted priming effects were observed, but only when the target structure also appeared in the prime, and not when the prime had a different structure but a high degree of lexical overlap with the target. Overall, these results are most consistent with a short-lived mechanistic account rather than an error-based learning account of the lexical boost. Furthermore, these results align with dual-mechanism accounts of syntactic priming whereby different mechanisms are claimed to produce abstract syntactic priming effects and the lexical boost.
Xiuli Xiuhong Tong; Wei Shen; Zhao Li; Mengdi Xu; Liping Pan; Shelley Xiuli Tong
In: Quarterly Journal of Experimental Psychology, 73 (4), pp. 617–628, 2020.
Combining an eye-tracking technique with a revised visual world paradigm, this study examined how the positional, phonological, and semantic information of radicals is activated in visual Chinese character recognition. Participants' eye movements were tracked while they looked at four types of invented logographic characters: those containing a semantic radical in a legal or an illegal position, and those containing a phonetic radical in a legal or an illegal position. These logographic characters were presented simultaneously with either a sound cue (e.g., /qiao2/) or a meaning cue (e.g., a picture of a bridge). Participants appeared to allocate more visual attention towards radicals in legal, rather than illegal, positions. In addition, more eye fixations occurred on phonetic, rather than on semantic, radicals across both sound- and meaning-cued conditions, indicating participants' strong preference for phonetic over semantic radicals in visual character processing. These results underscore the universal phonology principle in processing non-alphabetic Chinese logographic characters.
In: International Journal of Trend in Research and Development, 7 (3), pp. 146–148, 2020.
Taking the table lamp as its research object, this study combined eye movement analysis with a subjective questionnaire survey to explore college students' aesthetic preferences for table lamp shapes, with the aim of providing design references for enterprises and fellow designers. An SR Research EyeLink head-mounted eye tracker was used to record the eye movement characteristics of 20 participants while they viewed pictures of different table lamp shapes. The results show that the modern minimalist style is the most popular, followed by the European and Chinese styles.
In: Laterality, pp. 1–25, 2020.
Previous research suggests that the right visual field advantage on the lexical decision task occurs independent of the visual quality of stimuli [Chiarello, C., Senehi, J., & Soulier, M. (1986). Viewing conditions and hemisphere asymmetry for the lexical decision. Neuropsychologia, 24(4), 521–529]. However, previous studies examining these effects have had methodological limitations that were addressed and controlled for in the present study. Participants performed a divided visual field, lexical decision task for words that varied in size (Experiment 1) and visibility (Experiment 2). Results showed a quality by visual field interaction effect. In both experiments, response times were faster for targets presented to the right visual field in the high quality (i.e., large font, high visibility) conditions; however, visual quality resulted in no differences for targets presented to the left visual field. Furthermore, this quality by visual field interaction effect was only observed when the target was a word. These results suggest that the left hemisphere advantage for lexical decision depends on the perceptual quality of targets, consistent with an early stage of processing account of hemispheric asymmetry during lexical decision. Findings are discussed within the context of word recognition and decision-based models.
Simon P Tiffin-Richards; Sascha Schroeder
In: Journal of Experimental Psychology: Learning Memory and Cognition, 46 (9), pp. 1701–1713, 2020.
Words are seldom read in isolation. Predicting or anticipating upcoming words in a text, based on the context in which they are read, is an important aspect of efficient language processing. In sentence reading, words with congruent preceding context have been shown to be processed faster than words read in neutral or incongruous contexts. The onset of contextual facilitation effects is found very early in the first-pass-reading eye-movement and electroencephalogram (EEG) measures of skilled adult readers. However, the effect of contextual facilitation on children's eye movements during reading remains largely unexplored. To fill this gap, we tracked children's and adults' eye movements while reading stories with embedded words that were either strongly or weakly related to a clear narrative theme. Our central finding is that children showed late contextual facilitation effects during text reading as opposed to both early and late facilitation effects found in skilled adult readers. Contextual constraint had a similar effect on children's and adults' initiation of regressive saccades, whereas children invested more time in rereading relative to adults after encountering weakly contextually constrained words. Quantile regression analyses revealed that contextual facilitation effects had an early onset in adults' first-pass reading, whereas they only had a late onset in children's gaze durations.
Matsya R Thulasiram; Ryan W Langridge; Hana H Abbas; Jonathan J Marotta
In: Experimental Brain Research, 238 (6), pp. 1433–1440, 2020.
Previous investigations have uncovered a strong visual bias toward the index finger when reaching and grasping stationary or horizontally moving targets. The present research sought to explore whether the index finger or thumb would serve as a significant focus for gaze in tasks involving vertically translating targets. Participants executed right-handed reach-to-grasp movements towards upward or downward moving 2-D targets on a computer screen. When the target first appeared, participants made anticipatory fixations in the direction of the eventual movement path (i.e. well above upwardly moving targets or well below downwardly moving targets) and upon movement onset, fixations shifted toward the leading edge of the target. For upward moving targets, fixations remained toward the leading edge upon reach onset, whereas for downward moving targets, fixations shifted toward the centre of the target. The same central fixation location was observed at the time of grasp for all targets. Furthermore, for downwardly moving targets, the placement of the thumb appears to have influenced fixation location in conjunction with, not replacement of, the influence of the index finger. These findings are indicative of the increasingly relevant role of the thumb in mediating reaching and grasping downwardly moving targets.
Mervyn G Thomas; Gail D E Maconachie; Cris S Constantinescu; Wai Man Chan; Brenda Barry; Michael Hisaund; Viral Sheth; Helen J Kuht; Rob A Dineen; Sreemathi Harieaswar; Elizabeth C Engle; Irene Gottlob
In: British Journal of Ophthalmology, 104 (4), pp. 547–550, 2020.
Background The genetic basis of monocular elevation deficiency (MED) is unclear. It has previously been considered to arise due to a supranuclear abnormality. Methods Two brothers with MED were referred to Leicester Royal Infirmary, UK from the local opticians. Their father had bilateral ptosis and was unable to elevate both eyes, consistent with the diagnosis of congenital fibrosis of extraocular muscles (CFEOM). Candidate sequencing was performed in all family members. Results Both affected siblings (aged 7 and 12 years) were unable to elevate the right eye. Their father had bilateral ptosis, left esotropia and bilateral limitation of elevation. Chin up head posture was present in the older sibling and the father. Bell's phenomenon and vertical rotational vestibulo-ocular reflex were absent in the right eye for both children. Mild bilateral facial nerve palsy was present in the older sibling and the father. Both siblings had slight difficulty with tandem gait. MRI revealed hypoplastic oculomotor nerve. Left anterior insular focal cortical dysplasia was seen in the older sibling. Sequencing of TUBB3 revealed a novel heterozygous variant (c.1263G>C, p.E421D) segregating with the phenotype. This residue is in the C-terminal H12 α-helix of β-tubulin and is one of three putative kinesin binding sites. Conclusion We show that familial MED can arise from a TUBB3 variant and could be considered a limited form of CFEOM. Neurological features such as mild facial palsy and cortical malformations can be present in patients with MED. Thus, in individuals with congenital MED, consideration may be made for TUBB3 mutation screening.
Elizabeth H X Thomas; Maria Steffens; Christopher Harms; Susan L Rossell; Caroline Gurvich; Ulrich Ettinger
In: Psychophysiology, 58 , pp. 1–14, 2020.
Deficits on saccade tasks, particularly antisaccade performance, have been reliably reported in schizophrenia. However, less evidence is available on saccade performance in relation to schizotypy, a personality constellation harboring risk for schizophrenia. Here, we report a large empirical study of the associations of schizotypy and neuroticism with antisaccade and prosaccade performance (Study I). Additionally, we carried out meta-analyses of the association between schizotypy and antisaccade error rate (Study II). In Study I
Philip Thierfelder; Gillian Wigglesworth; Gladys Tang
In: Cognition, 201 , pp. 1–14, 2020.
Research has found that deaf readers unconsciously activate sign translations of written words while reading. However, the ways in which different sign phonological parameters associated with these sign translations tie into reading processes have received little attention in the literature. In this study on Chinese reading, we used a parafoveal preview paradigm to investigate how four different types of sign phonologically related preview affect reading processes in adult deaf signers of Hong Kong Sign Language (HKSL). The four types of sign phonologically related preview-target pair were: (1) pairs with HKSL translations that overlapped in three parameters—handshape, location, and movement; (2) pairs that overlapped in only handshape and location; (3) pairs that only overlapped in handshape and movement; and (4) pairs that only overlapped in location and movement. Results showed that the handshape parameter was of particular importance as only sign translation pairs that had handshape among their overlapping sign phonological parameters led to early sign activation. Furthermore, we found that, compared to control previews, deaf readers took longer to read targets when the sign translation previews overlapped with targets in either handshape and movement or handshape, movement, and location. In contrast, fixation times on targets were shorter when previews and targets overlapped in location and any single additional parameter—either handshape or movement. These results indicate that the phonological parameters of handshape, location, and movement are activated via orthography during Chinese reading and can have different effects on parafoveal processing in deaf signers of HKSL.
Philip Thierfelder; Gillian Wigglesworth; Gladys Tang
In: Quarterly Journal of Experimental Psychology, 73 (12), pp. 2217–2235, 2020.
We used an error disruption paradigm to investigate how deaf readers from Hong Kong, who had varying levels of reading fluency, use orthographic, phonological, and mouth-shape-based (i.e., “visemic”) codes during Chinese sentence reading while also examining the role of contextual information in facilitating lexical retrieval and integration. Participants had their eye movements recorded as they silently read Chinese sentences containing orthographic, homophonic, homovisemic, or unrelated errors. Sentences varied in terms of how much contextual information was available leading up to the target word. Fixation time analyses revealed that in early fixation measures, deaf readers activated word meanings primarily through orthographic representations. However, in contexts where targets were highly predictable, fixation times on homophonic errors decreased relative to those on unrelated errors, suggesting that higher levels of contextual predictability facilitated early phonological activation. In the measure of total reading time, results indicated that deaf readers activated word meanings primarily through orthographic representations, but they also appeared to activate word meanings through visemic representations in late error recovery processes. Examining the influence of reading fluency level on error recovery processes, we found that, in comparison to deaf readers with lower reading fluency levels, those with higher reading fluency levels could more quickly resolve homophonic and orthographic errors in the measures of gaze duration and total reading time, respectively. We conclude with a discussion of the theoretical implications of these findings as they relate to the lexical quality hypothesis and the dual-route cascaded model of reading by deaf adults.
Masahiko Terao; Shin'ya Nishida
In: i-Perception, 11 (3), pp. 1–13, 2020.
Many studies have investigated various effects of smooth pursuit on visual motion processing, especially the effects related to the additional retinal shifts produced by eye movement. In this article, we show that the perception of apparent motion during smooth pursuit is determined by the interelement proximity in retinal coordinates and also by the proximity in objective world coordinates. In Experiment 1, we investigated the perceived direction of the two-frame apparent motion of a square-wave grating with various displacement sizes under fixation and pursuit viewing conditions. The retinal and objective displacements between the two frames agreed with each other under the fixation condition. However, the displacements differed by 180 degrees in terms of phase shift, under the pursuit condition. The proportions of the reported motion direction between the two viewing conditions did not coincide when they were plotted as a function of either the retinal displacement or of the objective displacement; however, they did coincide when plotted as a function of a mixture of the two. The result from Experiment 2 showed that the perceived jump size of the apparent motion was also dependent on both retinal and objective displacements. Our findings suggest that the detection of the apparent motion during smooth pursuit considers the retinal proximity and also the objective proximity. This mechanism may assist with the selection of a motion path that is more likely to occur in the real world and, therefore, be useful for ensuring perceptual stability during smooth pursuit.
Yi Yang Teoh; Ziqing Yao; William A Cunningham; Cendri A Hutcherson
In: Nature Communications, 11 , pp. 1–13, 2020.
Dual-process models of altruistic choice assume that automatic responses give way to deliberation over time, and are a popular way to conceptualize how people make generous choices and why those choices might change under time pressure. However, these models have led to conflicting interpretations of behaviour and underlying psychological dynamics. Here, we propose that flexible, goal-directed deployment of attention towards information priorities provides a more parsimonious account of altruistic choice dynamics. We demonstrate that time pressure tends to produce early gaze-biases towards a person's own outcomes, and that individual differences in this bias explain how individuals' generosity changes under time pressure. Our gaze-informed drift-diffusion model incorporating moment-to-moment eye-gaze further reveals that underlying social preferences both drive attention, and interact with it to shape generosity under time pressure. These findings help explain existing inconsistencies in the field by emphasizing the role of dynamic attention-allocation during altruistic choice.
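The core of a gaze-informed drift-diffusion account is that momentary gaze steers the drift of an evidence accumulator, so an early bias toward one's own outcomes can tilt choices even when preferences are symmetric. The sketch below is a toy version of that idea; the payoffs, gaze probabilities, and thresholds are illustrative, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaze_ddm_trial(v_self, v_other, p_gaze_self, a=1.0, dt=0.002, noise=1.0):
    """One trial of a toy gaze-weighted drift-diffusion process: at each
    step the accumulator drifts toward whichever attribute is attended,
    the chooser's own payoff (probability `p_gaze_self`) or the other
    person's payoff. Crossing +a is a selfish choice, -a a generous one."""
    x, t = 0.0, 0.0
    while abs(x) < a and t < 5.0:
        v = v_self if rng.random() < p_gaze_self else -v_other
        x += v * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("selfish" if x >= a else "generous"), t

def p_selfish(p_gaze_self, n=200):
    """Proportion of selfish choices over `n` simulated trials with
    symmetric payoffs, so any choice bias comes from gaze alone."""
    choices = [gaze_ddm_trial(2.0, 2.0, p_gaze_self)[0] for _ in range(n)]
    return choices.count("selfish") / n

# A gaze bias toward one's own outcomes tips choices selfish even though
# the payoffs themselves are symmetric (all numbers are illustrative).
hi_bias = p_selfish(0.8)
lo_bias = p_selfish(0.2)
print(hi_bias, lo_bias)
```

Time pressure can then be modeled as lowering the threshold `a`, which gives the early gaze bias proportionally more influence over which boundary is reached first.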
Clément Tarrano; Nicolas Wattiez; Cécile Delorme; Eavan M McGovern; Vanessa Brochard; Stéphane Thobois; Christine Tranchant; David Grabli; Bertrand Degos; Jean Christophe Corvol; Jean Michel Pedespan; Pierre Krystkoviak; Jean Luc Houeto; Adrian Degardin; Luc Defebvre; Romain Valabrègue; Marie Vidailhet; Pierre Pouget; Emmanuel Roze; Yulia Worbe
Visual sensory processing is altered in myoclonus dystonia
In: Movement Disorders, 35 (1), pp. 151–160, 2020.
Background: Abnormal sensory processing, including temporal discrimination threshold, has been described in various dystonic syndromes. Objective: To investigate visual sensory processing in DYT-SGCE and identify its structural correlates. Methods: DYT-SGCE patients without DBS (DYT-SGCE-non-DBS) and with DBS (DYT-SGCE-DBS) were compared to healthy volunteers in three tasks: a temporal discrimination threshold, a movement orientation discrimination, and movement speed discrimination. Response times attributed to accumulation of sensory visual information were computationally modeled, with the μ parameter indicating the sensory mean growth rate. We also identified the structural correlates of behavioral performance for temporal discrimination threshold. Results: Twenty-four DYT-SGCE-non-DBS, 13 DYT-SGCE-DBS, and 25 healthy volunteers were included in the study. In DYT-SGCE-DBS, the discrimination threshold was higher in the temporal discrimination threshold (P = 0.024), with no difference among the groups in other tasks. The sensory mean growth rate (μ) was lower in DYT-SGCE in all three tasks (P < 0.01), reflecting a slower rate of sensory accumulation for the visual information in these patients independent of DBS. Structural imaging analysis showed a thicker left primary visual cortex (P = 0.001) in DYT-SGCE-non-DBS compared to healthy volunteers, which also correlated with lower μ in temporal discrimination threshold (P = 0.029). In DYT-SGCE-non-DBS, myoclonus severity also correlated with a lower μ in the temporal discrimination threshold task (P = 0.048) and with thicker V1 on the left (P = 0.022). Conclusion: In DYT-SGCE, we showed an alteration of the visual sensory processing in the temporal discrimination threshold that correlated with myoclonus severity and structural changes in the primary visual cortex.
Benjamin Tari; James J Vanhie; Glen R Belfry; Kevin J Shoemaker; Matthew Heath
In: Journal of Neurophysiology, 124 (3), pp. 930–940, 2020.
A single bout of aerobic exercise improves executive function; however, the mechanism for the improvement remains unclear. One proposal asserts that an exercise-mediated increase in cerebral blood flow (CBF) enhances the efficiency of executive-related cortical structures. To examine this, participants completed separate 10-min sessions of moderate- to heavy-intensity aerobic exercise, a hypercapnic environment (i.e., 5% CO2), and a nonexercise and nonhypercapnic control condition. The hypercapnic condition was included because it produces an increase in CBF independent of metabolic demands. An estimate of CBF was achieved via transcranial Doppler ultrasound and near-infrared spectroscopy that provided measures of middle cerebral artery blood velocity (BV) and deoxygenated hemoglobin (HHb), respectively. Exercise intensity was adjusted to match participant-specific changes in BV and HHb associated with the hypercapnic condition. Executive function was assessed before and after each session via antisaccades (i.e., saccade mirror-symmetrical to a target) because the task is mediated via the same executive networks that demonstrate task-dependent modulation following single and chronic bouts of aerobic exercise. Results showed that hypercapnic and exercise conditions were associated with comparable BV and HHb changes, whereas the control condition did not produce a change in either metric. In terms of antisaccade performance, the exercise and hypercapnic, but not control, conditions demonstrated improved postcondition reaction times (RT), and the magnitude of the hypercapnic and exercise-based increase in estimated CBF was reliably related to the postcondition improvement in RT. Accordingly, results evince that an increase in CBF represents a candidate mechanism for a postexercise improvement in executive function. 
NEW & NOTEWORTHY Single-bout aerobic exercise “boosts” executive function, and increased cerebral blood flow (CBF) has been proposed as a mechanism for the benefit. In this study, participants completed 10 min of aerobic exercise and 10 min of inhaling a hypercapnic gas, a manipulation known to increase CBF independently of metabolic demands. Both exercise and hypercapnic conditions improved executive function for at least 20 min. Accordingly, an increase in CBF is a candidate mechanism for the postexercise improvement in executive function.
Benjamin Tari; Luc Tremblay; Matthew Heath
In: Experimental Brain Research, pp. 1–8, 2020.
A remote visual distractor increases saccade reaction time (RT) to a visual target and may reflect the time required to resolve conflict between target- and distractor-related information within a common retinotopic representation in the superior colliculus (SC) (i.e., the remote distractor effect: RDE). Notably, because the SC serves as a sensorimotor interface it is possible that the RDE may be associated with the pairing of an acoustic distractor with a visual target; that is, the conflict related to saccade generation signals may be sensory-independent. To address that issue, we employed a traditional RDE experiment involving a visual target and visual proximal and remote distractors (Experiment 1) and an experiment wherein a visual target was presented with acoustic proximal and remote distractors (Experiment 2). As well, Experiments 1 and 2 employed no-distractor trials. Experiment 1 RTs elicited a reliable RDE, whereas Experiment 2 RTs for proximal and remote distractors were shorter than their no distractor counterparts. Accordingly, findings demonstrate that the RDE is sensory specific and arises from conflicting visual signals within a common retinotopic map. As well, Experiment 2 findings indicate that an acoustic distractor supports an intersensory facilitation that optimizes oculomotor planning.
Ömer Daglar Tanrikulu; Andrey Chetverikov; Árni Kristjánsson
In: Journal of Vision, 20 (8), pp. 1–18, 2020.
Observers can learn complex statistical properties of visual ensembles, such as their probability distributions. Even though ensemble encoding is considered critical for peripheral vision, whether observers learn such distributions in the periphery has not been studied. Here, we used a visual search task to investigate how the shape of distractor distributions influences search performance and ensemble encoding in peripheral and central vision. Observers looked for an oddly oriented bar among distractors taken from either uniform or Gaussian orientation distributions with the same mean and range. The search arrays were either presented in the foveal or peripheral visual fields. The repetition and role reversal effects on search times revealed observers' internal model of distractor distributions. Our results showed that the shape of the distractor distribution influenced search times only in foveal, but not in peripheral search. However, role reversal effects revealed that the shape of the distractor distribution could be encoded peripherally depending on the interitem spacing in the search array. Our results suggest that, although peripheral vision might rely heavily on summary statistical representations of feature distributions, it can also encode information about the distributions themselves.
L Tankelevitch; E Spaak; M F S Rushworth; M G Stokes
In: Journal of Neuroscience, 40 (26), pp. 5033–5050, 2020.
Studies of selective attention typically consider the role of task goals or physical salience, but recent work has shown that attention can also be captured by previously reward-associated stimuli, even when these are no longer relevant (i.e., value-driven attentional capture; VDAC). We used magnetoencephalography (MEG) to investigate how previously reward-associated stimuli are processed, the time-course of reward history effects, and how this relates to the behavioural effects of VDAC. Male and female human participants first completed a reward learning task to establish stimulus-reward associations. Next, we measured attentional capture in a separate task by presenting these stimuli in the absence of reward contingency, and probing their effects on the processing of separate target stimuli presented at different time lags. Using time-resolved multivariate pattern analysis, we found that learned value modulated the spatial selection of previously rewarded stimuli in occipital, inferior temporal, and parietal cortex from ~260 ms after stimulus onset. This value modulation was related to the strength of participants' behavioural VDAC effect and persisted into subsequent target processing. Furthermore, we found a spatially invariant value signal from ~340 ms. Importantly, learned value did not influence the neural discriminability of the previously rewarded stimuli in visual cortical areas. Our results suggest that VDAC is underpinned by learned value signals which modulate spatial selection throughout posterior visual and parietal cortex. We further suggest that VDAC can occur in the absence of changes in early visual cortical processing. Significance Statement: Attention is our ability to focus on relevant information at the expense of irrelevant information. It can be affected by previously learned but currently irrelevant stimulus-reward associations, a phenomenon termed “value-driven attentional capture” (VDAC).
The neural mechanisms underlying VDAC remain unclear. It has been speculated that reward learning induces visual cortical plasticity which modulates early visual processing to capture attention. Although we find that learned value modulates spatial attention in sensory brain areas, an effect which correlates with VDAC, we find no relevant signatures of visual cortical plasticity.
Pengfei Tang; Zhong Yao; Jing Luan; Jie Xiao
In: Behaviour and Information Technology, pp. 1–18, 2020.
Educational informatisation (e.g. e-learning, m-learning, massive open online courses (MOOCs)) has actively increased, leading to a greater focus on the design and development of course management systems. In this study, a research model based on cognitive fit theory and scanpath theory is proposed to investigate how information presentation formats (flow diagram navigation versus menu navigation) of a course management system influence user experience and intention. Performance load (cognitive load and kinematic load) and user perception (perceived usefulness and ease of use) are considered to evaluate user experience. The results of an eye tracking experiment utilised in this research reveal the following. First, information presentation formats can significantly influence user experience of course management systems. Second, flow diagram navigation fits students' tasks better and leads to lower performance load and better user perception. Third, performance load and user perception show significant effects on user satisfaction and thereby affect use intention. These findings deepen our understanding of the importance of information presentation and enrich its theoretical foundation for course management systems development. Practically, these findings provide designers with guidelines on how to improve user experience and increase use intention by varying information presentation formats.
Cheng Tang; Roger Herikstad; Aishwarya Parthasarathy; Camilo Libedinsky; Shih Cheng Yen
In: eLife, 9 , pp. 1–23, 2020.
The lateral prefrontal cortex is involved in the integration of multiple types of information, including working memory and motor preparation. However, it is not known how downstream regions can extract one type of information without interference from the others present in the network. Here, we show that the lateral prefrontal cortex of non-human primates contains two minimally dependent low-dimensional subspaces: one that encodes working memory information, and another that encodes motor preparation information. These subspaces capture all the information about the target in the delay periods, and the information in both subspaces is reduced in error trials. A single population of neurons with mixed selectivity forms both subspaces, but the information is kept largely independent from each other. A bump attractor model with divisive normalization replicates the properties of the neural data. These results provide new insights into neural processing in prefrontal regions.
Ken W S Tan; Chris Scholes; Neil W Roach; Elizabeth M Haris; Paul V McGraw
Impact of microsaccades on visual shape processing
In: Journal of Neurophysiology, 2020.
Sensitivity to subtle changes in the shape of visual objects has been attributed to the existence of global pooling mechanisms that integrate local form information across space. While global pooling is typically demonstrated under steady fixation, other work suggests prolonged fixation can lead to a collapse of global structure. Here we ask whether small ballistic eye movements that naturally occur during periods of fixation affect the global processing of radial frequency (RF) patterns – closed contours created by sinusoidally modulating the radius of a circle. Observers were asked to discriminate the shapes of circular and RF modulated patterns while fixational eye movements were recorded binocularly at 500 Hz. Microsaccades were detected using a velocity-based algorithm, allowing trials to be sorted according to the relative timing of stimulus and microsaccade onset. Results revealed clear peri-saccadic changes in shape discrimination thresholds. Performance was impaired when microsaccades occurred close to stimulus onset, but facilitated when they occurred shortly afterwards. In contrast, global integration of shape was unaffected by the timing of microsaccades. These findings suggest that microsaccades alter the discrimination sensitivity to briefly presented shapes but do not disrupt the spatial pooling of local form signals.
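The velocity-based detection this abstract mentions is commonly implemented in the style of Engbert and Kliegl: a smoothed velocity trace is compared against a median-based, trial-specific threshold. A minimal numpy sketch of that scheme (window sizes, the λ multiplier, and the minimum-duration criterion are illustrative defaults, not the authors' exact settings):

```python
import numpy as np

def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection on gaze traces x, y
    (degrees) sampled at fs Hz. Returns (onset, offset) sample-index
    pairs. Illustrative sketch of an Engbert-and-Kliegl-style detector."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Smoothed velocity from a 5-sample central-difference window.
    vx = fs * (np.roll(x, -2) + np.roll(x, -1) - np.roll(x, 1) - np.roll(x, 2)) / 6.0
    vy = fs * (np.roll(y, -2) + np.roll(y, -1) - np.roll(y, 1) - np.roll(y, 2)) / 6.0
    vx[:2] = vx[-2:] = 0.0   # discard wrap-around samples at the edges
    vy[:2] = vy[-2:] = 0.0
    # Median-based velocity SD gives a noise-robust, trial-specific threshold.
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # Keep runs of supra-threshold samples lasting at least min_samples.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    return events
```

Given the detected onsets, trials can then be binned by the latency between microsaccade onset and stimulus onset, as in the sorting procedure the abstract describes.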
Noam Tal-Perry; Shlomit Yuval-Greenberg
In: Scientific Reports, 10 , pp. 1–9, 2020.
A series of recent studies suggested that eye movements are linked to temporal predictability. In these studies, temporal predictability was manipulated by setting the interval between a cue and a target (foreperiod) to be either fixed or random, in separate blocks. Findings showed that shortly prior to target onset, oculomotor behavior was reduced in the fixed relative to the random condition. This effect was interpreted as reflecting the formation of temporal expectation. However, it is still unknown whether the effect is driven by target-specific temporal orienting (orienting hypothesis), or whether it is a result of a more general and context-dependent state of certainty that participants may experience during blocks with a high predictability rate (certainty hypothesis). In the following study we examined this question by dissociating certainty and orienting. In each trial, a temporal cue (fixation color change) was followed by a slightly tilted grating-patch, on which participants made a tilt-discrimination decision. The distribution of the foreperiods was varied between blocks to be either fully fixed (100% of trials with the same foreperiod), mostly fixed (80% of trials with one foreperiod and 20% with another) or random (five foreperiods in equal probabilities). The two hypotheses led to different prediction models which were tested against the experimental data. Results were highly consistent with the orienting hypothesis and inconsistent with the certainty hypothesis, supporting the link between oculomotor inhibition and temporal orienting and its validity as a marker for target-specific expectations in future studies. Competing Interest Statement: The authors have declared no competing interest.
Travis N Talcott; Nicholas Gaspelin
Prior target locations attract overt attention during search
In: Cognition, 201 , pp. 1–17, 2020.
A key question about visual search is how we guide attention to objects that are relevant to our goals. Traditionally, theories of visual attention have emphasized guidance by explicit knowledge of the target feature. But there is growing evidence that attention is also implicitly guided by prior experience. One such example is the phenomenon of location priming, whereby attention is automatically allocated to the location where the search target was previously found. Problematically, much of the previous evidence for location priming has been disputed because it relies exclusively on manual response time, making unclear the relative contribution of location priming on attentional allocation and later cognitive processes. The current study addressed this issue by measuring shifts of gaze, which provide a more direct measure of attentional orienting. In five experiments, first saccades were strongly attracted to the target location from the previous trial, even though this location was not predictive of the target location on the current trial. This oculomotor priming effect was so strong that it effectively disrupted attentional guidance to the search target. The results suggest that memories of recent experience can powerfully influence attentional allocation.
Tobias Talanow; Anna Maria Kasparbauer; Julia V Lippold; Bernd Weber; Ulrich Ettinger
In: Brain Imaging and Behavior, 14 (1), pp. 72–88, 2020.
Although research on goal-directed, proactive inhibitory control (IC) and stimulus-driven, reactive IC is growing, no previous study has compared proactive IC in conditions of uncertainty with regard to upcoming inhibition to conditions of certain upcoming IC. Therefore, we investigated effects of certainty and uncertainty on behavior and blood oxygen level dependent (BOLD) signal in proactive and reactive IC. In two studies, healthy adults performed saccadic go/no-go and prosaccade/antisaccade tasks. The certainty manipulation had a highly significant behavioral effect in both studies, with inhibitory control being more successful under certain than uncertain conditions on both tasks (p ≤ 0.001). Saccadic go responses were significantly less efficient under conditions of uncertainty than certain responding (p < 0.001). Event-related functional magnetic resonance imaging (fMRI) (one study) revealed a dissociation of certainty- and uncertainty-related proactive inhibitory neural correlates in the go/no-go task, with lateral and medial prefrontal and occipital cortex showing stronger deactivations during uncertainty than during certain upcoming inhibition, and lateral parietal cortex being activated more strongly during certain upcoming inhibition than uncertainty or certain upcoming responding. In the antisaccade task, proactive BOLD effects arose due to stronger deactivations in uncertain response conditions of both tasks and before certain prosaccades than antisaccades. Reactive inhibition-related BOLD increases occurred in inferior parietal cortex and supramarginal gyrus (SMG) in the go/no-go task only. Proactive IC may imply focusing attention on the external environment for encoding salient or alerting events as well as inhibitory mechanisms that reduce potentially distracting neural processes. SMG and inferior parietal cortex may play an important role in both proactive and reactive IC of saccades.
Alan Taitz; Florencia M Assaneo; Diego E Shalom; Marcos A Trevisan
In: Scientific Reports, 10 , pp. 1–10, 2020.
Silent reading is a cognitive operation that produces verbal content with no vocal output. One relevant question is the extent to which this verbal content is processed as overt speech in the brain. To address this, we acquired sound, eye trajectories and lips' dynamics during the reading of consonant-consonant-vowel (CCV) combinations which are infrequent in the language. We found that the duration of the first fixations on the CCVs during silent reading correlate with the duration of the transitions between consonants when the CCVs are actually uttered. With the aid of an articulatory model of the vocal system, we show that transitions measure the articulatory effort required to produce the CCVs. This means that first fixations during silent reading are lengthened when the CCVs require a greater laryngeal and/or articulatory effort to be pronounced. Our results support that a speech motor code is used for the recognition of infrequent text strings during silent reading.
Jérôme Tagu; Árni Kristjánsson
In: Quarterly Journal of Experimental Psychology, pp. 1–17, 2020.
A vast amount of research has been carried out to understand how humans visually search for targets in their environment. However, this research has typically involved search for one unique target among several distractors. Although this line of research has yielded important insights into the basic characteristics of how humans explore their visual environment, this may not be a very realistic model for everyday visual orientation. Recently, researchers have used multi-target displays to assess orienting in the visual field. Eye movements in such tasks are, however, less well understood. Here, we investigated oculomotor dynamics during four visual foraging tasks differing in target crypticity (feature-based foraging vs. conjunction-based foraging) and the effector type being used for target selection (mouse foraging vs. gaze foraging). Our results show that both target crypticity and effector type affect foraging strategies. These changes are reflected in oculomotor dynamics, feature foraging being associated with focal exploration (long fixations and short-amplitude saccades), and conjunction foraging with ambient exploration (short fixations and high-amplitude saccades). These results provide important new information for existing accounts of visual attention and oculomotor control and emphasise the usefulness of foraging tasks for a better understanding of how humans orient in the visual environment.
Jérôme Tagu; Karine Doré-Mazars; Dorine Vergilino-Perez
In: Experimental Brain Research, 238 (2), pp. 411–425, 2020.
Hemispheric specialization refers to the fact that cerebral hemispheres are not equivalent and that cognitive processes are lateralized in the brain. Although the potential links between handedness and the left hemisphere specialization for language have been widely studied, little attention has been paid to other motor preferences, such as eye dominance, that also are lateralized in the brain. For example, saccadic accuracy is higher in the hemifield contralateral to the dominant eye compared to the ipsilateral hemifield. Saccade accuracy is, however, also known to be sensitive to other functional asymmetries, such as the lateralization of visuo-spatial attention in the right hemisphere of the brain. Using a global effect paradigm in three different saccade latency ranges, we here propose to use saccade accuracy as an indicator of visual functional asymmetries. We show that for the shortest latencies, saccade accuracy is higher in the left than in the right visual hemifield, which could be due to the lateralization of visuo-spatial attention in the right hemisphere. For the longest latencies, however, saccade accuracy is higher toward the right than the left hemifield, probably due to the lateralization of local and global processing in the left and right hemispheres, respectively. These results could have a major impact on studies designed to measure the degree of lateralization of individuals. We here discuss both the theoretical and clinical contributions of these results.
Davide Tabarelli; Christian Keitel; Joachim Gross; Daniel Baldauf
In: NeuroImage, 208 , pp. 1–18, 2020.
Successfully interpreting and navigating our natural visual environment requires us to track its dynamics constantly. Additionally, we focus our attention on behaviorally relevant stimuli to enhance their neural processing. Little is known, however, about how sustained attention affects the ongoing tracking of stimuli with rich natural temporal dynamics. Here, we used MRI-informed source reconstructions of magnetoencephalography (MEG) data to map to what extent various cortical areas track concurrent continuous quasi-rhythmic visual stimulation. Further, we tested how top-down visuo-spatial attention influences this tracking process. Our bilaterally presented quasi-rhythmic stimuli covered a dynamic range of 4–20 Hz, subdivided into three distinct bands. As an experimental control, we also included strictly rhythmic stimulation (10 vs 12 Hz). Using a spectral measure of brain-stimulus coupling, we were able to track the neural processing of left vs. right stimuli independently, even while fluctuating within the same frequency range. The fidelity of neural tracking depended on the stimulation frequencies, decreasing for higher frequency bands. Both attended and non-attended stimuli were tracked beyond early visual cortices, in ventral and dorsal streams depending on the stimulus frequency. In general, tracking improved with the deployment of visuo-spatial attention to the stimulus location. Our results provide new insights into how human visual cortices process concurrent dynamic stimuli and provide a potential mechanism – namely increasing the temporal precision of tracking – for boosting the neural representation of attended input.
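The "spectral measure of brain-stimulus coupling" used here is, in spirit, a coherence-like statistic between the stimulus time course and the reconstructed cortical signal. A minimal numpy sketch of one such measure (a trial-averaged magnitude-squared coherence at a single frequency; the function name and the exact cross-spectral form are illustrative assumptions, not the authors' measure):

```python
import numpy as np

def spectral_coupling(stim_trials, brain_trials, fs, freq):
    """Trial-averaged magnitude-squared coherence between a stimulus time
    course and a neural signal at one target frequency. stim_trials and
    brain_trials are (n_trials, n_samples) arrays sampled at fs Hz.
    Illustrative stand-in for a brain-stimulus coupling measure."""
    stim_trials = np.asarray(stim_trials, dtype=float)
    brain_trials = np.asarray(brain_trials, dtype=float)
    n = stim_trials.shape[1]
    k = int(round(freq * n / fs))          # FFT bin of the target frequency
    X = np.fft.rfft(stim_trials, axis=1)[:, k]
    Y = np.fft.rfft(brain_trials, axis=1)[:, k]
    cross = np.abs(np.sum(X * np.conj(Y))) ** 2
    return cross / (np.sum(np.abs(X) ** 2) * np.sum(np.abs(Y) ** 2))
```

Values near 1 indicate that the neural signal reliably follows the stimulus at that frequency across trials; for a brain signal unrelated to the stimulus, the statistic decays toward 1/n_trials.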
Martin Szinte; David Aagten-Murphy; Donatas Jonikaitis; Luca Wollenberg; Heiner Deubel
Sounds are remapped across saccades Journal Article
In: Scientific Reports, 10 , pp. 1–11, 2020.
To achieve visual space constancy, our brain remaps eye-centered projections of visual objects across saccades. Here, we measured saccade trajectory curvature following the presentation of visual, auditory, and audiovisual distractors in a double-step saccade task to investigate if this stability mechanism also accounts for localized sounds. We found that saccade trajectories systematically curved away from the position at which either a light or a sound was presented, suggesting that both modalities are represented in eye-centered oculomotor centers. Importantly, the same effect was observed when the distractor preceded the execution of the first saccade. These results suggest that oculomotor centers keep track of visual, auditory and audiovisual objects by remapping their eye-centered representations across saccades. Furthermore, they argue for the existence of a supra-modal map which keeps track of multi-sensory object locations across our movements to create an impression of space constancy.
Georgia F Symons; Meaghan Clough; William T O'Brien; Joel Ernest; Sabrina Salberg; Daniel Costello; Mujun Sun; Rhys D Brady; Stuart J McDonald; David K Wright; Owen White; Larry Abel; Terence J O'Brien; Jesse Mccullough; Roxanne Aniceto; I-Hsuan Lin; Denes V Agoston; Joanne Fielding; Richelle Mychasiuk; Sandy R Shultz
In: Journal of Concussion, 4 , pp. 1–11, 2020.
Mild brain injuries are frequent in athletes engaging in collision sports and have been linked to a range of long-term neurological abnormalities. There is a need to identify how these potential abnormalities manifest using objective measures; determine whether changes are due to concussive and/or sub-concussive injuries; and examine how biological sex affects outcomes. This study investigated cognitive, cellular, and molecular biomarkers in male and female amateur Australian footballers (i.e. Australia's most participated collision sport). 95 Australian footballers (69 males, 26 females), both with and without a history of concussion, as well as 49 control athletes (28 males, 21 females) with no history of brain trauma or participation in collision sports were recruited to the study. Ocular motor assessment was used to examine cognitive function. Telomere length, a biomarker of cellular senescence and neurological health, was examined in saliva. Serum levels of tau, phosphorylated tau, neurofilament light chain, and 4-hydroxynonenal were used as markers to assess axonal injury and oxidative stress. Australian footballers had reduced telomere length (p = 0.031) and increased serum protein levels of 4-hydroxynonenal (p = 0.001), tau (p = 0.007), and phosphorylated tau (p = 0.036). These findings were independent of concussion history and sex. No significant ocular motor differences were found. Taken together, these findings suggest that engagement in collision sports, regardless of sex or a history of concussion, is associated with shortened telomeres, axonal injury, and oxidative stress. These saliva- and serum-based biomarkers may be useful to monitor neurological injury in collision sport athletes.
Hong Mei Sun; Guo En Yin
The influence of theoretical knowledge on similarity judgment Journal Article
In: Cognitive Processing, 21 (1), pp. 23–32, 2020.
The similarity of features between two entities has been assumed to be the essential factor for distinguishing those entities across a variety of cognitive acts; however, the mechanism underlying similarity processing remains unclear. The perceptual-based account suggests that similarity judgment is based on perceptual features of entities, whereas other accounts assume that similarity judgment relies heavily on one's prior knowledge of the entities. In Experiment 1, we explored the influence of theoretical knowledge on similarity judgment when perceptual features conflict with conceptual information. In Experiment 2, we examined whether categorization tasks further influence the results of the similarity judgment. Our results showed that theoretical knowledge contributed to the overall similarity of the stimuli. In addition, whether or not a categorization task was carried out did not further influence the similarity judgment process. Overall, these findings suggest that conceptual information is more important than perceptual features when judging the similarity of two entities; if sufficient theoretical knowledge is available, the criteria for carrying out a categorization task may be consistent with those for the similarity judgment.
Emma Sumner; Samuel B Hutton; Elisabeth L Hill
In: Advances in Neurodevelopmental Disorders, pp. 1–12, 2020.
Objectives: Sensorimotor difficulties are often reported in autism spectrum disorders (ASD). Visual and motor skills are linked in that the processing of visual information can help in guiding motor movements. The present study investigated oculomotor skill and its relation to general motor skill in ASD by providing a comprehensive assessment of oculomotor control. Methods: Fifty children (25 ASD; 25 typically developing [TD]), aged 7–10 years, completed a motor assessment (comprising fine and gross motor tasks) and oculomotor battery (comprising fixation, smooth pursuit, prosaccade and antisaccade tasks). Results: No group differences were found for antisaccade errors, nor saccade latencies in prosaccade and antisaccade tasks, but increased saccade amplitude variability was observed in children with ASD, suggesting a reduced consistency in saccade accuracy. Children with ASD also demonstrated poorer fixation stability than their peers and spent less time in pursuit of a moving target. Motor skill was not correlated with saccade amplitude variability. However, regression analyses revealed that motor skill (and not diagnosis) accounted for variance in fixation performance and fast smooth pursuit. Conclusions: The findings highlight the importance of considering oculomotor paradigms to inform the functional impact of neuropathologies in ASD and also assessing the presentation of co-occurring difficulties to further our understanding of ASD. Avenues for future research are suggested.
Juan Su; Guoen Yin; Xuejun Bai; Guoli Yan; Stoyan Kurtev; Kayleigh L Warrington; Victoria A McGowan; Simon P Liversedge; Kevin B Paterson
In: Attention, Perception, and Psychophysics, 82 (4), pp. 1566–1572, 2020.
Readers can acquire useful information from only a narrow region of text around each fixation (the perceptual span), which extends asymmetrically in the direction of reading. Studies with bilingual readers have additionally shown that this asymmetry reverses with changes in horizontal reading direction. However, little is known about the perceptual span's flexibility following orthogonal (vertical vs. horizontal) changes in reading direction, because of the scarcity of vertical writing systems and because changes in reading direction often are confounded with text orientation. Accordingly, we assessed effects in a language (Mongolian) that avoids this confound, in which text is conventionally read vertically but can also be read horizontally. Sentences were presented normally or in a gaze-contingent paradigm in which a restricted region of text was displayed normally around each fixation and other text was degraded. The perceptual span effects on reading rates were similar in both reading directions. These findings therefore provide a unique (nonconfounded) demonstration of perceptual span flexibility.
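The gaze-contingent restriction described here is the classic moving-window paradigm: on every fixation, only text within a window around the fixated character is shown normally, and everything outside it is degraded. A minimal sketch of the display logic (the mask character, span sizes, and the choice to preserve spaces are illustrative assumptions, not the study's exact parameters):

```python
def moving_window(text, fixation, left_span, right_span, mask="x"):
    """Render one gaze-contingent display: characters within the window
    around the fixated character index are shown normally; all other
    characters are degraded (replaced by a mask character), with spaces
    preserved so word boundaries remain visible."""
    out = []
    for i, ch in enumerate(text):
        in_window = fixation - left_span <= i <= fixation + right_span
        out.append(ch if in_window or ch == " " else mask)
    return "".join(out)

# Fixating the 'u' of "quick" with a rightward-asymmetric window:
# moving_window("the quick brown fox", 5, 2, 6) -> "xxx quick brxxx xxx"
```

An asymmetric window (larger right_span) mimics the rightward-extending perceptual span of left-to-right reading; swapping the spans, or indexing along a vertical line of text, mimics the reversals and orthogonal reading directions studied here.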
Roger W Strong; George A Alvarez
In: Journal of Vision, 20 (8), pp. 1–21, 2020.
Attentional tracking and working memory tasks are often performed better when targets are divided evenly between the left and right visual hemifields, rather than contained within a single hemifield (Alvarez & Cavanagh, 2005; Delvenne, 2005). However, this bilateral field advantage does not provide conclusive evidence of hemifield-specific control of attention and working memory, because it can be explained solely from hemifield-limited spatial interference at early stages of visual processing. If control of attention and working memory is specific to each hemifield, maintaining target information should become more difficult as targets move between the two hemifields. Observers in the present study maintained targets that moved either within or between the left and right hemifields, using either attention (Experiment 1) or working memory (Experiment 2). Maintaining spatial information was more difficult when target items moved between the hemifields compared with when target items moved within their original hemifields, consistent with hemifield-specific control of spatial attention and working memory. However, this pattern was not found for maintaining identity information (e.g., color) in working memory (Experiment 3). Together, these results provide evidence that control of spatial attention and working memory is specific to each hemifield, and that hemifield-specific control is a unique signature of spatial processing.
Carla M Strickland-Hughes; Kaitlyn E Dillon; Robin L West; Natalie C Ebner
In: Cognition, 200 , pp. 1–11, 2020.
Successfully learning and remembering people's names is a challenging memory task for adults of all ages, and this already difficult social skill worsens with age, even in normative “healthy” aging. The own-age bias, a type of in-group bias, could affect the difficulty of this task across age. Past evidence supports an own-age bias in face processing, wherein individuals preferably attend to and better recognize faces of members of their own age group. However, the own-age bias has not been examined previously in relation to explicit face-name associative encoding and subsequent name retrieval, despite the importance of this social skill. Using behavioral and eye-tracking methodology, this cross-sectional research investigated the own-age bias for name memory (recognition and recall) and visual attention (fixation count, looking time, and normalized pupil size) when learning novel face-name pairs. Younger adult (n = 90) and older adult (n = 84) participants completed a face-name association task that tested name memory for younger and older female and male faces, while eye-tracking data were recorded. The visual attention variables taken from the eye-tracking data showed significant age-of-face effects at both encoding and retrieval, but no overall own-age bias in attention. Both younger and older participants showed an own-age bias in name recall with better memory for names paired with faces of their own age, as compared to other-aged faces. This cross-over effect for name memory suggests that memory for information with high social and affective relevance to the individual may be relatively spared in aging, despite overall age-related declines in memory performance.
Rhonda J N Stopyn; Thomas Hadjistavropoulos; Jeff Loucks
In: Journal of Nonverbal Behavior, pp. 1–22, 2020.
Nonverbal pain cues, such as facial expressions, are useful in the systematic assessment of pain in people with dementia who have severe limitations in their ability to communicate. Nonetheless, the extent to which observers rely on specific pain-related facial responses (e.g., eye movements, frowning) when judging pain remains unclear. Observers viewed three types of videos of patients expressing pain (younger patients, older patients without dementia, older patients with dementia) while wearing an eye tracker device that recorded their viewing behaviors. They provided pain ratings for each patient in the videos. These observers assigned higher pain ratings to older adults compared to younger adults and the highest pain ratings to patients with dementia. Pain ratings assigned to younger adults showed greater correspondence to objectively coded facial reactions compared to older adults. The correspondence of observer ratings was not affected by the cognitive status of target patients as there were no differences between the ratings assigned to older adults with and without dementia. Observers' percentage of total dwell time (amount of time that an observer glances or fixates within a defined visual area of interest) across specific facial areas did not predict the correspondence of observers' pain ratings to objective coding of facial responses. Our results demonstrate that patient characteristics such as age and cognitive status impact the pain decoding process by observers when viewing facial expressions of pain in others.
Susanne Stoll; Nonie J Finlayson; Samuel D Schwarzkopf
In: NeuroImage, 220 , pp. 1–17, 2020.
Our visual system readily groups dynamic fragmented input into global objects. How the brain represents global object perception remains however unclear. To address this question, we recorded brain responses using functional magnetic resonance imaging whilst observers viewed a dynamic bistable stimulus that could either be perceived globally (i.e., as a grouped and coherently moving shape) or locally (i.e., as ungrouped and incoherently moving elements). We further estimated population receptive fields and used these to back-project the brain activity measured during stimulus perception into visual space via a searchlight procedure. Global perception resulted in universal suppression of responses in lower visual cortex accompanied by wide-spread enhancement in higher object-sensitive cortex. However, follow-up experiments indicated that higher object-sensitive cortex is suppressed if global perception lacks shape grouping, and that grouping-related suppression can be diffusely confined to stimulated sites and accompanied by background enhancement once stimulus size is reduced. These results speak to a non-generic involvement of higher object-sensitive cortex in perceptual grouping and point to an enhancement-suppression mechanism mediating the perception of figure and ground.
Hrvoje Stojić; Jacob L Orquin; Peter Dayan; Raymond J Dolan; Maarten Speekenbrink
Uncertainty in learning, choice, and visual fixation
In: Proceedings of the National Academy of Sciences, 117 (6), pp. 3291–3300, 2020.
Uncertainty plays a critical role in reinforcement learning and decision making. However, exactly how it influences behavior remains unclear. Multi-armed bandit tasks offer an ideal test bed, since computational tools such as approximate Kalman filters can closely characterize the interplay between trial-by-trial values, uncertainty, learning, and choice. To gain additional insight into learning and choice processes, we obtained data from subjects' overt allocation of gaze. The estimated value and estimation uncertainty of options influenced what subjects looked at before choosing; these same quantities also influenced choice, as additionally did fixation itself. A momentary measure of uncertainty in the form of absolute prediction errors determined how long participants looked at the obtained outcomes. These findings affirm the importance of uncertainty in multiple facets of behavior and help delineate its effects on decision making.
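The Kalman-filter machinery mentioned in this abstract is standard in bandit modeling; the following is a minimal illustrative sketch of a per-arm value update plus an uncertainty-sensitive choice rule, not the authors' implementation (all parameter values and function names are assumptions):

```python
def kalman_update(mean, var, reward, obs_noise=16.0):
    """One Kalman-filter update for a single bandit arm.

    mean/var: prior estimate of the arm's value and its uncertainty.
    Returns the posterior estimate after observing `reward`.
    """
    k = var / (var + obs_noise)            # Kalman gain: weight the reward more when uncertain
    new_mean = mean + k * (reward - mean)  # shift the estimate toward the prediction error
    new_var = (1 - k) * var                # uncertainty shrinks with every observation
    return new_mean, new_var


def choose(means, variances, beta=1.0):
    """Pick the arm maximizing estimated value plus an uncertainty bonus (UCB-style)."""
    scores = [m + beta * v ** 0.5 for m, v in zip(means, variances)]
    return scores.index(max(scores))
```

With equal prior and observation noise, a single reward of 10 moves the estimate halfway (gain 0.5), and the uncertainty bonus makes the less-sampled arm the preferred choice when values are tied.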
Brenda M Stoesz; Jessica Sutton
In: Canadian Journal of Learning and Technology, 46 (2), pp. 1–21, 2020.
Research has demonstrated that students' learning outcomes and motivation to learn are influenced by the visual design of learning technologies (e.g., learning management systems or LMS). One aspect of LMS design that has not been thoroughly investigated is visual complexity. In two experiments, postsecondary students rated the visual complexity of images of LMS after exposure durations of 50-500 ms. Perceptions of complexity were positively correlated across timed conditions and working memory capacity was associated with complexity ratings. Low-level image metrics were also found to predict perceptions of the LMS complexity. Results demonstrate the importance of the visual design of learning technologies and suggest that additional research on the impact of LMS visual complexity on learning outcomes is warranted.
Lena Stock; Charlotte Krüger-Zechlin; Zain Deeb; Lars Timmermann; Josefine Waldthaler
In: Frontiers in Aging Neuroscience, 12 , pp. 1–9, 2020.
Background: Patients with Parkinson's disease (PD) show eye movement abnormalities and frequently complain about difficulties in reading. So far, it is unclear whether basal ganglia dysfunction or cognitive impairment has a greater impact on eye movements during reading. Objective: To analyze eye movement behavior during a natural reading task with respect to cognitive state and dopaminergic therapy in PD and healthy controls. Methods: Eye movements of 59 PD patients and 29 age- and education-matched healthy controls were recorded during mute, self-paced reading of a text. Twenty-five cognitively normal PD patients performed the task additionally in off medication state. Clinical assessment included a comprehensive neuropsychological test battery and the motor section of the MDS-Unified Parkinson's Disease Rating Scale (MDS-UPDRS). Results: PD-mild cognitive impairment (MCI) was diagnosed in 21 patients. Reading speed was significantly reduced in PD-MCI compared to healthy controls and PD patients without MCI due to higher numbers of progressive saccades. Cognitively intact PD patients showed no significant alterations of reading speed or eye movement pattern during reading. The fixation duration tended to be prolonged in PD compared to healthy controls and decreased significantly after levodopa intake. Scores for executive functions, attention, and language correlated with reading speed in the PD group. Conclusion: The present study is the first to reveal (1) reduced reading speed with altered reading pattern in PD with MCI and (2) a relevant impact of levodopa on fixation duration during reading in PD. Further research is needed to determine whether therapeutic interventions, e.g., levodopa or neuropsychological training, improve the subjective reading experience for patients with PD.
Gabriel M Stine; Ariel Zylberberg; Jochen Ditterich; Michael N Shadlen
In: eLife, 9 , pp. 1–28, 2020.
Many tasks used to study decision-making encourage subjects to integrate evidence over time. Such tasks are useful to understand how the brain operates on multiple samples of information over prolonged timescales, but only if subjects actually integrate evidence to form their decisions. We explored the behavioral observations that corroborate evidence-integration in a number of task-designs. Several commonly accepted signs of integration were also predicted by non-integration strategies. Furthermore, an integration model could fit data generated by non-integration models. We identified the features of non-integration models that allowed them to mimic integration and used these insights to design a motion discrimination task that disentangled the models. In human subjects performing the task, we falsified a non-integration strategy in each and confirmed prolonged integration in all but one subject. The findings illustrate the difficulty of identifying a decision-maker's strategy and support solutions to achieve this goal.
Emma E M Stewart; Carolin Hübner; Alexander C Schütz
In: Journal of Vision, 20 (10), pp. 1–25, 2020.
Humans do not notice small displacements to objects that occur during saccades, termed saccadic suppression of displacement (SSD), and this effect is reduced when a blank is introduced between the pre- and postsaccadic stimulus (Bridgeman, Hendry, & Stark, 1975; Deubel, Schneider, & Bridgeman, 1996). While these effects have been studied extensively in adults, it is unclear how these phenomena are characterized in children. A potentially related mechanism, saccadic suppression of contrast sensitivity—a prerequisite to achieve a stable percept—is stronger for children (Bruno, Brambati, Perani, & Morrone, 2006). However, the evidence for how transsaccadic stimulus displacements may be suppressed or integrated is mixed. While children can integrate basic visual feature information from an early age, they cannot integrate multisensory information (Gori, Viva, Sandini, & Burr, 2008; Nardini, Jones, Bedford, & Braddick, 2008), suggesting a failure in the ability to integrate more complex sensory information. We tested children 7 to 12 years old and adults 19 to 23 years old on their ability to perceive intrasaccadic stimulus displacements, with and without a postsaccadic blank. Results showed that children had stronger SSD than adults and a larger blanking effect. Children also had larger undershoots and more variability in their initial saccade endpoints, indicating greater intrinsic uncertainty, and they were faster in executing corrective saccades to account for these errors. Together, these results suggest that children may have a greater internal expectation or prediction of saccade error than adults; thus, the stronger SSD in children may be due to higher intrinsic uncertainty in target localization or saccade execution.
Emily R Stern; Carina Brown; Molly Ludlow; Rebbia Shahab; Katherine Collins; Alexis Lieval; Russell H Tobe; Dan V Iosifescu; Katherine E Burdick; Lazar Fleysher
In: Human Brain Mapping, 41 (6), pp. 1611–1625, 2020.
Obsessive–compulsive disorder (OCD) is highly heterogeneous. While obsessions often involve fear of harm, many patients report uncomfortable sensations and/or urges that drive repetitive behaviors in the absence of a specific fear. Prior work suggests that urges in OCD may be similar to everyday “urges-for-action” (UFA) such as the urge to blink, swallow, or scratch, but very little work has investigated the pathophysiology underlying urges in OCD. In the current study, we used an urge-to-blink approach to model sensory-based urges that could be experimentally elicited and compared across patients and controls using the same task stimuli. OCD patients and controls suppressed eye blinking over a period of 60 s, alternating with free blinking blocks, while brain activity was measured using functional magnetic resonance imaging. OCD patients showed significantly increased activation in several regions during the early phase of eyeblink suppression (first 30 s), including mid-cingulate, insula, striatum, parietal cortex, and occipital cortex, with lingering group differences in parietal and occipital regions during late eyeblink suppression (last 30 s). There were no differences in brain activation during free blinking blocks, and no conditions where OCD patients showed reduced activation compared to controls. In an exploratory analysis of blink counts performed in a subset of subjects, OCD patients were less successful than controls in suppressing blinks. These data indicate that OCD patients exhibit altered brain function and behavior when experiencing and suppressing the urge to blink, raising the possibility that the disorder is associated with a general abnormality in the UFA system that could ultimately be targeted by future treatments.
Madeleine Y Stepper; Bettina Rolke; Elisabeth Hein
In: Attention, Perception, and Psychophysics, 82 (3), pp. 1024–1037, 2020.
Our visual system is able to establish associations between corresponding images across space and time and to maintain the identity of objects, even though the information our retina receives is ambiguous. It has been shown that lower-level factors, such as spatiotemporal proximity, can affect this correspondence problem. In addition, higher-level factors, such as semantic knowledge, can influence correspondence, suggesting that correspondence might also be solved at a higher object-based level of processing, which could be mediated by attention. To test this hypothesis, we instructed participants to voluntarily direct their attention to individual elements in the Ternus display. In this ambiguous apparent motion display, three elements are aligned next to each other and shifted by one position from one frame to the next. This shift can be either perceived as all elements moving together (group motion) or as one element jumping across the others (element motion). We created a competitive Ternus display, in which the color of the elements was manipulated in such a way that the percept was biased toward element motion for one color and toward group motion for another color. If correspondence can be established at an object-based level, attending toward one of the biased elements should increase the likelihood that this element determines the correspondence solution and thereby that the biased motion is perceived. Our results were in line with this hypothesis, providing support for an object-based correspondence process that is based on a one-to-one mapping of the most similar elements mediated via attention.
Simon R Steinkamp; Simone Vossel; Gereon R Fink; Ralph Weidner
In: Human Brain Mapping, 41 (13), pp. 3765–3780, 2020.
Hemispatial neglect, after unilateral lesions to parietal brain areas, is characterized by an inability to respond to unexpected stimuli in contralesional space. As the visual field's horizontal meridian is most severely affected, the brain networks controlling visuospatial processes might be tuned explicitly to this axis. We investigated such a potential directional tuning in the dorsal and ventral frontoparietal attention networks, with a particular focus on attentional reorientation. We used an orientation-discrimination task where a spatial precue indicated the target position with 80% validity. Healthy participants (n = 29) performed this task in two runs and were required to (re-)orient attention either only along the horizontal or the vertical meridian, while fMRI and behavioral measures were recorded. By using a general linear model for behavioral and fMRI data, dynamic causal modeling for effective connectivity, and other predictive approaches, we found strong statistical evidence for a reorientation effect for horizontal and vertical runs. However, neither neural nor behavioral measures differed between vertical and horizontal reorienting. Moreover, models from one run successfully predicted the cueing condition in the respective other run. Our results suggest that activations in the dorsal and ventral attention networks represent higher-order cognitive processes related to spatial attentional (re-)orientating that are independent of directional tuning and that unilateral attention deficits after brain damage are based on disrupted interactions between higher-level attention networks and sensory areas.
Maximilian Stefani; Marian Sauter; Wolfgang Mack
In: Attention, Perception, and Psychophysics, 82 (2), pp. 637–654, 2020.
In a circular visual search paradigm, the disengagement of attention is automatically delayed when a fixated but irrelevant center item shares features of the target item. Additionally, if mismatching letters are presented on these items, response times (RTs) are slowed further, while matching letters evoke faster responses (Wright, Boot, & Brockmole, 2015a). This is interpreted as a functional reason of the delayed disengagement effect in terms of deeper processing of the fixation item. The purpose of the present study was the generalization of these findings to unfamiliar symbols and to linear instead of circular layouts. Experiments 1 and 2 replicated the functional delayed disengagement effect with letters and symbols. In Experiment 3, the search layout was changed from circular to linear and only saccades from left to right had to be performed. We did not find supportive data for the proposed functional nature of the effect. In Experiments 4 and 5, we tested whether the unidirectional saccade decision, a potential blurring by adjacent items, or a lack of statistical power was the cause of the diminished effects in Experiment 3. With increased sample sizes, the delayed disengagement effect as well as its functional underpinning were now observed consistently. Taken together, our results support prior assumptions that delayed disengagement effects are functionally rooted in a deeper processing of the fixation items. They also generalize to unfamiliar symbols and linear display layouts.
In: Journal of Experimental Psychology: Human Perception and Performance, 46 (11), pp. 1235–1251, 2020.
The time a reader's eyes spend on a word is influenced by visual (e.g., contrast) as well as lexical (e.g., word frequency) and contextual (e.g., predictability) factors. Well-known visual word recognition models predict that visual and higher-level manipulations may have interactive effects on early eye movement measures, because of cascaded processing between levels. Previous eye movement studies provide conflicting evidence as to whether they do, possibly because of inconsistent manipulations or limited statistical power. In the present study, 2 highly powered experiments used sentences in which a target word's frequency and predictability were factorially manipulated. Experiment 1 also manipulated visual contrast, and Experiment 2 also manipulated font difficulty. Robust main effects of all manipulations were evident in both experiments. In Experiment 1, interactions between the effect of contrast and the effects of frequency and predictability were numerically small and statistically unreliable in both early (word skipping, first fixation duration) and later (gaze duration, go-past time) measures. In Experiment 2, frequency and predictability did demonstrate convincing interactions with font difficulty, but only in the later measures, possibly implicating a checking mechanism. We conclude that although the predicted interactions in early eye movement measures may exist, they are sufficiently weak that they are difficult to detect even in large eye movement experiments.
Maurryce D Starks; Anna Shafer-Skelton; Michela Paradiso; Aleix M Martinez; Julie D Golomb
In: Journal of Experimental Psychology: Human Perception and Performance, 46 (12), pp. 1538–1552, 2020.
The “spatial congruency bias” is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1–2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location.
Lauren Spinner; Lindsey Cameron; Heather J Ferguson
In: Journal of Experimental Child Psychology, 199 , pp. 1–29, 2020.
Differences between children's and parents' implicit and explicit gender stereotypes were investigated in two experiments. For the first time, the visual world paradigm compared parents' and 7-8-year-old children's looking preferences toward masculine- and feminine-typed objects stereotypically associated with a story character's gender. In Experiment 1 participants listened to sentences that included a verb that inferred intentional action with an object (e.g., “Lilly/Alexander will play with the toy”), and in Experiment 2 the verb was replaced with a neutral verb (e.g., “Lilly/Alexander will trip over the toy”). A questionnaire assessed participants' explicit gender stereotype endorsement (and knowledge [Experiment 2]) of children's toys. Results revealed that parents and children displayed similar implicit stereotypes, but different explicit stereotypes, to one another. In Experiment 1, both children and parents displayed looking preferences toward the masculine-typed object when the story character was male and looking preferences toward the feminine-typed object when the character was female. No gender effects were found with a neutral verb in Experiment 2, reinforcing the impact of gender stereotypes on implicit processing and showing that the effects are not simply driven by gender stereotypic name–object associations. In the explicit measure, parents did not endorse the gender stereotypes related to toys but rather appeared to be egalitarian, whereas children's responses were gender stereotypic.
Eelke Spaak; Floris P de Lange
In: Journal of Neuroscience, 40 (1), pp. 191–202, 2020.
Humans can rapidly and seemingly implicitly learn to predict typical locations of relevant items when those items are encountered in familiar spatial contexts. Two important questions remain, however, concerning this type of learning: (1) which neural structures and mechanisms are involved in acquiring and exploiting such contextual knowledge? (2) Is this type of learning truly implicit and unconscious? We now answer both these questions after closely examining behavior and recording neural activity using MEG while observers (male and female) were acquiring and exploiting statistical regularities. Computational modeling of behavioral data suggested that, after repeated exposures to a spatial context, participants' behavior was marked by an abrupt switch to an exploitation strategy of the learnt regularities. MEG recordings showed that hippocampus and prefrontal cortex (PFC) were involved in the task and furthermore revealed a striking dissociation: only the initial learning phase was associated with hippocampal theta band activity, while the subsequent exploitation phase showed a shift in theta band activity to the PFC. Intriguingly, the behavioral benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the expectations acquired. Together, these findings demonstrate that (1a) hippocampus and PFC play complementary roles in the implicit, unconscious learning and exploitation of spatial statistical regularities; (1b) these mechanisms are implemented in the theta frequency band; and (2) contextual knowledge can indeed be acquired unconsciously, and awareness of such knowledge can even interfere with the exploitation thereof.
David Souto; Lily Smith; Jennifer Sudkamp; Marina Bloj
In: Psychonomic Bulletin & Review, 27 (6), pp. 1239–1246, 2020.
Physical interactions between objects, or between an object and the ground, are amongst the most biologically relevant for living beings. Prior knowledge of Newtonian physics may play a role in disambiguating an object's movement as well as foveation by increasing the spatial resolution of the visual input. Observers were shown a virtual 3D scene, representing an ambiguously rotating ball translating on the ground. The ball was perceived as rotating congruently with friction, but only when gaze was located at the point of contact. Inverting or even removing the visual context had little influence on congruent judgements compared with the effect of gaze. Counterintuitively, gaze at the point of contact determines the solution of perceptual ambiguity, but independently of visual context. We suggest this constitutes a frugal strategy, by which the brain infers dynamics locally when faced with a foveated input that is ambiguous.
Jake Son; Lei Ai; Ryan Lim; Ting Xu; Stanley Colcombe; Alexandre Rosa Franco; Jessica Cloud; Stephen Laconte; Jonathan Lisinski; Arno Klein; Cameron R Craddock; Michael Milham
In: Cerebral Cortex, 30 (3), pp. 1171–1184, 2020.
The collection of eye gaze information during functional magnetic resonance imaging (fMRI) is important for monitoring variations in attention and task compliance, particularly for naturalistic viewing paradigms (e.g., movies). However, the complexity and setup requirements of current in-scanner eye tracking solutions can preclude many researchers from accessing such information. Predictive eye estimation regression (PEER) is a previously developed support vector regression-based method for retrospectively estimating eye gaze from the fMRI signal in the eye's orbit using a 1.5-min calibration scan. Here, we provide confirmatory validation of the PEER method's ability to infer eye gaze on a TR-by-TR basis during movie viewing, using simultaneously acquired eye tracking data in five individuals (median angular deviation < 2°). Then, we examine variations in the predictive validity of PEER models across individuals in a subset of data (n = 448) from the Child Mind Institute Healthy Brain Network Biobank, identifying head motion as a primary determinant. Finally, we accurately classify which of the two movies is being watched based on the predicted eye gaze patterns (area under the curve = 0.90 ± 0.02) and map the neural correlates of eye movements derived from PEER. PEER is a freely available and easy-to-use tool for determining eye fixations during naturalistic viewing.
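The core idea behind PEER — fit a regression from calibration-scan voxel signals to known gaze targets, then apply it retrospectively, volume by volume — can be sketched with fully simulated data. PEER itself uses support vector regression; here ordinary least squares stands in, and every array, dimension, and noise level is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: one row per fMRI volume (TR), one column per voxel in the eye's orbit.
n_trs, n_voxels = 150, 40
voxels = rng.standard_normal((n_trs, n_voxels))
weights = rng.standard_normal(n_voxels)          # hypothetical voxel-to-gaze mapping
gaze_x = voxels @ weights + 0.1 * rng.standard_normal(n_trs)  # gaze position per TR

# "Calibration scan": learn a linear map from voxel signals to known gaze positions.
train = slice(0, 100)
coef, *_ = np.linalg.lstsq(voxels[train], gaze_x[train], rcond=None)

# Retrospectively estimate gaze TR-by-TR on a held-out "movie-viewing" scan.
predicted = voxels[100:] @ coef
```

On this toy data the held-out predictions track the simulated gaze closely; with real fMRI data, head motion degrades the fit, which is the individual-differences result the abstract reports.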
Rosyl S Somai; Martijn J Schut; Stefan Van Der Stigchel
In: Cortex, 122 , pp. 108–114, 2020.
We use visual working memory (VWM) to maintain the visual features of objects in our world. Although the capacity of VWM is limited, it is unlikely that this limit will pose a problem in daily life, as visual information can be supplemented with input from our external visual world by using eye movements. In the current study, we influenced the trade-off between eye movements and VWM utilization by introducing a cost to a saccade. Higher costs were created by adding a delay in stimulus availability to a copying task. We show that increased saccade cost results in fewer saccades towards the model and an increased dwell time on the model. These results suggest a shift from making eye movements towards taxing internal VWM. Our findings reveal that the trade-off between executing eye-movements and building an internal representation of our world is based on an adaptive mechanism, governed by cost-efficiency.
Sabine Soltani; Dimitri M L van Ryckeghem; Tine Vervoort; Lauren C Heathcote; Keith Yeates; Christopher Sears; Melanie Noel
In: Pain, 161 (10), pp. 2263–2273, 2020.
Attentional biases are posited to play a key role in the development and maintenance of chronic pain in adults and youth. However, research to date has yielded mixed findings, and few studies have examined attentional biases in pediatric samples. This study used eye-gaze tracking to examine attentional biases to pain-related stimuli in a clinical sample of youth with chronic pain and pain-free controls. The moderating role of attentional control was also examined. Youth with chronic pain (n = 102) and pain-free controls (n = 53) viewed images of children depicting varying levels of pain expressiveness paired with neutral faces while their eye gaze was recorded. Attentional control was assessed using both a questionnaire and a behavioural task. Both groups were more likely to first fixate on high pain faces but showed no such orienting bias for moderate or low pain faces. Youth with chronic pain fixated longer on all pain faces than neutral faces, whereas youth in the control group exhibited a total fixation bias only for high and moderate pain faces. Attentional control did not moderate attentional biases between or within groups. The results lend support to theoretical models positing the presence of attentional biases in youth with chronic pain. Further research is required to clarify the nature of attentional biases and their relationship to clinical outcomes.
Emma J Solly; Meaghan Clough; Allison M McKendrick; Paige Foletta; Owen B White; Joanne Fielding
In: Neurology, 95 , pp. e1784–e1791, 2020.
OBJECTIVE: To determine whether changes to cortical processing of visual information can be evaluated objectively using 3 simple ocular motor tasks to measure performance in patients with visual snow syndrome (VSS). METHODS: Sixty-four patients with VSS (32 with migraine and 32 with no migraine) and 23 controls participated. Three ocular motor tasks were included: prosaccade (PS), antisaccade (AS), and interleaved AS-PS tasks. All these tasks have been used extensively in both neurologically healthy and diseased states. RESULTS: We demonstrated that, compared to controls, the VSS group generated significantly shortened PS latencies (p = 0.029) and an increased rate of AS errors (p = 0.001), irrespective of the demands placed on visual processing (i.e., task context). Switch costs, a feature of the AS-PS task, were comparable across groups, and a significant correlation was found between shortened PS latencies and increased AS error rates for patients with VSS (r = 0.404). CONCLUSION: We identified objective and quantifiable measures of visual processing changes in patients with VSS. The absence of any additional switch cost on the AS-PS task in VSS suggests that the PS latency and AS error differences are attributable to a speeded PS response rather than to impaired executive processes more commonly implicated in poorer AS performance. We propose that this combination of latency and error deficits, in conjunction with intact switching performance, will provide a VSS behavioral signature that contributes to our understanding of VSS and may assist in determining the efficacy of therapeutic interventions.
Joshua Snell; Jan Theeuwes
In: Journal of Memory and Language, 113 , pp. 104127, 2020.
A wealth of research attests to the key role of statistical learning in the acquisition and execution of skilled reading. Little is known, however, about how regularities impact the way readers navigate through their linguistic environment. While previous studies have mostly gauged the recognition of single words, oculomotor processes are likely influenced by multiple words at once. With these premises in mind, we performed analyses on the GECO book reading corpus to determine whether repeatedly encountering a given sentence structure improves oculomotor control. In the reading materials we labeled structures on the basis of both low- and high-level properties: respectively word length combinations (e.g., a 2-letter word followed by a 6-letter word followed by a 4-letter word) and syntactic structures (e.g., an article followed by a noun followed by a verb). Our analyses show that repeatedly encountering a structure leads to fewer and shorter fixations, and fewer corrective saccades. Critically, learning curves are steeper for structures that have a higher overall frequency, hence evidencing true statistical learning over and above readers' general tendency to accelerate as they progress through the book. Further, data from Dutch-English bilingual readers suggest that these types of learning occur across languages and at various levels of proficiency. We surmise that the reading system is tuned to statistical regularities pertaining not just to single words but also combinations of words. These regularities impact both linguistic processing and oculomotor control.
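The low-level structure labeling this abstract describes — tagging each stretch of text by its word-length combination and tracking how often each combination recurs — can be sketched as a simple pattern count. This is an illustrative toy, not the GECO corpus pipeline used in the study; the function name and window size are assumptions:

```python
from collections import Counter

def length_patterns(sentences, n=3):
    """Label each n-word window by its word-length combination and count occurrences.

    E.g. the window ('the', 'old', 'man') is labeled (3, 3, 3): a 3-letter word
    followed by a 3-letter word followed by a 3-letter word.
    """
    counts = Counter()
    for words in sentences:
        lengths = [len(w) for w in words]
        for i in range(len(lengths) - n + 1):
            counts[tuple(lengths[i : i + n])] += 1
    return counts

corpus = [["the", "old", "man"], ["a", "big", "dog", "ran"]]
pattern_counts = length_patterns(corpus)
```

In the study's terms, a structure with a higher count like (3, 3, 3) here is a higher-frequency structure, and the finding is that fixation behavior adapts faster to exactly those structures.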
Bryor Snefjella; Nadia Lana; Victor Kuperman
In: Journal of Memory and Language, 115 , pp. 1–18, 2020.
The present paper addresses two under-studied dimensions of novel word learning. We ask (a) whether originally meaningless novel words can acquire emotional connotations from their linguistic contexts, and (b) whether these acquired connotations can affect the quality of orthographic and semantic word learning and its retention over time. In five experiments using three stimulus sets, L1 speakers of English learned nine novel words embedded in contexts that were consistently positive, neutral, or negative. Reading times were recorded during the learning phase, and vocabulary post-tests were administered immediately after that phase and after one week to assess learning. With two of three stimulus sets, the answer to (a) was positive: readers learned the forms, definitional meanings, and emotional connotations of novel words from their contexts. We confirmed (b) in two of three stimulus sets as well. Items were learned more accurately (by 10% to 20%) in positive rather than negative or neutral contexts. We propose the transfer of affect to a word from its collocations to be a virtually unstudied yet efficient mechanism of learning affective meanings. We further demonstrate that the transfer that occurs over a few exposures to a novel word in context is sufficient to elicit a long-lasting positivity advantage previously shown in existing words only. Null results in one stimulus set suggest that contextual transfer of affect is contingent on other contextual properties, such as text complexity. These findings are pitted against theories of vocabulary acquisition.
Max Kailler Smith; Marcia Grabowecky
In: Attention, Perception, and Psychophysics, 82 (2), pp. 729–738, 2020.
Anne Treisman's scientific career included broad-ranging contributions that advanced our understanding of the attentional mechanisms that people rely on to make sense of the world. In this paper, we describe results from a visual-search paradigm first developed by Grabowecky and Treisman (Grabowecky, 1992). Their design exploited known feature-search asymmetries (Treisman & Gormican, 1988) to investigate the role of a center of mass (CoM) mechanism in determining the initial locus of visual-spatial attention in visual search. The original experiment supported the hypothesis that CoM influences initial orienting of visual-spatial attention, as targets near the CoM of a multi-element array were detected more quickly than targets distant from the CoM. These findings were replicated in a follow-up experiment using a different feature-search asymmetry, with eye-tracking added to verify central fixation. We also investigated whether CoM had any influence on pop-out search, and found no evidence that it does. Surprisingly, the effect of position of the search array on the CoM suggested that CoM may be computed independently for elements contained within each visual hemifield. Whereas our work on CoM with Treisman was initiated within an earlier theoretical context, the present results are also compatible with contemporary theoretical advances; both the early results and the new results can be integrated within current ways of thinking about attention and pre-attentive mechanisms.
Miha Slapničar; Valerija Tompa; Saša A Glažar; Iztok Devetak; Jerneja Pavlin
In: Acta Chimica Slovenica, 67 (3), pp. 904–915, 2020.
This paper aims to identify differences in the justification of the selection of 3D dynamic submicroscopic-representation (SMR) of the solid and liquid states of water, as well as the freezing of water presented in selected authentic tasks. According to students' achievements in solving these tasks at different levels of education, their explanations were identified. To explain in greater detail how students attempted to solve the authentic tasks, an eye-tracking method was used to identify the differences in the total fixation durations on specific areas of interest at the specific SMRs between successful and unsuccessful students in three age groups. A total of 79 students participated in this research. The data were collected with a structured interview conducted with students when solving three authentic tasks displayed on the computer screen. The tasks comprise text (as problem and questions), macro-images (photos of the phenomena) and SMRs of the phenomena. The eye-tracker was also used to measure the students' gaze fixations at the particular area of interest. The results show that successful students' justifications for a correct SMR include macroscopic and sub-microscopic representations of the chosen concepts. Along different stages of education, the selection success increases and sufficient justifications comprise the sub-microscopic level. It could be concluded that there are mostly no significant differences between successful and unsuccessful students within the same age group in the total fixation duration at the correct SMR. Further studies are needed to investigate the information-processing strategies between high and low achievers in solving various authentic tasks comprising SMRs and those that integrate all three levels of the representation of chemical concepts.
Anka Slana Ozimič; Grega Repovš
In: Journal of Memory and Language, 112 , pp. 104090, 2020.
To better understand the sources of visual working memory limitations we explore the possibility that its capacity is limited by two systems: a representational system that enables formation of independent representations of visual objects, and an active maintenance system that enables sustained activation of the established representations in the absence of external stimuli. A total of 392 participants took part in four experiments in which they were asked to maintain orientation of items presented to the left, right or both visual hemifields. In all four experiments participants were able to maintain more items when they were distributed across both versus one visual hemifield, consistent with the proposal that bilateral display enables utilization of representational capacities of both hemispheres. Bilateral capacity, however, did not reach the combined representational potential of both hemispheres, indicating that the capacity is further limited by a second, unitary active maintenance system. Our study further suggests that both systems' capacities change throughout the lifespan very similarly. They both increase through development, reach a peak at the same age and decrease in healthy aging. This indicates that systems beyond executive processes, which receive most attention in the literature, are contributing to the decline in working memory in healthy aging.
Anka Slana Ozimič; Grega Repovš
In: Data in Brief, 30 , pp. 1–6, 2020.
This article describes the data collected in four experiments presented in the paper “Visual working memory capacity is limited by two systems that change across lifespan”. The data includes behavioural results from a sample of 397 healthy participants performing a visual working memory span task in which they had to maintain the orientations of items presented to the left, right, or both visual hemifields. It also includes a simulation of experimental data for a number of possible scenarios. The repository encompasses individual raw data files, a Python preprocessing script used for filtering raw data and the resulting dataset, an R script used to carry out the statistical analysis of the preprocessed data, as well as an R script used for the simulations reported in the original paper. Finally, the repository includes an R-generated analysis report, containing results of statistical tests and related visual materials, as well as the results of the simulation.
Matthias J Sjerps; Caitlin Decuyper; Antje S Meyer
In: Quarterly Journal of Experimental Psychology, 73 (3), pp. 357–374, 2020.
In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants' timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants' speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings.
Carlos Sillero-Rejon; Ute Leonards; Marcus R Munafò; Craig Hedge; Janet Hoek; Benjamin Toll; Harry Gove; Isabel Willis; Rose Barry; Abi Robinson; Olivia M Maynard
Avoidance of tobacco health warnings? An eye-tracking approach
In: Addiction, 116 , pp. 126–138, 2020.
Aims: In three eye-tracking studies, we examined how cigarette pack features affected visual attention and self-reported avoidance of and reactance to warnings. Design: Study 1: smoking status × warning immediacy (short-term versus long-term health consequences) × warning location (top versus bottom of pack). Study 2: smoking status × warning framing (gain-framed versus loss-framed) × warning format (text-only versus pictorial). Study 3: smoking status × warning severity (highly severe versus moderately severe consequences of smoking). Setting: University of Bristol, UK, eye-tracking laboratory. Participants: Study 1: non-smokers (n = 25), weekly smokers (n = 25) and daily smokers (n = 25). Study 2: non-smokers (n = 37), smokers contemplating quitting (n = 37) and smokers not contemplating quitting (n = 43). Study 3: non-smokers (n = 27), weekly smokers (n = 26) and daily smokers (n = 26). Measurements: For all studies: visual attention, measured as the ratio of the number of fixations to the warning versus the branding, self-reported predicted avoidance of and reactance to warnings and, for study 3, effect of warning on quitting motivation. Findings: Study 1: greater self-reported avoidance [mean difference (MD) = 1.14; 95% confidence interval (CI) = 0.94, 1.35, P < 0.001, ηp² = 0.64] and visual attention (MD = 0.89, 95% CI = 0.09, 1.68
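The attention measure used across all three studies, the ratio of fixation counts on the warning versus on the branding, reduces to counting AOI hits. A minimal sketch, assuming hypothetical rectangular AOIs given as (x0, y0, x1, y1) and fixations as (x, y) points; this is not the authors' analysis code:

```python
def in_box(fixation, box):
    """True if a fixation point (x, y) falls inside a rectangular AOI (x0, y0, x1, y1)."""
    x, y = fixation
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def warning_attention_ratio(fixations, warning_box, branding_box):
    """Ratio of fixation counts on the warning AOI versus the branding AOI."""
    warning = sum(in_box(f, warning_box) for f in fixations)
    branding = sum(in_box(f, branding_box) for f in fixations)
    # undefined when the branding AOI received no fixations
    return warning / branding if branding else float("inf")
```

Higher values indicate relatively more attention to the warning than to the branding.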
Ramona Siebert; Nick Taubert; Silvia Spadacenta; Peter W Dicke; Martin A Giese; Peter Thier
In: eNeuro, 7 (4), pp. 1–17, 2020.
Research on social perception in monkeys may benefit from standardized, controllable, and ethologically valid renditions of conspecifics offered by monkey avatars. However, previous work has cautioned that monkeys, like humans, show an adverse reaction toward realistic synthetic stimuli, known as the “uncanny valley” effect. We developed an improved naturalistic rhesus monkey face avatar capable of producing facial expressions (fear grin, lip smack and threat), animated by motion capture data of real monkeys. For validation, we additionally created decreasingly naturalistic avatar variants. Eight rhesus macaques were tested on the various videos and avoided looking at less naturalistic avatar variants, but not at the most naturalistic or the most unnaturalistic avatar, indicating an uncanny valley effect for the less naturalistic avatar versions. The avoidance was deepened by motion and accompanied by physiological arousal. Only the most naturalistic avatar evoked facial expressions comparable to those toward the real monkey videos. Hence, our findings demonstrate that the uncanny valley reaction in monkeys can be overcome by a highly naturalistic avatar.
Diksha Shukla; Zain Al-Shamil; Glen Belfry; Matthew Heath
In: Experimental Brain Research, 238 (10), pp. 2333–2346, 2020.
Executive function entails the core components of response inhibition, working memory and cognitive flexibility. An accumulating literature has shown that a single bout of exercise improves the response inhibition and working memory components of executive function; however, limited work has examined a putative exercise-related improvement to cognitive flexibility. To address this limitation, Experiment 1 entailed a 20-min session of moderate intensity aerobic exercise (via cycle ergometer), and pre- and post-exercise cognitive flexibility was examined via a task-switching paradigm involving alternating pro- and antisaccades (AABB: A = prosaccade
Zhenhua Shi; Xiaomo Chen; Changming Zhao; He He; Veit Stuphorn; Dongrui Wu
In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28 (9), pp. 1908–1920, 2020.
Multi-view learning improves the learning performance by utilizing multi-view data: data collected from multiple sources, or feature sets extracted from the same data source. This approach is suitable for primate brain state decoding using cortical neural signals. This is because the complementary components of simultaneously recorded neural signals, local field potentials (LFPs) and action potentials (spikes), can be treated as two views. In this paper, we extended broad learning system (BLS), a recently proposed wide neural network architecture, from single-view learning to multi-view learning, and validated its performance in decoding monkeys' oculomotor decision from medial frontal LFPs and spikes. We demonstrated that medial frontal LFPs and spikes in non-human primate do contain complementary information about the oculomotor decision, and that the proposed multi-view BLS is a more effective approach for decoding the oculomotor decision than several classical and state-of-the-art single-view and multi-view learning approaches.
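The multi-view idea described here, treating LFP-derived and spike-derived feature sets as two complementary views of the same decision, can be illustrated with a toy broad-learning-style readout: random feature nodes per view, shared enhancement nodes, and a closed-form ridge readout. The layer widths, seed, and data below are assumptions for illustration only; this is not the authors' multi-view BLS implementation:

```python
import numpy as np

def broad_learning_predict(X_views, y, n_feat=40, n_enh=60, lam=1e-2, seed=0):
    """Toy multi-view broad-learning sketch: each view is mapped through random
    feature nodes, the concatenated mapped features feed random enhancement
    nodes, and a ridge-regression readout is solved in closed form."""
    rng = np.random.default_rng(seed)
    # random feature nodes, computed per view and concatenated across views
    Z = np.hstack([np.tanh(X @ rng.standard_normal((X.shape[1], n_feat)))
                   for X in X_views])
    # enhancement nodes on top of the mapped features
    H = np.tanh(Z @ rng.standard_normal((Z.shape[1], n_enh)))
    A = np.hstack([Z, H])
    # closed-form ridge readout: W = (A'A + lam*I)^(-1) A'y
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return A @ W
```

In the paper's setting, the two entries of `X_views` would be the LFP and spike feature sets recorded simultaneously from medial frontal cortex.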
In: SAGE Open, 10 (2), pp. 1–7, 2020.
Previous research has focused on documenting the perceptual mechanisms of facial expressions of so-called basic emotions; however, little is known about eye movements during the recognition of crying expressions. The present study aimed to clarify the visual pattern and the role of face gender in recognizing smiling and crying expressions. Behavioral reactions and fixation durations were recorded, and proportions of fixation counts and viewing time directed at facial features (eyes, nose, and mouth area) were calculated. Results indicated that crying expressions were processed and recognized faster than smiling expressions. Across these expressions, the eyes and nose area received more attention than the mouth area, but for smiling facial expressions, participants fixated longer on the mouth area. It seems that proportional gaze allocation at facial features was quantitatively modulated by different expressions, but overall gaze distribution was qualitatively similar across crying and smiling facial expressions. Moreover, eye movements showed that visual attention was modulated by the gender of faces: Participants looked longer at female faces with smiling expressions relative to male faces. Findings are discussed around the perceptual mechanisms underlying facial expression recognition and the interaction between gender and expression processing.
Zhuowen Shen; Yun Ding; Jason Satel; Zhiguo Wang
In: Visual Cognition, pp. 1–13, 2020.
Inhibition of return (IOR), an inhibitory aftereffect of attentional orienting, usually reveals itself in slower responses to targets appearing at previously attended locations in spatial cueing tasks. Many of the neural substrates underlying visual working memory are also closely linked to attention. The present study examined whether the contents held in working memory interfere with IOR by requiring participants to keep a set of spatial locations in working memory while they performed a spatial cueing task. Results revealed that the presence of a concurrent working memory load modulated IOR when the cueing task involved saccadic responses (Experiment 4), but not when more resource-demanding responses were required in the cueing task (Experiments 1–3). The present study also revealed that working memory load had little effect on the time course of IOR. We suggest that the attentional control setting (ACS) selected to accommodate the cognitive tasks at hand determines whether working memory will interfere with IOR and spatial attention in general.
Wei Shen; Jukka Hyönä; Youxi Wang; Meiling Hou; Jing Zhao
In: Memory and Cognition, pp. 1–12, 2020.
Two experiments were conducted to investigate the extent to which the lexical tone can affect spoken-word recognition in Chinese using a printed-word paradigm. Participants were presented with a visual display of four words—namely, a target word (e.g., 象限, xiang4xian4, “quadrant”), a tone-consistent phonological competitor (e.g., 相册, xiang4ce4, “photo album”), or a tone-inconsistent phonological competitor (e.g., 香菜, xiang1cai4, “coriander”), and two unrelated distractors. Simultaneously, they were asked to listen to a spoken target word presented in isolation (Experiment 1) or embedded in neutral/predictive sentence contexts (Experiment 2), and then click on the target word on the screen. Results showed significant phonological competitor effects (i.e., the fixation proportion on the phonological competitor was higher than that on the distractors) under both tone conditions. Specifically, a larger phonological competitor effect was observed in the tone-consistent condition than in the tone-inconsistent condition when the spoken word was presented in isolation and in neutral sentence contexts. This finding suggests a partial role of lexical tone in constraining spoken-word recognition. However, when embedded in a predictive sentence context, the phonological competitor effect was only observed in the tone-consistent condition and absent in the tone-inconsistent condition. This result indicates that the predictive sentence context can strengthen the role of lexical tone.
Adi Shechter; David L Share
In: Psychological Science, pp. 1–16, 2020.
Rapid and seemingly effortless word recognition is a virtually unquestioned characteristic of skilled reading, yet the definition and operationalization of the concept of cognitive effort have proven elusive. We investigated the cognitive effort involved in oral and silent word reading using pupillometry among adults (Experiment 1
Luke H Shaw; Edward G Freedman; Michael J Crosse; Eric Nicholas; Allen M Chen; Matthew S Braiman; Sophie Molholm; John J Foxe
In: Neuroscience, 436 , pp. 122–135, 2020.
Individuals respond faster to presentations of bisensory stimuli (e.g. audio-visual targets) than to presentations of either unisensory constituent in isolation (i.e. to the auditory-alone or visual-alone components of an audio-visual stimulus). This well-established multisensory speeding effect, termed the redundant signals effect (RSE), is not predicted by simple linear summation of the unisensory response time probability distributions. Rather, the speeding is typically faster than this prediction, leading researchers to ascribe the RSE to a so-called co-activation account. According to this account, multisensory neural processing occurs whereby the unisensory inputs are integrated to produce more effective sensory-motor activation. However, the typical paradigm used to test for RSE involves random sequencing of unisensory and bisensory inputs in a mixed design, raising the possibility of an alternate attention-switching account. This intermixed design requires participants to switch between sensory modalities on many task trials (e.g. from responding to a visual stimulus to an auditory stimulus). Here we show that much, if not all, of the RSE under this paradigm can be attributed to slowing of reaction times to unisensory stimuli resulting from modality switching, and is not in fact due to speeding of responses to AV stimuli. As such, the present data do not support a co-activation account, but rather suggest that switching and mixing costs akin to those observed during classic task-switching paradigms account for the observed RSE.
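The race-model prediction discussed above is commonly evaluated with Miller's inequality, P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V), checked over a grid of RT quantiles; positive differences mark the co-activation-style violations the abstract refers to. A minimal numpy sketch (the quantile grid and variable names are assumptions, not the authors' code):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.linspace(0.05, 0.95, 19)):
    """Evaluate Miller's race-model inequality at a set of RT quantiles.
    Returns P(RT<=t | AV) minus the race-model bound min(P_A + P_V, 1);
    positive values mark quantiles where bisensory responses are faster
    than any probability-summation (race) model allows."""
    t = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), quantiles)

    def cdf(rts):
        # empirical CDF of the RT sample evaluated at the grid points t
        return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

    bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
    return cdf(rt_av) - bound
```

Under the attention-switching account tested in this paper, apparent violations can arise from slowed unisensory RTs on modality-switch trials rather than from genuinely speeded bisensory responses.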
Nino Sharvashidze; Alexander C Schütz
Task-dependent eye-movement patterns in viewing art
In: Journal of Eye Movement Research, 13 (2), pp. 1–17, 2020.
In art schools and classes for art history, students are trained to pay attention to different aspects of an artwork, such as art movement characteristics and painting techniques. Experts are better at processing style and visual features of an artwork than nonprofessionals. Here we tested the hypothesis that experts in art use different, task-dependent viewing strategies than nonprofessionals when analyzing a piece of art. We compared a group of art history students with a group of students with no art education background, while viewing 36 paintings under three discrimination tasks. Participants were asked to determine the art movement, the date and the medium of the paintings. We analyzed behavioral and eye-movement data of 27 participants. Our observers adjusted their viewing strategies according to the task, resulting in longer fixation durations and shorter saccade amplitudes for the medium detection task. We found higher task accuracy and subjective confidence, less congruence and higher dispersion in fixation locations in experts. Expertise also influenced saccade metrics, biasing them towards larger saccade amplitudes, suggesting a more holistic scanning strategy of experts in all three tasks.
Andréanne Sharp; Christine Turgeon; Aaron Paul Johnson; Sebastian Pannasch; François Champoux; Dave Ellemberg
Congenital deafness leads to altered overt oculomotor behaviors
In: Frontiers in Neuroscience, 14 , pp. 1–8, 2020.
The human brain is highly cross-modal, and sensory information may affect a wide range of behaviors. In particular, there is evidence that auditory functions are implicated in oculomotor behaviors. Considering this apparent auditory-oculomotor link, one might wonder how the loss of auditory input from birth might influence these motor behaviors. Eye movement tracking enables the extraction of several components, including saccades and smooth pursuit. One study suggested that deafness can alter saccade processing. Oculomotor behaviors have not been examined further in the deaf. The main goal of this study was to examine smooth pursuit following deafness. A pursuit task paradigm was used in this experiment. Participants were instructed to move their eyes to follow a target as it moved. The target moved along one of four possible trajectories (horizontal, vertical, elliptic clockwise, and elliptic counter-clockwise). Results indicate a significant reduction in the ability to track a target in both elliptical conditions, showing that more complex motion processing differs in deaf individuals. The data also revealed significantly more saccades per trial in the vertical, anti-clockwise, and, to a lesser extent, the clockwise elliptic condition. This suggests that auditory deprivation from birth leads to altered overt oculomotor behaviors.
Katharine A Shapcott; Joscha T Schmiedt; Kleopatra Kouroupaki; Ricardo Kienitz; Andreea Lazar; Wolf Singer; Michael C Schmid
In: Cerebral Cortex, 30 (9), pp. 4871–4881, 2020.
In order for organisms to survive, they need to detect rewarding stimuli, for example, food or a mate, in a complex environment with many competing stimuli. These rewarding stimuli should be detected even if they are nonsalient or irrelevant to the current goal. The value-driven theory of attentional selection proposes that this detection takes place through reward-associated stimuli automatically engaging attentional mechanisms. But how this is achieved in the brain is not very well understood. Here, we investigate the effect of differential reward on the multiunit activity in visual area V4 of monkeys performing a perceptual judgment task. Surprisingly, instead of finding reward-related increases in neural responses to the perceptual target, we observed a large suppression at the onset of the reward indicating cues. Therefore, while previous research showed that reward increases neural activity, here we report a decrease. More suppression was caused by cues associated with higher reward than with lower reward, although neither cue was informative about the perceptually correct choice. This finding of reward-associated neural suppression further highlights normalization as a general cortical mechanism and is consistent with predictions of the value-driven attention theory.
Huiru Shao; Jing Li; Wenbo Wan; Huaxiang Zhang; Jiande Sun
Saccadic trajectory-based identity authentication
In: Multimedia Tools and Applications, 79 (7-8), pp. 4891–4905, 2020.
The saccadic trajectory is generated by the extra-ocular muscles of the eyes, through a complex mechanism driven by neural signals from the brain. The saccadic trajectory has the characteristics of non-reproducibility and non-contact acquisition. In this paper, we propose a saccadic trajectory-based identity authentication method, considering that the saccadic trajectory can be used as a behavior-based biometric. In this method, we adopt the Velocity-Threshold (I-VT) algorithm to extract saccadic trajectories from the whole eye movement data, extract features via the wavelet packet transform, and authenticate identity by classifying these features with an SVM. We verify the proposed method on the EMDBv1.0 dataset for horizontal eye movements. We select one subject to be the host and randomly choose another 50 subjects from the remaining 58 subjects as the attackers. We achieve the best performance by optimizing feature selection and the SVM parameters. The experimental results show that the average accuracy for accepting the host can reach 98.09%, and the average accuracy for rejecting the attackers can reach 99.55%. This demonstrates that saccadic trajectory-based identity authentication is promising for information security.
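The I-VT step described in this abstract can be sketched as a simple velocity threshold over successive gaze samples. The sketch below assumes 1-D gaze positions; the 30 deg/s threshold is an illustrative assumption, and the paper's wavelet packet feature extraction and SVM stages are omitted:

```python
import numpy as np

def detect_saccades(x, t, vel_thresh=30.0):
    """Velocity-Threshold (I-VT) segmentation: a sample is labeled saccadic
    when its point-to-point velocity (position units per second) exceeds
    vel_thresh. Returns half-open (start, end) index pairs for each run
    of consecutive saccadic samples."""
    v = np.abs(np.diff(x)) / np.diff(t)            # velocity at samples 1..n-1
    mask = np.concatenate([[False], v > vel_thresh])
    d = np.diff(mask.astype(int))                  # +1 at run starts, -1 at run ends
    starts = np.flatnonzero(d == 1) + 1
    ends = np.flatnonzero(d == -1) + 1
    if mask[-1]:                                   # run extends to the last sample
        ends = np.append(ends, len(mask))
    return list(zip(starts, ends))
```

The segmented saccade samples would then feed the feature-extraction and classification stages of the authentication pipeline.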