EyeLink Cognition Publications
All EyeLink cognition and perception research publications up to 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we have missed any EyeLink cognition or perception articles, please email us!
2019
Palash Bera; Pnina Soffer; Jeffrey Parsons Using eye tracking to expose cognitive processes Journal Article In: MIS Quarterly, vol. 43, no. 4, pp. 1105–1126, 2019. Conceptual models are used to communicate information about a domain during the development of information systems. In two experimental studies using business process models, we demonstrate how eye tracking can contribute to understanding the cognitive processes by which readers use conceptual modeling scripts to perform problem solving tasks. In the first study, we compare scripts generated using two process modeling grammars and demonstrate how attention paid to specific parts of scripts generated using grammar variations, and differences in visual association between parts of a diagram, account for task performance. In the second study, we use a combination of eye tracking and verbal protocol analysis to examine how visual association between parts of conceptual modeling scripts can indicate cognitive integration while performing problem solving tasks. The studies show that task performance can be explained with different mental processes, reflected in specific eye tracking behavior, where scripts developed following different rules invoke different cognitive processes. We show that attention can be measured by eye tracking and can explain task performance. In addition, we show that visual association (which is observable) between parts of a modeling script involves cognitive integration (which is not observable). This finding can be used to improve conceptual modeling grammars in several ways, including understanding the effects of alternative visual arrangements of models on how effectively they communicate domain knowledge for particular tasks, and guiding the design of visual modeling notations.
Maayan Merhav; Martin Riemer; Thomas Wolbers Spatial updating deficits in human aging are associated with traces of former memory representations Journal Article In: Neurobiology of Aging, vol. 76, pp. 53–61, 2019. The ability to update spatial memories is important for everyday situations, such as remembering where we left our keys or parked our car. Although rodent studies have suggested that old age might impair spatial updating, direct evidence for such a deficit in humans is missing. Here, we tested whether spatial updating deficits occur in human aging, whether the learning mode influences spatial updating, and what mnemonic mechanism underlies the presumed deficits. To address these questions, younger and older participants had to indicate the latest location of relocated items, following either incidental or intentional learning. Using eye tracking, we further quantified memory traces of the original and updated locations. We found that older participants were selectively impaired in recalling locations of relocated items. Furthermore, they depicted relatively stronger representations of the original locations, which were correlated with their spatial updating deficits. The findings demonstrate that stronger representations of former spatial contexts can impair spatial updating in aging, a mechanism that can help explain the commonly observed age-related decline in spatial memory.
Sebastian Michelmann; Bernhard P. Staresina; Howard Bowman; Simon Hanslmayr Speed of time-compressed forward replay flexibly changes in human episodic memory Journal Article In: Nature Human Behaviour, vol. 3, no. 2, pp. 143–154, 2019. Remembering information from continuous past episodes is a complex task [1]. On the one hand, we must be able to recall events in a highly accurate way, often including exact timings. On the other hand, we can ignore irrelevant details and skip to events of interest. Here, we track continuous episodes consisting of different subevents as they are recalled from memory. In behavioural and magnetoencephalography data, we show that memory replay is temporally compressed and proceeds in a forward direction. Neural replay is characterized by the reinstatement of temporal patterns from encoding [2,3]. These fragments of activity reappear on a compressed timescale. Herein, the replay of subevents takes longer than the transition from one subevent to another. This identifies episodic memory replay as a dynamic process in which participants replay fragments of fine-grained temporal patterns and are able to skip flexibly across subevents.
Katsumi Minakata; Matthias Gondan Differential coactivation in a redundant signals task with weak and strong go/no-go stimuli Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 4, pp. 922–929, 2019. When participants respond to stimuli of two sources, response times (RTs) are often faster when both stimuli are presented together relative to the RTs obtained when presented separately (redundant signals effect [RSE]). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster processing time determines the overall RT. In audiovisual experiments, the RSE is often higher than predicted by race models, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does coactivation occur? We implemented a go/no-go task with randomly intermixed weak and strong auditory, visual, and audiovisual stimuli. In one experimental session, participants had to respond to strong stimuli and withhold their response to weak stimuli. In the other session, these roles were reversed. Interestingly, coactivation was only observed in the experimental session in which participants had to respond to strong stimuli. If weak stimuli served as targets, results were widely consistent with the race model prediction. The pattern of results contradicts the inverse effectiveness law. We present two models that explain the result in terms of absolute and relative thresholds.
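The race model prediction mentioned in this abstract is conventionally tested with Miller's race model inequality, which bounds the redundant-condition RT distribution by the sum of the two single-modality distributions: F_AV(t) ≤ F_A(t) + F_V(t). A minimal sketch of such a test, using hypothetical RT samples (the function names and data are illustrative, not the authors' analysis code):

```python
def ecdf(sample, t):
    """Empirical cumulative distribution of RTs at time t (ms)."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_model_violation(rt_av, rt_a, rt_v, times):
    """Return the time points where F_AV(t) > F_A(t) + F_V(t),
    i.e., where coactivation rather than a race is implied."""
    return [t for t in times
            if ecdf(rt_av, t) > ecdf(rt_a, t) + ecdf(rt_v, t)]

# Hypothetical RTs (ms): the redundant audiovisual condition is fast
# enough at the short quantiles to exceed the race-model bound.
rt_a = [310, 330, 350, 370, 390]
rt_v = [320, 340, 360, 380, 400]
rt_av = [250, 260, 270, 300, 330]
print(race_model_violation(rt_av, rt_a, rt_v, range(240, 320, 10)))
```

In practice the inequality is evaluated at RT quantiles rather than a fixed time grid, and violations are tested statistically across participants; the sketch only shows the core comparison.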
James S. Morandini; Aaron Veldre; Alex O. Holcombe; Kevin Hsu; Amy Lykins; J. Michael Bailey; Ilan Dar-Nimrod Visual attention to sexual stimuli in mostly heterosexuals Journal Article In: Archives of Sexual Behavior, vol. 48, pp. 1371–1385, 2019. Individuals who report mostly heterosexual orientations (i.e., mostly sexually attracted to the opposite sex, but occasionally attracted to the same sex) outnumber all other non-heterosexual individuals combined. The present study examined whether mostly heterosexual men and women view same- and other-sex sexual stimuli differently than exclusively heterosexual men and women. A novel eye-tracking paradigm was used with 162 mostly and exclusively heterosexual men and women. Compared to exclusively heterosexual men, mostly heterosexual men demonstrated greater attention to sexually explicit features (i.e., genital regions and genital contact regions) of solo male and male–male erotic stimuli, while demonstrating equivalent attention to sexually explicit features of solo female and female–female erotic stimuli. Mediation analyses suggested that differences between mostly and exclusively heterosexual profiles in men could be explained by mostly heterosexual men's increased sexual attraction to solo male erotica, and their increased sexual attraction and reduced disgust to the male–male erotica. No comparable differences in attention were observed between mostly and exclusively heterosexual women—although mostly heterosexual women did demonstrate greater fixation on visual erotica overall—a pattern of response that was found to be mediated by reduced disgust.
Kiki Arkesteijn; Artem V. Belopolsky; Jeroen B. J. Smeets; Mieke Donk The limits of predictive remapping of attention across eye movements Journal Article In: Frontiers in Psychology, vol. 10, pp. 1146, 2019. With every eye movement, visual input projected onto our retina changes drastically. The fundamental question of how we keep track of relevant objects and movement targets has puzzled scientists for more than a century. Recent advances suggested that this can be accomplished through the process of predictive remapping of visual attention to the future post-saccadic locations of relevant objects. Evidence for the existence of predictive remapping of attention was first provided by Rolfs et al. (2011) (Nature Neuroscience, 14, 252-256). However, they used a single distant control location away from the task-relevant locations, which could have biased the allocation of visual attention. In this study we used a similar experimental paradigm as Rolfs et al. (2011), but probed attention equally likely at all possible locations. Our results showed that discrimination performance was higher at the remapped location than at a distant control location, but not compared to the other two control locations. A re-analysis of the results obtained by Rolfs et al. (2011) revealed a similar pattern. Together, these findings suggest that it is likely that previous reports of the predictive remapping of attention were due to a diffuse spread of attention to the task-relevant locations rather than to a specific shift toward the target's future retinotopic location.
Thomas Armstrong; Mira Engel; Trevor Press; Anneka Sonstroem; Julian Reed Fast-forwarding disgust conditioning: US pre-exposure facilitates the acquisition of oculomotor avoidance Journal Article In: Motivation and Emotion, vol. 43, no. 4, pp. 681–695, 2019. During human development, disgust is acquired to a broad range of stimuli, from rotting food to moral transgressions. Disgust's expansion surely involves associative learning, yet little is known about Pavlovian disgust conditioning. The present study examined conditioned disgust responding as revealed by oculomotor avoidance, the tendency to look away from offensive stimuli. In two experiments, oculomotor avoidance was acquired to a neutral image associated with a disgusting image. However, to our surprise, participants initially dwelled on disgusting images, avoiding them only after multiple exposures. In Experiment 1, this "rubbernecking" response delayed oculomotor avoidance of the associated neutral image. In Experiment 2, we exhausted rubbernecking prior to conditioning by repeatedly exposing participants to the disgusting images. This procedure elicited earlier oculomotor avoidance of the associated neutral stimulus, essentially fast-forwarding conditioning. These findings reveal competing motivational tendencies elicited by disgust stimuli that complicate associative disgust learning.
Ryszard Auksztulewicz; Nicholas E. Myers; Jan W. Schnupp; Anna C. Nobre Rhythmic temporal expectation boosts neural activity by increasing neural gain Journal Article In: Journal of Neuroscience, vol. 39, no. 49, pp. 9806–9817, 2019. Temporal orienting improves sensory processing, akin to other top-down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top-down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. The effect of rhythmic expectation on distractor decoding was linked to gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points.
Federico Avila; Claudio Delrieux; Gustavo Gasaneo Complexity analysis of eye-tracking trajectories: Permutation entropy may unravel cognitive styles Journal Article In: The European Physical Journal B, vol. 92, pp. 1–7, 2019. We propose a novel adaptation of permutation entropy analysis applied to eye-tracking data. Eye movements arising during cognitive tasks are characterized as sequences of trajectories within a space of ordinal trajectory patterns, thus taking advantage of recent advancements in the study of complex processes in terms of statistical complexity. Results show correlations between the permutation entropies of the eye-tracking trajectories and the type of cognitive task being performed by the subjects. Moreover, the behavior of subjects along all the experiments cluster together into two groups within a projection of the ordinal pattern space in the three principal components. This strongly suggests the existence of two different underlying problem solving styles among the subjects, which are expressed in how the movement sequences are organized.
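Permutation entropy, the measure this abstract builds on, maps each short window of a time series to the ordinal pattern (rank order) of its values and takes the Shannon entropy of the pattern distribution. A minimal sketch of the standard Bandt–Pompe computation, applied here to a made-up gaze-coordinate series (the function and data are illustrative, not the authors' adaptation):

```python
import math

def permutation_entropy(series, order=3, delay=1):
    """Normalized permutation entropy (Bandt & Pompe) of a 1-D series."""
    if len(series) < order * delay:
        raise ValueError("series too short for the chosen order/delay")
    counts = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = tuple(series[i + j * delay] for j in range(order))
        # Ordinal pattern: indices of the window sorted by value (argsort)
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(math.factorial(order))  # normalize to [0, 1]

# Hypothetical horizontal gaze coordinates of a short scanpath
x = [10, 12, 11, 15, 14, 18, 17, 16, 20, 19]
print(round(permutation_entropy(x, order=3), 3))
```

Lower values indicate more regular (predictable) ordinal structure in the scanpath; a fully random series approaches 1.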
Emma L. Axelsson; Rachel A. Robbins; Helen F. Copeland; Hester W. Covell Body inversion effects with photographic images of body postures: Is it about faces? Journal Article In: Frontiers in Psychology, vol. 10, pp. 2686, 2019. As with faces, participants are better at discriminating upright bodies than inverted bodies. This inversion effect is reliable for whole figures, namely, bodies with heads, but it is less reliable for headless bodies. This suggests that removal of the head disrupts typical processing of human figures, and raises questions about the role of faces in efficient body discrimination. In most studies, faces are occluded, but the aim here was to exclude faces in a more ecologically valid way by presenting photographic images of human figures from behind (about-facing), as well as measuring gaze to different parts of the figures. Participants determined whether pairs of sequentially presented body postures were the same or different for whole and headless figures. Presenting about-facing figures (heads seen from behind) and forward-facing figures with faces enabled a comparison of the effect of the presence or absence of faces. Replicating previous findings, there were inversion effects for forward-facing whole figures, but less reliable effects for headless images. There were also inversion effects for about-facing whole figures, but not about-facing headless figures. Accuracy was higher in the forward- compared to the about-facing conditions, but proportional dwell time was greater to bodies in about-facing images. Likewise, despite better discrimination of forward-facing upright compared to inverted whole figures, participants focused more on the heads and less on the bodies in upright compared to inverted images. However, there was no clear relationship between performance and dwell time proportions to heads. Body inversion effects (BIEs) were found with about-facing whole figures and headless forward-facing figures, despite the absence of faces. With inverted whole figures, there was a significant relationship between performance and greater looking at bodies, and less at heads suggesting that in more difficult conditions a focus on bodies is associated with better discrimination. Overall, the findings suggest that the visual system has greater sensitivity to bodies in their most experienced form, which is typically upright and with a head. Otherwise, the more a face is implied by the context, as in whole figures or forward- rather than about-facing headless bodies, the better the performance as holistic/configural processing is likely stronger.
Dominik R. Bach; Monika Näf; Markus Deutschmann; Shiva K. Tyagarajan; Boris B. Quednow Threat memory reminder under matrix metalloproteinase 9 inhibitor doxycycline globally reduces subsequent memory plasticity Journal Article In: Journal of Neuroscience, vol. 39, no. 47, pp. 9424–9434, 2019. Associative memory can be rendered malleable by a reminder. Blocking the ensuing re-consolidation process is suggested as a therapeutic target for unwanted aversive memories. Matrix metalloproteinase (MMP)-9 is required for structural synapse remodelling involved in memory consolidation. Inhibiting MMP-9 with doxycycline is suggested to attenuate human threat conditioning. Here, we investigate whether MMP-9 inhibition also interferes with threat memory re-consolidation. N=78 male and female human participants learned the association between two visual conditioned stimuli (CS+) and a 50% chance of an unconditioned nociceptive stimulus (US), and between CS- and the absence of US. On day 7, one CS+ was reminded without reinforcement 3.5 hours after ingesting either 200 mg doxycycline, or placebo. On day 14, retention of CS memory was assessed under extinction, by fear-potentiated startle. Contrary to our expectations, we observed a greater CS+/CS- difference in participants who were reminded under doxycycline, compared to placebo. Participants who were reminded under placebo showed extinction learning during the retention test, which was not observed in the doxycycline group. There was no difference between the reminded and the non-reminded CS+ in either group. In contrast, during re-learning after the retention test, CS+/CS- difference was more pronounced in the placebo than the doxycycline group. To summarize, a single dose of doxycycline appeared to have no specific impact on re-consolidation, but to globally impair extinction learning, and threat re-learning, after drug clearance.
Brett Bahle; Andrew Hollingworth Contrasting episodic and template-based guidance during search through natural scenes Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 4, pp. 523–536, 2019. Visual search through natural scenes can be guided by knowledge of where a target object has been observed previously (episodic guidance) and knowledge of that object's visual properties (template guidance). In the present experiments, we compared the relative contributions of these two sources of guidance. Episodic guidance was implemented in a contextual cuing task: participants searched multiple times through a set of scenes for a target letter that appeared in a consistent location within each scene. Template guidance was implemented by the color match between a critical distractor in each scene and a secondary visual working memory (VWM) load. There were four main findings. First, search time decreased with increasing scene repetition; episodic memory guided search. Second, the critical distractor was fixated more frequently on match compared with mismatch trials, consistent with automatic template guidance. Third, the VWM-match effect persisted in blocks with strong episodic guidance. Finally, VWM-match effects were observed from the first saccade during search, whereas episodic guidance to the target developed only later in the trial. The results support a view of natural search in which template-based mechanisms operate early during search in a manner that is not strongly constrained by scene-based forms of guidance, such as episodic knowledge. Public Significance Statement Real-world searches are guided by knowledge of where a target object has been observed previously and knowledge of that object's visual features. The present study investigates the interaction between these two sources of guidance during search. By better understanding how these searches are performed, vital tasks in the real world that rely on similar sources of knowledge (e.g., a baggage screener looking for dangerous items or a radiologist looking for tumors) can be potentially improved.
Romy S. Bakker; Luc P. J. Selen; W. Pieter Medendorp Transformation of vestibular signals for the decisions of hand choice during whole body motion Journal Article In: Journal of Neurophysiology, vol. 121, no. 6, pp. 2392–2400, 2019. In daily life, we frequently reach toward objects while our body is in motion. We have recently shown that body accelerations influence the decision of which hand to use for the reach, possibly by modulating the body-centered computations of the expected reach costs. However, head orientation relative to the body was not manipulated, and hence it remains unclear whether vestibular signals contribute in their head-based sensory frame or in a transformed body-centered reference frame to these cost calculations. To test this, subjects performed a preferential reaching task to targets at various directions while they were sinusoidally translated along the lateral body axis, with their head either aligned with the body (straight ahead) or rotated 18° to the left. As a measure of hand preference, we determined the target direction that resulted in equiprobable right/left-hand choices. Results show that head orientation affects this balanced target angle when the body is stationary but does not further modulate hand preference when the body is in motion. Furthermore, reaction and movement times were larger for reaches to the balanced target angle, resembling a competitive selection process, and were modulated by head orientation when the body was stationary. During body translation, reaction and movement times depended on the phase of the motion, but this phase-dependent modulation had no interaction with head orientation. We conclude that the brain transforms vestibular signals to body-centered coordinates at the early stage of reach planning, when the decision of hand choice is computed.
Sébastien Ballesta; Camillo Padoa-Schioppa Economic decisions through circuit inhibition Journal Article In: Current Biology, vol. 29, no. 22, pp. 3814–3824, 2019. Economic choices between goods are thought to rely on the orbitofrontal cortex (OFC), but the decision mechanisms remain poorly understood. To shed light on this fundamental issue, we recorded from the OFC of monkeys choosing between two juices offered sequentially. An analysis of firing rates across time windows revealed the presence of different groups of neurons similar to those previously identified under simultaneous offers. This observation suggested that economic decisions in the two modalities are formed in the same neural circuit. We then examined several hypotheses on the decision mechanisms. OFC neurons encoded good identities and values in a juice-based representation (labeled lines). Contrary to previous assessments, our data argued against the idea that decisions rely on mutual inhibition at the level of offer values. In fact, we showed that previous arguments for mutual inhibition were confounded by differences in value ranges. Instead, decisions seemed to involve mechanisms of circuit inhibition, whereby each offer value indirectly inhibited neurons encoding the opposite choice outcome. Our results reconcile a variety of previous findings and provide a general account for the neuronal underpinnings of economic choices.
Rodrigo Balp; Florian Waszak; Thérèse Collins Remapping versus short-term memory in visual stability across saccades Journal Article In: Attention, Perception, and Psychophysics, vol. 81, pp. 98–108, 2019. Saccadic eye movements cause displacements of the image of the visual world projected on the retina. Despite the ubiquitous nature of saccades, subjective experience of the world is continuous and stable. In five experiments, we addressed the mechanisms that may support visual stability: matching of pre- and postsaccadic locations of the target by an internal copy of the saccade, or retention of the visual attributes of the target in short-term memory across the saccade. Healthy human adults were instructed to make a saccade to a peripheral Gabor patch. While the saccade was in midflight, the patch could change location, orientation, or both. The change occurred either immediately or following a 250-ms blank during which no visual stimuli were available. In separate experiments, subjects had to report either whether the patch stepped to the left or right or whether the orientation rotated clockwise or counterclockwise. Consistent with previous findings, we found that transsaccadic displacement discrimination was enhanced by the addition of the blank. However, contrary to previous findings reported in the literature, the feature change did not improve performance. Transsaccadic orientation change discrimination did not depend on either an irrelevant temporal blank or a simultaneous irrelevant target displacement. Taken together, these findings suggest that orientation is not a relevant visual feature for transsaccadic correspondence.
Daniela Balslev; Bartholomäus Odoj Distorted gaze direction input to attentional priority map in spatial neglect Journal Article In: Neuropsychologia, vol. 131, pp. 119–128, 2019. A contribution of the gaze signals to the attention imbalance in spatial neglect is presumed. Direct evidence however, is still lacking. Theoretical models for spatial attention posit an internal representation of locations that are selected in the competition for neural processing resources – an attentional priority map. Following up on our recent research showing an imbalance in the allocation of attention after an oculoproprioceptive perturbation in healthy volunteers, we investigated here whether the lesion in spatial neglect distorts the gaze direction input to this representation. Information about one's own direction of gaze is critical for the coordinate transformation between retinotopic and hand proprioceptive locations. To assess the gaze direction input to the attentional priority map, patients with left spatial neglect performed a cross-modal attention task in their normal, right hemispace. They discriminated visual targets whose location was cued by the patient's right index finger hidden from view. The locus of attention in response to the cue was defined as the location with the largest decrease in reaction time for visual discrimination in the presence vs. absence of the cue. In two control groups consisting of healthy elderly and patients with a right hemisphere lesion without neglect, the loci of attention were at the exact location of the cues. In contrast, neglect patients allocated attention at 0.5°–2° rightward of the finger for all tested locations. A control task using reaching to visual targets in the absence of visual hand feedback ruled out a general error in visual localization. These findings demonstrate that in spatial neglect the gaze direction input to the attentional priority map is distorted. This observation supports the emerging view that attention and gaze are coupled and suggests that interventions that target gaze signals could alleviate spatial neglect.
A. Banner; Shai Gabay; S. Shamay-Tsoory Androstadienone, a putative chemosignal of dominance, increases gaze avoidance among men with high social anxiety Journal Article In: Psychoneuroendocrinology, vol. 102, pp. 9–15, 2019. Socially anxious individuals show increased sensitivity toward social threat signals, including cues of dominance. This sensitivity may account for the hypervigilance and gaze avoidance commonly reported in individuals with social anxiety. This study examines visual scanning behavior in response to androstadienone (androsta-4,16,-dien-3-one), a putative chemosignal of dominance. We tested whether exposure to androstadienone would increase hypervigilance and gaze avoidance among individuals with high social anxiety. In a double-blind, placebo-controlled, within-subject design, 26 participants with high social anxiety and 26 with low social anxiety were exposed to androstadienone and a control solution on two separate days. On each day, an eye-tracker recorded their spontaneous scanning behavior while they viewed facial images of men depicting dominant and neutral poses. The results indicate that among participants with high social anxiety, androstadienone increased gaze avoidance by reducing the percentage of fixations made to the eye-region and the total amount of time spent gazing at the eye-region of the faces. Participants with low social anxiety did not show this effect. These findings indicate that androstadienone serves as a threatening chemosignal of dominance, further supporting the link between hypersensitivity toward social threat cues and the perpetuation of social anxiety.
Annamaria Barczak; Saskia Haegens; Deborah A. Ross; Tammy McGinnis; Peter Lakatos; Charles E. Schroeder Dynamic modulation of cortical excitability during visual active sensing Journal Article In: Cell Reports, vol. 27, no. 12, pp. 3447–3459, 2019. Visual physiology is traditionally investigated by presenting stimuli with gaze held constant. However, during active viewing of a scene, information is actively acquired using systematic patterns of fixations and saccades. Prior studies suggest that during such active viewing, both nonretinal, saccade-related signals and "extra-classical" receptive field inputs modulate visual processing. This study used a set of active viewing tasks that allowed us to compare visual responses with and without direct foveal input, thus isolating the contextual eye movement-related influences. Studying nonhuman primates, we find strong contextual modulation in primary visual cortex (V1): excitability and response amplification immediately after fixation onset, transiting to suppression leading up to the next saccade. Time-frequency decomposition suggests that this amplification and suppression cycle stems from a phase reset of ongoing neuronal oscillatory activity. The impact of saccade-related contextual modulation on stimulus processing makes active visual sensing fundamentally different from the more passive processes investigated in traditional paradigms. By isolating contextual eye movement-related influences during active vision, Barczak et al. show that eye movements affect excitability in V1 such that responses are amplified immediately after fixation onset and suppressed as the next saccade approaches. This amplification and suppression cycle stems from a phase reset of ambient oscillatory activity in V1.
Stephen J. Agauas; Laura E. Thomas Change detection for real-world objects in perihand space Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 7, pp. 2365–2383, 2019. Recent evidence has demonstrated that observers experience visual-processing biases in perihand space that may be tied to the hands' relevance for grasping actions. Our previous work suggested that when the hands are positioned to afford a power-grasp action, observers show increased temporal sensitivity that could aid with fast and forceful action, whereas when the hands are instead at the ready to perform a precision-grasp action, observers show enhanced spatial sensitivity that benefits delicate and detail-oriented actions. In the present investigation we seek to extend these previous findings by examining how object affordances may interact with hand positioning to shape visual biases in perihand space. Across three experiments, we examined how long participants took to perform a change detection task on photos of real objects, while we manipulated hand position (near/far from display), grasp posture (power/precision), and change type (orientation/identity). Participants viewed objects that afforded either a power grasp or a precision grasp, or were ungraspable. Although we were unable to uncover evidence of altered vision in perihand space in our first experiment, mirroring previous findings, in Experiments 2 and 3 our participants showed grasp-dependent biases near the hands when detecting changes to target objects that afforded a power grasp. Interestingly, ungraspable target objects were not subject to the same perihand space biases. Taken together, our results suggest that the influence of hand position on change detection performance is mediated not only by the hands' grasp posture, but also by a target object's affordances for grasping.
Luis Aguado; Karisa B. Parkington; Teresa Dieguez-Risco; José A. Hinojosa; Roxane J. Itier Joint modulation of facial expression processing by contextual congruency and task demands Journal Article In: Brain Sciences, vol. 9, pp. 1–20, 2019. @article{Aguado2019, Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions. |
Andrea Alamia; Rufin VanRullen; Emanuele Pasqualotto; André Mouraux; Alexandre Zenon Pupil-linked arousal responds to unconscious surprisal Journal Article In: Journal of Neuroscience, vol. 39, no. 27, pp. 5369–5376, 2019. @article{Alamia2019, Pupil size under constant illumination reflects brain arousal state, and dilates in response to novel information, or surprisal. Whether this response can be observed regardless of conscious perception is still unknown. In the present study, male and female adult humans performed an implicit learning task across a series of three experiments. We measured pupil and brain-evoked potentials to stimuli that violated transition statistics but were not relevant to the task. We found that pupil size dilated following these surprising events, in the absence of awareness of transition statistics, and only when attention was allocated to the stimulus. These pupil responses correlated with central potentials, evoking an anterior cingulate origin. Arousal response to surprisal outside the scope of conscious perception points to the fundamental relationship between arousal and information processing and indicates that pupil size can be used to track the progression of implicit learning. |
Stepan Aleshin; Gergo Ziman; Ilona Kovács; Jochen Braun Perceptual reversals in binocular rivalry: Improved detection from OKN Journal Article In: Journal of Vision, vol. 19, no. 3, pp. 1–18, 2019. @article{Aleshin2019, When binocular rivalry is induced by opponent motion displays, perceptual reversals are often associated with changed oculomotor behavior (Frässle, Sommer, Jansen, Naber, & Einhäuser, 2014; Fujiwara et al., 2017). Specifically, the direction of smooth pursuit phases in optokinetic nystagmus typically corresponds to the direction of motion that dominates perceptual appearance at any given time. Here we report an improved analysis that continuously estimates perceived motion in terms of ''cumulative smooth pursuit.'' In essence, smooth pursuit segments are identified, interpolated where necessary, and joined probabilistically into a continuous record of cumulative smooth pursuit (i.e., probability of eye position disregarding blinks, saccades, signal losses, and artefacts). The analysis is fully automated and robust in healthy, developmental, and patient populations. To validate reliability, we compare volitional reports of perceptual reversals in rivalry displays, and of physical reversals in nonrivalrous control displays. Cumulative smooth pursuit detects physical reversals and estimates eye velocity more accurately than existing methods do (Frässle et al., 2014). It also appears to distinguish dominant and transitional perceptual states, detecting changes with a precision of ± 100 ms. We conclude that cumulative smooth pursuit significantly improves the monitoring of binocular rivalry by means of recording optokinetic nystagmus. |
Robert G. Alexander; Roxanna J. Nahvi; Gregory J. Zelinsky Specifying the precision of guiding features for visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 9, pp. 1248–1264, 2019. @article{Alexander2019, Visual search is the task of finding things with uncertain locations. Despite decades of research, the features that guide visual search remain poorly specified, especially in realistic contexts. This study tested the role of two features-shape and orientation- both in the presence and absence of hue information. We conducted five experiments to describe preview-target mismatch effects, decreases in performance caused by differences between the image of the target as it appears in the preview and as it appears in the actual search display. These mismatch effects provide direct measures of feature importance, with larger performance decrements expected for more important features. Contrary to previous conclusions, our data suggest that shape and orientation only guide visual search when color is not available. By varying the probability of mismatch in each feature dimension, we also show that these patterns of feature guidance do not change with the probability that the previewed feature will be invalid. We conclude that the target representations used to guide visual search are much less precise than previously believed, with participants encoding and using color and little else. |
Defne Alfandari; Artem V. Belopolsky; Christian N. L. Olivers Eye movements reveal learning and information-seeking in attentional template acquisition Journal Article In: Visual Cognition, vol. 27, no. 5-8, pp. 467–486, 2019. @article{Alfandari2019, Visual attention serves to select relevant visual information. However, observers often first need to find out what is relevant. Little is known about this information-seeking process and how it affects attention. We employed a cued visual search task in combination with eye tracking to investigate which oculomotor measures reflect the acquisition of information for a subsequent task. A cue indicated which target to look for in a subsequent search display. Cue-target combinations were repeated several times, enabling learning of the target. We found that reductions in cue fixation times and saccade size provided stable indices of learning. Despite the learning, participants continued to attend to repeated cues. Several factors contribute to people attending to information they already know: First, the information value provided by the cue continues to drive attention. Second, even in the absence of information value, attention continues to be directed to cue features that previously signalled relevant information. Third, the decision to attend to a known cue depends on cognitive effort. We propose that this combination of information value, previous relevance, and effort is best captured within an information-seeking framework, and that oculomotor parameters provide a useful proxy for uncovering these factors and their interactions. |
Sara Alhanbali; Piers Dawes; Rebecca E. Millman; Kevin J. Munro Measures of listening effort are multidimensional Journal Article In: Ear & Hearing, vol. 40, no. 5, pp. 1084–1097, 2019. @article{Alhanbali2019, OBJECTIVES: Listening effort can be defined as the cognitive resources required to perform a listening task. The literature on listening effort is as confusing as it is voluminous: measures of listening effort rarely correlate with each other and sometimes result in contradictory findings. Here, we directly compared simultaneously recorded multimodal measures of listening effort. After establishing the reliability of the measures, we investigated validity by quantifying correlations between measures and then grouping related measures through factor analysis. DESIGN: One hundred and sixteen participants with audiometric thresholds ranging from normal to severe hearing loss took part in the study (age range: 55 to 85 years old, 50.3% male). We simultaneously measured pupil size, electroencephalographic alpha power, skin conductance, and self-report listening effort. One self-report measure of fatigue was also included. The signal to noise ratio (SNR) was adjusted at 71% criterion performance using sequences of 3 digits. The main listening task involved correct recall of a random digit from a sequence of six presented at a SNR where performance was around 82 to 93%. Test-retest reliability of the measures was established by retesting 30 participants 7 days after the initial session. RESULTS: With the exception of skin conductance and the self-report measure of fatigue, interclass correlation coefficients (ICC) revealed good test-retest reliability (minimum ICC: 0.71). Weak or nonsignificant correlations were identified between measures. 
Factor analysis, using only the reliable measures, revealed four underlying dimensions: factor 1 included SNR, hearing level, baseline alpha power, and performance accuracy; factor 2 included pupillometry; factor 3 included alpha power (during speech presentation and during retention); factor 4 included self-reported listening effort and baseline alpha power. CONCLUSIONS: The good ICC suggests that poor test reliability is not the reason for the lack of correlation between measures. We have demonstrated that measures traditionally used as indicators of listening effort tap into multiple underlying dimensions. We therefore propose that there is no "gold standard" measure of listening effort and that different measures of listening effort should not be used interchangeably. When choosing method(s) to measure listening effort, the nature of the task and aspects of increased listening demands that are of interest should be taken into account. The findings of this study provide a framework for understanding and interpreting listening effort measures. |
Roy Amit; Dekel Abeles; Marisa Carrasco; Shlomit Yuval-Greenberg Oculomotor inhibition reflects temporal expectations Journal Article In: NeuroImage, vol. 184, pp. 279–292, 2019. @article{Amit2019a, The accurate extraction of signals out of noisy environments is a major challenge of the perceptual system. Forming temporal expectations and continuously matching them with perceptual input can facilitate this process. In humans, temporal expectations are typically assessed using behavioral measures, which provide only retrospective but no real-time estimates during target anticipation, or by using electrophysiological measures, which require extensive preprocessing and are difficult to interpret. Here we show a new correlate of temporal expectations based on oculomotor behavior. Observers performed an orientation-discrimination task on a central grating target, while their gaze position and EEG were monitored. In each trial, a cue preceded the target by a varying interval (“foreperiod”). In separate blocks, the cue was either predictive or non-predictive regarding the timing of the target. Results showed that saccades and blinks were inhibited more prior to an anticipated regular target than a less-anticipated irregular one. This consistent oculomotor inhibition effect enabled a trial-by-trial classification according to interval-regularity. Additionally, in the regular condition the slope of saccade-rate and drift were shallower for longer than shorter foreperiods, indicating their adjustment according to temporal expectations. Comparing the sensitivity of this oculomotor marker with those of other common predictability markers (e.g. alpha-suppression) showed that it is a sensitive marker for cue-related anticipation. In contrast, temporal changes in conditional probabilities (hazard-rate) modulated alpha-suppression more than cue-related anticipation. 
We conclude that pre-target oculomotor inhibition is a correlate of temporal predictions induced by cue-target associations, whereas alpha-suppression is more sensitive to conditional probabilities across time. |
Roy Amit; Dekel Abeles; Shlomit Yuval-Greenberg Transient and sustained effects of stimulus properties on the generation of microsaccades Journal Article In: Journal of Vision, vol. 19, no. 1, pp. 1–23, 2019. @article{Amit2019, Saccades shift the gaze rapidly every few hundred milliseconds from one fixated location to the next, producing a flow of visual input into the visual system even in the absence of changes in the environment. During fixation, small saccades called microsaccades are produced 1–3 times per second, generating a flow of visual input. The characteristics of this visual flow are determined by the timings of the saccades and by the characteristics of the visual stimuli on which they are performed. Previous models of microsaccade generation have accounted for the effects of external stimulation on the production of microsaccades, but they have not considered the effects of the prolonged background stimulus on which microsaccades are performed. The effects of this stimulus on the process of microsaccade generation could be sustained, following its prolonged presentation, or transient, through the visual transients produced by the microsaccades themselves. In four experiments, we varied the properties of the constant displays and examined the resulting modulation of microsaccade properties: their sizes, their timings, and the correlations between properties of consecutive microsaccades. Findings show that displays of higher spatial frequency and contrast produce smaller microsaccades and longer minimal intervals between consecutive microsaccades; and smaller microsaccades are followed by smaller and delayed microsaccades. We explain these findings in light of previous models and suggest a conceptual model by which both sustained and transient effects of the stimulus have central roles in determining the generation of microsaccades. |
Brian A. Anderson; Haena Kim On the relationship between value-driven and stimulus-driven attentional capture Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 3, pp. 607–613, 2019. @article{Anderson2019, Reward history, physical salience, and task relevance all influence the degree to which a stimulus competes for attention, reflecting value-driven, stimulus-driven, and goal-contingent attentional capture, respectively. Theories of value-driven attention have likened reward cues to physically salient stimuli, positing that reward cues are preferentially processed in early visual areas as a result of value-modulated plasticity in the visual system. Such theories predict a strong coupling between value-driven and stimulus-driven attentional capture across individuals. In the present study, we directly test this hypothesis, and demonstrate a robust correlation between value-driven and stimulus-driven attentional capture. Our findings suggest substantive overlap in the mechanisms of competition underlying the attentional priority of reward cues and physically salient stimuli. |
Brian A. Anderson; Haena Kim Test–retest reliability of value-driven attentional capture Journal Article In: Behavior Research Methods, vol. 51, no. 2, pp. 720–726, 2019. @article{Anderson2019a, Attention is biased toward learned predictors of reward. The degree to which attention is automatically drawn to arbitrary reward cues has been linked to a variety of psychopathologies, including drug dependence, HIV-risk behaviors, depressive symptoms, and attention deficit/hyperactivity disorder. In the context of addiction specifically, attentional biases toward drug cues have been related to drug craving and treatment outcomes. Given the potential role of value-based attention in psychopathology, the ability to quantify the magnitude of such bias before and after a treatment intervention in order to assess treatment-related changes in attention allocation would be desirable. However, the test–retest reliability of value-driven attentional capture by arbitrary reward cues has not been established. In the present study, we show that an oculomotor measure of value-driven attentional capture produces highly robust test–retest reliability for a behavioral assessment, whereas the response time (RT) measure more commonly used in the attentional bias literature does not. Our findings provide methodological support for the ability to obtain a reliable measure of susceptibility to value-driven attentional capture at multiple points in time, and they highlight a limitation of RT-based measures that should inform the use of attentional-bias tasks as an assessment tool. |
Efsun Annac; Mathias Pointner; Patrick H. Khader; Hermann J. Müller; Xuelian Zang; Thomas Geyer Recognition of incidentally learned visual search arrays is supported by fixational eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 12, pp. 2147–2164, 2019. @article{Annac2019, Repeated encounter of abstract target-distractor letter arrangements leads to improved visual search for such displays. This contextual-cueing effect is attributed to incidental learning of display configurations. Whether observers can consciously access the memory underlying the cueing effect is still a controversial issue. The current study uses a novel recognition task and eyetracking to tackle this question. Experiment 1 investigated observers' ability to recognize or "generate" the display quadrant of the target in a previous search array when the target was now substituted by a distractor element, as well as where observers' eye fixations would fall while they freely viewed the recognition display, examining the link between the fixation pattern and explicit recognition judgments. Experiment 2 tested whether eye fixations would serve a critical role for explicit retrieval from context memory. Experiment 3 asked whether eye fixations of the target region are critical for context-based facilitation of search reaction times to manifest. The results revealed longer fixational dwell times in the target quadrant for learned relative to foil displays. Further, explicit recognition was enhanced, and above chance level, when observers were made to fixate the target quadrant as compared to when they were prevented from doing so. However, the manifestation of contextual cueing of visual search did not itself require fixations of the target quadrant. Moreover, contextual cueing of search reaction times was significantly correlated with both fixational dwell times and observers' explicit generation performance. 
The results argue in favor of contextual cueing of visual search being the result of a single, explicit, memory system, though one that could nevertheless receive support from separable (automatic versus controlled) retrieval processes. Fixational eye movements, that is, the directed overt allocation of visual attention, provide an interface between these processes in contextual cueing. |
Eduardo A. Aponte; Klaas E. Stephan; Jakob Heinzle Switch costs in inhibitory control and voluntary behaviour: A computational study of the antisaccade task Journal Article In: European Journal of Neuroscience, vol. 50, no. 7, pp. 3205–3220, 2019. @article{Aponte2019, An integral aspect of human cognition is the ability to inhibit stimulus-driven, habitual responses, in favour of complex, voluntary actions. In addition, humans can also alternate between different tasks. This comes at the cost of degraded performance when compared to repeating the same task, a phenomenon called the “task-switch cost.” While task switching and inhibitory control have been studied extensively, the interaction between them has received relatively little attention. Here, we used the SERIA model, a computational model of antisaccade behaviour, to draw a bridge between them. We investigated task switching in two versions of the mixed antisaccade task, in which participants are cued to saccade either in the same or in the opposite direction to a peripheral stimulus. SERIA revealed that stopping a habitual action leads to increased inhibitory control that persists onto the next trial, independently of the upcoming trial type. Moreover, switching between tasks induces slower and less accurate voluntary responses compared to repeat trials. However, this only occurs when participants lack the time to prepare the correct response. Altogether, SERIA demonstrates that there is a reconfiguration cost associated with switching between voluntary actions. In addition, the enhanced inhibition that follows antisaccade but not prosaccade trials explains asymmetric switch costs. In conclusion, SERIA offers a novel model of task switching that unifies previous theoretical accounts by distinguishing between inhibitory control and voluntary action generation and could help explain similar phenomena in paradigms beyond the antisaccade task. |
Ayelet Arazi; Yaffa Yeshurun; Ilan Dinstein Neural variability is quenched by attention Journal Article In: Journal of Neuroscience, vol. 39, no. 30, pp. 5975–5985, 2019. @article{Arazi2019, Attention can be subdivided into several components, including alertness and spatial attention. It is believed that the behavioral benefits of attention, such as increased accuracy and faster reaction times, are generated by an increase in neural activity and a decrease in neural variability, which enhance the signal-to-noise ratio of task-relevant neural populations. However, empirical evidence regarding attention-related changes in neural variability in humans is extremely rare. Here we used EEG to demonstrate that trial-by-trial neural variability was reduced by visual cues that modulated alertness and spatial attention. Reductions in neural variability were specific to the visual system and larger in the contralateral hemisphere of the attended visual field. Subjects with higher initial levels of neural variability and larger decreases in variability exhibited greater behavioral benefits from attentional cues. These findings demonstrate that both alertness and spatial attention modulate neural variability and highlight the importance of reducing/quenching neural variability for attaining the behavioral benefits of attention. |
Joseph M. Arizpe; Danielle L. Noles; Jack W. Tsao; Annie W. -Y. Chan Eye movement dynamics differ between encoding and recognition of faces Journal Article In: Vision, vol. 3, pp. 9, 2019. @article{Arizpe2019, Facial recognition is widely thought to involve a holistic perceptual process, and optimal recognition performance can be rapidly achieved within two fixations. However, is facial identity encoding likewise holistic and rapid, and how do gaze dynamics during encoding relate to recognition? While having eye movements tracked, participants completed an encoding (“study”) phase and subsequent recognition (“test”) phase, each divided into blocks of one- or five-second stimulus presentation time conditions to distinguish the influences of experimental phase (encoding/recognition) and stimulus presentation time (short/long). Within the first two fixations, several differences between encoding and recognition were evident in the temporal and spatial dynamics of the eye-movements. Most importantly, in behavior, the long study phase presentation time alone caused improved recognition performance (i.e., longer time at recognition did not improve performance), revealing that encoding is not as rapid as recognition, since longer sequences of eye-movements are functionally required to achieve optimal encoding than to achieve optimal recognition. Together, these results are inconsistent with a scan path replay hypothesis. Rather, feature information seems to have been gradually integrated over many fixations during encoding, enabling recognition that could subsequently occur rapidly and holistically within a small number of fixations. |
Candice C. Morey Perceptual grouping boosts visual working memory capacity and reduces effort during retention Journal Article In: British Journal of Psychology, vol. 110, no. 2, pp. 306–327, 2019. @article{Morey2019, Consistent, robust boosts to visual working memory capacity are observed when colour–location arrays contain duplicate colours. The prevailing explanation suggests that duplicated colours are encoded as one perceptual group. If so, then we should observe not only higher working memory capacity overall for displays containing duplicates, but specifically an improved ability to remember unique colours from displays including duplicates compared with displays comprising all uniquely coloured items. Furthermore, less effort should be required to retain displays as colour redundancy increases. I recorded gaze position and pupil sizes during a visual change detection task including displays of 4–6 items with either all unique colours, two items with a common colour, or three items with a common colour in samples of young and healthy elderly adults. Increased redundancy was indeed associated with higher estimated working memory capacity, both for tests of duplicates and uniquely coloured items. Redundancy was also associated with decreased pupil size during retention, especially in young adults. While elderly adults also benefited from colour redundancy, spillover to unique items was less obvious with low redundancy than in young adults. This experiment confirms previous findings and presents complementary novel evidence linking perceptual grouping via colour redundancy with decreased mental effort. |
Michael J. Morgan; Joshua A. Solomon Attention and the motion aftereffect Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 12, pp. 2848–2864, 2019. @article{Morgan2019, We measured the effects of attentional distraction on the time course and asymptote of motion adaptation strength, using visual search performance (percent correct and reaction time). In the first two experiments, participants adapted to a spatial array of moving Gabor patches, either all vertically oriented (Experiment 1) or randomly oriented (Experiment 2). On each trial, the adapting array was followed by a test array in which all of the test patches except one were identical in orientation and movement direction to their retinotopically corresponding adaptors, but the target moved in the opposite direction to its adaptor. Participants were required to identify the location of the changed target with a mouse click. The ability to do so increased with the number of adapting trials. Neither search speed nor accuracy was affected by an attentionally demanding conjunction task at the fixation point during adaptation, suggesting low-level (preattentive) sites in the visual pathway for the adaptation. In Experiment 3, the same participants were required to identify the one element in the test array that was slowly moving. Reaction times in this case were elevated following adaptation, but once again there was no significant effect of the distracting task upon performance. In Experiment 4, participants were required to make eye movements, so that retinotopically corresponding adaptors could be distinguished from spatiotopically corresponding adaptors. Performance in Experiments 1 and 2 correlated positively with reaction times in Experiment 3, suggesting a general trait for adaptation strength. |
Kentaro Morita; Kenichiro Miura; Michiko Fujimoto; Hidenaga Yamamori; Yuka Yasuda; Noriko Kudo; Hirotsugu Azechi; Naohiro Okada; Daisuke Koshiyama; Manabu Ikeda; Kiyoto Kasai; Ryota Hashimoto Eye movement abnormalities and their association with cognitive impairments in schizophrenia Journal Article In: Schizophrenia Research, vol. 209, pp. 255–262, 2019. @article{Morita2019, Background: Eye movement abnormalities have been identified in schizophrenia; however, their relevance to cognition is still unknown. In this study, we explored the general relationship between eye movements and cognitive function. Methods: The three eye movement measures (scanpath length, horizontal position gain, and duration of fixations) that were previously reported to be useful in distinguishing subjects with schizophrenia from healthy subjects, as well as Wechsler Adult Intelligence Scale-III (WAIS-III) scores, were collected and tested for association in 113 subjects with schizophrenia and 404 healthy subjects. Results: Scanpath length was positively correlated with matrix reasoning and digit symbol coding in subjects with schizophrenia and correlated with vocabulary and symbol search in healthy subjects. Upon testing for interaction effects of diagnosis and scanpath length on correlated WAIS-III scores, a significant interaction effect was only observed for matrix reasoning. The positive correlation between scanpath length and matrix reasoning, which was specific to subjects with schizophrenia, remained significant after controlling for demographic confounders such as medication and negative symptoms. No correlation was observed between the two other eye movement measures and any of the WAIS-III scores. Conclusions: Herein, we reveal novel findings on the association between eye-movement-based measures of visual exploration and cognitive scores requiring visual search in subjects with schizophrenia and in healthy subjects. 
The association between scanpath length and matrix reasoning, a measure of perceptual organization in subjects with schizophrenia, implies the existence of common cognitive processes, and subjects with longer scanpath length may be advantageous in performance of perceptual organization tasks. |
Adam P. Morris; Bart Krekelberg A stable visual world in primate primary visual cortex Journal Article In: Current Biology, vol. 29, no. 9, pp. 1471–1480, 2019. @article{Morris2019, Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina—and propagated throughout the visual cortical hierarchy—is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here, we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded “eye tracker” that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in area V1 of macaque monkeys during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of gaze direction. This decoded signal tracked the eye accurately not only during fixation but also during fast and slow eye movements. After a fast eye movement, the eye-position signal arrived in V1 at approximately the same time at which the new visual information arrived from the retina. Using simulations, we show that this V1 eye-position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable position in the world. Visual input arrives as a series of snapshots, each taken from a different line of sight, due to eye movements from one part of a scene to another. How do we nevertheless see a stable visual world? 
Morris and Krekelberg show that in primary visual cortex, the neural representation of each snapshot includes “metadata” that tracks gaze direction. |
Jayne Morriss; Eugene McSorley Intolerance of uncertainty is associated with reduced attentional inhibition in the absence of direct threat Journal Article In: Behaviour Research and Therapy, vol. 118, pp. 1–6, 2019. @article{Morriss2019, Intolerance of uncertainty (IU) is a dispositional tendency to find uncertain situations aversive. There is limited understanding as to how IU may bias attention to uncertainty in the absence of direct threat. Here we examined the extent to which uncertain distractors and individual differences in IU impacted eye movements during an attentional capture task. Participants were asked to move their eyes towards a target, whilst ignoring an array of distractors. An additional distractor could appear before or after the target in a near or far location from the target. We observed that high IU individuals displayed fewer first saccades to the target in all conditions. The results were specific to IU, over and above trait anxiety. Overall, these results suggest that IU modulates attention to uncertainty in the absence of direct threat. Such findings inform the conceptualisation of IU and its relation to psychopathology. |
Alexander Morzycki; Alison Wong; Paul Hong; Michael Bezuhly Assessing attentional bias in secondary cleft lip deformities: An eye-tracking study Journal Article In: The Cleft Palate-Craniofacial Journal, vol. 56, no. 2, pp. 257–264, 2019. @article{Morzycki2019, Objective: Using a well-established measure of attention, we aimed to objectively identify differences in severity between types of simulated secondary cleft lip deformities. Design: Volunteer participants viewed a series of images of a child digitally modified to simulate different secondary unilateral cleft lip deformities (long lip, short lip, white roll/vermilion disjunction, and vermilion excess), a lip scar with no secondary deformity, or a normal lip. Eye movements were recorded using a table-mounted eye-tracking device. Dwell times for 7 facial regions (eyes, nose, mouth, left ear, right ear, scar, and entire face) were compared. Participants: Forty-six naive adults (25 male; mean age 25.5 years) were recruited from our local university community. Main Outcome: The primary outcome of the study was cumulative dwell time between facial regions (eyes, nose, mouth, left ear, right ear, scar, and entire face). Results: Participants spent significantly more time focused on the upper lip regions in patients with simulated secondary deformities relative to those without (P <.01). Severe short lip deformities resulted in longer fixation times than severe long lips (P <.05). Participants spent less time focused on the eye region in the presence of a secondary lip deformity (P <.05). When total facial fixation time was assessed, short lip deformities resulted in the greatest duration dwell time (P <.001). Conclusions: This study presents objective data to support the concept that observers show varying degrees of attentional bias to the lip region depending on the type and severity of the simulated secondary cleft lip deformity. |
Iska Moxon-Emre; Norman A. S. Farb; Adeoye A. Oyefiade; Eric Bouffet; Suzanne Laughlin; Jovanka Skocic; Cynthia B. Medeiros; Donald J. Mabbott Facial emotion recognition in children treated for posterior fossa tumours and typically developing children: A divergence of predictors Journal Article In: NeuroImage: Clinical, vol. 23, pp. 10886, 2019. @article{MoxonEmre2019, Facial emotion recognition (FER) deficits are evident and pervasive across neurodevelopmental, psychiatric, and acquired brain disorders in children, including children treated for brain tumours. Such deficits are thought to perpetuate challenges with social relationships and decrease quality of life. The present study combined eye-tracking, neuroimaging and cognitive assessments to evaluate if visual attention, brain structure, and general cognitive function contribute to FER in children treated for posterior fossa (PF) tumours (patients: n = 36) and typically developing children (controls: n = 18). To assess FER, all participants completed the Diagnostic Analysis of Nonverbal Accuracy (DANVA2), a computerized task that measures FER using photographs, while their eye-movements were recorded. Patients made more FER errors than controls (p < .01). Although we detected subtle deficits in visual attention and general cognitive function in patients, we found no associations with FER. Compared to controls, patients had evidence of white matter (WM) damage, (i.e., lower fractional anisotropy [FA] and higher radial diffusivity [RD]), in multiple regions throughout the brain (all p < .05), but not in specific WM tracts associated with FER. Despite the distributed WM differences between groups, WM predicted FER in controls only. In patients, factors associated with their disease and treatment predicted FER. 
Our study provides insight into predictors of FER that may be unique to children treated for PF tumours, and highlights a divergence in associations between brain structure and behavioural outcomes in clinical and typically developing populations; a concept that may be broadly applicable to other neurodevelopmental and clinical populations that experience FER deficits. |
Timothy H. Muller; Rogier B. Mars; Timothy E. Behrens; Jill X. O'Reilly Control of entropy in neural models of environmental state Journal Article In: eLife, vol. 8, pp. 1–30, 2019. @article{Muller2019a, Humans and animals construct internal models of their environment in order to select appropriate courses of action. The representation of uncertainty about the current state of the environment is a key feature of these models that controls the rate of learning as well as directly affecting choice behaviour. To maintain flexibility, given that uncertainty naturally decreases over time, most theoretical inference models include a dedicated mechanism to drive up model uncertainty. Here we probe the long-standing hypothesis that noradrenaline is involved in determining the uncertainty, or entropy, and thus flexibility, of neural models. Pupil diameter, which indexes neuromodulatory state including noradrenaline release, predicted increases (but not decreases) in entropy in a neural state model encoded in human medial orbitofrontal cortex, as measured using multivariate functional MRI. Activity in anterior cingulate cortex predicted pupil diameter. These results provide evidence for top-down, neuromodulatory control of entropy in neural state models. |
Aidan P. Murphy; David A. Leopold A parameterized digital 3D model of the Rhesus macaque face for investigating the visual processing of social cues Journal Article In: Journal of Neuroscience Methods, vol. 324, pp. 1–14, 2019. @article{Murphy2019, Background: Rhesus macaques are the most popular model species for studying the neural basis of visual face processing and social interaction using intracranial methods. However, the challenge of creating realistic, dynamic, and parametric macaque face stimuli has limited the experimental control and ethological validity of existing approaches. New method: We performed statistical analyses of in vivo computed tomography data to generate an anatomically accurate, three-dimensional representation of Rhesus macaque cranio-facial morphology. The surface structures were further edited, rigged and textured by a professional digital artist with careful reference to photographs of macaque facial expression, colouration and pelage. Results: The model offers precise, continuous, parametric control of craniofacial shape, emotional expression, head orientation, eye gaze direction, and many other parameters that can be adjusted to render either static or dynamic high-resolution faces. Example single-unit responses to such stimuli in macaque inferotemporal cortex demonstrate the value of parametric control over facial appearance and behaviours. Comparison with existing method(s): The generation of such a high-dimensionality and systematically controlled stimulus set of conspecific faces, with accurate craniofacial modelling and professional finalization of facial details, is currently not achievable using existing methods. Conclusions: The results herald a new set of possibilities in adaptive sampling of a high-dimensional and socially meaningful feature space, thus opening the door to systematic testing of hypotheses about the abundant neural specialization for faces found in the primate. |
Tal Nahari; Oryah C. Lancry-Dayan; Gershon Ben-Shakhar; Yoni Pertzov Detecting concealed familiarity using eye movements: The role of task demands Journal Article In: Cognitive Research: Principles and Implications, vol. 4, no. 10, pp. 1–16, 2019. @article{Nahari2019, Background: What can theories regarding memory-related gaze preference contribute to the field of deception detection? While abundant research has examined the ability to detect concealed information through physiological responses, only recently has the scientific community started to explore how eye tracking can be utilized for that purpose. However, previous attempts to detect deception through eye movements have led to relatively low detection ability in comparison to physiological measures. In the current study, we demonstrate that the modulation of gaze behavior by familiarity changes considerably when participants perform a visual detection task in comparison to a short-term memory task (that was used in a previous study). Thus, we highlight the importance of theory-based selection of task demands for improving the ability to detect concealed information using eye-movement measures. Results: During visual exploration of four faces (some familiar and some unfamiliar) gaze was allocated preferably on familiar faces, manifested by more fixations. However, this preference tendency vanished once participants were instructed to apply countermeasures and conceal their familiarity by deploying gaze equally to all faces. This gaze behavior during the visual detection task differed significantly from the one observed during a short-term memory task used in a previous study in which a preference towards unfamiliar faces was evident even when countermeasures were applied. Conclusions: Different tasks elicit different patterns of gaze behavior towards familiar and unfamiliar faces. Moreover, the ability to voluntarily control gaze behavior is tightly related to task demands. 
Adequate ability to control gaze was observed in the current visual detection task when memorizing the faces was not required for successful completion of the task. Thus, applied settings would benefit from a short-term memory task, which is much more robust to countermeasure efforts. Beyond shedding light on theories of gaze preference, these findings provide a backbone for future research in the field of deception detection via eye movements. |
Maisy Best; Ian P. L. McLaren; Frederick Verbruggen Instructed and acquired contingencies in response-inhibition tasks Journal Article In: Journal of Cognition, vol. 2, no. 1, pp. 1–23, 2019. @article{Best2019, Inhibitory control can be triggered directly via the retrieval of previously acquired stimulus- stop associations from memory. However, a recent study suggests that this item-specific stop learning may be mediated via expectancies of the contingencies in play (Best, Lawrence, Logan, McLaren, & Verbruggen, 2016). This could indicate that stimulus-stop learning also induces strategic proactive changes in performance. We further tested this hypothesis in the present study. In addition to measuring expectancies following task completion, we introduced a between-subjects expectancy manipulation in which one group of participants were informed about the stimulus-stop contingencies and another group did not receive any information about the stimulus-stop contingencies. Moreover, we combined this instruction manipulation with a distractor manipulation that was previously used to examine strategic proactive adjustments. We found that the stop-associated items slowed responding in both conditions. Furthermore, participants in both conditions generated expectancies following task completion that were consistent with the stimulus-stop contingencies. The distractor manipulation was ineffective. However, we found differences in the relationship between the expectancy ratings and task performance: in the instructed condition, the expectancies reliably correlated with the response slowing for the stop-associated items, whereas in the uninstructed condition we found no reliable correlation. These differences between the correlations were reliable, and our conclusions were further supported by Bayesian analyses. 
We conclude that stimulus-stop associations that are acquired either via task instructions or via task practice have similar effects on behaviour but could differ in how they elicit response slowing. |
Maisy Best; Frederick Verbruggen Does learning influence the detection of signals in a response-inhibition task? Journal Article In: Journal of Cognition, vol. 2, no. 1, pp. 1–21, 2019. @article{Best2019a, Learning can modulate various forms of action control, including response inhibition. People may learn associations between specific stimuli and the acts of going or stopping, influencing task performance. The present study tested whether people also learn associations between specific stimuli and features of the stop or no-go signal used in the task. Across two experiments, participants performed a response-inhibition task in which the contingencies between specific stimuli and the spatial locations of the 'go' and 'withhold' signals were manipulated. The contingencies between specific stimuli and either going or withholding were also manipulated, such that a subset of stimuli were associated with responding and another subset with withholding a response. Although there was clear evidence that participants learned to associate specific stimuli with the acts of going or withholding, there was no evidence that participants acquired the spatial signal-location associations. The absence of signal learning was supported by Bayesian analyses. These findings challenge our previous proposals that learning always influences signal-detection processes in response-inhibition tasks where features of the signal remain the same throughout the task. |
R. Bianco; B. P. Gold; Aaron P. Johnson; V. B. Penhune Music predictability and liking enhance pupil dilation and promote motor learning in non-musicians Journal Article In: Scientific Reports, vol. 9, pp. 17060, 2019. @article{Bianco2019, Humans can anticipate music and derive pleasure from it. Expectations facilitate the learning of movements associated with anticipated events, and they are also linked with reward, which may further facilitate learning of the anticipated rewarding events. The present study investigates the synergistic effects of predictability and hedonic responses to music on arousal and motor learning in a naïve population. Novel melodies were manipulated in their overall predictability (predictable/unpredictable) as objectively defined by a model of music expectation, and ranked as high/medium/low liked based on participants' self-reports collected during an initial listening session. During this session, we also recorded ocular pupil size as an implicit measure of listeners' arousal. During the following motor task, participants learned to play target notes of the melodies on a keyboard (notes were of similar motor and musical complexity across melodies). Pupil dilation was greater for liked melodies, particularly when predictable. Motor performance was facilitated in predictable rather than unpredictable melodies, but liked melodies were learned even in the unpredictable condition. Low-liked melodies also showed learning but mostly in participants with higher scores of task perceived competence. Taken together, these results highlight the effects of stimulus predictability on learning, which can, however, be overshadowed by the effects of stimulus liking or task-related intrinsic motivation. |
Narcisse P. Bichot; Rui Xu; Azriel Ghadooshahy; Michael L. Williams; Robert Desimone The role of prefrontal cortex in the control of feature attention in area V4 Journal Article In: Nature Communications, vol. 10, pp. 5727, 2019. @article{Bichot2019, When searching for an object in a cluttered scene, we can use our memory of the target object features to guide our search, and the responses of neurons in multiple cortical visual areas are enhanced when their receptive field contains a stimulus sharing target object features. Here we tested the role of the ventral prearcuate region (VPA) of prefrontal cortex in the control of feature attention in cortical visual area V4. VPA was unilaterally inactivated in monkeys performing a free-viewing visual search for a target stimulus in an array of stimuli, impairing monkeys' ability to find the target in the array in the affected hemifield, but leaving intact their ability to make saccades to targets presented alone. Simultaneous recordings in V4 revealed that the effects of feature attention on V4 responses were eliminated or greatly reduced while leaving the effects of spatial attention on responses intact. Altogether, the results suggest that feedback from VPA modulates processing in visual cortex during attention to object features. |
Daniel Birman; Justin L. Gardner A flexible readout mechanism of human sensory representations Journal Article In: Nature Communications, vol. 10, pp. 3500, 2019. @article{Birman2019, Attention can both enhance and suppress cortical sensory representations. However, changing sensory representations can also be detrimental to behavior. Behavioral consequences can be avoided by flexibly changing sensory readout, while leaving the representations unchanged. Here, we asked human observers to attend to and report about either one of two features which control the visibility of motion while making concurrent measurements of cortical activity with BOLD imaging (fMRI). We extend a well-established linking model to account for the relationship between these measurements and find that changes in sensory representation during directed attention are insufficient to explain perceptual reports. Adding a flexible downstream readout is necessary to best explain our data. Such a model implies that observers should be able to recover information about ignored features, a prediction which we confirm behaviorally. Thus, flexible readout is a critical component of the cortical implementation of human adaptive behavior. |
Christopher D. Blair; Jelena Ristic Attention combines similarly in covert and overt conditions Journal Article In: Vision, vol. 3, pp. 16, 2019. @article{Blair2019, Attention is classically classified according to mode of engagement into voluntary and reflexive, and type of operation into covert and overt. The first distinguishes whether attention is elicited intentionally or by unexpected events; the second, whether attention is directed with or without eye movements. Recently, this taxonomy has been expanded to include automated orienting engaged by overlearned symbols and combined attention engaged by a combination of several modes of function. However, so far, combined effects were demonstrated in covert conditions only, and, thus, here we examined if attentional modes combined in overt responses as well. To do so, we elicited automated, voluntary, and combined orienting in covert conditions, i.e., when participants responded manually and maintained central fixation, and overt conditions, i.e., when they responded by looking. The data indicated typical effects for automated and voluntary conditions in both covert and overt data, with the magnitude of the combined effect larger than that of each mode alone, as well as their additive sum. No differences in the combined effects emerged across covert and overt conditions. As such, these results show that attentional systems combine similarly in covert and overt responses and highlight attention's dynamic flexibility in facilitating human behavior. |
Ilona M. Bloem; Sam Ling Normalization governs attentional modulation within human visual cortex Journal Article In: Nature Communications, vol. 10, pp. 5660, 2019. @article{Bloem2019, Although attention is known to increase the gain of visuocortical responses, its underlying neural computations remain unclear. Here, we use fMRI to test the hypothesis that a neural population's ability to be modulated by attention is dependent on divisive normalization. To do so, we leverage the feature-tuned properties of normalization and find that visuocortical responses to stimuli sharing features normalize each other more strongly. Comparing these normalization measures to measures of attentional modulation, we demonstrate that subpopulations which exhibit stronger normalization also exhibit larger attentional benefits. In a converging experiment, we reveal that attentional benefits are greatest when a subpopulation is forced into a state of stronger normalization. Taken together, these results suggest that the degree to which a subpopulation exhibits normalization plays a role in dictating its potential for attentional benefits. |
Amarender R. Bogadhi; Anil Bollimunta; David A. Leopold; Richard J. Krauzlis Spatial attention deficits are causally linked to an area in macaque temporal cortex Journal Article In: Current Biology, vol. 29, no. 5, pp. 726–736, 2019. @article{Bogadhi2019, Spatial neglect is a common clinical syndrome involving disruption of the brain's attention-related circuitry, including the dorsocaudal temporal cortex. In macaques, the attention deficits associated with neglect can be readily modeled, but the absence of evidence for temporal cortex involvement has suggested a fundamental difference from humans. To map the neurological expression of neglect-like attention deficits in macaques, we measured attention-related fMRI activity across the cerebral cortex during experimental induction of neglect through reversible inactivation of the superior colliculus and frontal eye fields. During inactivation, monkeys exhibited hallmark attentional deficits of neglect in tasks using either motion or non-motion stimuli. The behavioral deficits were accompanied by marked reductions in fMRI attentional modulation that were strongest in a small region on the floor of the superior temporal sulcus; smaller reductions were also found in frontal eye fields and dorsal parietal cortex. Notably, direct inactivation of the mid-superior temporal sulcus (STS) cortical region identified by fMRI caused similar neglect-like spatial attention deficits. These results identify a putative macaque homolog to temporal cortex structures known to play a central role in human neglect. |
Claudia Bonmassar; Francesco Pavani; Wieske Zoest The role of eye movements in manual responses to social and nonsocial cues Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 5, pp. 1236–1252, 2019. @article{Bonmassar2019, Gaze and arrow cues cause covert attention shifts even when they are uninformative. Nonetheless, it is unclear to what extent oculomotor behavior influences manual responses to social and nonsocial stimuli. In two experiments, we tracked the gaze of participants during the cueing task with nonpredictive gaze and arrow cues. In Experiment 1, the discrimination task was easy and eye movements were not necessary, whereas in Experiment 2 they were instrumental in identifying the target. Validity effects on manual response time (RT) were similar for the two cues in Experiment 1 and in Experiment 2, though in the presence of eye movements observers were overall slower to respond to the arrow cue compared with the gaze cue. Cue direction had an effect on saccadic performance before the discrimination target was presented and throughout the duration of the trial. Furthermore, we found evidence of a distinct impact of the type of cue on diverse oculomotor components. While saccade latencies were affected by the type of cue, both before and after the target onset, saccade landing positions were not. Critically, the manual validity effect was predicted by the landing position of the initial eye movement. This work suggests that the relationship between eye movements and attention is not straightforward. In the presence of overt selection, saccade latency was related to the overall speed of manual response, while eye-movement landing position was closely related to manual performance in response to different cues. |
Christian Büsel; Thomas Ditye; Lukas Muttenthaler; Ulrich Ansorge A novel test of pure irrelevance-induced blindness Journal Article In: Frontiers in Psychology, vol. 10, pp. 375, 2019. @article{Buesel2019, Load theory claims that bottom-up attention is possible under conditions of low perceptual load but not high perceptual load. At variance with this claim, a recent one-trial study showed that under low load, with only two colors in the display (a ring and a disk), an instruction to process only one of the two stimuli led to better memory performance for the color of the relevant than of the irrelevant stimulus. Control experiments showed that if instructed to pay attention to both objects, participants were able to memorize both colors. Thus, stimulus irrelevance diminished the likelihood of memory for a color stimulus under low perceptual-load conditions. Yet, we noted less than optimal design features in that prior study: a lack of more implicit priming measures of memory or attention, and an interval between color stimulus presentation and memory test that probably exceeded 500 ms. We took care of these problems in the current one-trial study by improving the retrieval displays while leaving the encoding displays as in the original study, and found that the results only partly replicated prior findings. In particular, there was no evidence of irrelevance-induced blindness under conditions in which a ring was designated as relevant, surrounding an irrelevant disk. However, a continuously cumulative meta-analysis across past and present experiments showed that our results do not refute the irrelevance-induced effects entirely. We conclude with recommendations for future tests of load theory. |
Xinying Cai; Camillo Padoa-Schioppa Neuronal evidence for good-based economic decisions under variable action costs Journal Article In: Nature Communications, vol. 10, pp. 393, 2019. @article{Cai2019, Previous work showed that economic decisions can be made independently of spatial contingencies. However, when goods available for choice bear different action costs, the decision necessarily reflects aspects of the action. One possibility is that "stimulus values" are combined with the corresponding action costs in a motor representation, and decisions are then made in actions space. Alternatively, action costs could be integrated with other determinants of value in a non-spatial representation. If so, decisions under variable action costs could take place in goods space. Here, we recorded from orbitofrontal cortex while monkeys chose between different juices offered in variable amounts. We manipulated action costs by varying the saccade amplitude, and we dissociated in time and space offer presentation from action planning. Neurons encoding the binary choice outcome did so well before the presentation of saccade targets, indicating that decisions were made in goods space. |
Francesca Capozzi; Nida Latif; Jelena Ristic It's not all in the face: Reduced face visibility does not modulate social segmentation Journal Article In: Visual Cognition, vol. 27, no. 1, pp. 38–45, 2019. @article{Capozzi2019, Humans rely on social information to parse environmental content into social and nonsocial events. Here, we assessed if information conveyed by faces was necessary for this process. We asked participants to view a video clip depicting a social interaction and mark social and nonsocial events while actors' faces were either visible or blurred. Keypress and eye-movement data were collected. Participants consistently identified social and nonsocial event boundaries independently of face availability, with greater agreement and less variability in keypresses for social relative to nonsocial events. Eye-tracking revealed that participants attended more to actors' faces when they were visible and more to bodies when faces were blurred. Thus, face information is not necessary for social segmentation, which appears to be a flexible process that depends on a range of information conveyed by both faces and bodies. |
Nancy B. Carlisle; Geoffrey F. Woodman Quantifying the attentional impact of working memory matching targets and distractors Journal Article In: Visual Cognition, vol. 27, no. 5-8, pp. 452–466, 2019. @article{Carlisle2019, Various theoretical proposals have been put forward to explain how memory representations control attention during visual search. In this study, we use the first saccade on each trial as a way to quantify the attentional impact of multiple types of representations held in working memory. Across two experiments, we found that a search target maintained in working memory was attended over 20 times more frequently than a non-memory-matching distractor. In addition, an item matching an additional object represented in working memory was attended 2 times more frequently than a non-memory-matching distractor. These findings show that there is a measurable attentional impact of items maintained in working memory for a future task; however, such representations have a much weaker attentional impact than working memory representations of search targets. |
Nathan Caruana; Genevieve McArthur The mind minds minds: The effect of intentional stance on the neural encoding of joint attention Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 19, no. 6, pp. 1479–1491, 2019. @article{Caruana2019a, Recent neuroimaging studies have observed that the neural processing of social cues from a virtual reality character appears to be affected by "intentional stance" (i.e., attributing mental states, agency, and "humanness"). However, this effect could also be explained by individual differences or perceptual effects resulting from the design of these studies. The current study used a new design that measured centro-parietal P250, P350, and N170 event-related potentials (ERPs) in 20 healthy adults while they initiated gaze-related joint attention with a virtual character (“Alan”) in two conditions. In one condition, they were told that Alan was controlled by a human; in the other, they were told that he was controlled by a computer. When participants believed Alan was human, his congruent gaze shifts, which resulted in joint attention, generated significantly larger P250 ERPs than his incongruent gaze shifts. In contrast, his incongruent gaze shifts triggered significantly larger increases in P350 ERPs than his congruent gaze shifts. These findings support previous studies suggesting that intentional stance affects the neural processing of social cues from a virtual character. The outcomes also suggest the use of the P250 and P350 ERPs as objective indices of social engagement during the design of socially approachable robots and virtual agents. |
Nathan Caruana; Kiley Seymour; Jon Brock; Robyn Langdon Responding to joint attention bids in schizophrenia: An interactive eye-tracking study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 8, pp. 2068–2083, 2019. @article{Caruana2019, This study investigated social cognition in schizophrenia using a virtual reality paradigm to capture the dynamic processes of evaluating and responding to eye gaze as an intentional communicative cue. A total of 21 patients with schizophrenia and 21 age-, gender-, and IQ-matched healthy controls completed an interactive computer game with an on-screen avatar that participants believed was controlled by an off-screen partner. On social trials, participants were required to achieve joint attention by correctly interpreting and responding to gaze cues. Participants also completed non-social trials in which they responded to an arrow cue within the same task context. While patients and controls took equivalent time to process communicative intent from gaze shifts, patients made significantly more errors than controls when responding to the directional information conveyed by gaze, but not arrow, cues. Despite no differences in response times to gaze cues between groups, patients were significantly slower than controls when responding to arrow cues. This is the opposite pattern of results previously observed in autistic adults using the same task and suggests that, despite general impairments in attention orienting or oculomotor control, patients with schizophrenia demonstrate a facilitation effect when responding to communicative gaze cues. Findings indicate a hyper-responsivity to gaze cues of communicative intent in schizophrenia. The possible effects of self-referential biases when evaluating gaze direction are discussed, as are clinical implications. |
Joao Castelhano; Isabel C. Duarte; Carlos Ferreira; Joao Duraes; Henrique Madeira; Miguel Castelo-Branco The role of the insula in intuitive expert bug detection in computer code: an fMRI study Journal Article In: Brain Imaging and Behavior, vol. 13, no. 3, pp. 623–637, 2019. @article{Castelhano2019, Software programming is a complex and relatively recent human activity, involving the integration of mathematical, recursive thinking and language processing. The neural correlates of this recent human activity are still poorly understood. Error monitoring during this type of task, requiring the integration of language, logical symbol manipulation and other mathematical skills, is particularly challenging. We therefore aimed to investigate the neural correlates of decision-making during source code understanding and mental manipulation in professional participants with high expertise. The present fMRI study directly addressed error monitoring during source code comprehension, expert bug detection and decision-making. We used C code, which triggers the same sort of processing irrespective of the native language of the programmer. We discovered a distinct role for the insula in bug monitoring and detection and a novel connectivity pattern that goes beyond the expected activation pattern evoked by source code understanding in semantic language and mathematical processing regions. Importantly, insula activity levels were critically related to the quality of error detection, involving intuition, as signalled by reported initial bug suspicion, prior to final decision and bug detection. Activity in this salience network (SN) region evoked by bug suspicion was predictive of bug detection precision, suggesting that it encodes the quality of the behavioral evidence. 
Connectivity analysis provided evidence for top-down circuit “reutilization” stemming from anterior cingulate cortex (BA32), a core region in the SN that evolved for complex error monitoring such as required for this type of recent human activity. Cingulate (BA32) and anterolateral (BA10) frontal regions causally modulated decision processes in the insula, which in turn was related to activity of math processing regions in early parietal cortex. In other words, earlier brain regions used during evolution for other functions seem to be reutilized in a top-down manner for a new complex function, in an analogous manner as described for other cultural creations such as reading and literacy. |
Matthew R. Cavanaugh; Antoine Barbot; Marisa Carrasco; Krystel R. Huxlin Feature-based attention potentiates recovery of fine direction discrimination in cortically blind patients Journal Article In: Neuropsychologia, vol. 128, pp. 315–324, 2019. @article{Cavanaugh2019, Training chronic, cortically-blind (CB) patients on a coarse [left-right] direction discrimination and integration (CDDI) task recovers performance on this task at trained, blind field locations. However, fine direction difference (FDD) thresholds remain elevated at these locations, limiting the usefulness of recovered vision in daily life. Here, we asked if this FDD impairment can be overcome by training CB subjects with endogenous, feature-based attention (FBA) cues. Ten CB subjects were recruited and trained on CDDI and FDD with an FBA cue or FDD with a neutral cue. After completion of each training protocol, FDD thresholds were re-measured with both neutral and FBA cues at trained, blind-field locations and at corresponding, intact-field locations. In intact portions of the visual field, FDD thresholds were lower when tested with FBA than neutral cues. Training subjects in the blind field on the CDDI task improved FDD performance to the point that a threshold could be measured, but these locations remained impaired relative to the intact field. FDD training with neutral cues resulted in better blind field FDD thresholds than CDDI training, but thresholds remained impaired relative to intact field levels, regardless of testing cue condition. Importantly, training FDD in the blind field with FBA lowered FDD thresholds relative to CDDI training, and allowed the blind field to reach thresholds similar to the intact field, even when FBA trained subjects were tested with a neutral rather than FBA cue. Finally, FDD training appeared to also recover normal integration thresholds at trained, blind-field locations, providing an interesting double dissociation with respect to CDDI training. 
In summary, mechanisms governing FBA appear to function normally in both intact and impaired regions of the visual field following V1 damage. Our results mark the first time that FDD thresholds in CB fields have been seen to reach intact field levels of performance. Moreover, FBA can be leveraged during visual training to recover normal, fine direction discrimination and integration performance at trained, blind-field locations, potentiating visual recovery of more complex and precise aspects of motion perception in cortically-blinded fields. |
Andrew Clement; Ryan E. O'Donnell; James R. Brockmole The functional arrangement of objects biases gaze direction Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 4, pp. 1266–1272, 2019. @article{Clement2019, A growing number of studies suggest that semantic knowledge can influence the control of gaze in scenes. For example, observers are more likely to look toward objects that are semantically related to the currently fixated object. Recent evidence also suggests that an object's functional orientation can bias gaze direction. However, it is unknown whether these semantic and functional relationships can interact to determine gaze control. To address this issue, the present study assessed whether the functional arrangement of multiple objects can influence gaze control. Participants fixated a central object (e.g., a key) flanked by two peripheral objects. After a brief delay, participants were free to shift their gaze toward the peripheral object of their choice. One of the peripheral objects was semantically related to the central object (e.g., a lock), and the objects were arranged to depict a functional or non-functional interaction (e.g., a key pointing toward or away from a lock). When the orientation of the central object was manipulated, participants were more likely to look in the direction this object was pointing. Moreover, the functional arrangement of objects modulated this central orienting bias. However, when the orientation of the peripheral objects was manipulated, only the peripheral objects' semantic relationships influenced gaze control. Together, these findings suggest that functional relationships play an important role in the allocation of gaze, and can interact with semantic relationships to determine gaze control. |
Kate M. Coffey; Nika Adamian; Tessel Blom; Elle van Heusden; Patrick Cavanagh; Hinze Hogendoorn Expecting the unexpected: Temporal expectation increases the flash-grab effect Journal Article In: Journal of Vision, vol. 19, no. 13, pp. 1–14, 2019. @article{Coffey2019, In the flash-grab effect, when a disk is flashed on a moving background at the moment it reverses direction, the perceived location of the disk is strongly displaced in the direction of the motion that follows the reversal. Here, we ask whether increased expectation of the reversal reduces its effect on the motion-induced shift, as suggested by predictive coding models with first order predictions. Across four experiments we find that when the reversal is expected, the illusion gets stronger, not weaker. We rule out accumulating motion adaptation as a contributing factor. The pattern of results cannot be accounted for by first-order predictions of location. Instead, it appears that second-order predictions of event timing play a role. Specifically, we conclude that temporal expectation causes a transient increase in temporal attention, boosting the strength of the motion signal and thereby increasing the strength of the illusion. |
Thérèse Collins The perceptual continuity field is retinotopic Journal Article In: Scientific Reports, vol. 9, pp. 18841, 2019. @article{Collins2019, Visual perception is systematically biased towards input from the recent past: perceived orientation, numerosity, and face identity are pulled towards previously seen stimuli. To better understand the brain level at which serial dependence occurs, the present study examined its spatial tuning. In three experiments, serial dependence occurred between stimuli occupying the same retinal position. Serial dependence between stimuli at distant retinal locations was smaller, even when the stimuli occupied the same location in external space. The spatial window over which serial dependence occurs is thus retinotopic, but wide, suggesting that serial dependence occurs at late stages of visual processing. |
Katherine E. Conen; Camillo Padoa-Schioppa Partial adaptation to the value range in the macaque orbitofrontal cortex Journal Article In: Journal of Neuroscience, vol. 39, no. 18, pp. 3498–3513, 2019. @article{Conen2019, Values available for choice in different behavioral contexts can vary immensely. To compensate for this variability, neuronal circuits underlying economic decisions undergo adaptation. In orbitofrontal cortex (OFC), neurons encode the subjective value of offered and chosen goods in a quasilinear way. Previous experiments found that the gain of the encoding is lower when the value range is wider. However, the parameters OFC neurons adapted to remained unclear. Furthermore, previous studies did not examine additive changes in neuronal responses. Computational considerations indicate that these factors can directly impact choice behavior. Here we investigated how OFC neurons adapt to changes in the value range. We recorded from two male rhesus monkeys during a juice choice task. Each session was divided into two blocks of trials. In each block, juices were offered within a set range of values, and ranges changed between blocks. Across blocks, neuronal responses adapted to both the maximum and the minimum value, but only partially. As a result, the minimum neural activity was elevated in some value ranges relative to others. Through simulation of a linear decision model, we showed that increasing the minimum response increases choice variability, lowering the expected payoff. This effect is modulated by the balance between cells with positive and negative encoding. The presence of these two populations induces a non-monotonic relationship between the value range and choice efficacy, such that the expected payoff is highest for decisions in an intermediate value range. |
Yong Qi Cong; Caroline Junge; Evin Aktar; Maartje Raijmakers; Anna Franklin; Disa Sauter Pre-verbal infants perceive emotional facial expressions categorically Journal Article In: Cognition and Emotion, vol. 33, no. 3, pp. 391–403, 2019. @article{Cong2019, Adults perceive emotional expressions categorically, with discrimination being faster and more accurate between expressions from different emotion categories (i.e. blends with two different predominant emotions) than between two stimuli from the same category (i.e. blends with the same predominant emotion). The current study sought to test whether facial expressions of happiness and fear are perceived categorically by pre-verbal infants, using a new stimulus set that was shown to yield categorical perception in adult observers (Experiments 1 and 2). These stimuli were then used with 7-month-old infants (N = 34) using a habituation and visual preference paradigm (Experiment 3). Infants were first habituated to an expression of one emotion, then presented with the same expression paired with a novel expression either from the same emotion category or from a different emotion category. After habituation to fear, infants displayed a novelty preference for pairs of between-category expressions, but not within-category ones, showing categorical perception. However, infants showed no novelty preference when they were habituated to happiness. Our findings provide evidence for categorical perception of emotional expressions in pre-verbal infants, while the asymmetrical effect challenges the notion of a bias towards negative information in this age group. |
Tim H. W. Cornelissen; Jona Sassenhagen; Melissa L.-H. Võ Improving free-viewing fixation-related EEG potentials with continuous-time regression Journal Article In: Journal of Neuroscience Methods, vol. 313, pp. 77–94, 2019. @article{Cornelissen2019, Background: In the analysis of combined eye-tracking and EEG (ET-EEG) data, there are several issues with estimating fixation-related potentials (FRPs) by averaging. Neural responses associated with fixations will likely overlap with one another in the EEG recording and neural responses change as a function of eye movement characteristics. Especially in tasks that do not constrain eye movements in any way, these issues can become confounds. New method: Here, we propose the use of regression based estimates as an alternative to averaging. Multiple regression can disentangle different influences on the EEG and correct for overlap. It thereby accounts for potential confounds in a way that averaging cannot. Specifically, we test the applicability of the rERP framework, as proposed by Smith and Kutas (2015b), (2017), or Sassenhagen (2018) to combined eye tracking and EEG data from a visual search and a scene memorization task. Results: Results show that the method successfully estimates eye movement related confounds in real experimental data, so that these potential confounds can be accounted for when estimating experimental effects. Comparison with existing methods: The rERP method successfully corrects for overlapping neural responses in instances where averaging does not. As a consequence, baselining can be applied without risking distortions. By estimating a known experimental effect, we show that rERPs provide an estimate with less variance and more accuracy than averaged FRPs. The method therefore provides a practically feasible and favorable alternative to averaging. Conclusions: We conclude that regression based ERPs provide novel opportunities for estimating fixation related EEG in free-viewing experiments. |
Francisco M. Costela; Russell L. Woods A free database of eye movements watching “Hollywood” videoclips Journal Article In: Data in Brief, vol. 25, pp. 1–9, 2019. @article{Costela2019a, The provided database of tracked eye movements was collected using an infra-red, video-camera Eyelink 1000 system, from 95 participants as they viewed 'Hollywood' video clips. There are 206 clips of 30-s and eleven clips of 30-min for a total viewing time of about 60 hours. The database also provides the raw 30-s video clip files, a short preview of the 30-min clips, and subjective ratings of the content of the videos in each of the following categories: (1) genre; (2) importance of human faces; (3) importance of human figures; (4) importance of man-made objects; (5) importance of nature; (6) auditory information; (7) lighting; and (8) environment type. Precise timing of the scene cuts within the clips and the democratic gaze scanpath position (center of interest) per frame are provided. At this time, this eye-movement dataset has the widest age range (22–85 years) and is the third largest (in recorded video viewing time) of those that have been made available to the research community. The data-acquisition procedures are described, along with participant demographics, summaries of some common eye-movement statistics, and highlights of research topics in which the database was used. The dataset is freely available in the Open Science Framework repository (link in the manuscript) and can be used without restriction for educational and research purposes, provided that this paper is cited in any published work. |
Michele A. Cox; Kacie Dougherty; Geoffrey K. Adams; Eric A. Reavis; Jacob A. Westerberg; Brandon S. Moore; David A. Leopold; Alexander Maier Spiking suppression precedes cued attentional enhancement of neural responses in primary visual cortex Journal Article In: Cerebral Cortex, vol. 29, no. 1, pp. 77–90, 2019. @article{Cox2019a, Attending to a visual stimulus increases its detectability, even if gaze is directed elsewhere. This covert attentional selection is known to enhance spiking across many brain areas, including the primary visual cortex (V1). Here we investigate the temporal dynamics of attention-related spiking changes in V1 of macaques performing a task that separates attentional selection from the onset of visual stimulation. We found that preceding attentional enhancement there was a sharp, transient decline in spiking following presentation of an attention-guiding cue. This disruption of V1 spiking was not observed in a task-naive subject that passively observed the same stimulus sequence, suggesting that sensory activation is insufficient to cause suppression. Following this suppression, attended stimuli evoked more spiking than unattended stimuli, matching previous reports of attention-related activity in V1. Laminar analyses revealed a distinct pattern of activation in feedback-associated layers during both the cue-induced suppression and subsequent attentional enhancement. These findings suggest that top-down modulation of V1 spiking can be bidirectional and result in either suppression or enhancement of spiking responses. |
Michele A. Cox; Kacie Dougherty; Jacob A. Westerberg; Michelle S. Schall; Alexander Maier Temporal dynamics of binocular integration in primary visual cortex Journal Article In: Journal of Vision, vol. 19, no. 12, pp. 1–21, 2019. @article{Cox2019, Whenever we open our eyes, our brain quickly integrates the two eyes' perspectives into a combined view. This process of binocular integration happens so rapidly that even incompatible stimuli are briefly fused before one eye's view is suppressed in favor of the other (binocular rivalry). The neuronal basis for this brief period of fusion during incompatible binocular stimulation is unclear. Neuroanatomically, the eyes provide two largely separate streams of information that are integrated into a binocular response by the primary visual cortex (V1). However, the temporal dynamics underlying the formation of this binocular response are largely unknown. To address this question, we examined the temporal profile of binocular responses in V1 of fixating monkeys. We found that V1 processes binocular stimuli in a dynamic sequence that comprises at least two distinct temporal phases. An initial transient phase is characterized by enhanced spiking responses for both compatible and incompatible binocular stimuli compared to monocular stimulation. This transient is followed by a sustained response that differed markedly between congruent and incongruent binocular stimulation. Specifically, incompatible binocular stimulation resulted in overall response reduction relative to monocular stimulation (binocular suppression). In contrast, responses to compatible stimuli were either suppressed or enhanced (binocular facilitation) depending on the neurons' ocularity (selectivity for one eye over the other) and laminar location. 
These results suggest that binocular integration in V1 occurs in at least two sequential steps that comprise initial additive combination of the two eyes' signals followed by widespread differentiation between binocular concordance and discordance. |
Trevor J. Crawford; S. Taylor; Diako Mardanbegi; Megan Polden; Thomas D. W. Wilcockson; Rebecca Killick; Peter Sawyer; H. Gellersen; I. Leroi The effects of previous error and success in Alzheimer's disease and mild cognitive impairment Journal Article In: Scientific Reports, vol. 9, pp. 20204, 2019. @article{Crawford2019, This work investigated in Alzheimer's disease dementia (AD), whether the probability of making an error on a task (or a correct response) was influenced by the outcome of the previous trials. We used the antisaccade task (AST) as a model task given the emerging consensus that it provides a promising sensitive and early biological test of cognitive impairment in AD. It can be employed equally well in healthy young and old adults, and in clinical populations. This study examined eye-movements in a sample of 202 participants (42 with dementia due to AD; 65 with mild cognitive impairment (MCI); 95 control participants). The findings revealed an overall increase in the frequency of AST errors in AD and MCI compared to the control group, as predicted. The errors on the current trial increased in proportion to the number of consecutive errors on the previous trials. Interestingly, the probability of errors was reduced on the trials that followed a previously corrected error, compared to the trials where the error remained uncorrected, revealing a level of adaptive control in participants with MCI or AD dementia. There was an earlier peak in the AST distribution of the saccadic reaction times for the inhibitory errors in comparison to the correct saccades. These findings revealed that the inhibitory errors of the past have a negative effect on the future performance of healthy adults as well as people with a neurodegenerative cognitive impairment. |
Eileen T. Crehan; Robert R. Althoff; Hannah Riehl; Patricia A. Prelock; Tiffany Hutchins In: Journal of Autism and Developmental Disorders, pp. 1–6, 2019. @article{Crehan2019, There is a need for increased understanding of self-report measures for autistic individuals. In this preliminary study, we examine how a theory of mind self-report relates to other self-report measures for groups of autistic and neurotypical individuals, as well as eye tracking outcomes. Expected patterns of relatedness emerged between self-reports and the eye tracking findings, which lends validity to the theory of mind measure. Self-report measures are critical for autistic individuals to share their own experiences and this is the first step in establishing a theory of mind self-report tool. |
Felipe Criado-Boado; Diego Alonso-Pablos; Manuel J. Blanco; Yolanda Porto; Anxo Rodríguez-Paz; Elena Cabrejas; Elena Del Barrio-Álvarez; Luis M. Martínez Coevolution of visual behaviour, the material world and social complexity, depicted by the eye-tracking of archaeological objects in humans Journal Article In: Scientific Reports, vol. 9, pp. 3985, 2019. @article{CriadoBoado2019, We live in a cluttered visual world that is overflowing with information, the continuous processing of which would be a truly daunting task. Nevertheless, our brains have evolved to select which part of a visual scene is to be prioritized and analysed in detail, and which parts can be discarded or analysed at a later stage. This selection is in part determined by the visual stimuli themselves, and is known as "selective attention", which, in turn, determines how we explore and interact with our environment, including the distinct human artefacts produced in different socio-cultural contexts. Here we hypothesize that visual responses and material objects should therefore co-evolve to reflect changes in social complexity and culture throughout history. Using eye-tracking, we analysed the eye scan paths in response to prehistoric pottery ranging from the Neolithic through to the Iron Age (ca. 6000–2000 BP), finding that each ceramic style caused a particular pattern of visual exploration. Horizontal movements were dominant in earlier periods, while vertical movements were more frequent in later periods that were marked by greater social complexity. |
Freya Crosby; Frouke Hermens Does it look safe? An eye tracking study into the visual aspects of fear of crime Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 3, pp. 599–615, 2019. @article{Crosby2019, Studies of fear of crime often focus on demographic and social factors, but these can be difficult to change. Studies of visual aspects have suggested that features reflecting incivilities, such as litter, graffiti, and vandalism increase fear of crime, but methods often rely on participants actively mentioning such aspects, and more subtle, less conscious aspects may be overlooked. To address these concerns, this study examined people's eye movements while they judged scenes for safety. In total, 40 current and former university students were asked to rate images of day-time and night-time scenes of Lincoln, UK (where they studied) and Egham, UK (unfamiliar location) for safety, maintenance, and familiarity while their eye movements were recorded. Another 25 observers not from Lincoln or Egham rated the same images in an Internet survey. Ratings showed a strong association between safety and maintenance and lower safety ratings for night-time scenes for both groups, in agreement with earlier findings. Eye movements of the Lincoln participants showed increased dwell times on buildings, houses, and vehicles during safety judgements and increased dwell times on streets, pavements, and markers of incivilities for maintenance. Results confirm that maintenance plays an important role in perceptions of safety, but eye movements suggest that observers also look for indicators of current or recent presence of people. |
Maria Cutumisu; Krystle-Lee Turgeon; Tasbire Saiyera; Steven Chuong; Lydia Marion González Esparza; Rob MacDonald; Vasyl Kokhan Eye tracking the feedback assigned to undergraduate students in a digital assessment game Journal Article In: Frontiers in Psychology, vol. 10, pp. 1931, 2019. @article{Cutumisu2019, High-quality feedback exerts a crucial influence on learning new skills and it is one of the most common psychological interventions. However, knowing how to deliver feedback effectively is challenging for educators in both traditional and online classroom environments. This study uses psychophysiological methodology to investigate attention allocation to different feedback valences (i.e., positive and negative feedback), as the eye tracker provides accurate information about individuals' locus of attention when they process feedback. We collected learning analytics via a behavioral assessment game and eye-movement measures via an eye tracker to infer undergraduate students' cognitive processing of feedback that is assigned to them after completing a task. The eye movements of n = 30 undergraduates at a university in Western Canada were tracked by the EyeLink 1000 Plus eye tracker while they played Posterlet, a digital game-based assessment. In Posterlet, students designed three posters and received critical (negative) or confirmatory (positive) feedback from virtual characters in the game after completing each poster. Analyses showed that, overall, students attended to critical feedback more than to confirmatory feedback, as measured by the time spent on feedback in total, per word, and per letter, and by the number of feedback fixations and revisits. However, there was no difference in dwell time between valences prior to any feedback revisits, suggesting that returning to read critical feedback more often than confirmatory feedback accounts for the overall dwell time difference between valences when feedback is assigned to students. 
The study summarizes the eye movement record on critical and confirmatory feedback, respectively. Implications of this research include enhancing our understanding of the differential temporal cognitive processing of feedback valences that may ultimately improve the delivery of feedback. |
Stefan Czoschke; Sebastian Henschke; Elke B. Lange On-item fixations during serial encoding do not affect spatial working memory Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 8, pp. 2766–2787, 2019. @article{Czoschke2019, Ample evidence suggests that there is overlap between the eye-movement system and spatial working memory. Such overlapping structures or capacities may result in interference on the one hand and beneficial support on the other. We investigated eye-movement control during encoding of verbal or spatial information, keeping the display the same between tasks. Saccades to to-be-encoded items were scarce during spatial encoding in comparison with verbal encoding. However, despite replicating this difference across different tasks (serial, free recall) and presentation modalities (simultaneous, sequential presentation), we found no relation between item fixations and memory performance—that is, no costs or benefits. Inducing a change from covert to overt encoding did not affect spatial memory performance either. In contrast, regressive fixations on prior items that were no longer on the screen were associated with increased spatial memory performance. Regressions occurred mainly at the end of the encoding period and were targeted at the first presented item. Our results suggest a dissociation between two types of fixations that accompany serial spatial memory: On-item fixations are epiphenomenal; regressions indicate rehearsal or output preparation. |
Mario Dalmaso; Luigi Castelli; Giovanni Galfano Self-related shapes can hold the eyes Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 9, pp. 2249–2260, 2019. @article{Dalmaso2019a, Increasing evidence suggests that individuals are highly sensitive to self-related stimuli. Here, we report two experiments conducted to assess whether two schematic stimuli, arbitrarily associated with either the self or a stranger, can shape attention holding in an oculomotor task. In both experiments, participants first completed a manual matching task in which they were asked to associate the self and a stranger with two shapes (triangle vs. square). Then, in an oculomotor task, they were asked to perform a saccade from the centre of the screen towards a peripheral target while either the triangle or the square were centrally presented. In Experiment 1, saccades had to be performed on each trial, irrespective of the central shape, while in Experiment 2, saccades had to be performed only when the central shape was associated with either the self or the stranger, depending on block instruction. Participants were slower to initiate a saccade away from the central shape when this was associated with the self rather than with the stranger, but this pattern of results emerged only in Experiment 2. Overall, these data suggest that stimuli associated with the self through episodic learning can hold attention when the self/other distinction is a task-relevant dimension. |
Mario Dalmaso; Luigi Castelli; Giovanni Galfano Anticipation of cognitive conflict is reflected in microsaccades: Evidence from a cued-flanker task Journal Article In: Journal of Eye Movement Research, vol. 12, no. 6, pp. 1–9, 2019. @article{Dalmaso2019, Microsaccade frequency has recently been shown to be sensitive to high-level cognitive processes such as attention and memory. In the present study we explored the effects of anticipated cognitive conflict. Participants were administered a variant of the flanker task, which is known to elicit cognitive interference. At the beginning of each trial, participants received a colour cue providing information about the upcoming target frame. In two thirds of the trials, the cue reliably informed the participants that in the upcoming trial the flankers either matched the central target letter or not. Hence, participants could accurately anticipate whether cognitive conflict would arise or not. On neutral trials, the cue provided no useful information. The results showed that microsaccadic rate time-locked to cue onset was reduced on trials in which an upcoming cognitive conflict was expected. These findings provide new insights about top-down modulations of microsaccade dynamics. |
Claudia Damiano; Dirk B. Walther Distinct roles of eye movements during memory encoding and retrieval Journal Article In: Cognition, vol. 184, pp. 119–129, 2019. @article{Damiano2019, A long line of research has shown that vision and memory are closely linked, such that particular eye movement behaviour aids memory performance. In two experiments, we ask whether the positive influence of eye movements on memory is primarily a result of overt visual exploration during the encoding or the recognition phase. Experiment 1 allowed participants to free-view images of scenes, followed by a new-old recognition memory task. Exploratory analyses found that eye movements during study were predictive of subsequent memory performance. Importantly, intrinsic image memorability does not explain this finding. Eye movements during test were only predictive of memory within the first 600 ms of the trial. To examine whether this relationship between eye movements and memory is causal, Experiment 2 manipulated participants' ability to make eye movements during either study or test in a new-old recognition task. Participants were either encouraged to freely explore the scene in both the study and test phases, or had to refrain from making eye movements in either the test phase, the study phase, or both. We found that hit rate was significantly higher when participants moved their eyes during the study phase, regardless of what they did in the test phase. False alarm rate, on the other hand, was affected only by eye movements during the test phase: it decreased when participants were encouraged to explore the scene. Taken together, these results reveal a dissociation of the role of eye movements during the encoding and recognition of scenes. Eye movements during study are instrumental in forming memories, and eye movements during recognition support the judgment of memory veracity. |
Claudia Damiano; John Wilder; Dirk B. Walther Mid-level feature contributions to category-specific gaze guidance Journal Article In: Attention, Perception, and Psychophysics, vol. 81, pp. 35–46, 2019. @article{Damiano2019a, Our research has previously shown that scene categories can be predicted from observers' eye movements when they view photographs of real-world scenes. The time course of category predictions reveals the differential influences of bottom-up and top-down information. Here we used these known differences to determine to what extent image features at different representational levels contribute toward guiding gaze in a category-specific manner. Participants viewed grayscale photographs and line drawings of real-world scenes while their gaze was tracked. Scene categories could be predicted from fixation density at all times over a 2-s time course in both photographs and line drawings. We replicated the shape of the prediction curve found previously, with an initial steep decrease in prediction accuracy from 300 to 500 ms, representing the contribution of bottom-up information, followed by a steady increase, representing top-down knowledge of category-specific information. We then computed the low-level features (luminance contrasts and orientation statistics), mid-level features (local symmetry and contour junctions), and Deep Gaze II output from the images, and used that information as a reference in our category predictions in order to assess their respective contributions to category-specific guidance of gaze. We observed that, as expected, low-level salience contributes mostly to the initial bottom-up peak of gaze guidance. Conversely, the mid-level features that describe scene structure (i.e., local symmetry and junctions) split their contributions between bottom-up and top-down attentional guidance, with symmetry contributing to both bottom-up and top-down guidance, while junctions play a more prominent role in the top-down guidance of gaze. |
Julien Dampuré; Pedro Javier López-Pérez; Horacio A. Barber Meaning-based attentional guidance as a function of foveal and task-related cognitive loads Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 1, pp. 1–12, 2019. @article{Dampure2019, The depth of parafoveal word processing depends on the amount of cognitive resources available. Whether this principle applies to the parafoveal semantic processing of multiple words remains, however, controversial. This study therefore aimed to test the impact of the amount of cognitive resources available on the parafoveal semantic processing of words by manipulating the foveal and task-related cognitive loads. Participants searched for words in displays of three semantically related or unrelated words, one of which was presented in the centre of the screen and two within the parafovea. The nature of the task and the characteristics of the centred word were manipulated to vary the task-related load and the foveal load, respectively. Analyses revealed more first saccades toward the parafoveal semantic distractors when both loads were low. These results indicate that fast parafoveal semantic word processing is constrained by the availability of cognitive resources. |
Antea D'Andrea; Federico Chella; Tom R. Marshall; Vittorio Pizzella; Gian Luca Romani; Ole Jensen; Laura Marzetti In: NeuroImage, vol. 188, pp. 722–732, 2019. @article{DAndrea2019, It is well known that attentional selection of relevant information relies on local synchronization of alpha band neuronal oscillations in visual cortices for inhibition of distracting inputs. Additionally, evidence for long-range coupling of neuronal oscillations between visual cortices and regions engaged in the anticipation of upcoming stimuli has been more recently provided. Nevertheless, the relation between long-range functional coupling and anatomical connections is still to be assessed, and the specific roles of the alpha and beta frequency bands in the different processes underlying visuo-spatial attention need further clarification. We address these questions using measures of linear (frequency-specific) and nonlinear (cross-frequency) phase-synchronization in a cohort of 28 healthy subjects using magnetoencephalography. We show that alpha band phase-synchronization is modulated by the orienting of attention according to a parieto-occipital top-down mechanism reflecting behavior, and its hemispheric asymmetry is predicted by the volume asymmetry of specific tracts of the Superior Longitudinal Fasciculus. We also show that a network comprising parietal regions and the right putative Frontal Eye Field, but not the left, is recruited in the deployment of spatial attention through an alpha-beta cross-frequency coupling. Overall, we demonstrate that the visuospatial attention network features subsystems indexed by characteristic spectral fingerprints, playing different functional roles in the anticipation of upcoming stimuli and with diverse relations to fiber tracts. |
David Souto; Jayesha Chudasama; Dirk Kerzel; Alan Johnston Motion integration is anisotropic during smooth pursuit eye movements Journal Article In: Journal of Neurophysiology, vol. 121, pp. 1787–1797, 2019. @article{David2019, Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world's global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements. |
Cristina De la Malla; Simon K. Rushton; Kait Clark; Jeroen B. J. Smeets; Eli Brenner The predictability of a target's motion influences gaze, head, and hand movements when trying to intercept it Journal Article In: Journal of Neurophysiology, vol. 121, no. 6, pp. 2416–2427, 2019. @article{DelaMalla2019, Does the predictability of a target's movement and of the interception location influence how the target is intercepted? In a first experiment, we manipulated the predictability of the interception location. A target moved along a haphazardly curved path, and subjects attempted to tap on it when it entered a hitting zone. The hitting zone was either a large ring surrounding the target's starting position (ring condition) or a small disk that became visible before the target appeared (disk condition). The interception location gradually became apparent in the ring condition, whereas it was immediately apparent in the disk condition. In the ring condition, subjects pursued the target with their gaze. Their heads and hands gradually moved in the direction of the future tap position. In the disk condition, subjects immediately directed their gaze toward the hitting zone by moving both their eyes and heads. They also moved their hands to the future tap position sooner than in the ring condition. In a second and third experiment, we made the target's movement more predictable. Although this made the targets easier to pursue, subjects now shifted their gaze to the hitting zone soon after the target appeared in the ring condition. In the disk condition, they still usually shifted their gaze to the hitting zone at the beginning of the trial. Together, the experiments show that predictability of the interception location is more important than predictability of target movement in determining how we move to intercept targets. |
Chih-Yang Chen; Klaus-Peter Hoffmann; Claudia Distler; Ziad M. Hafed The foveal visual representation of the primate superior colliculus Journal Article In: Current Biology, vol. 29, no. 13, pp. 2109–2119, 2019. @article{Chen2019h, A defining feature of the primate visual system is its foveated nature. Processing of foveal retinal input is important not only for high-quality visual scene analysis but also for ensuring precise, albeit tiny, gaze shifts during high-acuity visual tasks. The representations of foveal retinal input in the primate lateral geniculate nucleus and early visual cortices have been characterized. However, how such representations translate into precise eye movements remains unclear. Here, we document functional and structural properties of the foveal visual representation of the midbrain superior colliculus. We show that the superior colliculus, classically associated with extra-foveal spatial representations needed for gaze shifts, is highly sensitive to visual input impinging on the fovea. The superior colliculus also represents such input in an orderly and very specific manner, and it magnifies the representation of foveal images in neural tissue as much as the primary visual cortex does. The primate superior colliculus contains a high-fidelity visual representation, with large foveal magnification, perfectly suited for active visuomotor control and perception. Chen et al. show that superior colliculus (SC) is highly sensitive to foveal visual input and that it magnifies foveal image representation much more than previously anticipated. Their topography data, with large foveal magnification, show that tiny foveal stimuli can activate a large portion of SC neural tissue due to fixational eye movements. |
Hung-Wen Chen; Su-Ling Yeh Effects of blue light on dynamic vision Journal Article In: Frontiers in Psychology, vol. 10, pp. 497, 2019. @article{Chen2019c, Dynamic vision is crucial not only to animals' hunting behaviors but also to human activities, yet little is known about how to enhance it, except through extensive training such as that undertaken by athletes. Exposure to blue light has been shown to enhance human alertness (Chellappa et al., 2011), perhaps through intrinsically photosensitive retinal ganglion cells (ipRGCs), which are sensitive to motion perception as revealed by animal studies. However, it remains unknown whether blue light can enhance human dynamic vision, a motion-related ability. We conducted five experiments under blue or orange light to test three important components of dynamic vision: eye pursuit accuracy (EPA, Experiment 1), kinetic visual acuity (KVA, Experiments 1 and 2), and dynamic visual acuity (DVA, Experiments 3-5). EPA was measured by the distance between the position of the fixation and the position of the target when participants tracked a target dot. In the KVA task, participants reported three central target numbers (randomly chosen from 0 to 9) moving toward participants in the depth plane, with the speed threshold calculated by a staircase procedure. In the DVA task, three numbers were presented along the meridian line on the same depth plane, with motion direction (Experiment 3) and difficulty level (Experiment 4) manipulated, and a blue light filter lens was used to test the ipRGC contribution (Experiment 5). Results showed that blue light enhanced EPA and DVA, but reduced KVA. Further, the DVA enhancement was modulated by difficulty level: the blue light enhancement effect was found only with the hard task in the downward motion in Experiment 3 and with the low contrast target in Experiment 4. However, this blue light enhancement effect was not caused by the ipRGC mechanism, at least not in the range we tested. In this first study demonstrating the relationship between different components of dynamic vision and blue light, our findings that DVA can be enhanced under blue light with hard but not easy tasks indicate that blue light can enhance dynamic visual discrimination when needed. |
Jiageng Chen; Andrew B. Leber; Julie D. Golomb Attentional capture alters feature perception Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 11, pp. 1443–1454, 2019. @article{Chen2019d, We live in a dynamic, distracting world. When distracting information captures attention, what are the consequences for perception? Previous literature has focused on effects such as reaction time (RT) slowing, accuracy decrements, and oculomotor capture by distractors. In the current study, we asked whether attentional capture by distractors can also more fundamentally alter target feature representations, and if so, whether participants are aware of such errors. Using a continuous report task and novel confidence range report paradigm, we discovered 2 types of feature-binding errors when a distractor was presented along with the target: First, when attention is strongly captured by the distractor, participants commit swapping errors (misreporting the color at the distractor location instead of the target color), which remarkably seem to occur without awareness. Second, when participants successfully resist capture, they tend to exhibit repulsion (perceptual distortion away from the color at the distractor location). Thus, we found that capture not only induces a spatial shift of attention, it also alters feature perception in striking ways. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner Saccadic suppression measured by steady-state visual evoked potentials Journal Article In: Journal of Neurophysiology, vol. 122, no. 1, pp. 251–258, 2019. @article{Chen2019e, Visual sensitivity is severely impaired during the execution of saccadic eye movements. This phenomenon has been extensively characterized in human psychophysics and nonhuman primate single-neuron studies, but a physiological characterization in humans is less established. Here, we used a method based on the steady-state visually evoked potential (SSVEP), an oscillatory brain response to periodic visual stimulation, to examine how saccades affect visual sensitivity. Observers made horizontal saccades back and forth, while horizontal black-and-white gratings flickered at 5-30 Hz in the background. We analyzed EEG epochs with a length of 0.3 s either centered at saccade onset (saccade epochs) or centered at fixations half a second before the saccade (fixation epochs). Compared with fixation epochs, saccade epochs showed a broadband power increase, which most likely resulted from saccade-related EEG activity. The execution of saccades, however, led to an average reduction of 57% in the SSVEP amplitude at the stimulation frequency. This result provides additional evidence for an active saccadic suppression in the early visual cortex in humans. Compared with previous functional MRI and EEG studies, an advantage of this approach lies in its capability to trace the temporal dynamics of neural activity throughout the time course of a saccade. In contrast to previous electrophysiological studies in nonhuman primates, we did not find any evidence for postsaccadic enhancement, even though simulation results show that our method would have been able to detect it. We conclude that SSVEP is a useful technique to investigate the neural correlates of visual perception during saccadic eye movements in humans. |
Nihong Chen; Kilho Shin; Rachel Millin; Yongqian Song; Miyoung Kwon; Bosco S. Tjan Cortical reorganization of peripheral vision induced by simulated central vision loss Journal Article In: Journal of Neuroscience, vol. 39, no. 18, pp. 3529–3536, 2019. @article{Chen2019, When one's central vision is deprived, a spared part of the peripheral retina acts as a pseudofovea for fixation. The neural mechanisms underlying this compensatory adjustment remain unclear. Here we report cortical reorganization induced by simulated central vision loss. Human subjects of both sexes learned to place the target at an eccentric retinal locus outside their blocked visual field for object tracking. Before and after training, we measured visual crowding, a bottleneck of object identification in peripheral vision, using psychophysics and fMRI. We found that training led to an axis-specific reduction of crowding. The change of the crowding effect was reflected in the change of BOLD signal, as a release of cortical suppression in multiple visual areas starting as early as V1. Our findings suggest that the adult visual system is capable of reshaping its oculomotor control and sensory coding to adapt to impoverished visual input. |
Wenxiang Chen; Xiangling Zhuang; Zixin Cui; Guojie Ma Drivers' recognition of pedestrian road-crossing intentions: Performance and process Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 64, pp. 552–564, 2019. @article{Chen2019a, Drivers' recognition of pedestrian road crossing intentions is an essential process during driver-pedestrian interaction. However, compared with the rich observational findings on interaction behavior, little is known about drivers' performance in recognizing pedestrian intentions, or about the underlying cognitive processes. To fill in the gap, this study evaluated drivers' performance in making judgments of pedestrians' road crossing intentions in recorded natural driving scenes. Experienced and novice drivers identified pedestrians as “will cross” or “will not cross” at some time-to-arrival while their eye movements were recorded. The results showed that experienced drivers were more conservative in discriminating whether a pedestrian would cross or not (preferring a “pedestrian will cross” judgment) and engaged in a higher level of information processing of pedestrian intention. Regardless of driving experience, drivers had a higher detection rate, earlier detection, a higher level of information processing, and a quicker response for pedestrians who intended to cross than for those who did not intend to cross. A quicker response was also achieved when the time-to-arrival was smaller. Analysis of eye movements showed an attentional bias to the upper body of pedestrians when recognizing intention. These findings offer an initial understanding of the intention recognition process during driver-pedestrian interaction and inform directions for autonomous driving research on interacting with pedestrians. |
Yupei Chen; Gregory J. Zelinsky Is there a shape to the attention spotlight? Computing saliency over proto-objects predicts fixations during scene viewing Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 1, pp. 139–154, 2019. @article{Chen2019g, Attention controls the selective routing of visual inputs for classification. This "spotlight" of attention has been assumed to be a Gaussian, but here we propose that this routing occurs in the form of a shape. We show that a model of attention control that spatially averages saliency values over proto-objects (POs), fragments of feature-similar visual space, is better able to predict the fixation density maps and scanpaths made during the free viewing of 384 natural scenes by 12 participants than comparable saliency models that do not consider shape. We further show that this image-computable PO model is nearly as good in predicting fixations (density and scanpaths) as a model of fixation prediction that uses hand-segmented object labels. We interpret these results as suggesting that the spotlight of attention has a shape, and that these shapes can be quantified as regions of space that we refer to as POs. |
Jieun Cho; Sang Chul Chong Search termination when the target is absent: The prevalence of coarse processing and its intertrial influence Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, pp. 1–16, 2019. @article{Cho2019, The visual system can flexibly distribute attentional resources to search areas, with this reflected in the spatial scale of information processing. Visual processing can be either coarse at a global level, or fine at a local level. Previous studies showed the transition between these 2 modes, from coarse to fine, but it has been unclear when and how this occurs. The current study investigated how processing modes change depending on target presence and distractor heterogeneity. In our experiments, participants searched for a uniquely oriented target. Experiment 1 showed that toward search termination, target-absent trials revealed larger saccadic amplitudes with shorter fixation durations, compared with target-present trials. This suggests that the coarse processing mode emerges in target-absent trials, reflecting the tendency to reject multiple distractors at a broad spatial scale if all the items are deemed to-be-rejected. On the other hand, in target-present trials, the transition toward focal processing is provoked by a target. Moreover, Experiment 2 showed decreased search durations when preceded by a target-absent trial. This implies that processing modes can be transferred between trials and that maintaining the coarse mode from a previous target-absent trial can be advantageous for starting a new search trial. |
Sabine Born Repeatedly flashed luminance noise can make objects look further apart Journal Article In: i-Perception, vol. 10, no. 3, pp. 1–10, 2019. @article{Born2019, Luminance noise is widely used as a mask in Experimental Psychology. But can luminance noise also affect where we perceive an object or change the perceived distance between objects? In this study, I investigated the effect of a repeatedly flashed luminance noise pattern on the perceived separation between two bars. Indeed, compared to conditions without dynamic luminance noise, the spacing between the bars was overestimated when the pattern flashed on-and-off in the background. The cause for this remarkably stable effect remains unknown. Potential relations to apparent motion, masking, attentional biases, and other visual illusions are discussed. |
Sabine Born; Michael Puntiroli; Damien Jordan; Dirk Kerzel Saccadic selection does not eliminate attribute amnesia Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 12, pp. 2165–2173, 2019. @article{Born2019b, Attribute amnesia (Chen & Wyble, 2015, 2016) demonstrates that we may not always be able to spontaneously retrieve a simple attribute of a visual object (e.g., its color) for conscious report, even though the object had just been the target in a visual task. Attribute amnesia has been suggested to reflect a lack of consolidation of the task-irrelevant attribute in visual working memory. Here we tested whether saccadic selection eliminates or attenuates attribute amnesia. Saccade targets have been shown to be preferentially encoded into visual working memory and may therefore be spared. We used simple color pop-out displays, asking participants to indicate the location of the color singleton letter target on each trial either by keypress or by making a saccade toward it. After a couple of trials and unannounced to the participants, we asked for the color and identity of the last target letter on a surprise trial. We found that saccade targets were not spared from attribute amnesia: Participants were as bad in correctly reporting the color in the saccade as in the keypress condition. For letter identity, the effect was attenuated but not abolished when the target was foveated for a short period of time. We argue that the current results do not refute an obligatory coupling between saccadic selection and encoding in visual working memory. However, the encoded information may not necessarily be stored in a manner that is robust enough to persist in the face of a surprise question. |
Vanessa K. Bowden; J. Edwin Dickinson; Robert J. Green; David R. Badcock Phase specific shape aftereffects explained by the tilt aftereffect Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 7, pp. 889–910, 2019. @article{Bowden2019, Aftereffects of adaptation are frequently used to infer mechanisms of human visual perception. Commonly, the properties of stimuli are repelled from properties of the adaptor. For example, in the tilt aftereffect a line is repelled in orientation from a previously experienced line. Perceived orientation is predicted by the centroid of the responses of a population of mechanisms individually tuned to limited ranges of orientation but collectively sensitive to the whole possible range. Aftereffects are also predictable if the mechanisms are allowed to adapt. Adaptation across radial frequency patterns, patterns deformed from circular by a sinusoidal modulation of radius, causes repulsive aftereffects, sensitive to the relative amplitudes and orientations of the patterns. Here we show that these shape aftereffects can be accounted for by the application of local tilt aftereffects around the shape contour. We suggest that fields of tilt aftereffects might provide a general mechanism for exaggerating the perceptual difference between successively experienced stimuli, making them more discriminable. If the human visual system does indeed exploit this possibility, then the conclusions often made by studies assuming adaptation within mechanisms sensitive to the shape of stimuli will need to be reconsidered. |
Ellen R. Bradley; Alison Seitz; Andrea N. Niles; Katherine P. Rankin; Daniel H. Mathalon; Aoife O'Donovan; Joshua D. Woolley Oxytocin increases eye gaze in schizophrenia Journal Article In: Schizophrenia Research, vol. 212, pp. 177–185, 2019. @article{Bradley2019, Abnormal eye gaze is common in schizophrenia and linked to functional impairment. The hypothalamic neuropeptide oxytocin modulates visual attention to social stimuli, but its effects on eye gaze in schizophrenia are unknown. We examined visual scanning of faces in men with schizophrenia and neurotypical controls to quantify oxytocin effects on eye gaze. In a randomized, double-blind, crossover study, 33 men with schizophrenia and 39 matched controls received one dose of intranasal oxytocin (40 IU) and placebo on separate testing days. Participants viewed 20 color photographs of faces while their gaze patterns were recorded. We tested for differences in fixation time on the eyes between patients and controls as well as oxytocin effects using linear mixed-effects models. We also tested whether attachment style, symptom severity, and anti-dopaminergic medication dosage moderated oxytocin effects. In the placebo condition, patients showed reduced fixation time on the eyes compared to controls. Oxytocin was associated with an increase in fixation time among patients, but a decrease among controls. Higher attachment anxiety and greater symptom severity predicted increased fixation time on the eyes on oxytocin versus placebo. Anti-dopaminergic medication dosage and attachment avoidance did not impact response to oxytocin. Consistent with findings that oxytocin optimizes processing of social stimuli, intranasal oxytocin enhanced eye gaze in men with schizophrenia. Further work is needed to determine whether changes in eye gaze impact social cognition and functional outcomes. Both attachment anxiety and symptom severity predicted oxytocin response, highlighting the importance of examining potential moderators of oxytocin effects in future studies. |
Laurel Brehm; Linda Taschenberger; Antje S. Meyer Mental representations of partner task cause interference in picture naming Journal Article In: Acta Psychologica, vol. 199, pp. 1–13, 2019. @article{Brehm2019, Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli. |
Zohar Bromberg; Opher Donchin; Shlomi Haar Eye movements during visuomotor adaptation represent only part of the explicit learning Journal Article In: eNeuro, vol. 6, no. 6, pp. 1–12, 2019. @article{Bromberg2019, Visuomotor rotations are learned through a combination of explicit strategy and implicit recalibration. However, measuring the relative contribution of each remains a challenge and the possibility of multiple explicit and implicit components complicates the issue. Recent interest has focused on the possibility that eye movements reflect explicit strategy. Here we compared eye movements during adaptation to two accepted measures of explicit learning - verbal report and the exclusion test. We found that while reporting, all subjects showed a match between all three measures. However, when subjects did not report their intention, the eye movements of some subjects suggested less explicit adaptation than what was measured in an exclusion test. Interestingly, subjects whose eye movements did match their exclusion could be clustered into two subgroups: fully implicit learners showing no evidence of explicit adaptation and explicit learners with little implicit adaptation. Subjects showing a mix of both explicit and implicit adaptation were also those where eye movements showed less explicit adaptation than did exclusion. Thus, our results support the idea of multiple components of explicit learning as only part of the explicit learning is reflected in the eye movements. Individual subjects may use explicit components that are reflected in the eyes or those that are not or some mixture of the two. Analysis of reaction times suggests that the explicit components reflected in the eye movements involve longer reaction times. This component, according to recent literature, may be related to mental rotation. Significance Statement Visuomotor adaptation involves both explicit and implicit components: aware re-aiming and unaware error correction. Recent studies suggest that eye movements could be used to capture the explicit component, a method that would have significant advantages over other approaches. We show that eye movements capture only one component of explicit adaptation. This component scales with reaction time while the component unrelated to eye movements does not. Our finding has obvious practical implications for the use of eye movements as a proxy for explicit learning. However, our results also corroborate recent findings suggesting the existence of multiple explicit components, and, specifically, their decomposition into components correlated with reaction time and components that are not. |
Maria De Luca; Maria Rosa Pizzamiglio; Antonella Di Vita; Liana Palermo; Antonio Tanzilli; Claudia Dacquino; Laura Piccardi First the nose, last the eyes in congenital prosopagnosia: Look like your father looks Journal Article In: Neuropsychology, vol. 33, no. 6, pp. 855–861, 2019. @article{DeLuca2019, Objective: To contribute to the limited body of eye movement (EM) studies of children and family members with congenital prosopagnosia (CP), a task requiring a verbal response for the identification of personally familiar faces was used for the 1st time. Method: EMs were recorded in a father and his son (both diagnosed with CP) and controls (N = 2). In the identification tasks, they watched personally familiar faces and distracters and responded by saying the names of the familiar faces or saying "I don't know." Two discrimination tasks were added to distinguish the specificity of the EM pattern for the recognition tasks. In all tasks, faces were presented 1 by 1 until the response onset; thus, the EM pattern was not saturated by overexposure to the stimulus. The 1st fixation position was examined to localize the 1st area of the face attended to. The spatial-temporal fixation pattern was examined to evaluate the attention devoted to specific regions. Results: Both family members were inaccurate and slower than controls in the identification but not the discrimination tasks. In all tasks, they made a number of fixations comparable to those of controls but showed longer fixation durations than controls did. In the identification tasks, they showed poor spatial-temporal distribution of fixations on the eyes and rare 1st fixations on the eyes. Conclusions: Consistent with the literature, both family members showed the typical reduced sampling of the eyes. Nevertheless, our protocol based on explicit verbal responses (which recorded EMs only until response onset) showed that they did not increase spatial sampling overall by making more fixations than controls did. Instead, they showed longer fixation durations across tasks; this was interpreted as a generalized problem with face processing in affording a more robust sampling of information. |
Nuno Alexandre De Sá Teixeira; Gianfranco Bosco; Sergio Delle Monache; Francesco Lacquaniti In: Experimental Brain Research, vol. 237, no. 12, pp. 3375–3390, 2019. @article{DeSaTeixeira2019, The perceived vanishing location of a moving target is systematically displaced forward, in the direction of motion (representational momentum), and downward, in the direction of gravity (representational gravity). Despite a wealth of research on the factors that modulate these phenomena, little is known regarding their neurophysiological substrates. The present experiment aims to explore what role is played by cortical areas hMT/V5+, linked to the processing of visual motion, and TPJ, thought to support the functioning of an internal model of gravity, in modulating both effects. Participants were required to perform a standard spatial localization task while the activity of the right hMT/V5+ or TPJ site was selectively disrupted with an offline continuous theta-burst stimulation (cTBS) protocol, interspersed with control blocks with no stimulation. Eye movements were recorded during all spatial localizations. Results revealed an increase in representational gravity contingent on the disruption of the activity of hMT/V5+; conversely, some evidence suggested a larger representational momentum when TPJ was stimulated. Furthermore, stimulation of hMT/V5+ led to a decreased ocular overshoot and to a time-dependent downward drift of gaze location. These outcomes suggest that a reciprocal balance between perceived kinematics and anticipated dynamics might modulate these spatial localization responses, compatible with a push–pull mechanism. |
Matteo De Tommaso; Tommaso Mastropasqua; Massimo Turatto Multiple reward–cue contingencies favor expectancy over uncertainty in shaping the reward–cue attentional salience Journal Article In: Psychological Research, vol. 83, no. 2, pp. 332–346, 2019. @article{DeTommaso2019a, Reward-predicting cues attract attention because of their motivational value. A debated question concerns the conditions under which the cue's attentional salience is governed more by reward expectancy than by reward uncertainty. To help shed light on this issue, here we manipulated expectancy and uncertainty using three levels of reward–cue contingency, so that, for example, a high level of reward expectancy (p = .8) was compared with the highest level of reward uncertainty (p = .5). In Experiment 1, the best reward–cue during conditioning was preferentially attended in a subsequent visual search task. This result was replicated in Experiment 2, in which the cues were matched in terms of response history. In Experiment 3, we implemented a hybrid procedure consisting of two phases: an omission contingency procedure during conditioning, followed by a visual search task as in the previous experiments. Crucially, during both phases, the reward–cues were never task relevant. Results confirmed that, when multiple reward–cue contingencies are explored by a human observer, expectancy is the major factor controlling both the attentional and the oculomotor salience of the reward–cue. |