Cognitive Eye-Tracking Publications
All EyeLink cognitive and perception eye-tracking research publications up until 2024 (with some early 2025s) are listed below by year. You can search the eye-tracking publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2015 |
Victor de Lafuente; Mehrdad Jazayeri; Michael N. Shadlen Representation of accumulating evidence for a decision in two parietal areas Journal Article In: Journal of Neuroscience, vol. 35, no. 10, pp. 4306–4318, 2015. @article{Lafuente2015, Decisions are often made by accumulating evidence for and against the alternatives. The momentary evidence represented by sensory neurons is accumulated by downstream structures to form a decision variable, linking the evolving decision to the formation of a motor plan. When decisions are communicated by eye movements, neurons in the lateral intraparietal area (LIP) represent the accumulation of evidence bearing on the potential targets for saccades. We now show that reach-related neurons from the medial intraparietal area (MIP) exhibit a gradual modulation of their firing rates consistent with the representation of an evolving decision variable. When decisions were communicated by saccades instead of reaches, decision-related activity was attenuated in MIP, whereas LIP neurons were active while monkeys communicated decisions by saccades or reaches. Thus, for decisions communicated by a hand movement, a parallel flow of sensory information is directed to parietal areas MIP and LIP during decision formation. |
Stefania de Vito; Antimo Buonocore; Jean-François Bonnefon; Sergio Della Sala Eye movements disrupt episodic future thinking Journal Article In: Memory, vol. 23, no. 6, pp. 796–805, 2015. @article{Vito2015, Remembering the past and imagining the future both rely on complex mental imagery. We considered the possibility that constructing a future scene might tap a component of mental imagery that is not as critical for remembering past scenes. Whereas visual imagery plays an important role in remembering the past, we predicted that spatial imagery plays a crucial role in imagining the future. For the purpose of teasing apart the different components underpinning scene construction in the two experiences of recalling episodic memories and shaping novel future events, we used a paradigm that might selectively affect one of these components (i.e., the spatial). Participants performed concurrent eye movements while remembering the past and imagining the future. These concurrent eye movements selectively interfere with spatial imagery, while sparing visual imagery. Eye movements prevented participants from imagining complex and detailed future scenes, but had no comparable effect on the recollection of past scenes. Similarities between remembering the past and imagining the future are coupled with some differences. The present findings uncover another fundamental divergence between the two processes. |
Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco Eye movements and manual interception of ballistic trajectories: effects of law of motion perturbations and occlusions Journal Article In: Experimental Brain Research, vol. 233, no. 2, pp. 359–374, 2015. @article{DelleMonache2015, Manual interceptions are known to depend critically on integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0g) or hypergravity (2g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of amounts of time they gazed at different visual targets and of overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled oculomotor and interceptive behavior differently depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. 
Instead, with occluded targets, they achieved better performance by increasing the target's tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response. |
Loni Desanghere; Jonathan J. Marotta The influence of object shape and center of mass on grasp and gaze Journal Article In: Frontiers in Psychology, vol. 6, pp. 1537, 2015. @article{Desanghere2015, Recent experiments examining where participants look when grasping an object found that fixations favour the eventual index finger landing position on the object. Even though the act of picking up an object must involve complex high-level computations such as the visual analysis of object contours, surface properties, knowledge of an object's function and center of mass (COM) location, these investigations have generally used simple symmetrical objects – where COM and horizontal midline overlap. Less research has examined how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the object's horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the midline of the objects based on the alteration of the object's shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as was previously reported with symmetrical objects. Fixation positions were also found to be more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, which would allow both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects when compared to their influence on grasp locations, with fixation locations more sensitive to these manipulations. 
Together, these findings demonstrate how object features differentially influence gaze vs. grasp positions during object interaction. |
Dario Cazzoli; Simon Jung; Thomas Nyffeler; Tobias Nef; Pascal Wurtz; Urs P. Mosimann; René M. Müri The role of the right frontal eye field in overt visual attention deployment as assessed by free visual exploration Journal Article In: Neuropsychologia, vol. 74, pp. 37–41, 2015. @article{Cazzoli2015, The frontal eye field (FEF) is known to be involved in saccade generation and visual attention control. Studies applying covert attentional orienting paradigms have shown that the right FEF is involved in attentional shifts to both the left and the right hemifield. In the current study, we aimed at examining the effects of inhibitory continuous theta burst (cTBS) transcranial magnetic stimulation over the right FEF on overt attentional orienting, as measured by a free visual exploration paradigm. In forty-two healthy subjects, free visual exploration of naturalistic pictures was tested in three conditions: (1) after cTBS over the right FEF; (2) after cTBS over a control site (vertex); and, (3) without any stimulation. The results showed that cTBS over the right FEF, but not cTBS over the vertex, triggered significant changes in the spatial distribution of the cumulative fixation duration. Compared to the group without stimulation and the group with cTBS over the vertex, cTBS over the right FEF decreased cumulative fixation duration in the left and in the right peripheral regions, and increased cumulative fixation duration in the central region. The present study supports the view that the right FEF is involved in the bilateral control of not only covert, but also of overt, peripheral visual attention. |
Sarah Chabal; Viorica Marian Speakers of different languages process the visual world differently Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 3, pp. 539–550, 2015. @article{Chabal2015, Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. |
Sarah Chabal; Scott R. Schroeder; Viorica Marian Audio-visual object search is changed by bilingual experience Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 8, pp. 2684–2693, 2015. @article{Chabal2015a, The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye movements revealed that this speed advantage was driven by bilinguals' ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals', but not monolinguals', object-finding ability was positively associated with their executive control ability. We conclude that bilinguals' executive control advantages extend to real-world visual processing and object finding within a multi-modal environment. |
Jason L. Chan; Michael J. Koval; Thilo Womelsdorf; Stephen G. Lomber; Stefan Everling Dorsolateral prefrontal cortex deactivation in monkeys reduces preparatory beta and gamma power in the superior colliculus Journal Article In: Cerebral Cortex, vol. 25, no. 12, pp. 4704–4714, 2015. @article{Chan2015, Cognitive control requires the selection and maintenance of task-relevant stimulus-response associations, or rules. The dorsolateral prefrontal cortex (DLPFC) has been implicated by lesion, functional imaging, and neurophysiological studies to be involved in encoding rules, but the mechanisms by which it modulates other brain areas are poorly understood. Here, the functional relationship of the DLPFC with the superior colliculus (SC) was investigated by bilaterally deactivating the DLPFC while recording local field potentials (LFPs) in the SC in monkeys performing an interleaved pro- and antisaccade task. Event-related LFPs showed differences between pro- and antisaccades and responded prominently to stimulus presentation. LFP power after stimulus onset was higher for correct saccades than erroneous saccades. Deactivation of the DLPFC did not affect stimulus onset related LFP activity, but reduced high beta (20-30 Hz) and high gamma (60-150 Hz) power during the preparatory period for both pro- and antisaccades. Spike rate during the preparatory period was positively correlated with gamma power and this relationship was attenuated by DLPFC deactivation. These results suggest that top-down control of the SC by the DLPFC may be mediated by beta oscillations. |
Steve W. C. Chang; Nicholas A. Fagan; Koji Toda; Amanda V. Utevsky; John M. Pearson; Michael L. Platt Neural mechanisms of social decision-making in the primate amygdala Journal Article In: Proceedings of the National Academy of Sciences, vol. 112, no. 52, pp. 16012–16017, 2015. @article{Chang2015, Significance: Making social decisions requires evaluation of benefits and costs to self and others. Long associated with emotion and vigilance, neurons in primate amygdala also signal reward and punishment as well as information about the faces and eyes of others. Here we show that neurons in the basolateral amygdala signal the value of rewards for self and others when monkeys make social decisions. These value-mirroring neurons reflected monkeys' tendency to make prosocial decisions on a momentary as well as long-term basis. We also found that delivering the social peptide oxytocin into basolateral amygdala enhances both prosocial tendencies and attention to the recipients of prosocial decisions. Our findings endorse the amygdala as a critical neural nexus regulating social decisions. Social decisions require evaluation of costs and benefits to oneself and others. Long associated with emotion and vigilance, the amygdala has recently been implicated in both decision-making and social behavior. The amygdala signals reward and punishment, as well as facial expressions and the gaze of others. Amygdala damage impairs social interactions, and the social neuropeptide oxytocin (OT) influences human social decisions, in part, by altering amygdala function. Here we show in monkeys playing a modified dictator game, in which one individual can donate or withhold rewards from another, that basolateral amygdala (BLA) neurons signaled social preferences both across trials and across days. BLA neurons mirrored the value of rewards delivered to self and others when monkeys were free to choose but not when the computer made choices for them. 
We also found that focal infusion of OT unilaterally into BLA weakly but significantly increased both the frequency of prosocial decisions and attention to recipients for context-specific prosocial decisions, endorsing the hypothesis that OT regulates social behavior, in part, via amygdala neuromodulation. Our findings demonstrate both neurophysiological and neuroendocrinological connections between primate amygdala and social decisions. |
Philippe Chassy; Trym A. E. Lindell; Jessica A. Jones; Galina V. Paramei A relationship between visual complexity and aesthetic appraisal of car front images: An eye-tracker study Journal Article In: Perception, vol. 44, no. 8-9, pp. 1085–1097, 2015. @article{Chassy2015, Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N = 26) explored car front images (M = 50) while their eye movements were recorded. Following image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with the subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwelling time. Subjective measures of AP and VC were related too, following an inverted U-shape function best-fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. |
Magdalena Chechlacz; Glyn W. Humphreys; Stamatios N. Sotiropoulos; Christopher Kennard; Dario Cazzoli Structural organization of the corpus callosum predicts attentional shifts after continuous theta burst stimulation Journal Article In: Journal of Neuroscience, vol. 35, no. 46, pp. 15353–15368, 2015. @article{Chechlacz2015, Repetitive transcranial magnetic stimulation (rTMS) applied over the right posterior parietal cortex (PPC) in healthy participants has been shown to trigger a significant rightward shift in the spatial allocation of visual attention, temporarily mimicking spatial deficits observed in neglect. In contrast, rTMS applied over the left PPC triggers a weaker or null attentional shift. However, large interindividual differences in responses to rTMS have been reported. Studies measuring changes in brain activation suggest that the effects of rTMS may depend on both interhemispheric and intrahemispheric interactions between cortical loci controlling visual attention. Here, we investigated whether variability in the structural organization of human white matter pathways subserving visual attention, as assessed by diffusion magnetic resonance imaging and tractography, could explain interindividual differences in the effects of rTMS. Most participants showed a rightward shift in the allocation of spatial attention after rTMS over the right intraparietal sulcus (IPS), but the size of this effect varied largely across participants. Conversely, rTMS over the left IPS resulted in strikingly opposed individual responses, with some participants responding with rightward and some with leftward attentional shifts. We demonstrate that microstructural and macrostructural variability within the corpus callosum, consistent with differential effects on cross-hemispheric interactions, predicts both the extent and the direction of the response to rTMS. 
Together, our findings suggest that the corpus callosum may have a dual inhibitory and excitatory function in maintaining the interhemispheric dynamics that underlie the allocation of spatial attention. |
Cheng Chen; Xianghui Chen; Min Gao; Qiong Yang; Hongmei Yan Contextual influence on the tilt after-effect in foveal and para-foveal vision Journal Article In: Neuroscience Bulletin, vol. 31, no. 3, pp. 307–316, 2015. @article{Chen2015c, A sensory stimulus can only be properly interpreted in light of the stimuli that surround it in space and time. The tilt illusion (TI) and tilt after-effect (TAE) provide good evidence that the perception of a target depends strongly on both its spatial and temporal context. In previous studies, the TI and TAE have typically been investigated separately, so little is known about their co-effects on visual perception and information processing mechanisms. Here, we considered the influence of the spatial context and the temporal effect together and asked how center-surround context affects the TAE in foveal and para-foveal vision. Our results showed that different center-surround spatial patterns significantly affected the TAE for both foveal and para-foveal vision. In the fovea, the TAE was mainly produced by central adaptive gratings. Cross-oriented surroundings significantly inhibited the TAE, and iso-oriented surroundings slightly facilitated it; surround inhibition was much stronger than surround facilitation. In the para-fovea, the TAE was mainly decided by the surrounding patches. Likewise, a cross-oriented central patch inhibited the TAE, and an iso-oriented one facilitated it, but there was no significant difference between inhibition and facilitation. Our findings demonstrated, at the perceptual level, that our visual system adopts different mechanisms to process consistent or inconsistent central-surround orientation information and that the unequal magnitude of surround inhibition and facilitation is vitally important for the visual system to improve the detectability or discriminability of novel or incongruent stimuli. |
Lijing Chen; Yufang Yang Emphasizing the only character: EMPHASIS, attention and contrast Journal Article In: Cognition, vol. 136, pp. 222–227, 2015. @article{Chen2015b, In conversations, pragmatic information such as emphasis is important for identifying the speaker's/writer's intention. The present research examines the cognitive processes involved in emphasis processing. Participants read short discourses that introduced one or two character(s), with the character being emphasized or non-emphasized in subsequent texts. Eye movements showed that: (1) early processing of the emphasized word was facilitated, which may have been due to increased attention allocation, whereas (2) late integration of the emphasized character was inhibited when the discourse involved only this character. These results indicate that it is necessary to include other characters as contrastive characters to facilitate the integration of an emphasized character, and support the existence of a relationship between Emphasis and Contrast computation. Taken together, our findings indicate that both attention allocation and contrast computation are involved in emphasis processing, and support the incremental nature of sentence processing and the importance of contrast in discourse comprehension. |
Nigel T. M. Chen; Patrick J. F. Clarke; Tamara L. Watson; Colin MacLeod; Adam J. Guastella Attentional bias modification facilitates attentional control mechanisms: Evidence from eye tracking Journal Article In: Biological Psychology, vol. 104, pp. 139–146, 2015. @article{Chen2015d, Social anxiety is thought to be maintained by biased attentional processing towards threatening information. Research has further shown that the experimental attenuation of this bias, through the implementation of attentional bias modification (ABM), may serve to reduce social anxiety vulnerability. However, the mechanisms underlying ABM remain unclear. The present study examined whether inhibitory attentional control was associated with ABM. A non-clinical sample of participants was randomly assigned to receive either ABM or a placebo task. To assess pre-post changes in attentional control, participants were additionally administered an emotional antisaccade task. ABM participants exhibited a subsequent shift in attentional bias away from threat as expected. ABM participants further showed a subsequent decrease in antisaccade cost, indicating a general facilitation of inhibitory attentional control. Mediational analysis revealed that the shift in attentional bias following ABM was independent of the change in attentional control. The findings suggest that the mechanisms of ABM are multifaceted. |
Sheng-Chang Chen; Mi-Shan Hsiao; Hsiao-Ching She In: Computers in Human Behavior, vol. 53, pp. 169–180, 2015. @article{Chen2015e, This study examined the effectiveness of the different spatial abilities of high school students who constructed their understanding of the atomic orbital concepts and mental models after learning with multimedia learning materials presented in static and dynamic modes of 3D representation. A total of 60 high school students participated in this study and were randomly assigned into static and dynamic 3D representation groups. The dependent variables included a pre-test and post-test on atomic orbital concepts, an atomic orbital mental model construction test, and students' eye-movement behaviors. Results showed that students who learned with dynamic 3D representation allocated a significantly greater amount of attention, exhibited better performance on the mental model test, and constructed more sophisticated 3D hybridizations of the orbital mental model than the students in the static 3D group. The logistic regression result indicated that the dynamic 3D representation group students' number of saccades and number of re-readings were positive predictors, while the number of fixations was the negative predictor, for developing the students' 3D mental models of an atomic orbital. High-spatial-ability students outperformed the low-spatial-ability students on the atomic orbital conceptual test and mental model construction, while both types of students allocated similar amounts of attention to the 3D representations. Our results demonstrated that low-spatial-ability students' eye movement behaviors positively correlate with their performance on the atomic orbital concept test and the mental model construction. |
Xinxin Chen; Hongyan Yu; Fang Yu What is the optimal number of response alternatives for rating scales? From an information processing perspective Journal Article In: Journal of Marketing Analytics, vol. 3, no. 2, pp. 69–78, 2015. @article{Chen2015f, Rating scales are measuring instruments that are widely used in social science research. However, many different rating scale formats are used in the literature, differing specifically in the number of response alternatives offered. Previous studies on the optimal number of response alternatives have focused exclusively on the participants' final response results, rather than on the participants' information processing. We used an eye-tracking study to explore this issue from an information processing perspective. We analyzed the information processing in six scales with different response alternatives. We compared the reaction times, net acquiescence response styles, extreme response styles and proportional changes in the response alternatives of the six scales. Our results suggest that the optimal number of response alternatives is five. |
Jonas Everaert; Ernst H. W. Koster Interactions among emotional attention, encoding, and retrieval of ambiguous information: An eye-tracking study Journal Article In: Emotion, vol. 15, no. 5, pp. 539–543, 2015. @article{Everaert2015, Emotional biases in attention modulate encoding of emotional material into long-term memory, but little is known about the role of such attentional biases during emotional memory retrieval. The present study investigated how emotional biases in memory are related to attentional allocation during retrieval. Forty-nine individuals encoded emotionally positive and negative meanings derived from ambiguous information and then searched their memory for encoded meanings in response to a set of retrieval cues. The remember/know/new procedure was used to classify memories as recollection-based or familiarity-based, and gaze behavior was monitored throughout the task to measure attentional allocation. We found that a bias in sustained attention during recollection-based, but not familiarity-based, retrieval predicted subsequent memory bias toward positive versus negative material following encoding. Thus, during emotional memory retrieval, attention affects controlled forms of retrieval (i.e., recollection) but does not modulate relatively automatic, familiarity-based retrieval. These findings enhance understanding of how distinct components of attention regulate the emotional content of memories. Implications for theoretical models and emotion regulation are discussed. |
Michel Failing; Tom Nissens; Daniel Pearson; Mike Le Pelley; Jan Theeuwes Oculomotor capture by stimuli that signal the availability of reward Journal Article In: Journal of Neurophysiology, vol. 114, no. 4, pp. 2316–2327, 2015. @article{Failing2015, It is well known that eye movement patterns are influenced by both goal- and salience-driven factors. Recent studies, however, have demonstrated that objects that are nonsalient and task irrelevant can still capture our eyes if moving our eyes to those objects has previously produced reward. Here we demonstrate that training such an association between eye movements to an object and delivery of reward is not needed. Instead, an object that merely signals the availability of reward captures the eyes even when it is physically nonsalient and never relevant for the task. Furthermore, we show that oculomotor capture by reward is more reliably observed in saccades with short latencies. We conclude that a stimulus signaling high reward has the ability to capture the eyes independently of bottom-up physical salience or top-down task relevance and that the effect of reward affects early selection processes. |
Joseph D. Chisholm; Alan Kingstone Action video games and improved attentional control: Disentangling selection-and response-based processes Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 5, pp. 1430–1436, 2015. @article{Chisholm2015, Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus–response processes that impact human performance. |
Joseph D. Chisholm; Alan Kingstone Action video game players' visual search advantage extends to biologically relevant stimuli Journal Article In: Acta Psychologica, vol. 159, pp. 93–99, 2015. @article{Chisholm2015a, Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli - one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess if action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provides evidence that these benefits can generalize to more complex and biologically relevant stimuli. |
Wonil Choi; John M. Henderson Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing Journal Article In: Neuropsychologia, vol. 75, pp. 109–118, 2015. @article{Choi2015, Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, was also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network. |
Alasdair D. F. Clarke; Micha Elsner; Hannah Rohde Giving good directions: Order of mention reflects visual salience Journal Article In: Frontiers in Psychology, vol. 6, pp. 1793, 2015. @article{Clarke2015, In complex stimuli, there are many different possible ways to refer to a specified target. Previous studies have shown that when people are faced with such a task, the content of their referring expression reflects visual properties such as size, salience and clutter. Here, we extend these findings and present evidence that (i) the influence of visual perception on sentence construction goes beyond content selection and in part determines the order in which different objects are mentioned and (ii) order of mention influences comprehension. Study 1 (a corpus study of reference productions) shows that when a speaker uses a relational description to mention a salient object, that object is treated as being in the common ground and is more likely to be mentioned first. Study 2 (a visual search study) asks participants to listen to referring expressions and find the specified target; in keeping with the above result, we find that search for easy-to-find targets is faster when the target is mentioned first, while search for harder-to-find targets is facilitated by mentioning the target later, after a landmark in a relational description. Our findings show that seemingly low-level and disparate mental “modules” like perception and sentence planning interact at a high level and in task-dependent ways. |
Justine Cléry; Olivier Guipponi; Soline Odouard; Claire Wardak; Suliann Ben Hamed Impact prediction by looming visual stimuli enhances tactile detection Journal Article In: Journal of Neuroscience, vol. 35, no. 10, pp. 4179–4189, 2015. @article{Clery2015, From an ecological point of view, approaching objects are potentially more harmful than receding objects. A predator, a dominant conspecific, or a mere branch coming up at high speed can all be dangerous if one does not detect them and produce the appropriate escape behavior fast enough. And indeed, looming stimuli trigger stereotyped defensive responses in both monkeys and human infants. However, while the heteromodal somatosensory consequences of visual looming stimuli can be fully predicted by their spatiotemporal dynamics, few studies if any have explored whether visual stimuli looming toward the face predictively enhance heteromodal tactile sensitivity around the expected time of impact and at its expected location on the body. In the present study, we report that, in addition to triggering a defensive motor repertoire, looming stimuli toward the face provide the nervous system with predictive cues that enhance tactile sensitivity on the face. Specifically, we describe an enhancement of tactile processes at the expected time and location of impact of the stimulus on the face. We additionally show that a looming stimulus that brushes past the face also enhances tactile sensitivity on the nearby cheek, suggesting that the space close to the face is incorporated into the subjects' body schema. We propose that this cross-modal predictive facilitation involves multisensory convergence areas subserving the representation of a peripersonal space and a safety boundary of self. |
Moreno I. Coco; Frank Keller Integrating mechanisms of visual guidance in naturalistic language production Journal Article In: Cognitive Processing, vol. 16, no. 2, pp. 131–150, 2015. @article{Coco2015a, Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention. |
Russell Cohen Hoffing; Aaron R. Seitz Pupillometry as a glimpse into the neurochemical basis of human memory encoding Journal Article In: Journal of Cognitive Neuroscience, vol. 27, no. 4, pp. 765–774, 2015. @article{CohenHoffing2015, Neurochemical systems are well studied in animal learning; however, ethical issues limit methodologies to explore these systems in humans. Pupillometry provides a glimpse into the brain's neurochemical systems, where pupil dynamics in monkeys have been linked with locus coeruleus activity, which releases norepinephrine (NE) throughout the brain. Here, we use pupil dynamics as a surrogate measure of neurochemical activity to explore the hypothesis that NE is involved in modulating memory encoding. We examine this using a task-irrelevant learning paradigm in which learning is boosted for stimuli temporally paired with task targets. We show that participants better recognize images that are paired with task targets than distractors and, in correspondence, that pupil size changes more for target-paired than distractor-paired images. To further investigate the hypothesis that NE nonspecifically guides learning for stimuli that are present with its release, a second procedure was used that employed an unexpected sound to activate the LC–NE system and induce pupil-size changes; results indicated a corresponding increase in memorization of images paired with the unexpected sounds. Together, these results suggest a relationship between the LC–NE system, pupil-size changes, and human memory encoding. |
Noga Cohen; Natali Moyal; Avishai Henik Executive control suppresses pupillary responses to aversive stimuli Journal Article In: Biological Psychology, vol. 112, pp. 1–11, 2015. @article{Cohen2015a, Adaptive behavior depends on the ability to effectively regulate emotional responses. Continuous failure in the regulation of emotions can lead to heightened physiological reactions and to various psychopathologies. Recently, several behavioral and neuroimaging studies showed that exertion of executive control modulates emotion. Executive control is a high-order operation involved in goal-directed behavior, especially in the face of distractors or temptations. However, the role of executive control in regulating emotion-related physiological reactions is unknown. Here we show that exercise of executive control modulates reactivity of both the sympathetic and the parasympathetic components of the autonomic nervous system. Specifically, we demonstrate that both pupillary light reflex and pupil dilation for aversive stimuli are attenuated following recruitment of executive control. These findings offer new insights into the very basic mechanisms of emotion processing and regulation, and can lead to novel interventions for people suffering from emotion dysregulation psychopathologies. |
Joshua Correll; Bernd Wittenbrink; Matthew T. Crawford; Melody S. Sadler Stereotypic vision: How stereotypes disambiguate visual stimuli Journal Article In: Journal of Personality and Social Psychology, vol. 108, no. 2, pp. 219–233, 2015. @article{Correll2015, Three studies examined how participants use race to disambiguate visual stimuli. Participants performed a first-person-shooter task in which Black and White targets appeared holding either a gun or an innocuous object (e.g., a wallet). In Study 1, diffusion analysis (Ratcliff, 1978) showed that participants rapidly acquired information about a gun when it appeared in the hands of a Black target, and about an innocuous object in the hands of a White target. For counterstereotypic pairings (armed Whites, unarmed Blacks), participants acquired information more slowly. In Study 2, eye tracking showed that participants relied on more ambiguous information (measured by visual angle from fovea) when responding to stereotypic targets; for counterstereotypic targets, they achieved greater clarity before responding. In Study 3, participants were briefly exposed to targets (limiting access to visual information) but had unlimited time to respond. In spite of their slow, deliberative responses, they showed racial bias. This pattern is inconsistent with control failure and suggests that stereotypes influenced identification of the object. All 3 studies show that race affects visual processing by supplementing objective information. |
Patrick H. Cox; Maximilian Riesenhuber There is a "U" in clutter: Evidence for robust sparse codes underlying clutter tolerance in human vision Journal Article In: Journal of Neuroscience, vol. 35, no. 42, pp. 14148–14159, 2015. @article{Cox2015, The ability to recognize objects in clutter is crucial for human vision, yet the underlying neural computations remain poorly understood. Previous single-unit electrophysiology recordings in inferotemporal cortex in monkeys and fMRI studies of object-selective cortex in humans have shown that the responses to pairs of objects can sometimes be well described as a weighted average of the responses to the constituent objects. Yet, from a computational standpoint, it is not clear how the challenge of object recognition in clutter can be solved if downstream areas must disentangle the identity of an unknown number of individual objects from the confounded average neuronal responses. An alternative idea is that recognition is based on a subpopulation of neurons that are robust to clutter, i.e., that do not show response averaging, but rather robust object-selective responses in the presence of clutter. Here we show that simulations using the HMAX model of object recognition in cortex can fit the aforementioned single-unit and fMRI data, showing that the averaging-like responses can be understood as the result of responses of object-selective neurons to suboptimal stimuli. Moreover, the model shows how object recognition can be achieved by a sparse readout of neurons whose selectivity is robust to clutter. Finally, the model provides a novel prediction about human object recognition performance, namely, that target recognition ability should show a U-shaped dependency on the similarity of simultaneously presented clutter objects. 
This prediction is confirmed experimentally, supporting a simple, unifying model of how the brain performs object recognition in clutter. SIGNIFICANCE STATEMENT: The neural mechanisms underlying object recognition in cluttered scenes (i.e., containing more than one object) remain poorly understood. Studies have suggested that neural responses to multiple objects correspond to an average of the responses to the constituent objects. Yet, it is unclear how the identities of an unknown number of objects could be disentangled from a confounded average response. Here, we use a popular computational biological vision model to show that averaging-like responses can result from responses of clutter-tolerant neurons to suboptimal stimuli. The model also provides a novel prediction, that human detection ability should show a U-shaped dependency on target-clutter similarity, which is confirmed experimentally, supporting a simple, unifying account of how the brain performs object recognition in clutter. |
Eileen T. Crehan; Robert R. Althoff Measuring the stare-in-the-crowd effect: a new paradigm to study social perception Journal Article In: Behavior Research Methods, vol. 47, no. 4, pp. 994–1003, 2015. @article{Crehan2015, Social perceptual ability plays a key role in successful social functioning. Social interactions demand a number of simultaneous skills, one of which is the detection of self-directed gaze. This study demonstrates how the ability to accurately detect self-directed gaze, called the stare-in-the-crowd effect, can be studied using a new eye-tracking paradigm. A set of images was developed to test this effect using a group of healthy undergraduate students. Eye movements and pupil size were tracked while they viewed these images. Participants also completed behavioral measures about themselves. Results show that self-directed gaze results in significantly more looking by participants. Behavioral predictors of gaze behaviors were not identified, likely given the health of the sample. However, correlations with variables are reported to explore in future research. |
Mario Dalmaso; Giovanni Galfano; Luigi Castelli The impact of same- and other-race gaze distractors on the control of saccadic eye movements Journal Article In: Perception, vol. 44, no. 8-9, pp. 1020–1028, 2015. @article{Dalmaso2015, Two experiments were aimed at investigating whether the implementation of voluntary saccades in White participants could be modulated more strongly by gaze distractors embedded in White versus Black faces. Participants were instructed to make a rightward or leftward saccade, depending on a central directional cue. Saccade direction could be either congruent or incongruent with gaze direction of the distractor face. In Experiment 1, White faces produced greater interference on saccadic accuracy than Black faces when the averted-gaze face and cue onset were simultaneous rather than separated by a 900-ms asynchrony. In Experiment 2, two temporal intervals (50 ms vs. 1,000 ms) occurred between the initial presentation of the face with direct-gaze and the averted-gaze face onset, whereas the averted-gaze face and cue onset were synchronous. A greater interference emerged for White versus Black faces irrespective of the temporal interval. Overall, these findings suggest that saccadic generation system is sensitive to features of face stimuli conveying eye gaze. |
Miguel P. Eckstein; Wade Schoonveld; Sheng Zhang; Stephen C. Mack; Emre Akbas Optimal and human eye movements to clustered low value cues to increase decision rewards during search Journal Article In: Vision Research, vol. 113, pp. 137–154, 2015. @article{Eckstein2015, Rewards have important influences on the motor planning of primates and the firing of neurons coding visual information and action. When eye movements to a target are differentially rewarded across locations, primates execute saccades towards the possible target location with the highest expected value, a product of sensory evidence and potentially earned reward (saccade to maximum expected value model, sMEV). Yet, in the natural world eye movements are not directly rewarded. Their role is to gather information to support subsequent rewarded search decisions and actions. Less is known about the effects of decision rewards on saccades. We show that when varying the decision rewards across cued locations following visual search, humans can plan their eye movements to increase decision rewards. Critically, we report a scenario for which five of seven tested humans do not preferentially deploy saccades to the possible target location with the highest reward, a strategy which is optimal when rewarding eye movements. Instead, these humans make saccades towards lower value but clustered locations when this strategy optimizes decision rewards consistent with the preferences of an ideal Bayesian reward searcher that takes into account the visibility of the target across eccentricities. The ideal reward searcher can be approximated with a sMEV model with pooling of rewards from spatially clustered locations. We also find observers with systematic departures from the optimal strategy and inter-observer variability of eye movement plans. 
These deviations often reflect a multiplicity of fixation strategies that lead to near-optimal decision rewards, but for some observers they arise from suboptimal choices in eye movement planning. |
S. Gareth Edwards; Lisa J. Stephenson; Mario Dalmaso; Andrew P. Bayliss Social orienting in gaze leading: A mechanism for shared attention Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 282, no. 1812, pp. 1–8, 2015. @article{Edwards2015, Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to 'gaze following', attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that 'follows' the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish 'shared attention' and maintain the ongoing interaction. |
Abdurahman S. Elkhetali; Ryan J. Vaden; Sean M. Pool; Kristina M. Visscher Early visual cortex reflects initiation and maintenance of task set Journal Article In: NeuroImage, vol. 107, pp. 277–288, 2015. @article{Elkhetali2015, The human brain is able to process information flexibly, depending on a person's task. The mechanisms underlying this ability to initiate and maintain a task set are not well understood, but they are important for understanding the flexibility of human behavior and developing therapies for disorders involving attention. Here we investigate the differential roles of early visual cortical areas in initiating and maintaining a task set. Using functional Magnetic Resonance Imaging (fMRI), we characterized three different components of task set-related, but trial-independent activity in retinotopically mapped areas of early visual cortex, while human participants performed attention demanding visual or auditory tasks. These trial-independent effects reflected: (1) maintenance of attention over a long duration, (2) orienting to a cue, and (3) initiation of a task set. Participants performed tasks that differed in the modality of stimulus to be attended (auditory or visual) and in whether there was a simultaneous distractor (auditory only, visual only, or simultaneous auditory and visual). We found that patterns of trial-independent activity in early visual areas (V1, V2, V3, hV4) depend on attended modality, but not on stimuli. Further, different early visual areas play distinct roles in the initiation of a task set. In addition, activity associated with maintaining a task set tracks with a participant's behavior. These results show that trial-independent activity in early visual cortex reflects initiation and maintenance of a person's task set. |
Ralf Engbert; Hans A. Trukenbrod; Simon Barthelmé; Felix A. Wichmann Spatial statistics and attentional dynamics in scene viewing Journal Article In: Journal of Vision, vol. 15, no. 1, pp. 1–17, 2015. @article{Engbert2015, In humans and in foveated animals visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically-inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths contain, however, important statistical structure, such as spatial clustering on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies on, first, activation dynamics via spatially-limited (foveated) access to saliency information and, second, a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data. |
Tinne Dewolf; Wim Van Dooren; Frouke Hermens; Lieven Verschaffel Do students attend to representational illustrations of non-standard mathematical word problems, and, if so, how helpful are they? Journal Article In: Instructional Science, vol. 43, no. 1, pp. 147–171, 2015. @article{Dewolf2015, During the last two decades various researchers confronted upper elementary and lower secondary school pupils with word problems that were problematic from a realistic modelling point of view (so-called P-items), and found that pupils in general did not use their everyday knowledge to solve such P-items. Several attempts were undertaken to encourage learners to use their everyday knowledge more when solving such problems, e.g., by presenting the P-items together with representational illustrations that represent the problematic situation described in the problem. These illustrations were expected to help learners to mentally imagine the situation and consequently solve the items more realistically. However, no effect of the illustrations was found. In this article we build further on the use of representational illustrations. We report two related experiments with higher education students that investigated whether and how illustrations that represent the problematic situation described in a P-item help to imagine the problem situation and thereby solve the problem more realistically. In Experiment 1 we measured students' eye movements when solving P-items that were accompanied by representational illustrations, to analyse whether the illustrations are processed at all. In Experiment 2 we manipulated the presentation of the illustrations so students could not but look at them, before the word problem appeared. We found that students scarcely looked at the representational illustrations (Experiment 1) and when they did, there was no effect of the illustrations on the realistic nature of their solutions (Experiment 2). Possible explanations for these findings are discussed. 
|
Matteo Visconti Di Oleggio Castello; M. Ida Gobbini Familiar face detection in 180ms Journal Article In: PLoS ONE, vol. 10, no. 8, pp. e0136548, 2015. @article{DiOleggioCastello2015, The visual system is tuned for rapid detection of faces, with the fastest choice saccade to a face at 100ms. Familiar faces have a more robust representation than do unfamiliar faces, and are detected faster in the absence of awareness and with reduced attentional resources. Faces of familiar and close friends become familiar over a protracted period involving learning the unique visual appearance, including a view-invariant representation, as well as person knowledge. We investigated the effect of personal familiarity on the earliest stages of face processing by using a saccadic-choice task to measure how fast familiar face detection can happen. Subjects made correct and reliable saccades to familiar faces when unfamiliar faces were distractors at 180ms, very rapid saccades that are 30 to 70ms earlier than the earliest evoked potential modulated by familiarity. By contrast, accuracy of saccades to unfamiliar faces with familiar faces as distractors did not exceed chance. Saccades to faces with object distractors were even faster (110 to 120ms) and equivalent for familiar and unfamiliar faces, indicating that familiarity does not affect ultra-rapid saccades. We propose that detectors of diagnostic facial features for familiar faces develop in visual cortices through learning and allow rapid detection that precedes explicit recognition of identity. |
Gregory J. DiGirolamo; David Smelson; Nathan Guevremont Cue-induced craving in patients with cocaine use disorder predicts cognitive control deficits toward cocaine cues Journal Article In: Addictive Behaviors, vol. 47, pp. 86–90, 2015. @article{DiGirolamo2015, Introduction: Cue-induced craving is a clinically important aspect of cocaine addiction influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing if cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varied by cue-induced craving intensity. Methods: Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups followed by a modified antisaccade task in which subjects were asked to control their eye movements toward either a cocaine or neutral drug cue by looking away from the suddenly presented cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. Results: CDPs overall made significantly more errors toward cocaine cues compared to neutral cues, with higher cravers making significantly more errors than lower cravers even though they did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Conclusion: Cue-induced craving directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. 
Hence, it may be useful to identify high cravers and to target treatment toward curbing craving, decreasing the likelihood of a subsequent breakdown in control. |
Mithun Diwakar; Deborah L. Harrington; Jun Maruta; Jamshid Ghajar; Fady El-Gabalawy; Laura Muzzatti; Maurizio Corbetta; Ming-Xiong X. Huang; Roland R. Lee Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury Journal Article In: NeuroImage: Clinical, vol. 8, pp. 210–223, 2015. @article{Diwakar2015, A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. 
Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. Impaired tracking concurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in left caudate and frontaloral areas. Regional beta-amplitude demonstrated high classification accuracy (92%) compared to eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries. |
Helen F. Dodd; Jennifer L. Hudson; Tracey A. Williams; Talia Morris; Rebecca S. Lazarus; Yulisha Byrow Anxiety and attentional bias in preschool-aged children: An eyetracking study Journal Article In: Journal of Abnormal Child Psychology, vol. 43, no. 6, pp. 1055–1065, 2015. @article{Dodd2015, Extensive research has examined attentional bias for threat in anxious adults and school-aged children but it is unclear when this anxiety-related bias is first established. This study uses eyetracking technology to assess attentional bias in a sample of 83 children aged 3 or 4 years. Of these, 37 (19 female) met criteria for an anxiety disorder and 46 (30 female) did not. Gaze was recorded during a free-viewing task with angry-neutral face pairs presented for 1250 ms. There was no indication of between-group differences in threat bias, with both anxious and non-anxious groups showing vigilance for angry faces as well as longer dwell times to angry over neutral faces. Importantly, however, the anxious participants spent significantly less time looking at the faces overall, when compared to the non-anxious group. The results suggest that both anxious and non-anxious preschool-aged children preferentially attend to threat but that anxious children may be more avoidant of faces than non-anxious children. |
Peter H. Donaldson; Caroline T. Gurvich; Joanne Fielding; Peter G. Enticott Exploring associations between gaze patterns and putative human mirror neuron system activity Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 523, 2015. @article{Donaldson2015, The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18–40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern. |
Ian Donovan; Sarit F. A. Szpiro; Marisa Carrasco Exogenous attention facilitates location transfer of perceptual learning Journal Article In: Journal of Vision, vol. 15, no. 10, pp. 1–16, 2015. @article{Donovan2015, Perceptual skills can be improved through practice on a perceptual task, even in adulthood. Visual perceptual learning is known to be mostly specific to the trained retinal location, which is considered as evidence of neural plasticity in retinotopic early visual cortex. Recent findings demonstrate that transfer of learning to untrained locations can occur under some specific training procedures. Here, we evaluated whether exogenous attention facilitates transfer of perceptual learning to untrained locations, both adjacent to the trained locations (Experiment 1) and distant from them (Experiment 2). The results reveal that attention facilitates transfer of perceptual learning to untrained locations in both experiments, and that this transfer occurs both within and across visual hemifields. These findings show that training with exogenous attention is a powerful regime that is able to overcome the major limitation of location specificity. |
Bruno Nicenboim; Shravan Vasishth; Carolina A. Gattei; Mariano Sigman; Reinhold Kliegl Working memory differences in long-distance dependency resolution Journal Article In: Frontiers in Psychology, vol. 6, pp. 312, 2015. @article{Nicenboim2015, There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is an increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter is consistent with good-enough parsing (Ferreira et al., 2002). In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some aspects of dependency resolution can be best explained with the activation-based model together with a prediction component. |
Babak Noory; Michael H. Herzog; Haluk Ogmen Retinotopy of visual masking and non-retinotopic perception during masking Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 4, pp. 1263–1284, 2015. @article{Noory2015, Due to the movements of the observer and those of objects in the environment, retinotopic representations are highly unstable during ecological viewing conditions. The phenomenal stability of our perception suggests that retinotopic representations are transformed into non-retinotopic representations. It remains to show, however, which visual processes operate under retinotopic representations and which ones operate under non-retinotopic representations. Visual masking refers to the reduced visibility of one stimulus, called the target, due to the presence of a second stimulus, called the mask. Masking has been used extensively to study the dynamic aspects of visual perception. Previous studies using the Saccadic Stimulus Presentation Paradigm (SSPP) suggested both retinotopic and non-retinotopic bases for visual masking. In order to understand how the visual system deals with retinotopic changes induced by moving targets, we investigated the retinotopy of visual masking and the fate of masked targets under conditions that do not involve eye movements. We have developed a series of experiments based on a radial Ternus-Pikler display. In this paradigm, the perceived Ternus-Pikler motion is used as a non-retinotopic reference frame to pit the retinotopic against the non-retinotopic visual masking hypothesis. Our results indicate that both metacontrast and structure masking are retinotopic. We also show that, under conditions that allow observers to effectively read out non-retinotopic feature attribution, the target becomes visible at a destination different from its retinotopic/spatiotopic location. We discuss the implications of our findings within the context of ecological vision and dynamic form perception. |
Antje Nuthmann; Wolfgang Einhäuser A new approach to modeling the influence of image features on fixation selection in scenes Journal Article In: Annals of the New York Academy of Sciences, vol. 1339, no. 1, pp. 82–96, 2015. @article{Nuthmann2015, Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogenous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. |
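The patch-plus-regression idea described in the entry above can be illustrated with a toy fixed-effects logistic fit: each scene patch contributes a feature vector, and the model predicts whether that patch was fixated. Everything below is fabricated for illustration (synthetic features, invented weights); the paper's actual GLMMs additionally include random effects for subjects and items, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene patches: columns stand in for edge density, clutter,
# and central bias (feature names from the abstract; values fabricated).
n = 2000
X = rng.normal(size=(n, 3))
true_w = np.array([1.2, 0.8, 1.5])           # hypothetical feature weights
p = 1.0 / (1.0 + np.exp(-(X @ true_w - 1.0)))
y = rng.binomial(1, p)                       # 1 = patch was fixated

# Plain logistic GLM fit by gradient descent (the fixed-effect core that
# a GLMM would extend with per-subject / per-scene random effects).
w = np.zeros(3)
b = 0.0
for _ in range(3000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (pred - y) / n)
    b -= 0.5 * np.mean(pred - y)

print(np.round(w, 1))   # recovered weights approximate true_w
```

Fitting a single pooled model like this is what the unique-contribution argument rests on: each coefficient estimates a feature's effect with the other features held constant.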
Thomas P. O'Connell; Dirk B. Walther Dissociation of salience-driven and content-driven spatial attention to scene category with predictive decoding of gaze patterns Journal Article In: Journal of Vision, vol. 15, no. 5, pp. 1–13, 2015. @article{OConnell2015, Scene content is thought to be processed quickly and efficiently to bias subsequent visual exploration. Does scene content bias spatial attention during task-free visual exploration of natural scenes? If so, is this bias driven by patterns of physical salience or content-driven biases formed through previous encounters with similar scenes? We conducted two eye-tracking experiments to address these questions. Using a novel gaze decoding method, we show that fixation patterns predict scene category during free exploration. Additionally, we isolate salience-driven contributions using computational salience maps and content-driven contributions using gaze-restricted fixation data. We find distinct time courses for salience-driven and content-driven effects. The influence of physical salience peaked initially but quickly fell off at 600 ms past stimulus onset. The influence of content effects started at chance and steadily increased over the 2000 ms after stimulus onset. The combination of these two components significantly explains the time course of gaze allocation during free exploration. |
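The gaze-decoding idea in the entry above (predicting scene category from where people fixate) can be sketched with a nearest-centroid classifier over fixation density maps. The categories, gaze centers, grid size, and decision rule below are all illustrative assumptions, not the paper's actual decoder or data.

```python
import numpy as np

rng = np.random.default_rng(1)

GRID = 8  # discretize the display into an 8x8 grid of fixation counts

def density_map(fixations):
    """Normalized 2-D histogram of (x, y) fixation coordinates in [0, 1)."""
    h, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1],
                             bins=GRID, range=[[0, 1], [0, 1]])
    return (h / h.sum()).ravel()

def simulate_trial(center):
    """Fabricated viewer: fixations cluster around a category-typical center."""
    return np.clip(center + 0.12 * rng.normal(size=(40, 2)), 0, 0.999)

# Two hypothetical scene categories with different typical gaze centers.
centers = {"beach": np.array([0.35, 0.6]), "city": np.array([0.7, 0.3])}
train = {c: np.mean([density_map(simulate_trial(mu)) for _ in range(20)], axis=0)
         for c, mu in centers.items()}

def decode(fixations):
    """Pick the category whose mean training map correlates best."""
    m = density_map(fixations)
    return max(train, key=lambda c: np.corrcoef(m, train[c])[0, 1])

print(decode(simulate_trial(centers["beach"])))   # decoded category label
```

Comparing decoding accuracy at different times after stimulus onset is what lets the paper separate the early salience-driven component from the slower content-driven one.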
Yuka O. Okazaki; Jörn M. Horschig; Lisa Luther; Robert Oostenveld; Ikuya Murakami; Ole Jensen Real-time MEG neurofeedback training of posterior alpha activity modulates subsequent visual detection performance Journal Article In: NeuroImage, vol. 107, pp. 323–332, 2015. @article{Okazaki2015, It has been demonstrated that alpha activity is lateralized when attention is directed to the left or right visual hemifield. We investigated whether real-time neurofeedback training of the alpha lateralization enhances participants' ability to modulate posterior alpha lateralization and causes subsequent short-term changes in visual detection performance. The experiment consisted of three phases: (i) pre-training assessment, (ii) neurofeedback phase and (iii) post-training assessment. In the pre- and post-training phases we measured the threshold to covertly detect a cued faint Gabor stimulus presented in the left or right hemifield. During magnetoencephalography (MEG) neurofeedback, two face stimuli superimposed with noise were presented bilaterally. Participants were cued to attend to one of the hemifields. The transparency of the superimposed noise and thus the visibility of the stimuli were varied according to the momentary degree of hemispheric alpha lateralization. In a double-blind procedure half of the participants were provided with sham feedback. We found that hemispheric alpha lateralization increased with the neurofeedback training; this was mainly driven by an ipsilateral alpha increase. Surprisingly, comparing pre- to post-training, detection performance decreased for a Gabor stimulus presented in the hemifield that was un-attended during neurofeedback. This effect was not observed in the sham group. Thus, neurofeedback training alters alpha lateralization, which in turn decreases performance in the untrained hemifield. Our findings suggest that alpha oscillations play a causal role in the allocation of attention. Furthermore, our neurofeedback protocol serves to reduce the detection of unattended visual information and could therefore be of potential use for training to reduce distractibility in attention deficit patients, but it also highlights that neurofeedback paradigms can have a negative impact on behavioral performance and should be applied with caution. |
Barbara F. M. Marino; Giovanni Mirabella; Rossana Actis-Grosso; Emanuela Bricolo; Paola Ricciardelli Can we resist another person's gaze? Journal Article In: Frontiers in Behavioral Neuroscience, vol. 9, pp. 258, 2015. @article{Marino2015, Adaptive adjustments of strategies are needed to optimize behavior in a dynamic and uncertain world. A key function in implementing flexible behavior and exerting self-control is represented by the ability to stop the execution of an action when it is no longer appropriate for the environmental requests. Importantly, stimuli in our environment are not equally relevant and some are more valuable than others. One example is the gaze of other people, which is known to convey important social information about their direction of attention and their emotional and mental states. Indeed, gaze direction has a significant impact on the execution of voluntary saccades of an observer since it is capable of inducing in the observer an automatic gaze-following behavior: a phenomenon named social or joint attention. Nevertheless, people can exert volitional inhibitory control on saccadic eye movements during their planning. Little is known about the interaction between gaze direction signals and volitional inhibition of saccades. To fill this gap, we administered a countermanding task to 15 healthy participants in which they were asked to observe the eye region of a face with the eyes shut appearing at central fixation. In one condition, participants were required to suppress a saccade that was previously instructed by a gaze shift toward one of two peripheral targets, when the eyes were suddenly shut down (social condition, SC). In a second condition, participants were asked to inhibit a saccade that was previously instructed by a change in color of one of the two same targets, when a change of color of a central picture occurred (non-social condition, N-SC). We found that inhibitory control was more impaired in the SC, suggesting that actions initiated and stopped by social cues conveyed by the eyes are more difficult to withhold. This is probably due to the social value intrinsically linked to these cues and the many uses we make of them. |
Julie Markant; Michael S. Worden; Dima Amso Not all attention orienting is created equal: Recognition memory is enhanced when attention orienting involves distractor suppression Journal Article In: Neurobiology of Learning and Memory, vol. 120, pp. 28–40, 2015. @article{Markant2015, Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engaged suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. |
Linda Marschner; Sebastian Pannasch; Johannes Schulz; Sven-Thomas Graupner Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers Journal Article In: International Journal of Psychophysiology, vol. 97, no. 2, pp. 85–92, 2015. @article{Marschner2015, In social communication, the gaze direction of other persons provides important information to perceive and interpret their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact. Therefore, gaze and body direction have been changed as a whole, resulting in only congruent gaze and body directions (averted or directed) of another person. Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands of attentional resources in the observer. For the facial muscle response, only the reaction of muscle zygomaticus major revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings towards happy expressions. While earlier findings suggested that mutual eye contact is the main source for increased emotional responding and attentional allocation, the present results indicate that direction of the virtual agent's body and head also plays a minor but significant role. |
Sébastien Marti; Laurie Bayet; Stanislas Dehaene Subjective report of eye fixations during serial search Journal Article In: Consciousness and Cognition, vol. 33, pp. 1–15, 2015. @article{Marti2015, Humans readily introspect upon their thoughts and their behavior, but how reliable are these subjective reports? In the present study, we explored the consistencies of and differences between the observer's subjective report and actual behavior within a single trial. On each trial of a serial search task, we recorded eye movements and the participants' beliefs of where their eyes moved. The comparison of reported versus real eye movements revealed that subjects successfully reported a subset of their eye movements. Limits in subjective reports stemmed from both the number and the type of eye movements. Furthermore, subjects sometimes reported eye movements they actually never made. A detailed examination of these reports suggests that they could reflect covert shifts of attention during overt serial search. Our data provide quantitative and qualitative measures of observers' subjective reports and reveal experimental effects of visual search that would otherwise be inaccessible. |
Svenja Marx; Gina Gruenhage; Daniel Walper; Ueli Rutishauser; Wolfgang Einhäuser Competition with and without priority control: Linking rivalry to attention through winner-take-all networks with memory Journal Article In: Annals of the New York Academy of Sciences, vol. 1339, no. 1, pp. 138–153, 2015. @article{Marx2015b, Competition is ubiquitous in perception. For example, items in the visual field compete for processing resources, and attention controls their priority (biased competition). The inevitable ambiguity in the interpretation of sensory signals yields another form of competition: distinct perceptual interpretations compete for access to awareness. Rivalry, where two equally likely percepts compete for dominance, explicates the latter form of competition. Building upon the similarity between attention and rivalry, we propose to model rivalry by a generic competitive circuit that is widely used in the attention literature: a winner-take-all (WTA) network. Specifically, we show that a network of two coupled WTA circuits replicates three common hallmarks of rivalry: the distribution of dominance durations, their dependence on input strength ("Levelt's propositions"), and the effects of stimulus removal (blanking). This model introduces a form of memory by forming discrete states and explains experimental data better than competitive models of rivalry without memory. This result supports the crucial role of memory in rivalry specifically and in competitive processes in general. Our approach unifies the seemingly distinct phenomena of rivalry, memory, and attention in a single model with competition as the common underlying principle. |
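The competitive-circuit idea in the entry above can be sketched with a classic two-population reciprocal-inhibition model of rivalry. Note that this toy model produces perceptual switches through slow adaptation, not the discrete-state memory mechanism the paper proposes, and all parameters are invented.

```python
import numpy as np

# Minimal two-percept rivalry sketch: mutual inhibition plus slow adaptation
# (a generic reciprocal-inhibition model, not the paper's coupled-WTA circuit).
def simulate(T=20000, dt=1.0, I=1.0):
    r = np.array([0.1, 0.0])   # firing rates of the two percept populations
    a = np.zeros(2)            # slow adaptation variables
    dominant = []
    for _ in range(T):
        inp = I - 2.0 * r[::-1] - a          # drive minus rival inhibition minus adaptation
        drive = np.maximum(inp, 0.0)         # rectified input
        r += dt / 10.0 * (-r + drive)        # fast rate dynamics (tau = 10)
        a += dt / 800.0 * (-a + 1.5 * r)     # slow adaptation (tau = 800) forces switches
        dominant.append(int(r[1] > r[0]))
    return np.array(dominant)

dom = simulate()
switches = int(np.sum(np.abs(np.diff(dom))))
print("perceptual switches:", switches)
```

The winner's adaptation variable slowly erodes its advantage until the suppressed population breaks through, which is how such models generate the alternating dominance periods the abstract refers to.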
Tommaso Mastropasqua; Jessica Galliussi; David Pascucci; Massimo Turatto Location transfer of perceptual learning: Passive stimulation and double training Journal Article In: Vision Research, vol. 108, pp. 93–102, 2015. @article{Mastropasqua2015, Specificity has always been considered one of the hallmarks of perceptual learning, suggesting that performance improvement would reflect changes at early stages of visual analyses (e.g., V1). More recently, however, this view has been challenged by studies documenting complete transfer of learning among different spatial locations or stimulus orientations when a double-training procedure is adopted. Here, we further investigate the conditions under which transfer of visual perceptual learning takes place, confirming that the passive stimulation at the transfer location seems to be insufficient to overcome learning specificity. By contrast, learning transfer is complete when performing a secondary task at the transfer location. Interestingly, (i) transfer emerges when the primary and secondary tasks are intermingled on a trial-by-trial basis, and (ii) the effects of learning generalization appear to be reciprocal, namely the primary task also serves to enable transfer of the secondary task. However, if the secondary task is not performed for a sufficient number of trials, then transfer is not enabled. Overall, the results lend support to the recent view that task-relevant perceptual learning may involve high-level stages of visual analyses. |
Tommaso Mastropasqua; Peter U. Tse; Massimo Turatto Learning of monocular information facilitates breakthrough to awareness during interocular suppression Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 3, pp. 790–803, 2015. @article{Mastropasqua2015a, Continuous flash suppression (CFS) is a potent method of inducing binocular rivalry, wherein a rapid succession of high-contrast images presented to one eye effectively blocks from awareness a low-contrast image presented to the other eye. Here we addressed whether the contents of the suppressed image can break through to awareness with extended CFS exposure. On 2/3 of the trials, we presented a faint bar (the target) to the nondominant eye while a high-contrast flickering Mondrian (the mask) was displayed to the dominant eye. Participants were first asked to report whether the target had broken through the CFS mask. Furthermore, on target-present trials, the participants were then asked to guess whether the target had appeared above or below the fixation point. In Experiment 1, the target was presented with a fixed orientation for four blocks of trials, whereas in the fifth block, the target could also have the orthogonal orientation. In Experiment 2, the target was always presented with a fixed orientation, but in the fifth block, unbeknownst to participants, the target and the mask were swapped across the eyes. We found that awareness of the target rapidly improved with training in both experiments. However, whereas Experiment 1 revealed that the improvement largely generalized across stimulus orientations, Experiment 2 showed that the effect of practice was eye-specific. The results suggest that increased breakthrough with training was due to a monocular form of learning. Finally, a control experiment was conducted to exclude the possibility that the monocular learning we reported could have been due to sensory adaptation caused by the masks. |
Sebastiaan Mathôt; Jean-Baptiste Melmi; Eric Castet Intrasaccadic perception triggers pupillary constriction Journal Article In: PeerJ, vol. 3, pp. 1–16, 2015. @article{Mathot2015, It is commonly believed that vision is impaired during saccadic eye movements. However, here we report that some visual stimuli are clearly visible during saccades, and trigger a constriction of the eye's pupil. Participants viewed sinusoid gratings that changed polarity 150 times per second (every 6.67 ms). At this rate of flicker, the gratings were perceived as homogeneous surfaces while participants fixated. However, the flickering gratings contained ambiguous motion: rightward and leftward motion for vertical gratings; upward and downward motion for horizontal gratings. When participants made a saccade perpendicular to the gratings' orientation (e.g., a leftward saccade for a vertical grating), the eye's peak velocity matched the gratings' motion. As a result, the retinal image was approximately stable for a brief moment during the saccade, and this gave rise to an intrasaccadic percept: A normally invisible stimulus became visible when eye velocity was maximal. Our results confirm and extend previous studies by demonstrating intrasaccadic perception using a reflexive measure (pupillometry) that does not rely on subjective report. Our results further show that intrasaccadic perception affects all stages of visual processing, from the pupillary response to visual awareness. |
Carol McDonald Connor; Ralph Radach; Christian Vorstius; Stephanie L. Day; Leigh McLean; Frederick J. Morrison Individual differences in fifth graders' literacy and academic language predict comprehension monitoring development: An eye-movement study Journal Article In: Scientific Studies of Reading, vol. 19, no. 2, pp. 114–134, 2015. @article{McDonaldConnor2015, In this study, we investigated fifth graders' (n = 52) fall literacy, academic language, and motivation and how these skills predicted fall and spring comprehension monitoring on an eye movement task. Comprehension monitoring was defined as the identification and repair of misunderstandings when reading text. In the eye movement task, children read two sentences; the second included either a plausible or implausible word in the context of the first sentence. Stronger readers had shorter reading times overall suggesting faster processing of text. Generally fifth graders reacted to the implausible word (i.e., longer gaze duration on the implausible vs. the plausible word, which reflects lexical access). Students with stronger academic language, compared to those with weaker academic language, generally spent more time rereading the implausible target compared to the plausible target. This difference increased from fall to spring. Results support the centrality of academic language for meaning integration, setting standards of coherence, and utilizing comprehension repair strategies. |
Gerald P. McDonnell; Mark Mills; Leslie McCuller; Michael D. Dodd How does implicit learning of search regularities alter the manner in which you search? Journal Article In: Psychological Research, vol. 79, no. 2, pp. 183–193, 2015. @article{McDonnell2015, Individuals are highly sensitive to statistical regularities in their visual environment, even when these patterns do not reach conscious awareness. Here, we examine whether oculomotor behavior is systematically altered when distractor/target configurations rarely repeat, but target location on an initial trial predicts the location of a target on the subsequent trial. The purpose of the current study was to explore whether this temporal-spatial contextual cueing in a conjunction search task influences both reaction time to the target and participant's search strategy. Participants searched for a target through a gaze-contingent window in a display consisting of a large number of distractors, providing a target-present/absent response. Participants were faster to respond to the target on the predicted trial relative to the predictor trial in an implicit contextual cueing task but were no more likely to fixate first to the target quadrant on the predicted trial (Experiment 1). Furthermore, implicit learning was interrupted when instructing participants to vary their searching strategy across trials to eliminate visual scan similarity (Experiment 2). In Experiment 3, when participants were explicitly informed that a pattern was present at the start of the experiment, explicit learning was observed in both reaction time and eye movements. The present experiments provide evidence that implicit learning of sequential regularities regarding target locations is not based on learning more efficient scan paths, but is due to some other mechanism. |
David B. T. McMahon; Brian E. Russ; Heba D. Elnaiem; Anastasia I. Kurnikova; David A. Leopold Single-unit activity during natural vision: Diversity, consistency, and spatial sensitivity among AF face patch neurons Journal Article In: Journal of Neuroscience, vol. 35, no. 14, pp. 5537–5548, 2015. @article{McMahon2015, Several visual areas within the STS of the macaque brain respond strongly to faces and other biological stimuli. Determining the principles that govern neural responses in this region has proven challenging, due in part to the inherently complex stimulus domain of dynamic biological stimuli that are not captured by an easily parameterized stimulus set. Here we investigated neural responses in one fMRI-defined face patch in the anterior fundus (AF) of the STS while macaques freely view complex videos rich with natural social content. Longitudinal single-unit recordings allowed for the accumulation of each neuron's responses to repeated video presentations across sessions. We found that individual neurons, while diverse in their response patterns, were consistently and deterministically driven by the video content. We used principal component analysis to compute a family of eigenneurons, which summarized 24% of the shared population activity in the first two components. We found that the most prominent component of AF activity reflected an interaction between visible body region and scene layout. Close-up shots of faces elicited the strongest neural responses, whereas far away shots of faces or close-up shots of hindquarters elicited weak or inhibitory responses. Sensitivity to the apparent proximity of faces was also observed in gamma band local field potential. This category-selective sensitivity to spatial scale, together with the known exchange of anatomical projections of this area with regions involved in visuospatial analysis, suggests that the AF face patch may be specialized in aspects of face perception that pertain to the layout of a social scene. |
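The eigenneuron analysis described in the entry above amounts to PCA of a neurons-by-time response matrix: the leading right singular vectors are shared population time courses. A sketch on fabricated data (latent signals, neuron count, and noise level are all invented; the real recordings are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated population response matrix: 40 neurons x 500 video time bins,
# driven by two shared latent signals plus independent noise.
T, N = 500, 40
latent = np.stack([np.sin(np.linspace(0, 8 * np.pi, T)),
                   np.cos(np.linspace(0, 3 * np.pi, T))])      # 2 x T
loadings = rng.normal(size=(N, 2))
responses = loadings @ latent + 0.5 * rng.normal(size=(N, T))  # N x T

# PCA: center each neuron's trace, then SVD of the response matrix.
centered = responses - responses.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

eigenneurons = Vt[:2]   # time courses of the top two components
print("share of variance in first two PCs:",
      round(float(var_explained[:2].sum()), 2))
```

In the paper's data the first two components captured 24% of the shared population activity; in this noiseless-by-construction toy the share is much higher, which is the expected behavior when only two latents drive the population.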
Radha Nila Meghanathan; Cees van Leeuwen; Andrey R. Nikolaev Fixation duration surpasses pupil size as a measure of memory load in free viewing Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 1063, 2015. @article{Meghanathan2015, Oculomotor behavior reveals, not only the acquisition of visual information at fixation, but also the accumulation of information in memory across subsequent fixations. Two candidate measures were considered as indicators of such dynamic visual memory load: fixation duration and pupil size. While recording these measures, we displayed an arrangement of 3, 4 or 5 targets among distractors. Both occurred in various orientations. Participants searched for targets and reported whether in a subsequent display one of them had changed orientation. We determined to what extent fixation duration and pupil size indicate dynamic memory load, as a function of the number of targets fixated during the search. We found that fixation duration reflects the number of targets, both when this number is within and above the limit of working memory capacity. Pupil size reflects the number of targets only when it exceeds the capacity limit. Moreover, the duration of fixations on successive targets but not on distractors increases, whereas pupil size does not. The increase in fixation duration with number of targets both within and above working memory capacity suggests that in free viewing fixation duration is sensitive to actual memory load as well as to processing load, whereas pupil size is indicative of processing load only. Two alternative models relating visual attention and working memory are considered relevant to these results. We discuss the results as supportive of a model which involves a temporary buffer in the interaction of attention and working memory. |
Andrew Isaac Meso; Guillaume S. Masson Dynamic resolution of ambiguity during tri-stable motion perception Journal Article In: Vision Research, vol. 107, pp. 113–123, 2015. @article{Meso2015, Multi-stable perception occurs when an image falling onto the retina has multiple incompatible interpretations. We probed this phenomenon in psychophysical experiments using a moving barber-pole visual stimulus configured as a square to generate three competing perceived directions, horizontal, diagonal and vertical. We characterised patterns in reported switching type and percept duration, classifying switches into three groups related to the direction cues driving such transitions i.e. away from diagonal, towards diagonal and between cardinals. The proportions of each class reported by participants depended on contrast. The two including diagonals dominated at low contrast and those between cardinals increased in proportion as contrast was increased. At low contrasts, the less frequent cardinals persisted for shorter than the dominant diagonals and this was reversed at higher contrasts. This observed asymmetry between the dominance of transition classes appears to be driven by different underlying dynamics between cardinal and the oblique cues and their related transitions. At trial onset we found that transitions away from diagonal dominate, a tendency which later in the trial reverses to dominance by transitions excluding the diagonal, most prominently at higher contrasts. Thus ambiguity is resolved over a contrast dependent temporal integration similar to, but lasting longer than that observed when resolving the aperture problem to estimate direction. When the diagonal direction dominates perception, evidence is found for a noisier competition seen in broader duration distributions than during dominance of cardinal perception. There remain aspects of these identified differences in cardinal and oblique dynamics to be investigated in future. |
Cristiano Micheli; Daniel Kaping; Stephanie Westendorff; Taufik A. Valiante; Thilo Womelsdorf Inferior-frontal cortex phase synchronizes with the temporal-parietal junction prior to successful change detection Journal Article In: NeuroImage, vol. 119, pp. 417–431, 2015. @article{Micheli2015, The inferior frontal gyrus (IFG) and the temporo-parietal junction (TPJ) are believed to be core structures of human brain networks that activate when sensory top-down expectancies guide goal-directed behavior and attentive perception. But it is unclear how activity in IFG and TPJ coordinates during attention-demanding tasks and whether functional interactions between both structures are related to successful attentional performance. Here, we tested these questions in electrocorticographic (ECoG) recordings in human subjects using a visual detection task that required sustained attentional expectancy in order to detect non-salient, near-threshold visual events. We found that during sustained attention the successful visual detection was predicted by increased phase synchronization of band-limited 15–30 Hz beta-band activity that was absent prior to misses. Increased beta-band phase alignment during attentional engagement early during the task was restricted to inferior and lateral prefrontal cortex, but with sustained attention it extended to long-range IFG-TPJ phase synchronization and included superior prefrontal areas. In addition to beta, a widely distributed network of brain areas comprising the occipital cortex showed enhanced and reduced alpha-band phase synchronization before correct detections. These findings identify long-range phase synchrony in the 15–30 Hz beta band as the mesoscale brain signal that predicts the successful deployment of attentional expectancy of sensory events. We speculate that localized beta-coherent states in prefrontal cortex index 'top-down' sensory expectancy whose coupling with TPJ subregions facilitates the gating of relevant visual information. |
Mark Mills; Edwin S. Dalmaijer; Stefan Van der Stigchel; Michael D. Dodd Effects of task and task-switching on temporal inhibition of return, facilitation of return, and saccadic momentum during scene viewing Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 5, pp. 1300–1314, 2015. @article{Mills2015, During scene viewing, saccades directed toward a recently fixated location tend to be delayed relative to saccades in other directions (“delay effect”), an effect attributable to inhibition of return (IOR) and/or saccadic momentum (SM). Previous work indicates this effect may be task-specific, suggesting that gaze control parameters are task-relevant and potentially affected by task-switching. Accordingly, the present study investigated task-set control of gaze behavior using the delay effect as a measure of task performance. The delay effect was measured as the effect of relative saccade direction on preceding fixation duration. Participants were cued on each trial to perform either a search, memory, or rating task. Tasks were performed either in pure-task or mixed-task blocks. This design allowed separation of switch-cost and mixing-cost. The critical result was that expression of the delay effect at 2-back locations was reversed on switch versus repeat trials such that return was delayed in repeat trials but speeded in switch trials. This difference between repeat and switch trials suggests that gaze-relevant parameters may be represented and switched as part of a task-set. Existing and new tests for dissociating IOR and SM accounts of the delay effect converged on the conclusion that the delay at 2-back locations was due to SM, and that task-switching affects SM. Additionally, the new test simultaneously replicated noncorroborating results in the literature regarding facilitation-of-return (FOR), which confirmed its existence and showed that FOR is “reversed” SM that occurs when preceding and current saccades are both directed toward the 2-back location. |
Mark Mills; Kevin B. Smith; John R. Hibbing; Michael D. Dodd Obama cares about visuo-spatial attention: Perception of political figures moves attention and determines gaze direction Journal Article In: Behavioural Brain Research, vol. 278, pp. 221–225, 2015. @article{Mills2015a, Processing an abstract concept such as political ideology by itself is difficult but becomes easier when a background situation contextualizes it. Political ideology within American politics, for example, is commonly processed using space metaphorically, i.e., the political "left" and "right" (referring to Democrat and Republican views, respectively), presumably to provide a common metric to which abstract features of ideology can be grounded and understood. Commonplace use of space as metaphor raises the question of whether an inherently non-spatial stimulus (e.g., a picture of the political "left" leader, Barack Obama) can trigger a spatially-specific response (e.g., an attentional bias toward "left" regions of the visual field). Accordingly, pictures of well-known Democrats and Republicans were presented as central cues in peripheral target detection (Experiment 1) and saccadic free-choice (Experiment 2) tasks to determine whether perception of stimuli lacking a direct association with physical space nonetheless induces attentional and oculomotor biases in the direction compatible with the ideological category of the cue (i.e., Democrat/left and Republican/right). In Experiment 1, target detection following presentation of a Democrat (Republican) was facilitated for targets appearing to the left (right). In Experiment 2, participants were more likely to look left (right) following presentation of a Democrat (Republican). Thus, activating an internal representation of political ideology induced a shift of attention and biased choice of gaze direction in a spatially-specific manner. These findings demonstrate that the link between conceptual processing and spatial attention can be entirely arbitrary, with no reference to physical or symbolic spatial information. |
Tobias Moehler; Katja Fiehler The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks Journal Article In: Vision Research, vol. 116, pp. 25–35, 2015. @article{Moehler2015, Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation = simultaneous vs. before saccade preparation = sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. |
Hassan Zanganeh Momtaz; Mohammad Reza Daliri Differences of eye movement pattern in natural and man-made scenes and image categorization with the help of these patterns Journal Article In: Journal of Integrative Neuroscience, vol. 14, no. 3, pp. 1–18, 2015. @article{Momtaz2015, In this paper, we investigated the parameters related to eye movement patterns of individuals while viewing images that consist of natural and man-made scenes. These parameters are as follows: number of fixations and saccades, fixation duration, saccade amplitude and distribution of fixation locations. We explored the way in which individuals look at images of different semantic categories, and used this information for automatic image classification. We showed that the eye movements and the contents of eye fixation locations of observers differ for images of different semantic categories. These differences were used effectively in automatic image categorization. Another goal of this study was to answer the question of whether the image patches around fixation points carry sufficient information for image categorization. To achieve this goal, a number of patches of different sizes from two different image categories were extracted. These patches, which were selected at the locations of eye fixation points, were used to form a feature vector based on the K-means clustering algorithm. Then, different statistical classifiers were trained for categorization purposes. The results showed that it is possible to predict the image category by using the feature vectors derived from the image patches. We found significant differences in parameters of eye movement pattern between the two image categories (averaged across subjects). We could categorize images by using these parameters as features. The results also showed that it is possible to predict the image category by using image patches around the subjects' fixation points. |
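The patch-based pipeline described in the abstract above (fixation-located patches → K-means codebook → histogram feature vector → classifier) can be sketched in a few lines. Everything below — the toy 4-dimensional "patches", the cluster count, and the seeds — is invented for illustration and is not the authors' actual data or parameters.

```python
import random
from math import dist

def kmeans(vectors, k, iters=20, seed=0):
    """Tiny k-means: builds a k-entry 'codebook' of patch centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k), key=lambda i: dist(v, centroids[i]))].append(v)
        centroids = [
            tuple(sum(axis) / len(cl) for axis in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def histogram_feature(patches, centroids):
    """Feature vector for one image: fraction of its fixation patches
    assigned to each codebook entry."""
    counts = [0] * len(centroids)
    for p in patches:
        counts[min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

# Toy 'patches' (4-d vectors): natural-scene patches cluster low,
# man-made ones high, standing in for real pixel statistics.
natural = [(random.Random(i).random() * 0.2,) * 4 for i in range(30)]
manmade = [(0.8 + random.Random(i).random() * 0.2,) * 4 for i in range(30)]
codebook = kmeans(natural + manmade, k=2)
f_natural = histogram_feature(natural, codebook)
f_manmade = histogram_feature(manmade, codebook)
```

A statistical classifier (the paper trains several) would then be fit on such histograms; with these toy data the two categories already yield clearly different feature vectors.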
Pieter Moors; Filip Germeys; Iwona Pomianowska; Karl Verfaillie Perceiving where another person is looking: The integration of head and body information in estimating another person's gaze Journal Article In: Frontiers in Psychology, vol. 6, pp. 909, 2015. @article{Moors2015, The process through which an observer allocates his/her attention based on the attention of another person is known as joint attention. To be able to do this, the observer effectively has to compute where the other person is looking. It has been shown that observers integrate information from the head and the eyes to determine the gaze of another person. Most studies have documented that observers show a bias called the overshoot effect when eyes and head are misaligned. That is, when the head is not oriented straight to the observer, perceived gaze direction is sometimes shifted in the direction opposite to the head turn. The present study addresses whether body information is also used as a cue to compute perceived gaze direction. In Experiment 1, we observed a similar overshoot effect in both behavioral and saccadic responses when manipulating body orientation. In Experiment 2, we explored whether the overshoot effect was due to observers assuming that the eyes are oriented further than the head when head and body orientation are misaligned. We removed horizontal eye information by presenting the stimulus from a side view. Head orientation was now manipulated in a vertical direction and the overshoot effect was replicated. In summary, this study shows that body orientation is indeed used as a cue to determine where another person is looking. |
Candice C. Morey; Yongqi Cong; Yixia Zheng; Mindi Price; Richard D. Morey The color-sharing bonus: Roles of perceptual organization and attentive processes in visual working memory. Journal Article In: Archives of Scientific Psychology, vol. 3, no. 1, pp. 18–29, 2015. @article{Morey2015, Color repetitions in a visual scene boost memory for its elements, a phenomenon known as the color-sharing effect. This may occur because improved perceptual organization reduces information load or because the repetitions capture attention. The implications of these explanations differ drastically for both the theoretical meaning of this effect and its potential value for applications in design of visual materials. If repetitions capture attention to the exclusion of other details, then use of repetition in visual displays should be confined to emphasized details, but if repetitions reduce the load of the display, designers can assume that the nonrepeated information is also more likely to be attended and remembered. We manipulated the availability of general attention during a visual memory task by comparing groups of participants engaged in meaningless speech or attention-demanding continuous arithmetic. We also tracked eye movements as an implicit indicator of selective attention. Estimated memory capacity was always higher when color duplicates were tested, and under full attention conditions this bonus spilled over to the unique colors too. Analyses of gazes showed that with full attention, participants tended to glance earlier at duplicate colors during stimulus presentation but looked more at unique colors during the retention interval. This pattern of results suggests that the color-sharing bonus reflects efficient perceptual organization of the display based on the presence of repetitions, and possibly strategic attention allocation when attention is available. |
Michael Morgan; Simon Grant; Dean Melmoth; Joshua A. Solomon Tilted frames of reference have similar effects on the perception of gravitational vertical and the planning of vertical saccadic eye movements Journal Article In: Experimental Brain Research, vol. 233, no. 7, pp. 2115–2125, 2015. @article{Morgan2015, We investigated the effects of a tilted reference frame (i.e., allocentric visual context) on the perception of the gravitational vertical and saccadic eye movements along a planned egocentric vertical path. Participants (n = 5) in a darkened room fixated a point in the center of a circle on an LCD display and decided which of two sequentially presented dots was closer to the unmarked ‘6 o'clock' position on that circle (i.e., straight down toward their feet). The slope of their perceptual psychometric functions showed that participants were able to locate which dot was nearer the vertical with a precision of 1°–2°. For three of the participants, a square frame centered at fixation and tilted (in the roll direction) 5.6° from the vertical caused a strong perceptual bias, manifest as a shift in the psychometric function, in the direction of the traditional ‘rod-and-frame' effect, without affecting precision. The other two participants showed negligible or no equivalent biases. The same subjects participated in the saccade version of the task, in which they were instructed to shift their gaze to the 6 o'clock position as soon as the central fixation point disappeared. The participants who showed perceptual biases showed biases of similar magnitude in their saccadic endpoints, with a strong correlation between perceptual and saccadic biases across all subjects. Tilting of the head 5.6° reduced both perceptual and saccadic biases in all but one observer, who developed a strong saccadic bias. Otherwise, the overall pattern and significant correlations between results remained the same. We conclude that our observers' saccades-to-vertical were dominated by perceptual input, which outweighed any gravitational or head-centered input. |
Masahiro Morii; Takayuki Sakagami The effect of gaze-contingent stimulus elimination on preference judgments Journal Article In: Frontiers in Psychology, vol. 6, pp. 1351, 2015. @article{Morii2015, This study examined how stimulus elimination (SE) in a preference judgment task affects observers' choices. Previous research suggests that biasing gaze toward one alternative can increase preference for it; this preference reciprocally promotes gaze bias. Shimojo et al. (2003) called this phenomenon the Gaze Cascade Effect. They showed that the likelihood that an observer's gaze was directed toward their chosen alternative increased steadily until the moment of choosing. Therefore, we tested whether observers would prefer an alternative at which they had been gazing last if both alternatives were removed prior to the start of this rising gaze likelihood. To test this, we used a preference judgment task and controlled stimulus presentation based on gaze using an eye-tracking system. A pair of nonsensical figures was presented on the computer screen and both stimuli were eliminated while participants were still making their preference decision. The timing of the elimination differed between two experiments. In Experiment 1, after gazing at both stimuli one or more times, stimuli were removed when the participant's gaze fell on one alternative, pre-selected as the target stimulus. There was no significant difference in the preference of the two alternatives. In Experiment 2, we did not predefine any target stimulus. After the participant gazed at both stimuli one or more times, both stimuli were eliminated when the participant next fixated on either. The likelihood of choosing the stimulus that was gazed at last (at the moment of elimination) was greater than chance. Results showed that controlling participants' choices using gaze-contingent SE was impossible, but the different results between these two experiments suggest that participants decided which stimulus to choose during their first period of gazing at each alternative. Thus, we could predict participants' choices by analyzing eye movement patterns at the moment of SE. |
Antony C. Moss; Ian P. Albery; Kyle R. Dyer; Daniel Frings; Karis Humphreys; Thomas Inkelaar; Emily Harding; Abbie Speller The effects of responsible drinking messages on attentional allocation and drinking behaviour Journal Article In: Addictive Behaviors, vol. 44, pp. 94–101, 2015. @article{Moss2015, Aims: Four experiments were conducted to assess the acute impact of context and exposure to responsible drinking messages (RDMs) on attentional allocation and drinking behaviour of younger drinkers and to explore the utility of lab-based methods for the evaluation of such materials. Methods: A simulated bar environment was used to examine the impact of context, RDM posters, and brief online responsible drinking advice on actual drinking behaviour. Experiments one (n = 50) and two (n = 35) comprised female non-problem drinkers, whilst Experiments three (n = 80) and four (n = 60) included a mixed-gender sample of non-problem drinkers, recruited from an undergraduate student cohort. The Alcohol Use Disorders Identification Test (AUDIT) was used to assess drinking patterns. Alcohol intake was assessed through the use of a taste preference task. Results: Drinking in a simulated bar was significantly greater than in a laboratory setting in the first two studies, but not in the third. There was a significant increase in alcohol consumption as a result of being exposed to RDM posters. Provision of brief online RDMs reduced the negative impact of these posters somewhat; however, the lowest drinking rates were associated with being exposed to neither posters nor brief advice. Data from the final experiment demonstrated a low level of visual engagement with RDMs, and that exposure to posters was associated with increased drinking. Conclusions: Poster materials promoting responsible drinking were associated with increased consumption amongst undergraduate students, suggesting that poster campaigns to reduce alcohol harms may be having the opposite effect to that intended. |
Findings suggest that further research is required to refine appropriate methodologies for assessing drinking behaviour in simulated drinking environments, to ensure that future public health campaigns of this kind are having their intended effect. |
Zhiya Liu; Xiaohong Song; Carol A. Seger; Peter J. Hills An eye-tracking study of multiple feature value category structure learning: The role of unique features Journal Article In: PLoS ONE, vol. 10, no. 8, pp. e0135729, 2015. @article{Liu2015c, We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting. |
Francesc Llorens; Daniel Sanabria; Florentino Huertas; Enrique Molina; Simon J. Bennett Intense physical exercise reduces overt attentional capture Journal Article In: Journal of Sport and Exercise Psychology, vol. 37, no. 5, pp. 559–564, 2015. @article{Llorens2015, The abrupt onset of a visual stimulus typically results in overt attentional capture, which can be quantified by saccadic eye movements. Here, we tested whether attentional capture following onset of task-irrelevant visual stimuli (new object) is reduced after a bout of intense physical exercise. A group of participants performed a visual search task in two different activity conditions: rest, without any prior effort, and effort, immediately after an acute bout of intense exercise. The results showed that participants exhibited (1) slower reaction time of the first saccade toward the target when a new object was simultaneously presented in the visual field, but only in the rest activity condition, and (2) more saccades to the new object in the rest activity condition than in the effort activity condition. We suggest that immediately after an acute bout of effort, participants improved their ability to inhibit irrelevant (distracting) stimuli. |
Patrick Loesche; Jennifer Wiley; Marcus Hasselhorn How knowing the rules affects solving the Raven Advanced Progressive Matrices Test Journal Article In: Intelligence, vol. 48, pp. 58–75, 2015. @article{Loesche2015, The solution process underlying the Raven Advanced Progressive Matrices (RAPM) has been conceptualized to consist of two subprocesses: rule induction and goal management. Past research has also found a strong relation between measures of working memory capacity and performance on RAPM. The present research attempted to test whether the goal management subprocess is responsible for the relation between working memory capacity and RAPM, using a paradigm where the rules necessary to solve the problems were given to subjects, assuming that it would render rule induction unnecessary. Three experiments revealed that working memory capacity was still strongly related to RAPM performance in the given-rules condition, while in two experiments the correlation in the given-rules condition was significantly higher than in the no-rules condition. Experiment 4 revealed that giving the rules affected problem solving behavior. Evidence from eye tracking protocols suggested that participants in the given-rules condition were more likely to approach the problems with a constructive matching strategy. Two possible mechanisms are discussed that could both explain why providing participants with the rules might increase the relation between working memory capacity and RAPM performance. |
Francisco López-Orozco; Luis D. Rodríguez-Vega Model of making decisions during an information search task Journal Article In: Research in Computing Science, vol. 105, pp. 157–166, 2015. @article{LopezOrozco2015, This paper presents a cognitive computational model of the way people read a paragraph with the task of quickly deciding whether it is related or not to a given goal. In particular, the model attempts to predict the time at which participants would decide to stop reading the paragraph because they have enough information to make their decision. Our model makes predictions at the level of words that are likely to be fixated before the paragraph is abandoned. Human semantic judgments are mimicked by computing the semantic similarities between sets of words using Latent Semantic Analysis. A two-variable linear threshold is proposed to account for that decision, based on the rank of the fixation and the semantic similarity between the paragraph and the goal. Model performance is compared to eye-tracking data of 19 participants. |
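The two-variable stopping rule described in the abstract above can be sketched as follows. The cosine similarity stands in for Latent Semantic Analysis (which would supply real word-vector similarities), and the threshold coefficients and all vectors are invented placeholders, not values from the paper.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two semantic vectors (LSA stand-in)."""
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def decide_stop(similarity, rank, a=0.5, b=-0.02):
    """Two-variable linear threshold: abandon the paragraph once the
    paragraph-goal similarity exceeds a bound that relaxes with the
    fixation rank (a and b are illustrative coefficients)."""
    return similarity > a + b * rank

# Toy vectors for the reading goal and for the words fixated in order.
goal = (1.0, 0.2, 0.0)
fixated = [(0.1, 0.9, 0.3), (0.4, 0.6, 0.2), (0.9, 0.3, 0.1)]
stop_rank = next(
    (rank for rank, vec in enumerate(fixated, start=1)
     if decide_stop(cosine(vec, goal), rank)),
    None,
)
```

With these toy values the second fixation is the first whose similarity clears the rank-relaxed threshold, so the model predicts abandoning the paragraph at that point.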
Thomas Zhihao Luo; John H. R. Maunsell Neuronal modulations in visual cortex are associated with only one of multiple components of attention Journal Article In: Neuron, vol. 86, no. 5, pp. 1182–1188, 2015. @article{Luo2015, Neuronal signals related to visual attention are found in widespread brain regions, and these signals are generally assumed to participate in a common mechanism of attention. However, the behavioral effects of attention in detection can be separated into two distinct components: spatially selective shifts in either the criterion or sensitivity of the subject. Here we show that a paradigm used by many single-neuron studies of attention conflates behavioral changes in the subject's criterion and sensitivity. Then, using a task designed to dissociate these two components, we found that multiple aspects of attention-related neuronal modulations in area V4 of monkey visual cortex corresponded to behavioral shifts in sensitivity, but not criterion. This result suggests that separate components of attention are associated with signals in different brain regions and that attention is not a unitary process in the brain, but instead consists of distinct neurobiological mechanisms. Luo and Maunsell show that the neuronal modulations in visual cortex correspond to only one of multiple components of attention. This result suggests that different brain structures underlie separate mechanisms of attention and that attention is not a unitary process in the brain, but instead consists of distinct neurobiological mechanisms. |
W. Joseph MacInnes; Hannah M. Krüger; Amelia R. Hunt Just passing through? Inhibition of return in saccadic sequences Journal Article In: Quarterly Journal of Experimental Psychology, vol. 68, no. 2, pp. 402–416, 2015. @article{MacInnes2015, Responses tend to be slower to previously fixated spatial locations, an effect known as "inhibition of return" (IOR). Saccades cannot be assumed to be independent, however, and saccade sequences programmed in parallel differ from independent eye movements. We measured the speed of both saccadic and manual responses to probes appearing in previously fixated locations when those locations were fixated as part of either parallel or independent saccade sequences. Saccadic IOR was observed in independent but not parallel saccade sequences, while manual IOR was present in both parallel and independent sequence types. Saccadic IOR was also short-lived, and dissipated with delays of more than ∼1500 ms between the intermediate fixation and the probe onset. The results confirm that the characteristics of IOR depend critically on the response modality used for measuring it, with saccadic and manual responses giving rise to motor and attentional forms of IOR, respectively. Saccadic IOR is relatively short-lived and is not observed at intermediate locations of parallel saccade sequences, while attentional IOR is long-lasting and consistent for all sequence types. |
Gregory H. MacLean; Raymond M. Klein; Matthew D. Hilchey Does oculomotor readiness mediate exogenous capture of visual attention? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 5, pp. 1260–1270, 2015. @article{MacLean2015, The oculomotor readiness hypothesis makes 2 predictions: Shifts in covert attention are accompanied by preparedness to move one's eyes to the attended region, and preparedness to move one's eyes to a region in space is accompanied by a shift in covert attention to the prepared location. Both predictions have been disconfirmed using an endogenous attention task. In the 2 experiments presented here, the same 2 predictions were tested using an exogenous attention task. It was found that participants experienced covert capture without accompanying oculomotor activation and experienced oculomotor activation without accompanying covert capture. While under everyday conditions the overt and covert orienting systems may be strongly linked, apparently they can nonetheless operate with a high degree of independence from one another. |
Mary H. Maclean; Barry Giesbrecht Neural evidence reveals the rapid effects of reward history on selective attention Journal Article In: Brain Research, vol. 1606, pp. 86–94, 2015. @article{Maclean2015b, Selective attention is often framed as being primarily driven by two factors: task-relevance and physical salience. However, factors like selection and reward history, which are neither currently task-relevant nor physically salient, can reliably and persistently influence visual selective attention. The current study investigated the nature of the persistent effects of irrelevant, physically non-salient, reward-associated features. These features affected one of the earliest reliable neural indicators of visual selective attention in humans, the P1 event-related potential, measured one week after the reward associations were learned. However, the effects of reward history were moderated by current task demands. The modulation of visually evoked activity supports the hypothesis that reward history influences the innate salience of reward associated features, such that even when no longer relevant, nor physically salient, these features have a rapid, persistent, and robust effect on early visual selective attention. |
Najib J. Majaj; Ha Hong; Ethan A. Solomon; James J. DiCarlo Simple learned weighted sums of inferior temporal neuronal firing rates accurately predict human core object recognition performance Journal Article In: Journal of Neuroscience, vol. 35, no. 39, pp. 13402–13418, 2015. @article{Majaj2015, To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands ofchallenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex ofmonkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern ofhuman performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors ofthe human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures ofIT (“face patches”) did not improve predictive power. 
Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ~60,000 IT neurons and is executed as a simple weighted sum of those firing rates. |
Alex J. Major; Susheel Vijayraghavan; Stefan Everling Muscarinic attenuation of mnemonic rule representation in macaque dorsolateral prefrontal cortex during a pro- and anti-saccade task Journal Article In: Journal of Neuroscience, vol. 35, no. 49, pp. 16064–16076, 2015. @article{Major2015, Maintenance of context is necessary for execution of appropriate responses to diverse environmental stimuli. The dorsolateral prefrontal cortex (DLPFC) plays a pivotal role in executive function, including working memory and representation of abstract rules. DLPFC activity is modulated by the ascending cholinergic system through nicotinic and muscarinic receptors. Although muscarinic receptors have been implicated in executive performance and gating of synaptic signals, their effect on local primate DLPFC neuronal activity in vivo during cognitive tasks remains poorly understood. Here, we examined the effects of muscarinic receptor blockade on rule-related activity in the macaque prefrontal cortex by combining iontophoretic application of the general muscarinic receptor antagonist scopolamine with single-cell recordings while monkeys performed a mnemonic rule-guided saccade task. We found that scopolamine reduced overall neuronal firing rate and impaired rule discriminability of task-selective cells. Saccade and visual direction selectivity measures were also reduced by muscarinic antagonism. These results demonstrate that blockade of muscarinic receptors in DLPFC creates deficits in working memory representation of rules in primates. |
George L. Malcolm; Sarah Shomstein Object-based attention in real-world scenes. Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 2, pp. 257–263, 2015. @article{Malcolm2015, We are continually confronted with more visual information than we can process in a given moment. In order to interact effectively with our environment, attentional mechanisms are used to select subsets of environmental properties for enhanced processing. Previous research demonstrated that spatial regions can be selected based on either their low-level feature or high-level semantic properties. However, the efficiency with which we interact with the world suggests that there must be an additional, midlevel, factor constraining effective attentional space. The present study investigates whether object-based attentional selection is one such midlevel factor that constrains visual attention in complex, real-world scenes. Participants viewed scene images while their eye movements were recorded. During viewing, a cue appeared on an object which participants were instructed to fixate. A target then appeared either on the same object as the cue, on a different object, or floating. Participants initiated saccades faster and had shorter response times to targets presented on the same object as the fixated cue. The results strongly suggest that when attending to a location on an object, the entire object benefits perceptually. This object-based effect on the distribution of spatial attention forms a critical link between low- and high-level factors that direct attention efficiently in complex real-world scenes. |
Pankhuri Malik; Joost C. Dessing; J. Douglas Crawford Role of early visual cortex in trans-saccadic memory of object features Journal Article In: Journal of Vision, vol. 15, no. 7, pp. 1–17, 2015. @article{Malik2015, Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging–localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. 
These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception. |
Ran Manor; Amir B. Geva Convolutional neural network for multi-category rapid serial visual presentation BCI Journal Article In: Frontiers in Computational Neuroscience, vol. 9, pp. 146, 2015. @article{Manor2015, Brain computer interfaces rely on machine learning (ML) algorithms to decode the brain's electrical activity into decisions. For example, in rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. Here, we continue our previous work, presenting a deep neural network model for the use of single trial EEG classification in RSVP tasks. Deep neural networks have shown state of the art performance in computer vision and speech recognition and thus have great promise for other learning tasks, like classification of EEG samples. In our model, we introduce a novel spatio-temporal regularization for EEG data to reduce overfitting. We show improved classification performance compared to our earlier work on a five-category RSVP experiment. In addition, we compare performance on data from different sessions and validate the model on a public benchmark data set of a P300 speller task. Finally, we discuss the advantages of using neural network models compared to manually designing feature extraction algorithms. |
L. Müller-Pinzler; V. Gazzola; C. Keysers; Jens Sommer; Andreas Jansen; S. Frässle; Wolfgang Einhäuser; Frieder M. Paulus; Sören Krach Neural pathways of embarrassment and their modulation by social anxiety Journal Article In: NeuroImage, vol. 119, pp. 252–261, 2015. @article{MuellerPinzler2015, While being in the center of attention and exposed to other's evaluations humans are prone to experience embarrassment. To characterize the neural underpinnings of such aversive moments, we induced genuine experiences of embarrassment during person-group interactions in a functional neuroimaging study. Using a mock-up scenario with three confederates, we examined how the presence of an audience affected physiological and neural responses and the reported emotional experiences of failures and achievements. The results indicated that publicity induced activations in mentalizing areas and failures led to activations in arousal processing systems. Mentalizing activity as well as attention towards the audience were increased in socially anxious participants. The converging integration of information from mentalizing areas and arousal processing systems within the ventral anterior insula and amygdala forms the neural pathways of embarrassment. Targeting these neural markers of embarrassment in the (para-)limbic system provides new perspectives for developing treatment strategies for social anxiety disorders. |
Vishnu P. Murty; Sarah DuBrow; Lila Davachi The simple act of choosing influences declarative memory Journal Article In: Journal of Neuroscience, vol. 35, no. 16, pp. 6255–6264, 2015. @article{Murty2015, Individuals value the opportunity to make choices and exert control over their environment. This perceived sense of agency has been shown to have broad influences on cognition, including preference, decision-making, and valuation. However, it is unclear whether perceived control influences memory. Using a combined behavioral and functional magnetic resonance imaging approach, we investigated whether imbuing individuals with a sense of agency over their learning experience influences novel memory encoding. Participants encoded objects during a task that manipulated the opportunity to choose. Critically, unlike previous work on active learning, there was no relationship between individuals' choices and the content of memoranda. Despite this, we found that the opportunity to choose resulted in robust, reliable enhancements in declarative memory. Neuroimaging results revealed that anticipatory activation of the striatum, a region associated with decision-making, valuation, and exploration, correlated with choice-induced memory enhancements in behavior. These memory enhancements were further associated with interactions between the striatum and hippocampus. Specifically, anticipatory signals in the striatum when participants are alerted to the fact that they will have to choose one of two memoranda were associated with encoding success effects in the hippocampus on a trial-by-trial basis. The precedence of the striatal signal in these interactions suggests a modulatory relationship of the striatum over the hippocampus. These findings not only demonstrate enhanced declarative memory when individuals have perceived control over their learning but also support a novel mechanism by which these enhancements emerge. 
Furthermore, they demonstrate a novel context in which mesolimbic and declarative memory systems interact. |
Andriy Myachykov; Angelo Cangelosi; Rob Ellis; Martin H. Fischer The oculomotor resonance effect in spatial-numerical mapping Journal Article In: Acta Psychologica, vol. 161, pp. 162–169, 2015. @article{Myachykov2015, We investigated automatic Spatial-Numerical Association of Response Codes (SNARC) effect in auditory number processing. Two experiments continually measured spatial characteristics of ocular drift at central fixation during and after auditory number presentation. Consistent with the notion of a spatially oriented mental number line, we found spontaneous magnitude-dependent gaze adjustments, both with and without a concurrent saccadic task. This fixation adjustment (1) had a small-number/left-lateralized bias and (2) it was biphasic as it emerged for a short time around the point of lexical access and it received later robust representation around following number onset. This pattern suggests a two-step mechanism of sensorimotor mapping between numbers and space - a first-pass bottom-up activation followed by a top-down and more robust horizontal SNARC. Our results inform theories of number processing as well as simulation-based approaches to cognition by identifying the characteristics of an oculomotor resonance phenomenon. |
Nicholas E. Myers; Lena Walther; George Wallis; Mark G. Stokes; Anna C. Nobre In: Journal of Cognitive Neuroscience, vol. 27, no. 3, pp. 492–508, 2015. @article{Myers2015a, Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information. |
Karly N. Neath; Roxane J. Itier Fixation to features and neural processing of facial expressions in a gender discrimination task Journal Article In: Brain and Cognition, vol. 99, pp. 97–111, 2015. @article{Neath2015, Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. Whether this sensitivity varies with facial expressions of emotion and can also be seen on other ERP components such as P1 and EPN, was investigated. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (~120 ms) for happy faces was seen at occipital sites and was sustained until ~350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms followed by a later effect appearing at ~150 ms until ~300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. |
Andrea L. Nelson; Christine Purdon; Leanne Quigley; Jonathan Carriere; Daniel Smilek In: Cognition and Emotion, vol. 29, no. 3, pp. 504–526, 2015. @article{Nelson2015, Although attentional biases to threatening information are thought to contribute to the development and persistence of anxiety disorders, it is not clear whether an attentional bias to threat (ABT) is driven by trait anxiety, state anxiety or an interaction between the two. ABT may also be influenced by "top down" processes of motivation to attend or avoid threat. In the current study, participants high, mid and low in trait anxiety viewed high threat-neutral, mild threat-neutral and positive-neutral image pairs for 5 seconds in both calm and anxious mood states while their eye movements were recorded. State anxiety alone, but not trait anxiety, predicted greater maintenance of attention to high threat images (relative to neutral) following the first fixation (i.e., delayed disengagement) and over the time course. Motivation was associated with the time course of attention as would be expected, such that those motivated to look towards negative images showed the greatest ABT over time, and those highly motivated to look away from negative images showed the greatest avoidance. Interestingly, those ambivalent about where to direct their attention when viewing negative images showed the greatest ABT in the first 500 ms of viewing. Implications for theory and treatment of anxiety disorders, as well as areas for further study, are discussed. |
Kristin R. Newman; Christopher R. Sears Eye gaze tracking reveals different effects of a sad mood induction on the attention of previously depressed and never depressed women Journal Article In: Cognitive Therapy and Research, vol. 39, no. 3, pp. 292–306, 2015. @article{Newman2015, This study examined the effect of a sad mood induction (MI) on attention to emotional information and whether the effect varies as a function of depression vulnerability. Previously depressed (N = 42) and never depressed women (N = 58) were randomly assigned to a sad or a neutral MI and then viewed sets of depression-related, anxiety-related, positive, and neutral images. Attention was measured by tracking eye fixations to the images throughout an 8-s presentation. The sad MI had a substantial impact on the attention of never depressed participants: never depressed participants who experienced the sad MI increased their attention to positive images and decreased their attention to anxiety-related images relative to those who experienced the neutral MI. In contrast, previously depressed participants who experienced the sad MI did not attend to emotional images any differently than previously depressed participants who experienced the neutral MI. These results suggest that for never depressed individuals, a sad MI activates an emotion regulation strategy that changes the way that emotional information is attended to in order to counteract the sad mood; the absence of a difference for previously depressed individuals likely reflects a maladaptive emotion regulation response associated with depression vulnerability. Implications for cognitive theories of depression and depression-vulnerability are discussed. |
Phillip C. F Law; Bryan K. Paton; Jacqueline A. Riddiford; Caroline T. Gurvich; Trung T. Ngo; Steven M. Miller No relationship between binocular rivalry rate and eye-movement profiles in healthy individuals: A Bayes factor analysis Journal Article In: Perception, vol. 44, no. 5, pp. 643–661, 2015. @article{Law2015, Binocular rivalry (BR) is an intriguing phenomenon in which conflicting images are presented, one to each eye, resulting in perceptual alternations between each image. The rate of BR has been proposed as a potential endophenotype for bipolar disorder because (a) it is well established that this highly heritable psychiatric condition is associated with slower BR rate than in controls, and (b) an individual's BR rate is approximately 50% genetically determined. However, eye movements (EMs) could potentially account for the slow BR trait given EM anomalies are observed in psychiatric populations, and there has been report of an association between saccadic rate and BR rate in healthy individuals. Here, we sought to assess the relationship between BR rate and EMs in healthy individuals (N = 40, mean age = 34.4) using separate BR and EM tasks, with the latter measuring saccades during anticipatory, antisaccade, prosaccade, self-paced, free-viewing, and smooth-pursuit tasks. No correlation was found between BR rate and any EM measure for any BR task (p > .01) with substantial evidence favoring this lack of association (BF01 > 3). This finding is in contrast to previous data and has important implications for using BR rate as an endophenotype. If replicated in clinical psychiatric populations, EM interpretations of the slow BR trait can be excluded. |
James Lee; Jessica Manousakis; Joanne Fielding; Clare Anderson Alcohol and sleep restriction combined reduces vigilant attention, whereas sleep restriction alone enhances distractibility Journal Article In: Sleep, vol. 38, no. 5, pp. 765–775, 2015. @article{Lee2015a, STUDY OBJECTIVES: Alcohol and sleep loss are leading causes of motor vehicle crashes, whereby attention failure is a core causal factor. Despite a plethora of data describing the effect of alcohol and sleep loss on vigilant attention, little is known about their effect on voluntary and involuntary visual attention processes. DESIGN: Repeated-measures, counterbalanced design. SETTING: Controlled laboratory setting. PARTICIPANTS: Sixteen young (18-27 y; M = 21.90 ± 0.60 y) healthy males. INTERVENTIONS: Participants completed an attention test battery during the afternoon (13:00-14:00) under four counterbalanced conditions: (1) baseline; (2) alcohol (0.05% breath alcohol concentration); (3) sleep restriction (02:00-07:00); and (4) alcohol/sleep restriction combined. This test battery included a Psychomotor Vigilance Task (PVT) as a measure of vigilant attention, and two ocular motor tasks (visually guided and antisaccade) to measure the involuntary and voluntary allocation of visual attention. MEASUREMENTS AND RESULTS: Only the combined condition led to reductions in vigilant attention characterized by slower mean reaction time, fastest 10% responses, and increased number of lapses (P < 0.05) on the PVT. In addition, the combined condition led to a slowing in the voluntary allocation of attention as reflected by increased antisaccade latencies (P < 0.05). Sleep restriction alone however increased both antisaccade inhibitory errors [45.8% errors versus < 28.4% all others; P < 0.001] and the involuntary allocation of attention, as reflected by faster visually guided latencies (177.7 msec versus > 185.0 msec all others) to a peripheral target (P < 0.05). 
CONCLUSIONS: Our data reveal specific signatures for sleep related attention failure: the voluntary allocation of attention is impaired, whereas the involuntary allocation of attention is enhanced. This provides key evidence for the role of distraction in attention failure during sleep loss. |
Chantal L. Lemieux; Charles A. Collin; Elizabeth A. Nelson Modulations of eye movement patterns by spatial filtering during the learning and testing phases of an old/new face recognition task Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 2, pp. 536–550, 2015. @article{Lemieux2015, In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (~5-20 cycles/face) than for low-band (< ~5 cpf) or high-band (> ~20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. 
Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit. |
Karolina M. Lempert; Yu Lin Chen; Stephen M. Fleming Relating pupil dilation and metacognitive confidence during auditory decision-making Journal Article In: PLoS ONE, vol. 10, no. 5, pp. e0126588, 2015. @article{Lempert2015, The sources of evidence contributing to metacognitive assessments of confidence in decision-making remain unclear. Previous research has shown that pupil dilation is related to the signaling of uncertainty in a variety of decision tasks. Here we ask whether pupil dilation is also related to metacognitive estimates of confidence. Specifically, we measure the relationship between pupil dilation and confidence during an auditory decision task using a general linear model approach to take into account delays in the pupillary response. We found that pupil dilation responses track the inverse of confidence before but not after a decision is made, even when controlling for stimulus difficulty. In support of an additional post-decisional contribution to the accuracy of confidence judgments, we found that participants with better metacognitive ability - that is, more accurate appraisal of their own decisions - showed a tighter relationship between post-decisional pupil dilation and confidence. Together our findings show that a physiological index of uncertainty, pupil dilation, predicts both confidence and metacognitive accuracy for auditory decisions. |
Karolina M. Lempert; Elizabeth A. Phelps; Paul W. Glimcher Emotional arousal and discount rate in intertemporal choice are reference-dependent Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 2, pp. 366–373, 2015. @article{Lempert2015a, Many decisions involve weighing immediate gratification against future consequences. In such intertemporal choices, people often choose smaller, immediate rewards over larger delayed rewards. It has been proposed that emotional responses to immediate rewards lead us to choose them at our long-term expense. Here we utilize an objective measure of emotional arousal – pupil dilation – to examine the role of emotion in these decisions. We show that emotional arousal responses, as well as choices, in intertemporal choice tasks are reference-dependent and reflect the decision-maker's recent history of offers. Arousal increases when less predictable rewards are better than expected, whether those rewards are immediate or delayed. Furthermore, when immediate rewards are less predictable than delayed rewards, participants tend to be patient. When delayed rewards are less predictable, immediate rewards are preferred. Our findings suggest that we can encourage people to be more patient by changing the context in which intertemporal choices are made. |
Carly J. Leonard; Angela Balestreri; Steven J. Luck Interactions between space-based and feature-based attention Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 1, pp. 11–16, 2015. @article{Leonard2015, Although early research suggested that attention to nonspatial features (i.e., red) was confined to stimuli appearing at an attended spatial location, more recent research has emphasized the global nature of feature-based attention. For example, a distractor sharing a target feature may capture attention even if it occurs at a task-irrelevant location. Such findings have been used to argue that feature-based attention operates independently of spatial attention. However, feature-based attention may nonetheless interact with spatial attention, yielding larger feature-based effects at attended locations than at unattended locations. The present study tested this possibility. In 2 experiments, participants viewed a rapid serial visual presentation (RSVP) stream and identified a target letter defined by its color. Target-colored distractors were presented at various task-irrelevant locations during the RSVP stream. We found that feature-driven attentional capture effects were largest when the target-colored distractor was closer to the attended location. These results demonstrate that spatial attention modulates the strength of feature-based attention capture, calling into question the prior evidence that feature-based attention operates in a global manner that is independent of spatial attention. |
Clément Letesson; Stéphane Grade; Martin G. Edwards Different but complementary roles of action and gaze in action observation priming: Insights from eye- and motion-tracking measures Journal Article In: Frontiers in Psychology, vol. 6, pp. 569, 2015. @article{Letesson2015, Action priming following action observation is thought to be caused by the observed action kinematics being represented in the same brain areas as those used for action execution. But, action priming can also be explained by shared goal representations, with compatibility between observation of the agent's gaze and the intended action of the observer. To assess the contribution of action kinematics and eye-gaze cues in the prediction of an agent's action goal and action priming, participants observed actions where the availability of both cues was manipulated. Action observation was followed by action execution, and the congruency between the target of the agent's and observer's actions, and the congruency between the observed and executed action spatial location were manipulated. Eye movements were recorded during the observation phase, and the action priming was assessed using motion analysis. The results showed that the observation of gaze information influenced the observer's prediction speed to attend to the target, and that observation of action kinematic information influenced the accuracy of these predictions. Motion analysis results showed that observed action cues alone primed both spatial incongruent and object congruent actions, consistent with the idea that the prime effect was driven by similarity between goals and kinematics. The observation of action and eye-gaze cues together induced a prime effect complementarily sensitive to object and spatial congruency. 
While observation of the agent's action kinematics triggered an object-centered and kinematic-centered action representation, independently, the complementary observation of eye-gaze triggered a more fine-grained representation illustrating a specification of action kinematics toward the selected goal. Even though both cues differentially contributed to action priming, their complementary integration led to a more refined pattern of action priming. |
Heiko Lex; Kai Essig; Andreas Knoblauch; Thomas Schack Cognitive representations and cognitive processing of team-specific tactics in soccer Journal Article In: PLoS ONE, vol. 10, no. 2, pp. e0118219, 2015. @article{Lex2015, Two core elements for the coordination of different actions in sport are tactical information and knowledge about tactical situations. The current study describes two experiments to learn about the memory structure and the cognitive processing of tactical information. Experiment 1 investigated the storage and structuring of team-specific tactics in humans' long-term memory with regard to different expertise levels. Experiment 2 investigated tactical decision-making skills and the corresponding gaze behavior, in presenting participants the identical match situations in a reaction time task. The results showed that more experienced soccer players, in contrast to less experienced soccer players, possess a functionally organized cognitive representation of team-specific tactics in soccer. Moreover, the more experienced soccer players reacted faster in tactical decisions, because they needed fewer fixations of similar duration as compared to less experienced soccer players. Combined, these experiments offer evidence that a functionally organized memory structure leads to a reaction time and a perceptual advantage in tactical decision-making in soccer. The discussion emphasizes theoretical and applied implications of the current results of the study. |
Claire L. Kelly; Sandra I. Sünram-Lea; Trevor J. Crawford The role of motivation, glucose and self-control in the antisaccade task Journal Article In: PLoS ONE, vol. 10, no. 3, pp. e0122218, 2015. @article{Kelly2015, Research shows that self-control is resource limited and that there is a gradual weakening in consecutive self-control task performance akin to muscle fatigue. A body of evidence suggests that the resource is glucose and that consuming glucose reduces this effect. This study examined the effect of glucose on performance in the antisaccade task - which requires self-control through generating a voluntary eye movement away from a target - following self-control exertion in the Stroop task. The effects of motivation and individual differences in self-control were also explored. In a double-blind design, 67 young healthy adults received a 25 g glucose or inert placebo drink. Glucose did not enhance antisaccade performance following self-control exertion in the Stroop task. Motivation, however, predicted performance on the antisaccade task; more specifically, high motivation ameliorated the performance decrements observed after initial self-control exertion. In addition, individuals with high levels of self-control performed better on certain aspects of the antisaccade task after administration of a glucose drink. The results of this study suggest that the antisaccade task might be a powerful paradigm that could be used as a more objective measure of self-control. Moreover, the results indicate that level of motivation and individual differences in self-control should be taken into account when investigating deficiencies in self-control following prior exertion. |
Shahabeddin Khalighy; Graham Green; Christoph Scheepers; Craig Whittet Quantifying the qualities of aesthetics in product design using eye-tracking technology Journal Article In: International Journal of Industrial Ergonomics, vol. 49, pp. 31–43, 2015. @article{Khalighy2015, This study provides a methodology for quantifying the qualities of visual aesthetics in product design by applying eye-tracking technology. The output data of the eye-tracking software, consisting of the number, duration, and coordinates of eye fixations, are formulated using the fundamental constituent factors of beauty and attractiveness. The methodology was developed by conducting three eye-tracking experiments and five experiments applying subjective measures, attended by more than 300 participants in total. The results of these experiments contributed to the development of an aesthetic formula. The output of this formula was then compared with the declared preferences of a further 200 subjects. This comparison confirmed that the proposed methodology was capable of quantifying and predicting aesthetic preference by monitoring eye behaviour alone. |
Aarlenne Zein Khan; Gunnar Blohm; Laure Pisella; Douglas P. Munoz Saccade execution suppresses discrimination at distractor locations rather than enhancing the saccade goal location Journal Article In: European Journal of Neuroscience, vol. 41, no. 12, pp. 1624–1634, 2015. @article{Khan2015, As we have limited processing abilities with respect to the plethora of visual information entering our brain, spatial selection mechanisms are crucial. These mechanisms both enhance processing at a location of interest and suppress processing at other locations; together, they enable successful further processing of locations of interest. It has been suggested that saccade planning modulates these spatial selection mechanisms; however, the precise influence of saccades on the distribution of spatial resources underlying selection remains unclear. To this end, we compared discrimination performance at six different locations within a workspace during different saccade tasks. We used visual discrimination performance as a behavioral measure of enhancement and suppression at the different locations. A total of 14 participants performed a dual discrimination/saccade countermanding task, which allowed us to specifically isolate the consequences of saccade execution. When a saccade was executed, discrimination performance at the cued location was never better than when fixation was maintained, suggesting that saccade execution did not enhance processing at a location beyond what knowing the likelihood of its appearance provided. However, discrimination was consistently lower at distractor (uncued) locations whenever a saccade was executed compared with when fixation was maintained. Based on these results, we suggest that saccade execution specifically suppresses distractor locations, whereas attention shifts (with or without an accompanying saccade) are involved in enhancing perceptual processing at the goal location. |