EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2013 |
Steven L. Prime; Jonathan J. Marotta Gaze strategies during visually-guided versus memory-guided grasping Journal Article In: Experimental Brain Research, vol. 225, no. 2, pp. 291–305, 2013. @article{Prime2013, Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action, e.g., remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or a memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the 2-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream. |
Sarah J. Rappaport; Glyn W. Humphreys; M. Jane Riddoch The attraction of yellow corn: Reduced attentional constraints on coding learned conjunctive relations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 1016–1031, 2013. @article{Rappaport2013, Physiological evidence indicates that different visual features are computed quasi-independently. The subsequent step of binding features, to generate coherent perception, is typically considered a major rate-limiting process, confined to one location at a time and taking 25 ms per item or longer (A. Treisman & S. Gormican, 1988, Feature analysis in early vision: Evidence from search asymmetries, Psychological Review, Vol. 95, pp. 15-48). We examined whether these processing limitations remain once bindings are learned for familiar objects. Participants searched for objects that could appear either in familiar or unfamiliar colors. Objects in familiar colors were detected efficiently at rates consistent with simultaneous binding across multiple stimuli. Processing limitations were evident for objects in unfamiliar colors. The advantage for the learned color for known targets was eliminated when participants searched for geometric shapes carrying the object colors and when the colors fell in local background areas around the shapes. The effect occurred irrespective of whether the nontargets had familiar colors, but was largest when nontargets had incorrect colors. The efficient search for targets in familiar colors held, even when the search was biased to favor objects in unfamiliar colors. The data indicate that learned bindings can be computed with minimal attentional limitations, consistent with the direct activation of learned conjunctive representations in vision. |
Olivia M. Maynard; Marcus R. Munafò; Ute Leonards Visual attention to health warnings on plain tobacco packaging in adolescent smokers and non-smokers Journal Article In: Addiction, vol. 108, no. 2, pp. 413–419, 2013. @article{Maynard2013, AIMS: Previous research with adults indicates that plain packaging increases visual attention to health warnings in adult non-smokers and weekly smokers, but not daily smokers. The present research extends this study to adolescents aged 14-19 years. DESIGN: Mixed-model experimental design, with smoking status as a between-subjects factor and pack type (branded or plain pack) and eye gaze location (health warning or branding) as within-subjects factors. SETTING: Three secondary schools in Bristol, UK. PARTICIPANTS: A convenience sample of adolescents comprising never-smokers (n = 26), experimenters (n = 34), weekly smokers (n = 13) and daily smokers (n = 14). MEASUREMENTS: Number of eye movements to health warnings and branding on plain and branded packs. FINDINGS: Analysis of variance revealed that, irrespective of smoking status, there were more eye movements to health warnings than to branding on plain packs, but an equal number of eye movements to both regions on branded packs (P = 0.033). This was observed among experimenters (P < 0.001) and weekly smokers (P = 0.047), but not among never-smokers or daily smokers. CONCLUSION: Among experimenters and weekly smokers, plain packaging increases visual attention to health warnings and away from branding. Daily smokers, even relatively early in their smoking careers, seem to avoid the health warnings on cigarette packs. Adolescent never-smokers attend preferentially to the health warnings on both types of packs, a finding which may reflect their decision not to smoke. |
Ulrich Mayr; David Kuhns; Miranda Rieter Eye movements reveal dynamics of task control Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 2, pp. 489–509, 2013. @article{Mayr2013, With the goal to determine the cognitive architecture that underlies flexible changes of control settings, we assessed within-trial and across-trial dynamics of attentional selection by tracking eye movements in the context of a cued task-switching paradigm. Within-trial dynamics revealed a switch-induced, discrete delay in onset of task-congruent fixations, a result that is consistent with a higher level configuration process. Next, we derived predictions about the trial-to-trial dynamic coupling of control settings from competing models, assuming that control is achieved either through task-level competition or through higher level configuration processes. Empirical coupling dynamics between trial n-1 eye movements and trial n response times, estimated through mixed linear modeling, revealed a pattern that was consistent with the higher level configuration model. The results indicate that a combination of eye movement data and mixed modeling methods can yield new constraints on models of flexible control. This general approach can be useful in any domain in which theoretical progress depends on high-resolution information about dynamic relationships within individuals. |
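The trial-to-trial coupling analysis described above (trial n-1 eye movements predicting trial n response times) can be sketched in simplified form. The paper used mixed linear modeling; the sketch below instead fits an ordinary least-squares slope separately for each participant, and every name in it (`coupling_slopes`, the argument layout) is an illustrative assumption, not taken from the study.

```python
def coupling_slopes(rt, fix_latency, subject):
    """Per-participant OLS slope relating trial n-1 fixation latency to
    trial n response time. Simplified stand-in for a mixed linear model;
    assumes each participant contributes several trials in order."""
    # Group (latency, RT) pairs by participant, preserving trial order.
    by_subj = {}
    for r, lat, s in zip(rt, fix_latency, subject):
        by_subj.setdefault(s, []).append((lat, r))
    slopes = {}
    for s, trials in by_subj.items():
        # Lag-1 pairing: predictor from trial n-1, outcome from trial n.
        x = [lat for lat, _ in trials[:-1]]
        y = [r for _, r in trials[1:]]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        var = sum((xi - mx) ** 2 for xi in x)
        slopes[s] = cov / var
    return slopes
```

A mixed model would additionally pool these per-participant slopes toward a group-level estimate; this sketch keeps them separate for transparency.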
Eugene McSorley; Carien M. Van Reekum The time course of implicit affective picture processing: An eye movement study Journal Article In: Emotion, vol. 13, no. 4, pp. 769–773, 2013. @article{McSorley2013, Consistent with a negativity bias account, neuroscientific and behavioral evidence demonstrates modulation of even early sensory processes by unpleasant, potentially threat-relevant information. The aim of this research is to assess the extent to which pleasant and unpleasant visual stimuli presented extrafoveally capture attention and impact eye movement control. We report an experiment examining deviations in saccade metrics in the presence of emotional image distractors that are close to a nonemotional target. We additionally manipulate the saccade latency to test when the emotional distractor has its biggest impact on oculomotor control. The results demonstrate that saccade landing position was pulled toward unpleasant distractors, and that this pull was due to the quick saccade responses. Overall, these findings support a negativity bias account of early attentional control and call for the need to consider the time course of motivated attention when affect is implicit. |
Benjamin P. Meek; Keri Locheed; Jane M. Lawrence-Dewar; Paul Shelton; Jonathan J. Marotta Posterior cortical atrophy: An investigation of scan paths generated during face matching tasks Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 309, 2013. @article{Meek2013, When viewing a face, healthy individuals focus more on the area containing the eyes and upper nose in order to retrieve important featural and configural information. In contrast, individuals with face blindness (prosopagnosia) tend to direct fixations toward individual facial features, particularly the mouth. Presented here is an examination of face perception deficits in individuals with Posterior Cortical Atrophy (PCA). PCA is a rare progressive neurodegenerative disorder that is characterized by atrophy in occipito-parietal and occipito-temporal cortices. PCA primarily affects higher visual processing, while memory, reasoning, and insight remain relatively intact. A common symptom of PCA is a decreased effective field of vision caused by the inability to "see the whole picture." Individuals with PCA and healthy control participants completed a same/different discrimination task in which images of faces were presented as cue-target pairs. Eye-tracking equipment and a novel computer-based perceptual task-the Viewing Window paradigm-were used to investigate scan patterns when faces were presented in open view or through a restricted view, respectively. In contrast to previous prosopagnosia research, individuals with PCA each produced unique scan paths that focused on non-diagnostically useful locations. This focus on non-diagnostically useful locations was also present when using a restricted viewing aperture, suggesting that individuals with PCA have difficulty processing the face at either the featural or configural level.
In fact, it appears that the decreased effective field of view in PCA patients is so severe that it results in an extreme dependence on local processing, such that a feature-based approach is not even possible. |
M. Meeter; Stefan Van der Stigchel Visual priming through a boost of the target signal: Evidence from saccadic landing positions Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 7, pp. 1336–1341, 2013. @article{Meeter2013, The present study focuses on the effects of priming on visual selection. Repetition breeds success. This is the case over the long run, when repeating a certain act leads to learning, but also in the very short one: Over and over again, it has been found that the action that was performed last is primed to be performed again. In the visual search literature, such repetition priming has been studied extensively. Priming in visual search has been found for different target features: for the layout of the search scene, for its size, for the to-be-given response, and for interactions between these factors. Using reaction time measures, as is typically done in priming studies, these possibilities cannot be disentangled from one another, since the measures reflect both pre- and post-attentional processing and cannot dissociate the strength of the individual signals of target and distractor. Priming refers to a broad range of behavioral phenomena. It would be hard to argue that an enhancement of the target signal is the only mechanism involved in priming. For instance, distractor repetition speeds search even when target features are not repeated, suggesting that some form of distractor suppression or discounting also plays a role. |
Weston Pack; Thom Carney; Stanley A. Klein Involuntary attention enhances identification accuracy for unmasked low contrast letters using non-predictive peripheral cues Journal Article In: Vision Research, vol. 89, pp. 79–89, 2013. @article{Pack2013, There is controversy regarding whether or not involuntary attention improves response accuracy at a cued location when the cue is non-predictive, and whether these cueing effects are dependent on backward masking. Various perceptual and decisional mechanisms of performance enhancement have been proposed, such as signal enhancement, noise reduction, spatial uncertainty reduction, and decisional processes. Herein we review a recent report of mask-dependent accuracy improvements with low contrast stimuli and demonstrate that the experiments contained stimulus artifacts whereby the cue impaired perception of low contrast stimuli, leading to an absence of improved response accuracy with unmasked stimuli. Our experiments corrected these artifacts by implementing an isoluminant cue and increasing its distance relative to the targets. The results demonstrate that cueing effects are robust for unmasked stimuli presented in the periphery, resolving some of the controversy concerning cueing enhancement effects from involuntary attention and mask dependency. Unmasked low contrast and/or short duration stimuli as implemented in these experiments may have a short enough iconic decay that the visual system functions similarly as if a mask were present, leading to improved accuracy with a valid cue. |
Simon Palmer; Uwe Mattler Masked stimuli modulate endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 486–503, 2013. @article{Palmer2013, Unconscious stimuli can influence participants' motor behavior but also more complex mental processes. Recent research has gradually extended the limits of effects of unconscious stimuli. One field of research where such limits have been proposed is spatial cueing, where exogenous automatic shifts of attention have been distinguished from endogenous controlled processes which govern voluntary shifts of attention. Previous evidence suggests unconscious effects on mechanisms of exogenous shifts of attention. Here, we applied a cue-priming paradigm to a spatial cueing task with arbitrary cues by centrally presenting a masked symmetrical prime before every cue stimulus. We found priming effects on response times in target discrimination tasks with the typical dynamic of cue-priming effects (Experiments 1 and 2), indicating that central symmetrical stimuli which have been associated with endogenous orienting can modulate shifts of spatial attention even when they are masked. Prime-cue congruency effects of perceptually dissimilar prime and cue stimuli (Experiment 3) suggest that these effects cannot be entirely reduced to perceptual repetition priming of cue processing. In addition, priming effects did not differ between participants with good and poor prime recognition performance, consistent with the view that unconscious stimulus features have access to processes of endogenous shifts of attention. |
Simon Palmer; Uwe Mattler On the source and scope of priming effects of masked stimuli on endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 528–544, 2013. @article{Palmer2013a, Unconscious stimuli can influence participants' motor behavior as well as more complex mental processes. Previous cue-priming experiments demonstrated that masked cues can modulate endogenous shifts of spatial attention as measured by choice reaction time tasks. Here, we applied a signal detection task with masked luminance targets to determine the source and the scope of effects of masked stimuli. Target-detection performance was modulated by prime-cue congruency, indicating that prime-cue congruency modulates signal enhancement at early levels of target processing. These effects, however, were only found when the prime was perceptually similar to the cue, indicating that primes influence early target processing in an indirect way by facilitating cue processing. Together with previous research we conclude that masked stimuli can modulate perceptual and post-central levels of processing. Findings mark a new limit of the effects of unconscious stimuli, which seem to have a smaller scope than conscious stimuli. |
Florian Perdreau; Patrick Cavanagh The artist's advantage: Better integration of object information across eye movements Journal Article In: i-Perception, vol. 4, no. 6, pp. 380–395, 2013. @article{Perdreau2013, Over their careers, figurative artists spend thousands of hours analyzing objects and scene layout. We examined what impact this extensive training has on the ability to encode complex scenes, comparing participants with a wide range of training and drawing skills on a possible versus impossible objects task. We used a gaze-contingent display to control the amount of information the participants could sample on each fixation either from central or peripheral visual field. Test objects were displayed and participants reported, as quickly as possible, whether the object was structurally possible or not. Our results show that when viewing the image through a small central window, performance improved with the years of training, and to a lesser extent with the level of skill. This suggests that the extensive training itself confers an advantage for integrating object structure into more robust object descriptions. |
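A gaze-contingent central window of the kind used in the study above can be sketched as a simple masking operation: on each fixation, only pixels within a fixed radius of the gaze point are shown. The sketch below is purely illustrative; the function name, the circular window shape, and the blank value are assumptions, not details of the paper's display.

```python
def apply_central_window(image, fx, fy, radius):
    """Return a copy of a 2-D grayscale image in which every pixel
    farther than `radius` from the fixation point (fx, fy) is blanked
    to 0, mimicking a central gaze-contingent viewing window."""
    r2 = radius * radius
    return [
        [px if (x - fx) ** 2 + (y - fy) ** 2 <= r2 else 0
         for x, px in enumerate(row)]
        for y, row in enumerate(image)
    ]
```

In a real gaze-contingent display this masking would be re-applied on every gaze sample reported by the eye tracker, so the visible window follows the eyes in real time.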
Melanie Perron; Annie Roy-Charland Analysis of eye movements in the judgment of enjoyment and non-enjoyment smiles Journal Article In: Frontiers in Psychology, vol. 4, pp. 659, 2013. @article{Perron2013, Enjoyment smiles are more often associated with the simultaneous presence of the Cheek raiser and Lip corner puller action units, and these units' activation is more often symmetric. Research on the judgment of smiles indicated that individuals are sensitive to these types of indices, but it also suggested that their ability to perceive these specific indices might be limited. The goal of the current study was to examine perceptual-attentional processing of smiles by using eye movement recording in a smile judgment task. Participants were presented with three types of smiles: a symmetric Duchenne, a non-Duchenne, and an asymmetric smile. Results revealed that the Duchenne smiles were judged happier than those with characteristics of non-enjoyment. Asymmetric smiles were also judged happier than the non-Duchenne smiles. Participants were as effective in judging the latter smiles as not really happy as they were in judging the symmetric Duchenne smiles as happy. Furthermore, they did not spend more time looking at the eyes or mouth regardless of the type of smile. While participants made more saccades between each side of the face for the asymmetric smiles than the symmetric ones, they judged the asymmetric smiles more often as really happy than not really happy. Thus, processing of these indices does not seem to be limited by perceptual-attentional difficulties as reflected in viewing behavior. |
Yoni Pertzov; Paul M. Bays; Sabine Joseph; Masud Husain Rapid forgetting prevented by retrospective attention cues Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1224–1231, 2013. @article{Pertzov2013, Recent studies have demonstrated that memory performance can be enhanced by a cue which indicates the item most likely to be subsequently probed, even when that cue is delivered seconds after a stimulus array is extinguished. Although such retro-cuing has attracted considerable interest, the mechanisms underlying it remain unclear. Here, we tested the hypothesis that retro-cues might protect an item from degradation over time. We employed two techniques that previously have not been deployed in retro-cuing tasks. First, we used a sensitive, continuous scale for reporting the orientation of a memorized item, rather than binary measures (change or no change) typically used in previous studies. Second, to investigate the stability of memory across time, we also systematically varied the duration between the retro-cue and report. Although accuracy of reporting uncued objects rapidly declined over short intervals, retro-cued items were significantly more stable, showing negligible decline in accuracy across time and protection from forgetting. Retro-cuing an object's color was just as advantageous as spatial retro-cues. These findings demonstrate that during maintenance, even when items are no longer visible, attention resources can be selectively redeployed to protect the accuracy with which a cued item can be recalled over time, but with a corresponding cost in recall for uncued items. |
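Scoring the kind of continuous orientation report used in the Pertzov et al. study above requires wrapping the reported-minus-true difference into the stimulus's circular range; orientation stimuli repeat every 180 degrees, so errors must wrap at that period rather than at 360. The sketch below is a minimal illustration; the function name and default period are assumptions, and the paper's exact scoring procedure may differ.

```python
def orientation_error(reported_deg, true_deg, period=180.0):
    """Signed angular error between a reported and a true orientation,
    wrapped into (-period/2, period/2]. Orientations repeat every
    180 deg, hence the default period of 180 rather than 360."""
    err = (reported_deg - true_deg) % period  # Python % is non-negative here
    if err > period / 2:
        err -= period  # take the shorter way around the circle
    return err
```

Recall precision is then typically summarized as the spread (e.g., circular standard deviation) of these signed errors across trials, which is what lets the study track degradation over retention intervals.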
Claudia Peschke; Claus C. Hilgetag; Bettina Olk Influence of stimulus type on effects of flanker, flanker position, and trial sequence in a saccadic eye movement task Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 11, pp. 2253–2267, 2013. @article{Peschke2013, Using the flanker paradigm in a task requiring eye movement responses, we examined how stimulus type (arrows vs. letters) modulated effects of flanker and flanker position. Further, we examined trial sequence effects and the impact of stimulus type on these effects. Participants responded to a central target with a left- or rightward saccade. We reasoned that arrows, being overlearned symbols of direction, are processed with less effort and are therefore linked more easily to a direction and a required response than are letters. The main findings demonstrate that (a) flanker effects were stronger for arrows than for letters, (b) flanker position more strongly modulated the flanker effect for letters than for arrows, and (c) trial sequence effects partly differed between the two stimulus types. We discuss these findings in the context of a more automatic and effortless processing of arrow relative to letter stimuli. |
Anders Petersen; Søren Kyllingsbæk Eye movements and practice effects in the attentional dwell time paradigm Journal Article In: Experimental Psychology, vol. 60, no. 1, pp. 22–33, 2013. @article{Petersen2013a, In the attentional dwell time paradigm by Duncan, Ward, and Shapiro (1994), two backward masked targets are presented at different spatial locations and separated by a varying time interval. Results show that report of the second target is severely impaired when the time interval is less than 500 ms, which has been taken as a direct measure of attentional dwell time in human vision. However, we show that eye movements may have confounded the estimate of the dwell time and that the measure may not be as robust as previously suggested. The latter is supported by evidence suggesting that intensive training strongly attenuates the dwell time because of habituation to the masks. Thus, this article points to eye movements and masking as two potential methodological pitfalls that should be considered when using the attentional dwell time paradigm to investigate the temporal dynamics of attention. |
Anders Petersen; Søren Kyllingsbæk; Claus Bundesen Attentional dwell times for targets and masks Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–12, 2013. @article{Petersen2013, Studies on the temporal dynamics of attention have shown that the report of a masked target (T2) is severely impaired when the target is presented with a delay (stimulus onset asynchrony) of less than 500 ms after a spatially separate masked target (T1). This is known as the attentional dwell time. Recently, we have proposed a computational model of this effect building on the idea that a stimulus retained in visual short-term memory (VSTM) takes up visual processing resources that otherwise could have been used to encode subsequent stimuli into VSTM. The resources are locked until the stimulus in VSTM has been recoded, which explains the long dwell time. Challenges for this model and others are findings by Moore, Egeth, Berglan, and Luck (1996) suggesting that the dwell time is substantially reduced when the mask of T1 is removed. Here we suggest that the mask of T1 modulates performance not by noticeably affecting the dwell time but instead by acting as a distractor drawing processing resources away from T2. This is consistent with our proposed model in which targets and masks compete for attentional resources and attention dwells on both. We tested the model by replicating the study by Moore et al., including a new condition in which T1 is omitted but the mask of T1 is retained. Results from this and the original study by Moore et al. are modeled with great precision. |
Matthew F. Peterson; Miguel P. Eckstein Individual differences in eye movements during face identification reflect observer-specific optimal points of fixation Journal Article In: Psychological Science, vol. 24, no. 7, pp. 1216–1225, 2013. @article{Peterson2013, In general, humans tend to first look just below the eyes when identifying another person. Does everybody look at the same place on a face during identification, and, if not, does this variability in fixation behavior lead to functional consequences? In two conditions, observers had their free eye movements recorded while they performed a face-identification task. In another condition, the same observers identified faces while their gaze was restricted to specific locations on each face. We found substantial differences, which persisted over time, in where individuals chose to first move their eyes. Observers' systematic departure from a canonical, theoretically optimal fixation point did not correlate with performance degradation. Instead, each individual's looking preference corresponded to an idiosyncratic performance-maximizing point of fixation: Those who looked lower on the face performed better when forced to fixate the lower part of the face. The results suggest an observer-specific synergy between the face-recognition and eye movement systems that optimizes face-identification performance. |
Cai S. Longman; Aureliu Lavric; Stephen Monsell More attention to attention? An eye-tracking investigation of selection of perceptual attributes during a task switch Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 4, pp. 1142–1151, 2013. @article{Longman2013, Switching tasks prolongs response times, an effect reduced but not eliminated by active preparation. To explore the role of attentional selection of the relevant stimulus attribute in these task-switch costs, we measured eye fixations in participants cued to identify either a face or a letter displayed on its forehead. With only 200 ms between cue and stimulus onsets, the eyes fixated the currently relevant region of the stimulus less and the irrelevant region more on switch than on repeat trials, at stimulus onset and for 500 ms thereafter, in a pattern suggestive of delayed orientation of attention to the relevant region on switch trials. With 800 ms to prepare, both switch costs and inappropriate fixations were reduced, but on switch trials participants still tended (relative to repeat trials) to fixate the now-irrelevant region more at stimulus onset and to maintain fixation on, or refixate, the irrelevant region more during the next 500 ms. The size of this attentional persistence was associated with differences in performance costs between and within participants. We suggest that reorientation of attention is an important, albeit somewhat neglected and controversial, component of advance task-set reconfiguration and that the task-set inertia (or reactivation) to which many attribute the residual task-switch cost seen after preparation includes inertia in (or reactivation of) attentional parameters. |
Sara Lucke; Harald Lachnit; Stephan Koenig; Metin Uengoer The informational value of contexts affects context-dependent learning Journal Article In: Learning and Behavior, vol. 41, no. 3, pp. 285–297, 2013. @article{Lucke2013, In two predictive-learning experiments, we investigated the role of the informational value of contexts for the formation of context-dependent behavior. During Phase 1 of each experiment, participants received either a conditional discrimination in which contexts were relevant (Group Relevant) or a simple discrimination in which contexts were irrelevant (Group Irrelevant). Each experiment also included an ABA renewal procedure. Participants received Z+ in context A during Phase 1, extinction of Z in context B during Phase 2, and were tested with Z in context A during a test phase. In each experiment, extinction of Z proceeded faster and was followed by stronger response recovery in Group Relevant than in Group Irrelevant. In Experiment 2, which included recording of eye-gaze behavior, dwell times on contexts were longer in Group Relevant than in Group Irrelevant. Our results support the idea that relevant contexts receive more attention, leading to stronger context specificity of learning. |
Diane E. MacKenzie; David A. Westwood Occupational therapists and observation: What are you looking at? Journal Article In: OTJR: Occupation, Participation and Health, vol. 33, no. 1, pp. 4–11, 2013. @article{MacKenzie2013a, Visual observation is a fundamental skill underlying all occupational performance assessments in occupational therapy. The purpose of this study was to determine whether eye movement patterns differ between occupational therapists and non-healthcare professionals during observation of static images portraying a client post-stroke (domain-specific content) or naturalistic scenes (domain-irrelevant content). Ten licensed occupational therapists (OT group) and 10 participants matched for age, gender, and education level (NonOT group) completed the study. Participants viewed two counterbalanced blocks of 10 images (scene and stroke) under the pretext of preparing for a memory test. The OT group differed in their viewing strategies during observation and in how they directed their eyes (higher frequency of fixations, shorter fixation durations, and increased saccade count) for domain-specific and domain-irrelevant images alike. Observation patterns used by occupational therapists are presumably related to top-down influences that are not necessarily related to domain-specific knowledge but perhaps to general experience with performing assessments using observational methods. |
Adrian Madsen; Amy Rouinfar; Adam M. Larson; Lester C. Loschky; N. Sanjay Rebello Can short duration visual cues influence students' reasoning and eye movements in physics problems? Journal Article In: Physical Review Special Topics - Physics Education Research, vol. 9, pp. 020104, 2013. @article{Madsen2013, We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the cued condition were shown an initial problem, and if they answered that incorrectly, they were shown a series of problems each with selection and integration cues overlaid on the problem diagrams. Students in the noncued condition were also provided a series of problems, but without any visual cues. We found that significantly more participants in the cued condition answered the problems overlaid with visual cues correctly on one of the four problem sets used and a subsequent uncued problem (the transfer problem) on a different problem set. We also found that those in the cued condition spent significantly less time looking at "novicelike" areas of the diagram in the transfer problem on three of the four problem sets and significantly more time looking at the "expertlike" areas of the diagram in the transfer problem on one problem set. Thus, the use of visual cues to influence reasoning and visual attention in physics problems is promising. |
Taosheng Liu; Youyang Hou A hierarchy of attentional priority signals in human frontoparietal cortex Journal Article In: Journal of Neuroscience, vol. 33, no. 42, pp. 16606–16616, 2013. @article{Liu2013, Humans can voluntarily attend to a variety of visual attributes to serve behavioral goals. Voluntary attention is believed to be controlled by a network of dorsal frontoparietal areas. However, it is unknown how neural signals representing behavioral relevance (attentional priority) for different attributes are organized in this network. Computational studies have suggested that a hierarchical organization reflecting the similarity structure of the task demands provides an efficient and flexible neural representation. Here we examined the structure of attentional priority using functional magnetic resonance imaging. Participants were cued to attend to location, color, or motion direction within the same stimulus. We found a hierarchical structure emerging in frontoparietal areas, such that multivoxel patterns for attending to spatial locations were most distinct from those for attending to features, and the latter were further clustered into different dimensions (color vs motion). These results provide novel evidence for the organization of the attentional control signals at the level of distributed neural activity. The hierarchical organization provides a computationally efficient scheme to support flexible top-down control. |
Judith Peth; Johann S. C. Kim; Matthias Gamer Fixations and eye-blinks allow for detecting concealed crime related memories Journal Article In: International Journal of Psychophysiology, vol. 88, no. 1, pp. 96–103, 2013. @article{Peth2013, The Concealed Information Test (CIT) is a method of forensic psychophysiology that allows for revealing concealed crime related knowledge. Such detection is usually based on autonomic responses but there is a huge interest in other measures that can be acquired unobtrusively. Eye movements and blinks might be such measures but their validity is unclear. Using a mock crime procedure with a manipulation of the arousal during the crime as well as the delay between crime and CIT, we tested whether eye tracking measures allow for detecting concealed knowledge. Guilty participants showed fewer but longer fixations on central crime details and this effect was even present after stimulus offset and accompanied by a reduced blink rate. These ocular measures were partly sensitive for induction of emotional arousal and time of testing. Validity estimates were moderate but indicate that a significant differentiation between guilty and innocent subjects is possible. Future research should further investigate validity differences between gaze measures during a CIT and explore the underlying mechanisms. |
Marc Pomplun; Tyler W. Garaas; Marisa Carrasco The effects of task difficulty on visual search strategy in virtual 3D displays Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–22, 2013. @article{Pomplun2013, Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. |
Stephen P. Badham; Claire V. Hutchinson Characterising eye movement dysfunction in myalgic encephalomyelitis/chronic fatigue syndrome Journal Article In: Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 251, no. 12, pp. 2769–2776, 2013. @article{Badham2013, BACKGROUND: People who suffer from myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) often report that their eye movements are sluggish and that they have difficulties tracking moving objects. However, descriptions of these visual problems are based solely on patients' self-reports of their subjective visual experiences, and there is a distinct lack of empirical evidence to objectively verify their claims. This paper presents the first experimental research to objectively examine eye movements in those suffering from ME/CFS. METHODS: Patients were assessed for ME/CFS symptoms and were compared to age, gender, and education matched controls for their ability to generate saccades and smooth pursuit eye movements. RESULTS: Patients and controls exhibited similar error rates and saccade latencies (response times) on prosaccade and antisaccade tasks. Patients showed relatively intact ability to accurately fixate the target (prosaccades), but were impaired when required to focus accurately in a specific position opposite the target (antisaccades). Patients were most markedly impaired when required to direct their gaze as closely as possible to a smoothly moving target (smooth pursuit). CONCLUSIONS: It is hypothesised that the effects of ME/CFS can be overcome briefly for completion of saccades, but that continuous pursuit activity (accurately tracking a moving object), even for a short time period, highlights dysfunctional eye movement behaviour in ME/CFS patients. Future smooth pursuit research may elucidate and improve diagnosis of ME/CFS. |
D. A. Baker; N. J. Schweitzer; Evan F. Risko; Jillian M. Ware Visual attention and the neuroimage bias Journal Article In: PLoS ONE, vol. 8, no. 9, pp. e74449, 2013. @article{Baker2013, Several highly-cited experiments have presented evidence suggesting that neuroimages may unduly bias laypeople's judgments of scientific research. This finding has been especially worrisome to the legal community in which neuroimage techniques may be used to produce evidence of a person's mental state. However, a more recent body of work has looked directly at the independent impact of neuroimages on layperson decision-making (both in legal and more general arenas) and has failed to find evidence of bias. To help resolve these conflicting findings, this research uses eye tracking technology to provide a measure of attention to different visual representations of neuroscientific data. Finding an effect of neuroimages on the distribution of attention would provide a potential mechanism for the influence of neuroimages on higher-level decisions. In the present experiment, a sample of laypeople viewed a vignette that briefly described a court case in which the defendant's actions might have been explained by a neurological defect. Accompanying these vignettes was either an MRI image of the defendant's brain, or a bar graph depicting levels of brain activity: two competing visualizations that have been the focus of much of the previous research on the neuroimage bias. We found that, while laypeople differentially attended to neuroimagery relative to the bar graph, this did not translate into differential judgments in a way that would support the idea of a neuroimage bias. |
Daniela Balslev; Bartholomäus Odoj; Hans-Otto Karnath Role of somatosensory cortex in visuospatial attention Journal Article In: Journal of Neuroscience, vol. 33, no. 46, pp. 18311–18318, 2013. @article{Balslev2013, The human somatosensory cortex (S1) is not among the brain areas usually associated with visuospatial attention. However, such a function can be presumed, given the recently identified eye proprioceptive input to S1 and the established links between gaze and attention. Here we investigated a rare patient with a focal lesion of the right postcentral gyrus that interferes with the processing of eye proprioception without affecting the ability to locate visual objects relative to her body or to execute eye movements. As a behavioral measure of spatial attention, we recorded fixation time during visual search and reaction time for visual discrimination in lateral displays. In contrast to a group of age-matched controls, the patient showed a gradient in looking time and in visual sensitivity toward the midline. Because an attention bias in the opposite direction, toward the ipsilesional space, occurs in patients with spatial neglect, in a second study, we asked whether the incidental coinjury of S1 together with the neglect-typical perisylvian lesion leads to a milder neglect. A voxelwise lesion behavior mapping analysis of a group of right-hemisphere stroke patients supported this hypothesis. The effect of an isolated S1 lesion on visual exploration and visual sensitivity as well as the modulatory role of S1 in spatial neglect suggest a role of this area in visuospatial attention. We hypothesize that the proprioceptive gaze signal in S1, although playing only a minor role in locating visual objects relative to the body, affects the allocation of attention in the visual space. |
Mark B. Neider; Cher Wee Ang; Michelle W. Voss; Ronald Carbonari; Arthur F. Kramer Training and transfer of training in rapid visual search for camouflaged targets Journal Article In: PLoS ONE, vol. 8, no. 12, pp. e83885, 2013. @article{Neider2013, Previous examinations of search under camouflage conditions have reported that performance improves with training and that training can engender near perfect transfer to similar, but novel camouflage-type displays [1]. What remains unclear, however, are the cognitive mechanisms underlying these training improvements and transfer benefits. On the one hand, improvements and transfer benefits might be associated with higher-level overt strategy shifts, such as through the restriction of eye movements to target-likely (background) display regions. On the other hand, improvements and benefits might be related to the tuning of lower-level perceptual processes, such as figure-ground segregation. To decouple these competing possibilities we had one group of participants train on camouflage search displays and a control group train on non-camouflage displays. Critically, search displays were rapidly presented, precluding eye movements. Before and following training, all participants completed transfer sessions in which they searched novel displays. We found that search performance on camouflage displays improved with training. Furthermore, participants who trained on camouflage displays suffered no performance costs when searching novel displays following training. Our findings suggest that training to break camouflage is related to the tuning of perceptual mechanisms and not strategic shifts in overt attention. |
Antje Nuthmann On the visual span during object search in real-world scenes Journal Article In: Visual Cognition, vol. 21, no. 7, pp. 803–837, 2013. @article{Nuthmann2013a, The current study investigated from how large a region around their current point of gaze viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and parameters of the search time function. |
Enkhbold Nyamsuren; Niels A. Taatgen The effect of visual representation style in problem-solving: A perspective from cognitive processes Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e80550, 2013. @article{Nyamsuren2013a, Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentations styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving. |
Enkhbold Nyamsuren; Niels A. Taatgen Set as an instance of a real-world visual-cognitive task Journal Article In: Cognitive Science, vol. 37, no. 1, pp. 146–175, 2013. @article{Nyamsuren2013, Complex problem solving is often an integration of perceptual processing and deliberate planning. But what balances these two processes, and how do novices differ from experts? We investigate the interplay between these two in the game of SET. This article investigates how people combine bottom-up visual processes and top-down planning to succeed in this game. Using combinatorial and mixed-effect regression analysis of eye-movement protocols and a cognitive model of a human player, we show that SET players deploy both bottom-up and top-down processes in parallel to accomplish the same task. The combination of competition and cooperation of both types of processes is a major factor of success in the game. Finally, we explore strategies players use during the game. Our findings suggest that within-trial strategy shifts can occur without the need of explicit meta-cognitive control, but rather implicitly as a result of evolving memory activations. |
Leigh A. Mrotek Following and intercepting scribbles: Interactions between eye and hand control Journal Article In: Experimental Brain Research, vol. 227, no. 2, pp. 161–174, 2013. @article{Mrotek2013, The smooth pursuit eye movement system appears to be importantly engaged during the planning and execution of interceptive hand movements. The present study sought to probe the interaction between eye and hand control systems by examining their responses during an interception task that included target speed perturbations. On 2/3 of trials, the target increased or decreased speed at various times, ranging from about 300 ms before to 150 ms after the onset of a finger movement that was directed to intercept the target and triggered by a GO signal. Additionally, the same 2D sum-of-sines target trajectories were followed with the eyes without interception. The smooth pursuit system responded more quickly if the target speed perturbation occurred earlier during the reaction time (i.e., near the time of the GO signal). Similarly, the finger movement began more quickly if target speed was increased earlier during the reaction time. For early perturbation conditions, the initial direction of the finger movement matched the predicted target intercept using the new target speed. For perturbations occurring after finger movement onset, the initial direction of finger movement did not match target interception; the finger path began to curve toward the perturbed target after about 150–200 ms. The results support the idea of an active process of visual target path extrapolation simultaneously used to guide both the eye and hand. |
Manon Mulckhuyse; Geert Crombez; Stefan Van der Stigchel Conditioned fear modulates visual selection Journal Article In: Emotion, vol. 13, no. 3, pp. 529–536, 2013. @article{Mulckhuyse2013, Eye movements reflect the dynamic interplay between top-down- and bottom-up-driven processes. For example, when we voluntarily move our eyes across the visual field, salient visual stimuli in the environment may capture our attention, our eyes, or modulate the trajectory of an eye movement. Previous research has shown that the behavioral relevance of a salient stimulus modulates these processes. This study investigated whether a stimulus signaling an aversive event modulates saccadic behavior. Using a differential fear-conditioning procedure, we presented a threatening (conditional stimulus: CS+) and a nonthreatening stimulus distractor (CS-) during an oculomotor selection task. The results show that short-latency saccades deviated more strongly toward the CS+ than toward the CS- distractor, whereas long-latency saccades deviated more strongly away from the CS+ than from the CS- distractor. Moreover, the CS+ distractor captured the eyes more often than the CS- distractor. Together, these results demonstrate that conditioned fear has a direct and immediate influence on visual selection. The findings are interpreted in terms of a neurobiological model of emotional visual processing. |
Romy Müller; Jens R. Helmert; Sebastian Pannasch; Boris M. Velichkovsky Gaze transfer in remote cooperation: Is it always helpful to see what your partner is attending to? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 7, pp. 1302–1316, 2013. @article{Mueller2013, Establishing common ground in remote cooperation is challenging because nonverbal means of ambiguity resolution are limited. In such settings, information about a partner's gaze can support cooperative performance, but it is not yet clear whether and to what extent the abundance of information reflected in gaze comes at a cost. Specifically, in tasks that mainly rely on spatial referencing, gaze transfer might be distracting and leave the partner uncertain about the meaning of the gaze cursor. To examine this question, we let pairs of participants perform a joint puzzle task. One partner knew the solution and instructed the other partner's actions by (1) gaze, (2) speech, (3) gaze and speech, or (4) mouse and speech. Based on these instructions, the acting partner moved the pieces under conditions of high or low autonomy. Performance was better when using either gaze or mouse transfer compared to speech alone. However, in contrast to the mouse, gaze transfer induced uncertainty, evidenced in delayed responses to the cursor. Also, participants tried to resolve ambiguities by engaging in more verbal effort, formulating more explicit object descriptions and fewer deictic references. Thus, gaze transfer seems to increase uncertainty and ambiguity, thereby complicating grounding in this spatial referencing task. The results highlight the importance of closely examining task characteristics when considering gaze transfer as a means of support. |
Jochen Müsseler; Jens Tiggelbeck The perceived onset position of a moving target: Effects of trial contexts are evoked by different attentional allocations Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 2, pp. 349–357, 2013. @article{Muesseler2013, Previous studies have shown that the localization of the perceived onset position of a moving target varies with the trial context. When the moving target appeared at predictable positions to the left or right of fixation (constant context), localization judgments of the perceived onset positions were essentially displaced in motion direction (Fröhlich effect). In contrast, when the target appeared at unpredictable positions in the visual field (random context), localization judgments were at least drastically reduced. Four explanations of this influence of trial context on localization judgments were examined in three experiments. Findings ruled out an overcompensation mechanism effective in random-context conditions, a predictive mechanism effective in constant-context conditions and a detrimental mechanism originating from more trial repetitions in constant-context conditions. Instead, the results indicated that different attentional allocations are responsible for the localization differences. They also demonstrated that attentional mechanisms are at the basis of the Fröhlich effect. |
Marnix Naber; George A. Alvarez; Ken Nakayama Tracking the allocation of attention using human pupillary oscillations Journal Article In: Frontiers in Psychology, vol. 4, pp. 1–12, 2013. @article{Naber2013, The muscles that control the pupil are richly innervated by the autonomic nervous system. While there are central pathways that drive pupil dilations in relation to arousal, there is no anatomical evidence that cortical centers involved with visual selective attention innervate the pupil. In this study, we show that such connections must exist. Specifically, we demonstrate a novel Pupil Frequency Tagging (PFT) method, where oscillatory changes in stimulus brightness over time are mirrored by pupil constrictions and dilations. We find that the luminance-induced pupil oscillations are enhanced when covert attention is directed to the flicker stimulus and when targets are correctly detected in an attentional tracking task. These results suggest that the amplitudes of pupil responses closely follow the allocation of focal visual attention and the encoding of stimuli. PFT provides a new opportunity to study top-down visual attention itself as well as identifying the pathways and mechanisms that support this unexpected phenomenon. |
Marnix Naber; Stefan Frassle; Ueli Rutishauser; Wolfgang Einhäuser Pupil size signals novelty and predicts later retrieval success for declarative memories of natural scenes Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 1–20, 2013. @article{Naber2013a, Declarative memories of personal experiences are a key factor in defining oneself as an individual, which becomes particularly evident when this capability is impaired. Assessing the physiological mechanisms of human declarative memory is typically restricted to patients with specific lesions and requires invasive brain access or functional imaging. We investigated whether the pupil, an accessible physiological measure, can be utilized to probe memories for complex natural visual scenes. During memory encoding, scenes that were later remembered elicited a stronger pupil constriction compared to scenes that were later forgotten. Thus, pupil size predicts success or failure of memory formation. In contrast, novel scenes elicited stronger pupil constriction than familiar scenes during retrieval. When viewing previously memorized scenes, those that were forgotten (misjudged as novel) still elicited stronger pupil constrictions than those correctly judged as familiar. Furthermore, pupil constriction was influenced more strongly if images were judged with high confidence. Thus, we propose that pupil constriction can serve as a marker of novelty. Since stimulus novelty modulates the efficacy of memory formation, our pupil measurements during learning indicate that the later forgotten images were perceived as less novel than the later remembered pictures. Taken together, our data provide evidence that pupil constriction is a physiological correlate of a neural novelty signal during formation and retrieval of declarative memories for complex, natural scenes. |
Marnix Naber; Ken Nakayama Pupil responses to high-level image content Journal Article In: Journal of Vision, vol. 13, no. 6, pp. 1–8, 2013. @article{Naber2013b, The link between arousal and pupil dilation is well studied, but it is less known that other cognitive processes can trigger pupil responses. Here we present evidence that pupil responses can be induced by high-level scene processing, independent of changes in low-level features or arousal. In Experiment 1, we recorded changes in pupil diameter of observers while they viewed a variety of natural scenes with or without a sun that were presented either upright or inverted. Image inversion had the strongest effect on the pupil responses. The pupil constricted more to the onset of upright images as compared to inverted images. Furthermore, the amplitudes of pupil constrictions to viewing images containing a sun were larger relative to control images. In Experiment 2, we presented cartoon versions of upright and inverted pictures that included either a sun or a moon. The image backgrounds were kept identical across conditions. Similar to Experiment 1, upright images triggered pupil constrictions with larger amplitudes than inverted images and images of the sun evoked greater pupil contraction than images of the moon. We suggest that the modulations of pupil responses were due to higher-level interpretations of image content. |
Tamami Nakano; Noriko Higashida; Shigeru Kitazawa Facilitation of face recognition through the retino-tectal pathway Journal Article In: Neuropsychologia, vol. 51, no. 10, pp. 2043–2049, 2013. @article{Nakano2013, Humans can shift their gazes faster to human faces than to non-face targets during a task in which they are required to choose between face and non-face targets. However, it remains unclear whether a direct projection from the retina to the superior colliculus is specifically involved in this facilitated recognition of faces. To address this question, we presented a pair of face and non-face pictures to participants modulated in greyscale (luminance-defined stimuli) in one condition and modulated in a blue-yellow scale (S-cone-isolating stimuli) in another. The information of the S-cone-isolating stimuli is conveyed through the retino-geniculate pathway rather than the retino-tectal pathway. For the luminance stimuli, the reaction time was shorter towards a face than towards a non-face target. The facilitatory effect while choosing a face disappeared with the S-cone stimuli. Moreover, fearful faces elicited a significantly larger facilitatory effect relative to neutral faces, when the face (with or without emotion) and non-face stimuli were presented in greyscale. The effect of emotional expressions disappeared with the S-cone stimuli. In contrast to the S-cone stimuli, the face facilitatory effect was still observed with negated stimuli that were prepared by reversing the polarity of the original colour pictures and looked as unusual as the S-cone stimuli but still contained luminance information. These results demonstrate that the face facilitatory effect requires the facial and emotional information defined by luminance, suggesting that the luminance information conveyed through the retino-tectal pathway is responsible for the faster recognition of human faces. |
Kristin Michod Gagnier; Christopher A. Dickinson; Helene Intraub Fixating picture boundaries does not eliminate boundary extension: Implications for scene representation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 11, pp. 2161–2186, 2013. @article{MichodGagnier2013, Observers frequently remember seeing more of a scene than was shown (boundary extension). Does this reflect a lack of eye fixations to the boundary region? Single-object photographs were presented for 14-15 s each. Main objects were either whole or slightly cropped by one boundary, creating a salient marker of boundary placement. All participants expected a memory test, but only half were informed that boundary memory would be tested. Participants in both conditions made multiple fixations to the boundary region and the cropped region during study. Demonstrating the importance of these regions, test-informed participants fixated them sooner, longer, and more frequently. Boundary ratings (Experiment 1) and border adjustment tasks (Experiments 2-4) revealed boundary extension in both conditions. The error was reduced, but not eliminated, in the test-informed condition. Surprisingly, test knowledge and multiple fixations to the salient cropped region, during study and at test, were insufficient to overcome boundary extension on the cropped side. Results are discussed within a traditional visual-centric framework versus a multisource model of scene perception. |
Sébastien Miellet; Luca Vizioli; Lingnan He; Xinyue Zhou; Roberto Caldara Mapping face recognition information use across cultures Journal Article In: Frontiers in Psychology, vol. 4, pp. 34, 2013. @article{Miellet2013, Face recognition is not rooted in a universal eye movement information-gathering strategy. Western observers favor a local facial feature sampling strategy, whereas Eastern observers prefer sampling face information from a global, central fixation strategy. Yet, the precise qualitative (the diagnostic) and quantitative (the amount) information underlying these cultural perceptual biases in face recognition remains undetermined. To this end, we monitored the eye movements of Western and Eastern observers during a face recognition task, with a novel gaze-contingent technique: the Expanding Spotlight. We used 2° Gaussian apertures centered on the observers' fixations expanding dynamically at a rate of 1° every 25 ms at each fixation - the longer the fixation duration, the larger the aperture size. Identity-specific face information was only displayed within the Gaussian aperture; outside the aperture, an average face template was displayed to facilitate saccade planning. Thus, the Expanding Spotlight simultaneously maps out the facial information span at each fixation location. Data obtained with the Expanding Spotlight technique confirmed that Westerners extract more information from the eye region, whereas Easterners extract more information from the nose region. Interestingly, this quantitative difference was paired with a qualitative disparity. Retinal filters based on spatial-frequency decomposition built from the fixations maps revealed that Westerners used local high-spatial-frequency information sampling, covering all the features critical for effective face recognition (the eyes and the mouth). In contrast, Easterners achieved a similar result by using global low-spatial-frequency information from those facial features. Our data show that the face system flexibly engages into local or global eye movement strategies across cultures, by relying on distinct facial information span and culturally tuned spatially filtered information. Overall, our findings challenge the view of a unique putative process for face recognition. |
Jared E. Miller; Laura A. Carlson; J. Devin McAuley When what you hear influences when you see: Listening to an auditory rhythm influences the temporal allocation of visual attention Journal Article In: Psychological Science, vol. 24, no. 1, pp. 11–18, 2013. @article{Miller2013, The three experiments reported here demonstrated a cross-modal influence of an auditory rhythm on the temporal allocation of visual attention. In Experiment 1, participants moved their eyes to a test dot with a temporal onset that was either synchronous or asynchronous with a preceding auditory rhythm. Saccadic latencies were faster for the synchronous condition than for the asynchronous conditions. In Experiment 2, the effect was replicated in a condition in which the auditory context stopped prior to the onset of the test dot, and the effect did not occur in a condition in which auditory tones were presented at irregular intervals. Experiment 3 replicated the effect using an accuracy measure within a nontimed visual task. Together, the experiments' findings support a general entrainment perspective on attention to events over time. |
Louise O'Hare; Alasdair D. F. Clarke; Paul B. Hibbard Visual search and visual discomfort Journal Article In: Perception, vol. 42, no. 1, pp. 1–15, 2013. @article{OHare2013, Certain visual stimuli evoke perceptions of discomfort in non-clinical populations. We investigated the impact of stimuli previously judged as uncomfortable by non-clinical populations on a visual search task. One stimulus that has been shown to affect discomfort judgments is noise that has been filtered to have particular statistical properties (Juricevic et al, 2010 Perception 39 884-899). A second type of stimulus associated with visual discomfort is striped patterns (Wilkins et al, 1984 Brain 107 989-1017). These stimuli were used as backgrounds in a visual search task, to determine their influence on search performance. Results showed that, while striped backgrounds did have an impact on visual search performance, this depended on the similarity between the target and background in orientation and spatial frequency. We found no evidence for a more generalised effect of discomfort on performance. |
Bettina Olk Measuring the allocation of attention in the Stroop task: Evidence from eye movement patterns Journal Article In: Psychological Research, vol. 77, no. 2, pp. 106–115, 2013. @article{Olk2013, Attention plays a crucial role in the Stroop task, which requires attending to less automatically processed task-relevant attributes of stimuli and the suppression of involuntary processing of task-irrelevant attributes. The experiment assessed the allocation of attention by monitoring eye movements throughout congruent and incongruent trials. Participants viewed two stimulus arrays that differed regarding the number of items and their numerical value and judged by manual response which of the arrays contained more items, while disregarding their value. Different viewing patterns were observed between congruent (e.g., larger array of numbers with higher value) and incongruent (e.g., larger array of numbers with lower value) trials. The direction of first saccades was guided by task-relevant information but in the incongruent condition directed more frequently towards task-irrelevant information. The data further suggest that the difference in the deployment of attention between conditions changes throughout a trial, likely reflecting the impact and resolution of the conflict. For instance, stimulus arrays in line with the correct response were attended for longer, and fixations were longer for incongruent trials, both for the second fixation and considering all fixations. By the time of the correct response, this latter difference between conditions was absent. Possible mechanisms underlying eye movement patterns are discussed. |
Hans P. Op de Beeck; Ben Vermaercke; Daniel G. Woolley; Nicole Wenderoth Combinatorial brain decoding of people's whereabouts during visuospatial navigation Journal Article In: Frontiers in Neuroscience, vol. 7, pp. 78, 2013. @article{OpdeBeeck2013, Complex behavior typically relies upon many different processes which are related to activity in multiple brain regions. In contrast, neuroimaging analyses typically focus upon isolated processes. Here we present a new approach, combinatorial brain decoding, in which we decode complex behavior by combining the information which we can retrieve from the neural signals about the many different sub-processes. The case in point is visuospatial navigation. We explore the extent to which the route travelled by human subjects (N = 3) in a complex virtual maze can be decoded from activity patterns as measured with functional magnetic resonance imaging. Preliminary analyses suggest that it is difficult to directly decode spatial position from regions known to contain an explicit cognitive map of the environment, such as the hippocampus. Instead, we were able to indirectly derive spatial position from the pattern of activity in visual and motor cortex. The non-spatial representations in these regions reflect processes which are inherent to navigation, such as which stimuli are perceived at which point in time and which motor movement is executed when (e.g., turning left at a crossroad). Highly successful decoding of routes followed through the maze was possible by combining information about multiple aspects of navigation events across time and across multiple cortical regions. This "proof of principle" study highlights how visuospatial navigation is related to the combined activity of multiple brain regions, and establishes combinatorial brain decoding as a means to study complex mental events that involve a dynamic interplay of many cognitive processes. |
Jorge Otero-Millan; Stephen L. Macknik; Rachel E. Langston; Susana Martinez-Conde An oculomotor continuum from exploration to fixation Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 15, pp. 6175–6180, 2013. @article{OteroMillan2013, During visual exploration, saccadic eye movements scan the scene for objects of interest. During attempted fixation, the eyes are relatively still but often produce microsaccades. Saccadic rates during exploration are higher than those of microsaccades during fixation, reinforcing the classic view that exploration and fixation are two distinct oculomotor behaviors. An alternative model is that fixation and exploration are not dichotomous, but are instead two extremes of a functional continuum. Here, we measured the eye movements of human observers as they either fixed their gaze on a small spot or scanned natural scenes of varying sizes. As scene size diminished, so did saccade rates, until they were continuous with microsaccadic rates during fixation. Other saccadic properties varied as a function of image size as well, forming a continuum with microsaccadic parameters during fixation. This saccadic continuum extended to nonrestrictive, ecological viewing conditions that allowed all types of saccades and fixation positions. Eye movement simulations moreover showed that a single model of oculomotor behavior can explain the saccadic continuum from exploration to fixation, for images of all sizes. These findings challenge the view that exploration and fixation are dichotomous, suggesting instead that visual fixation is functionally equivalent to visual exploration on a spatially focused scale. |
Andrew P. Bayliss; Emily Murphy; Claire K. Naughtin; Ada Kritikos; Leonhard Schilbach; Stefanie I. Becker Gaze leading: Initiating simulated joint attention influences eye movements and choice behavior Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 1, pp. 76–92, 2013. @article{Bayliss2013, Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator–the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon and reorienting toward a face are affected by whether that individual face shifts its eyes in a congruent or an incongruent direction in response to the participant's eye movement. Gaze leading also biased affective responses toward the faces and attended objects. These findings demonstrate that leading the eyes of other individuals alters how we explore and evaluate our social environment. |
Stefanie I. Becker Simply shapely: Relative, not absolute shapes are primed in pop-out search Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 5, pp. 845–861, 2013. @article{Becker2013, Visual search is typically faster when the target from the previous trial is repeated than when it changes. This priming effect is commonly attributed to a selection bias for the target feature value or against the nontarget feature value that carries over to the next trial. By contrast, according to a relational account, what is primed in visual search is the target-nontarget relationship-namely, the feature that the target has in relation to the features in the nontarget context (e.g., larger, darker, redder)-and switch costs occur only when the target-nontarget relations reverse across trials. Here, the relational account was tested against current feature-based views in three eye movement experiments that used different shape search tasks (e.g., geometrical figures varying in the number of corners). For all tested shapes, reversing the target-nontarget relationships produced switch costs of the same magnitude as directly switching the target and nontarget features across trials ("full-switch"). In particular, changing only the nontargets produced large switch costs, even when the target feature was always repeated across trials. By contrast, no switch costs were observed when both the target and nontarget features changed, such that the coarse target-nontarget relations remained constant across trials. These results support the relational account over feature-based accounts of priming and indicate that a target's shape can be encoded relative to the shapes in the nontarget context. |
Stefanie I. Becker; Ulrich Ansorge Higher set sizes in pop-out search displays do not eliminate priming or enhance target selection Journal Article In: Vision Research, vol. 81, pp. 18–28, 2013. @article{Becker2013a, Previous research shows that salient stimuli do not pop out solely in virtue of their feature contrast. Rather, visual selection of a pop-out target is strongly modulated by feature priming: Repeating the target feature (e.g., red) across trials primes attention shifts to the target but delays target selection when the target feature changes (e.g., from red to green). However, it has been argued that priming modulated target selection only because the stimuli were too sparsely packed, suggesting that pop-out is still mostly determined by the target's saliency (i.e., local feature contrast). Here, we tested these different views by measuring the observer's eye movements in search for a colour target (Exp. 1) or size target (Exp. 2), when the target was similar versus dissimilar to the nontargets, and when the displays contained 6 or 12 search items. The results showed that making the target less similar to the nontargets indeed eliminated priming effects in search for colour, but not in search for size. Moreover, increasing the set size neither increased search efficiency nor eliminated feature priming effects. Taken together, the results indicated that priming can still modulate target selection even in search for salient targets. |
Stefanie I. Becker; Charles L. Folk; Roger W. Remington Attentional capture does not depend on feature similarity, but on target-nontarget relations Journal Article In: Psychological Science, vol. 24, no. 5, pp. 634–647, 2013. @article{Becker2013b, What factors determine which stimuli of a scene will be visually selected and become available for conscious perception? The currently prevalent view is that attention operates on specific feature values, so attention will be drawn to stimuli that have features similar to those of the sought-after target. Here, we show that, instead, attentional capture depends on whether a distractor's feature relationships match the target-nontarget relations (e.g., redder). In three spatial-cuing experiments, we found that (a) a cue with the target color (e.g., orange) can fail to capture attention when the cue-cue-context relations do not match the target-nontarget relations (e.g., redder target vs. yellower cue), whereas (b) a cue with the nontarget color can capture attention when its relations match the target-nontarget relations (e.g., both are redder). These results support a relational account in which attention is biased toward feature relationships instead of particular feature values, and show that attentional capture by an irrelevant distractor does not depend on feature similarity, but rather depends on whether the distractor matches or mismatches the target's relative attributes (e.g., relative color). |
Artem V. Belopolsky; Stefan Van der Stigchel Saccades curve away from previously inhibited locations: Evidence for the role of priming in oculomotor competition Journal Article In: Journal of Neurophysiology, vol. 110, no. 10, pp. 2370–2377, 2013. @article{Belopolsky2013, The oculomotor system serves as the basis for representing concurrently competing motor programs. Here, we examine whether the oculomotor system also keeps track of the outcome of competition between target and distractor on the previous trial. Participants had to perform a simple task of making a saccade toward a predefined direction. On two-thirds of the trials, an irrelevant distractor was presented to either the left or right of the fixation. On one-third of the trials, no distractor was present. The results show that on trials without a distractor, saccades curved away from the empty location that was occupied by a distractor on the previous trial. This result was replicated and extended to cases when different saccade directions were used. In addition, we show that repetition of distractor location on the distractor-present trials results in a stronger curvature away and in a shorter saccade latency to the target. Taken together, these results provide strong evidence that the oculomotor system automatically codes and retains locations that had been ignored in the past to bias future behavior. |
Daniel Belyusar; Adam C. Snyder; Hans Peter Frey; Mark R. Harwood; Josh Wallman; John J. Foxe Oscillatory alpha-band suppression mechanisms during the rapid attentional shifts required to perform an anti-saccade task Journal Article In: NeuroImage, vol. 65, pp. 395–407, 2013. @article{Belyusar2013, Neuroimaging has demonstrated anatomical overlap between covert and overt attention systems, although behavioral and electrophysiological studies have suggested that the two systems do not rely on entirely identical circuits or mechanisms. In a parallel line of research, topographically-specific modulations of alpha-band power (~8–14 Hz) have been consistently correlated with anticipatory states during tasks requiring covert attention shifts. These tasks, however, typically employ cue-target-interval paradigms where attentional processes are examined across relatively protracted periods of time and not at the rapid timescales implicated during overt attention tasks. The anti-saccade task, where one must first covertly attend for a peripheral target, before executing a rapid overt attention shift (i.e. a saccade) to the opposite side of space, is particularly well-suited for examining the rapid dynamics of overt attentional deployments. Here, we asked whether alpha-band oscillatory mechanisms would also be associated with these very rapid overt shifts, potentially representing a common neural mechanism across overt and covert attention systems. High-density electroencephalography in conjunction with infra-red eye-tracking was recorded while participants engaged in both pro- and anti-saccade task blocks. Alpha power, time-locked to saccade onset, showed three distinct phases of significantly lateralized topographic shifts, all occurring within a period of less than 1 s, closely reflecting the temporal dynamics of anti-saccade performance. Only two such phases were observed during the pro-saccade task. 
These data point to substantially more rapid temporal dynamics of alpha-band suppressive mechanisms than previously established, and implicate oscillatory alpha-band activity as a common mechanism across both overt and covert attentional deployments. |
Nicola C. Anderson; Walter F. Bischof; Kaitlin E. W. Laidlaw; Evan F. Risko; Alan Kingstone Recurrence quantification analysis of eye movements Journal Article In: Behavior Research Methods, vol. 45, pp. 842–856, 2013. @article{Anderson2013, Recurrence quantification analysis (RQA) has been successfully used for describing dynamic systems that are too complex to be characterized adequately by standard methods in time series analysis. More recently, RQA has been used for analyzing the coordination of gaze patterns between cooperating individuals. Here, we extend RQA to the characterization of fixation sequences, and we show that the global and local temporal characteristics of fixation sequences can be captured by a small number of RQA measures that have a clear interpretation in this context. We applied RQA to the analysis of a study in which observers looked at different scenes under natural or gaze-contingent viewing conditions, and we found large differences in the RQA measures between the viewing conditions, indicating that RQA is a powerful new tool for the analysis of the temporal patterns of eye movement behavior. |
Ulrich Ansorge; Heinz-Werner Priess; Dirk Kerzel Effects of relevant and irrelevant color singletons on inhibition of return and attentional capture Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1687–1702, 2013. @article{Ansorge2013, We tested whether color singletons lead to saccadic and manual inhibition of return (IOR; i.e., slower responses at cued locations) and whether IOR depended on the relevance of the color singletons. The target display was preceded by a nonpredictive cue display. In three experiments, half of the cues were response-relevant, because participants had to perform a discrimination task at the cued location. With the exception of Experiment 2, none of the cue colors matched the target color. We observed saccadic IOR after color singletons, which was greater for slow than for fast responses. Furthermore, when the relevant cue color matched the target color, we observed attentional capture (i.e., faster responses at cued locations) with rapid responses, but IOR with slower responses, which provides evidence for attentional deallocation. When the cue display was completely response-irrelevant in two additional experiments, we did not find evidence for IOR. Instead, we found attentional capture when the cue color matched the target color. Also, attentional capture was greater for rapid responses and with short cue-target intervals. Thus, IOR emerges when cues are relevant and do not match the target color, whereas attentional capture emerges with relevant and irrelevant cues that match the target color. |
Katharina Anton-Erxleben; Katrin Herrmann; Marisa Carrasco Independent Effects of Adaptation and Attention on Perceived Speed Journal Article In: Psychological Science, vol. 24, no. 2, pp. 150–159, 2013. @article{AntonErxleben2013, Adaptation and attention are two mechanisms by which sensory systems manage limited bioenergetic resources: Whereas adaptation decreases sensitivity to stimuli just encountered, attention increases sensitivity to behaviorally relevant stimuli. In the visual system, these changes in sensitivity are accompanied by a change in the appearance of different stimulus dimensions, such as speed. Adaptation causes an underestimation of speed, whereas attention leads to an overestimation of speed. In the two experiments reported here, we investigated whether the effects of these mechanisms interact and how they affect the appearance of stimulus features. We tested the effects of adaptation and the subsequent allocation of attention on perceived speed. A quickly moving adaptor decreased the perceived speed of subsequent stimuli, whereas a slow adaptor did not alter perceived speed. Attention increased perceived speed regardless of the adaptation effect, which indicates that adaptation and attention affect perceived speed independently. Moreover, the finding that attention can alter perceived speed after adaptation indicates that adaptation is not merely a by-product of neuronal fatigue. |
Wei-Ying Chen; Piers D. Howe; Alex O. Holcombe Resource demands of object tracking and differential allocation of the resource Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 4, pp. 710–725, 2013. @article{Chen2013a, The attentional processes for tracking moving objects may be largely hemisphere-specific. Indeed, in our first two experiments the maximum object speed (speed limit) for tracking targets in one visual hemifield (left or right) was not significantly affected by a requirement to track additional targets in the other hemifield. When the additional targets instead occupied the same hemifield as the original targets, the speed limit was reduced. At slow target speeds, however, adding a second target to the same hemifield had little effect. At high target speeds, the cost of adding a same-hemifield second target was approximately as large as would occur if observers could only track one of the targets. This shows that performance with a fast-moving target is very sensitive to the amount of resource allocated. In a third experiment, we investigated whether the resources for tracking can be distributed unequally between two targets. The speed limit for a given target was higher if the second target was slow rather than fast, suggesting that more resource was allocated to the faster of the two targets. This finding was statistically significant only for targets presented in the same hemifield, consistent with the theory of independent resources in the two hemifields. Some limited evidence was also found for resource sharing across hemifields, suggesting that attentional tracking resources may not be entirely hemifield-specific. Together, these experiments indicate that the largely hemisphere-specific tracking resource can be differentially allocated to faster targets. |
Joey T. Cheng; Jessica L. Tracy; Tom Foulsham; Alan Kingstone; Joseph Henrich Two ways to the top: Evidence that dominance and prestige are distinct yet viable avenues to social rank and influence Journal Article In: Journal of Personality and Social Psychology, vol. 104, no. 1, pp. 103–125, 2013. @article{Cheng2013, The pursuit of social rank is a recurrent and pervasive challenge faced by individuals in all human societies. Yet, the precise means through which individuals compete for social standing remains unclear. In 2 studies, we investigated the impact of 2 fundamental strategies-Dominance (the use of force and intimidation to induce fear) and Prestige (the sharing of expertise or know-how to gain respect)-on the attainment of social rank, which we conceptualized as the acquisition of (a) perceived influence over others (Study 1), (b) actual influence over others' behaviors (Study 1), and (c) others' visual attention (Study 2). Study 1 examined the process of hierarchy formation among a group of previously unacquainted individuals, who provided round-robin judgments of each other after completing a group task. Results indicated that the adoption of either a Dominance or Prestige strategy promoted perceptions of greater influence, by both group members and outside observers, and higher levels of actual influence, based on a behavioral measure. These effects were not driven by popularity; in fact, those who adopted a Prestige strategy were viewed as likable, whereas those who adopted a Dominance strategy were not well liked. In Study 2, participants viewed brief video clips of group interactions from Study 1 while their gaze was monitored with an eye tracker. Dominant and Prestigious targets each received greater visual attention than targets low on either dimension. Together, these findings demonstrate that Dominance and Prestige are distinct yet viable strategies for ascending the social hierarchy, consistent with evolutionary theory. |
Dana L. Chesney; Nicole M. McNeil; James R. Brockmole; Ken Kelley An eye for relations: Eye-tracking indicates long-term negative effects of operational thinking on understanding of math equivalence Journal Article In: Memory & Cognition, vol. 41, no. 7, pp. 1079–1095, 2013. @article{Chesney2013, Prior knowledge in the domain of mathematics can sometimes interfere with learning and performance in that domain. One of the best examples of this phenomenon is in students' difficulties solving equations with operations on both sides of the equal sign. Elementary school children in the U.S. typically acquire incorrect, operational schemata rather than correct, relational schemata for interpreting equations. Researchers have argued that these operational schemata are never unlearned and can continue to affect performance for years to come, even after relational schemata are learned. In the present study, we investigated whether and how operational schemata negatively affect undergraduates' performance on equations. We monitored the eye movements of 64 undergraduate students while they solved a set of equations that are typically used to assess children's adherence to operational schemata (e.g., 3 + 4 + 5 = 3 + __). Participants did not perform at ceiling on these equations, particularly when under time pressure. Converging evidence from performance and eye movements showed that operational schemata are sometimes activated instead of relational schemata. Eye movement patterns reflective of the activation of relational schemata were specifically lacking when participants solved equations by adding up all the numbers or adding the numbers before the equal sign, but not when they used other types of incorrect strategies. These findings demonstrate that the negative effects of acquiring operational schemata extend far beyond elementary school. |
Kimberly S. Chiew; Todd S. Braver Temporal dynamics of motivation-cognitive control interactions revealed by high-resolution pupillometry Journal Article In: Frontiers in Psychology, vol. 4, pp. 15, 2013. @article{Chiew2013, Motivational manipulations, such as the presence of performance-contingent reward incentives, can have substantial influences on cognitive control. Previous evidence suggests that reward incentives may enhance cognitive performance specifically through increased preparatory, or proactive, control processes. The present study examined reward influences on cognitive control dynamics in the AX-Continuous Performance Task (AX-CPT), using high-resolution pupillometry. In the AX-CPT, contextual cues must be actively maintained over a delay in order to appropriately respond to ambiguous target probes. A key feature of the task is that it permits dissociable characterization of preparatory, proactive control processes (i.e., utilization of context) and reactive control processes (i.e., target-evoked interference resolution). Task performance profiles suggested that reward incentives enhanced proactive control (context utilization). Critically, pupil dilation was also increased on reward incentive trials during context maintenance periods, suggesting trial-specific shifts in proactive control, particularly when context cues indicated the need to overcome the dominant target response bias. Reward incentives had both transient (i.e., trial-by-trial) and sustained (i.e., block-based) effects on pupil dilation, which may reflect distinct underlying processes. The transient pupillary effects were present even when comparing against trials matched in task performance, suggesting a unique motivational influence of reward incentives. These results suggest that pupillometry may be a useful technique for investigating reward motivational signals and their dynamic influence on cognitive control. |
Ian Cunnings; Claudia Felser The role of working memory in the processing of reflexives Journal Article In: Language and Cognitive Processes, vol. 28, no. 9, pp. 188–219, 2013. @article{Cunnings2013, We report results from two eye-movement experiments that examined how differences in working memory (WM) capacity affect readers' application of structural constraints on reflexive anaphor resolution during sentence comprehension. We examined whether binding Principle A, a syntactic constraint on the interpretation of reflexives, is reducible to a memory friendly "recency" strategy, and whether WM capacity influences the degree to which readers create anaphoric dependencies ruled out by binding theory. Our results indicate that low and high WM span readers applied Principle A early during processing. However, contrary to previous findings, low span readers also showed immediate intrusion effects of a linearly closer but structurally inaccessible competitor antecedent. We interpret these findings as indicating that although the relative prominence of potential antecedents in WM can affect online anaphor resolution, Principle A is not reducible to a processing or linear distance based "least effort" constraint. |
Kirsten A. Dalrymple; Alexander K. Gray; Brielle L. Perler; Elina Birmingham; Walter F. Bischof; Jason J. S. Barton; Alan Kingstone Eyeing the eyes in social scenes: Evidence for top-down control of stimulus selection in simultanagnosia Journal Article In: Cognitive Neuropsychology, vol. 30, no. 1, pp. 25–40, 2013. @article{Dalrymple2013, Simultanagnosia is a disorder of visual attention resulting from bilateral parieto-occipital lesions. Healthy individuals look at eyes to infer people's attentional states, but simultanagnosics allocate abnormally few fixations to eyes in scenes. It is unclear why simultanagnosics fail to fixate eyes, but it might reflect that they are (a) unable to locate and fixate them, or (b) do not prioritize attentional states. We compared eye movements of simultanagnosic G.B. to those of healthy subjects viewing scenes normally or through a restricted window of vision. They described scenes and explicitly inferred attentional states of people in scenes. G.B. and subjects viewing scenes through a restricted window made few fixations on eyes when describing scenes, yet increased fixations on eyes when inferring attention. Thus G.B. understands that eyes are important for inferring attentional states and can exert top-down control to seek out and process the gaze of others when attentional states are of interest. |
Ido Davidesco; Michal Harel; Michal Ramot; Uri Kramer; Svetlana Kipervasser; Fani Andelman; Miri Y. Neufeld; Gadi Goelman; Itzhak Fried; Rafael Malach Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy Journal Article In: Journal of Neuroscience, vol. 33, no. 3, pp. 1228–1240, 2013. @article{Davidesco2013, One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30–90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. 
They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties. |
Wesley K. Burge; Lesley A. Ross; Franklin R. Amthor; William G. Mitchell; Alexander Zotov; Kristina M. Visscher Processing speed training increases the efficiency of attentional resource allocation in young adults Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 684, 2013. @article{Burge2013, Cognitive training has been shown to improve performance on a range of tasks. However, the mechanisms underlying these improvements are still unclear. Given the wide range of transfer effects, it is likely that these effects are due to a factor common to a wide range of tasks. One such factor is a participant's efficiency in allocating limited cognitive resources. The impact of a cognitive training program, Processing Speed Training (PST), on the allocation of resources to a set of visual tasks was measured using pupillometry in 10 young adults as compared to a control group of 10 young adults (n = 20). PST is a well-studied computerized training program that involves identifying simultaneously presented central and peripheral stimuli. As training progresses, the task becomes increasingly difficult by including peripheral distracting stimuli and decreasing the duration of stimulus presentation. Analysis of baseline data confirmed that pupil diameter reflected cognitive effort. After training, participants randomized to PST used fewer attentional resources to perform complex visual tasks as compared to the control group. These pupil diameter data indicated that PST appears to increase the efficiency of attentional resource allocation. Increases in cognitive efficiency have been hypothesized to underlie improvements following experience with action video games, and improved cognitive efficiency has been hypothesized to underlie the benefits of PST in older adults. These data reveal that these training schemes may share a common underlying mechanism of increasing cognitive efficiency in younger adults. |
Manuel G. Calvo; Andrés Fernández-Martín Can the eyes reveal a person's emotions? Biasing role of the mouth expression Journal Article In: Motivation and Emotion, vol. 37, no. 1, pp. 202–211, 2013. @article{Calvo2013, In this study we investigated how perception of the eye expression in a face is influenced by the mouth expression, even when only the eyes are directly looked at. The same eyes appeared in a face with either an incongruent smiling, angry, or sad mouth, a congruent mouth, or no mouth. Attention was directed to the eyes by means of cueing and there were no fixations on the mouth. Participants evaluated whether the eyes were happy (or angry, or sad) or not. Results indicated that the smile biased the evaluation of the eyes towards happiness to a greater extent than an angry or a sad mouth did towards anger or sadness. The smiling mouth was also more visually salient than the angry and the sad mouths. We conclude that the role of the eyes as a 'window' to a person's emotional and motivational state is constrained and distorted by the configural projection of an expressive mouth, and that this effect is enhanced by the high visual saliency of the smile. |
Manuel G. Calvo; Andrés Fernández-Martín; Lauri Nummenmaa A smile biases the recognition of eye expressions: Configural projection from a salient mouth Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 6, pp. 1159–1181, 2013. @article{Calvo2013a, A smile is visually highly salient and grabs attention automatically. We investigated how extrafoveally seen smiles influence the viewers' perception of non-happy eyes in a face. A smiling mouth appeared in composite faces with incongruent non-happy (fearful, neutral, etc.) eyes, thus producing blended expressions, or it appeared in intact faces with genuine expressions. Attention to the eye region was spatially cued while foveal vision of the mouth was blocked by gaze-contingent masking. Participants judged whether the eyes were happy or not. Results indicated that the smile biased the evaluation of the eye expression: The same non-happy eyes were more likely to be judged as happy and categorized more slowly as not happy in a face with a smiling mouth than in a face with a non-smiling mouth or with no mouth. This bias occurred when the mouth and the eyes appeared simultaneously and aligned, but also to some extent when they were misaligned and when the mouth appeared after the eyes. We conclude that the highly salient smile projects to other facial regions, thus influencing the perception of the eye expression. Projection serves spatial and temporal integration of face parts and changes. |
E. Camara; Sanjay G. Manohar; Masud Husain Past rewards capture spatial attention and action choices Journal Article In: Experimental Brain Research, vol. 230, no. 3, pp. 291–300, 2013. @article{Camara2013, The desire to increase rewards and minimize punishing events is a powerful driver in behaviour. Here, we assess how the value of a location affects subsequent deployment of goal-directed attention as well as involuntary capture of attention on a trial-to-trial basis. By tracking eye position, we investigated whether the ability of an irrelevant, salient visual stimulus to capture gaze (stimulus-driven attention) is modulated by that location's previous value. We found that distractors draw attention to them significantly more if they appear at a location previously associated with a reward, even when gazing towards them now leads to punishments. Within the same experiment, it was possible to demonstrate that a location associated with a reward can also bias subsequent goal-directed attention (indexed by action choices) towards it. Moreover, individuals who were vulnerable to being distracted by previous reward history, as indexed by oculomotor capture, were also more likely to direct their actions to those locations when they had a free choice. Even when the number of initial responses made to rewarded and punished stimuli was equalized, the effects of previous reward history on both distractibility and action choices remained. Finally, a covert attention task requiring button-press responses rather than overt gaze shifts demonstrated the same pattern of findings. Thus, past rewards can act to modulate both subsequent stimulus-driven as well as goal-directed attention. These findings reveal that there can be surprising short-term costs of using reward cues to regulate behaviour. They show that current valence information, if maintained inappropriately, can have negative subsequent effects, with attention and action choices being vulnerable to capture and bias, mechanisms that are of potential importance in understanding distractibility and abnormal action choices. |
Ian G. M. Cameron; Donald C. Brien; Kira Links; Sarah Robichaud; Jennifer D. Ryan; Douglas P. Munoz; Tiffany W. Chow Changes to saccade behaviors in Parkinson's disease following dancing and observation of dancing Journal Article In: Frontiers in Neurology, vol. 4, pp. 22, 2013. @article{Cameron2013, BACKGROUND: The traditional view of Parkinson's disease (PD) as a motor disorder only treated by dopaminergic medications is now shifting to include non-pharmacologic interventions. We have noticed that patients with PD obtain an immediate, short-lasting benefit to mobility by the end of a dance class, suggesting some mechanism by which dancing reduces bradykinetic symptoms. We have also found that patients with PD are unimpaired at initiating highly automatic eye movements to visual stimuli (pro-saccades) but are impaired at generating willful eye movements away from visual stimuli (anti-saccades). We hypothesized that the mechanisms by which a dance class improves movement initiation may generalize to the brain networks impacted in PD (frontal lobe and basal ganglia, BG), and thus could be assessed objectively by measuring eye movements, which rely on the same neural circuitry. METHODS: Participants with PD performed pro- and anti-saccades before, and after, a dance class. "Before" and "after" saccade performance measurements were compared. These measurements were then contrasted with a control condition (observing a dance class in a video), and with older and younger adult populations, who rested for an hour between measurements. RESULTS: We found an improvement in anti-saccade performance following the observation of dance (but not following dancing), but we found a detriment in pro-saccade performance following dancing. CONCLUSION: We suggest that observation of dance induced plasticity changes in frontal-BG networks that are important for executive control. Dancing, in contrast, increased voluntary movement signals that benefited mobility, but interfered with the automaticity of efficient pro-saccade execution. |
Rodrigo A. Cárdenas; Lauren Julius Harris; Mark W. Becker Sex differences in visual attention toward infant faces Journal Article In: Evolution and Human Behavior, vol. 34, no. 4, pp. 280–287, 2013. @article{Cardenas2013, Parental care and alloparental care are major evolutionary dimensions of the biobehavioral repertoire of many species, including human beings. Despite their importance in the course of human evolution and the likelihood that they have significantly shaped human cognition, the nature of the cognitive mechanisms underlying alloparental care is still largely unexplored. In this study, we examined whether one such cognitive mechanism is a visual attentional bias toward infant features, and if so, whether and how it is related to the sex of the adult and the adult's self-reported interest in infants. We used eye-tracking to measure the eye movements of nulliparous undergraduates while they viewed pairs of faces consisting of one adult face (a man or woman) and one infant face (a boy or girl). Subjects then completed two questionnaires designed to measure their interest in infants. Results showed, consistent with the significance of alloparental care in human evolution, that nulliparous adults have an attentional bias toward infants. Results also showed that women's interest in and attentional bias towards infants were stronger and more stable than men's. These findings are consistent with the hypothesis that, due to their central role in infant care, women have evolved a greater and more stable sensitivity to infants. The results also show that eye movements can be successfully used to assess individual differences in interest in infants. |
Thomas C. Cassey; David R. Evens; Rafal Bogacz; James A. R. Marshall; Casimir J. H. Ludwig Adaptive sampling of information during perceptual decision-making Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e78993, 2013. @article{Cassey2013, In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy. |
C. Cavina-Pratesi; Constanze Hesse Why do the eyes prefer the index finger? Simultaneous recording of eye and hand movements during precision grasping Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–15, 2013. @article{CavinaPratesi2013, Previous research investigating eye movements when grasping objects with precision grip has shown that we tend to fixate close to the contact position of the index finger on the object. It has been hypothesized that this behavior is related to the fact that the index finger usually describes a more variable trajectory than the thumb and therefore requires a higher amount of visual monitoring. We wished to directly test this prediction by creating a grasping task in which either the index finger or the thumb described a more variable trajectory. Experiment 1 showed that the trajectory variability of the digits can be manipulated by altering the direction from which the hand approaches the object. If the start position is located in front of the object (hand-before), the index finger produces a more variable trajectory. In contrast, when the hand approaches the object from a starting position located behind it (hand-behind), the thumb produces a more variable movement path. In Experiment 2, we tested whether the fixation pattern during grasping is altered in conditions in which the trajectory variability of the two digits is reversed. Results suggest that regardless of the trajectory variability, the gaze was always directed toward the contact position of the index finger. Notably, we observed that regardless of our starting position manipulation, the index finger was the first digit to make contact with the object. Hence, we argue that time to contact (and not movement variability) is the crucial parameter which determines where we look during grasping. |
Jelmer P. De Vries; Ignace T. C. Hooge; Alexander H. Wertheim; Frans A. J. Verstraten Background, an important factor in visual search Journal Article In: Vision Research, vol. 86, pp. 128–138, 2013. @article{DeVries2013, The ability to detect an object depends on the contrast between the object and its background. Despite this, many models of visual search rely solely on the properties of target and distractors, and do not take the background into account. Yet, both target and distractors have their individual contrasts with the background. These contrasts generally differ, because the target and distractors are different in at least one feature. Therefore, background is likely to play an important role in visual search. In three experiments we manipulated the properties of the background (luminance, orientation and spatial frequency, respectively) while keeping the target and distractors constant. In the first experiment, in which target and distractors had a different luminance, changing the background luminance had an extensive effect on search times. When background luminance was in between that of the target and distractors, search times were always short. Interestingly, when the background was darker than both the target and the distractors, search times were much longer than when the background was lighter. Manipulating orientation and spatial frequency of the background, on the other hand, resulted in search times that were longest for small target-background differences. Thus, background plays an important role in search. This role depends on the individual contrast of both target and distractors with the background and the type of feature contrast (luminance, orientation or spatial frequency). |
Alixia Demichelis; Gérard Olivier; Alain Berthoz Motor transfer from map ocular exploration to locomotion during spatial navigation from memory Journal Article In: Experimental Brain Research, vol. 224, no. 4, pp. 605–611, 2013. @article{Demichelis2013, Spatial navigation from memory can rely on two different strategies: a mental simulation of a kinesthetic spatial navigation (egocentric route strategy) or visual-spatial memory using a mental map (allocentric survey strategy). We hypothesized that a previously performed "oculomotor navigation" on a map could be used by the brain to perform a locomotor memory task. Participants were instructed to (1) learn a path on a map through a sequence of vertical and horizontal eyes movements and (2) walk on the slabs of a "magic carpet" to recall this path. The main results showed that the anisotropy of ocular movements (horizontal ones being more efficient than vertical ones) influenced performances of participants when they change direction on the central slab of the magic carpet. These data suggest that, to find their way through locomotor space, subjects mentally repeated their past ocular exploration of the map, and this visuo-motor memory was used as a template for the locomotor performance. |
Joost C. Dessing; Michael Vesia; J. Douglas Crawford The role of areas MT+/V5 and SPOC in spatial and temporal control of manual interception: An rTMS study Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 15, 2013. @article{Dessing2013, Manual interception, such as catching or hitting an approaching ball, requires the hand to contact a moving object at the right location and at the right time. Many studies have examined the neural mechanisms underlying the spatial aspects of goal-directed reaching, but the neural basis of the spatial and temporal aspects of manual interception are largely unknown. Here, we used repetitive transcranial magnetic stimulation (rTMS) to investigate the role of the human middle temporal visual motion area (MT+/V5) and superior parieto-occipital cortex (SPOC) in the spatial and temporal control of manual interception. Participants were required to reach-to-intercept a downward moving visual target that followed an unpredictably curved trajectory, presented on a screen in the vertical plane. We found that rTMS to MT+/V5 influenced interceptive timing and positioning, whereas rTMS to SPOC only tended to increase the spatial variance in reach end points for selected target trajectories. These findings are consistent with theories arguing that distinct neural mechanisms contribute to spatial, temporal, and spatiotemporal control of manual interception. |
Saurabh Dhawan; Heiner Deubel; Donatas Jonikaitis Inhibition of saccades elicits attentional suppression Journal Article In: Journal of Vision, vol. 13, no. 6, pp. 1–12, 2013. @article{Dhawan2013, Visuospatial attention has been shown to have a central role in planning and generation of saccades, but what role, if any, it plays in inhibition of saccades remains unclear. In this study, we used an oculomotor delayed match- or nonmatch-to-sample task in which a cued location has to be encoded and memorized for one of two very different goals: to plan a saccade to it or to avoid making a saccade to it. We measured the spatial allocation of attention during the delay and found that while marking a location as a future saccade target resulted in an attentional benefit at that location, marking it as forbidden to saccades led to an attentional cost. Additionally, saccade trajectories were found to deviate away more from the "don't look" location than from a saccade-irrelevant distractor, confirming greater inhibition of an actively forbidden location in oculomotor programming. Our finding that attention is suppressed at locations forbidden to saccades confirms and complements the claim of a selective and obligatory coupling between saccades and attention: saccades at the memorized location could neither be planned nor suppressed independent of a corresponding effect on attentional performance. |
L. L. Di Stasi; M. Marchitto; A. Antolí; J. J. Cañas Saccadic peak velocity as an alternative index of operator attention: A short review Journal Article In: European Review of Applied Psychology, vol. 63, no. 6, pp. 335–343, 2013. @article{DiStasi2013, Introduction: Automation research has identified the need to monitor operator attentional states in real time as a basis for determining the most appropriate type and level of automated assistance for operators doing complex tasks. Objective: The development of a methodology that is able to detect on-line operator attentional state variations could represent a good starting point to solve this critical issue. Results: We present a short review of the literature on different indices of attentional state and discuss a series of experiments that demonstrates the validity and sensitivity of a specific eye movement index: saccadic peak velocity (PV). PV was able to detect variations in mental state while doing complex and ecological tasks, ranging from air traffic control simulated tasks to driving simulator sessions. Conclusion: This research could provide several guidelines for designing adaptive systems (able to allocate tasks between operators and machine in a dynamic way) and early fatigue-and-distraction warning systems to reduce accident risk. |
Christopher A. Dickinson; Gregory J. Zelinsky New evidence for strategic differences between static and dynamic search tasks: An individual observer analysis of eye movements Journal Article In: Frontiers in Psychology, vol. 4, pp. 8, 2013. @article{Dickinson2013, Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers' oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d' values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. |
Steve Dipaola; Caitlin Riebe; James T. Enns Following the masters: Portrait viewing and appreciation is guided by selective detail Journal Article In: Perception, vol. 42, no. 6, pp. 608–630, 2013. @article{Dipaola2013, A painted portrait differs from a photo in that selected regions are often rendered in much sharper detail than other regions. Artists believe these choices guide viewer gaze and influence their appreciation of the portrait, but these claims are difficult to test because increased portrait detail is typically associated with greater meaning, stronger lighting, and a more central location in the composition. In three experiments we monitored viewer gaze and recorded viewer preferences for portraits rendered with a parameterised non-photorealistic technique to mimic the style of Rembrandt (DiPaola, 2009 International Journal of Art and Technology 2 82-93). Results showed that viewer gaze was attracted to and held longer by regions of relatively finer detail (experiment 1), and also by textural highlighting (experiment 2), and that artistic appreciation increased when portraits strongly biased gaze (experiment 3). These findings have implications for understanding both human vision science and visual art. |
Susanne Bergert How do our brain hemispheres cooperate to avoid false memories? Journal Article In: Cortex, vol. 49, no. 2, pp. 572–581, 2013. @article{Bergert2013, Memories are not always as reliable as they may appear. The occurrence of false memories can be reduced, however, by enhancing the cooperation between the two brain hemispheres. Yet is the communication from left to right hemisphere as helpful as the information transfer from right to left? To address this question, 72 participants were asked to learn 16 word lists. Applying the Deese–Roediger–McDermott paradigm, the words in each list were associated with an unpresented prototype word. In the test condition, learned words and corresponding prototypes were presented along with non-associated new words, and participants were asked to indicate which of the words they recognized. Crucially, both study and test words were projected to only one hemisphere in order to stimulate each hemisphere separately. It was found that false recognitions occurred significantly less often when the right hemisphere studied and the left hemisphere recognized the stimuli. Moreover, only the right-to-left direction of interhemispheric communication reduced false memories significantly, whereas left-to-right exchange did not. Further analyses revealed that the observed reduction of false memories was not due to an enhanced discrimination sensitivity, but to a stricter response bias. Hence, the data suggest that interhemispheric cooperation does not improve the ability to tell old and new apart, but rather evokes a conservative response tendency. Future studies may narrow down in which cognitive processing steps interhemispheric interaction can change the response criterion. |
Raymond Bertram; Laura Helle; Johanna K. Kaakinen; Erkki Svedström The effect of expertise on eye movement behaviour in medical image perception Journal Article In: PLoS ONE, vol. 8, no. 6, pp. e66169, 2013. @article{Bertram2013, The present eye-movement study assessed the effect of expertise on eye-movement behaviour during image perception in the medical domain. To this end, radiologists, computed-tomography radiographers and psychology students were exposed to nine volumes of multi-slice, stack-view, axial computed-tomography images from the upper to the lower part of the abdomen with or without abnormality. The images were presented in succession at low, medium or high speed, while the participants had to detect enlarged lymph nodes or other visually more salient abnormalities. The radiologists outperformed both other groups in the detection of enlarged lymph nodes and their eye-movement behaviour also differed from the other groups. Their general strategy was to use saccades of shorter amplitude than the two other participant groups. In the presence of enlarged lymph nodes, they increased the number of fixations on the relevant areas and reverted to even shorter saccades. In volumes containing enlarged lymph nodes, radiologists' fixation durations were longer in comparison to their fixation durations in volumes without enlarged lymph nodes. More salient abnormalities were detected equally well by radiologists and radiographers, with both groups outperforming psychology students. However, to accomplish this, radiologists actually needed fewer fixations on the relevant areas than the radiographers. On the basis of these results, we argue that expert behaviour is manifested in distinct eye-movement patterns of proactivity, reactivity and suppression, depending on the nature of the task and the presence of abnormalities at any given moment. |
Adam T. Biggs; James R. Brockmole; Jessica K. Witt Armed and attentive: Holding a weapon can bias attentional priorities in scene viewing Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1715–1724, 2013. @article{Biggs2013, The action-specific perception hypothesis (Witt, Current Directions in Psychological Science 20: 201-206, 2011) claims that the environment is represented with respect to potential interactions for objects present within said environment. This investigation sought to extend the hypothesis beyond perceptual mechanisms and assess whether action-specific potential could alter attentional allocation. To do so, we examined a well-replicated attention bias in the weapon focus effect (Loftus, Loftus, & Messo, Law and Human Behaviour 1, 55-62, 1987), which represents the tendency for observers to attend more to weapons than to neutral objects. Our key manipulation altered the anticipated action-specific potential of observers by providing them a firearm while they freely viewed scenes with and without weapons present. We replicated the original weapon focus effect using modern eye tracking and confirmed that the increase in time looking at weapons comes at a cost of less time spent looking at faces. Additionally, observers who held firearms while viewing the various scenes showed a general bias to look at faces over objects, but only if the firearm was in a readily usable position (i.e., pointed at the scenes rather than holstered at one's side). These two effects, weapon focus and the newly found bias to look more at faces when armed, canceled out one another without interacting. This evidence confirms that the action capabilities of the observer alter more than just perceptual mechanisms and that holding a weapon can change attentional priorities. Theoretical and real-world implications are discussed. |
Markus Bindemann; Michael B. Lewis Face detection differs from categorization: Evidence from visual search in natural scenes Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 6, pp. 1140–1145, 2013. @article{Bindemann2013, In this study, we examined whether the detection of frontal, ¾, and profile face views differs from their categorization as faces. In Experiment 1, we compared three tasks that required observers to determine the presence or absence of a face, but varied in the extents to which participants had to search for the faces in simple displays and in small or large scenes to make this decision. Performance was equivalent for all of the face views in simple displays and small scenes, but it was notably slower for profile views when this required the search for faces in extended scene displays. This search effect was confirmed in Experiment 2, in which we compared observers' eye movements with their response times to faces in visual scenes. These results demonstrate that the categorization of faces at fixation is dissociable from the detection of faces in space. Consequently, we suggest that face detection should be studied with extended visual displays, such as natural scenes. |
Patrick G. Bissett; Gordon D. Logan Stop before you leap: Changing eye and hand movements requires stopping Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 941–946, 2013. @article{Bissett2013, The search-step paradigm addresses the processes involved in changing movement plans, usually saccadic eye-movements. Subjects move their eyes to a target (T1) among distractors, but when the target steps to a new location (T2), subjects are instructed to move their eyes directly from fixation to the new location. We ask whether moving to T2 requires a separate stop process that inhibits the movement to T1. It need not. The movement plan for the second response may inhibit the first response. To distinguish these hypotheses, we decoupled the offset of T1 from the onset of T2. If the second movement is sufficient to inhibit the first, then the probability of responding to T1 should depend only on T2 onset. If a separate stop process is required, then the probability of responding to T1 should depend only on T1 offset, which acts as a stop signal. We tested these hypotheses in manual and saccadic search-step tasks and found that the probability of responding to T1 depended most strongly on T1 offset, supporting the hypothesis that changing from one movement plan to another involves a separate stop process that inhibits the first plan. |
B. Bonev; Lewis L. Chuang; F. Escolano How do image complexity, task demands and looking biases influence human gaze behavior? Journal Article In: Pattern Recognition Letters, vol. 34, no. 7, pp. 723–730, 2013. @article{Bonev2013, In this paper we propose an information-theoretic approach to understand eye-movement patterns, in relation to the task performed and image complexity. We commence with the analysis of the distributions and amplitudes of eye-movement saccades, performed across two different image-viewing tasks: free viewing and visual search. Our working hypothesis is that the complexity of image information and task demands should interact. This should be reflected in the Markovian pattern of short and long saccades. We compute high-order Markovian models of performing a large saccade after many short ones and also propose a novel method for quantifying image complexity. The analysis of the interaction between high-order Markovianity, task and image complexity supports our hypothesis. |
John Christie; Matthew D. Hilchey; Raymond M. Klein Inhibition of return is at the midpoint of simultaneous cues Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1610–1618, 2013. @article{Christie2013, When multiple cues are presented simultaneously, Klein, Christie, and Morris (Psychonomic Bulletin & Review 12:295-300, 2005) found a gradient of inhibition (of return, IOR), with the slowest simple manual detection responses occurring to targets in the direction of the center of gravity of the cues. Here, we explored the possibility of extending this finding to the saccade response modality, using methods of data analysis that allowed us to consider the relative contributions of the distance from the target to the center of gravity of the array of cues and the nearest element in the cue array. We discovered that the bulk of the IOR effect with multiple cues, in both the previous and present studies, can be explained by the distance between the target and the center of gravity of the cue array. The present results are consistent with the proposal advanced by Klein et al. (2005) suggesting that this IOR effect is due to population coding in the oculomotor pathways (e.g., the superior colliculus) driving the eye movement system toward the center of gravity of the cued array. |
Christopher D. Cowper-Smith; Gail A. Eskes; David A. Westwood Motor inhibition of return can affect prepared reaching movements Journal Article In: Neuroscience Letters, vol. 541, pp. 83–86, 2013. @article{CowperSmith2013a, Inhibition of return (IOR) is a widely studied phenomenon that is thought to affect attention, eye movements, or reaching movements, in order to promote orienting responses toward novel stimuli. Previous research in our laboratory demonstrated that the motor form of saccadic IOR can arise from late-stage response execution processes. In the present study, we were interested in whether the same is true of reaching responses. If IOR can emerge from processes operating at or around the time of response execution, then IOR should be observed even when participants have fully prepared their responses in advance of the movement initiation signal. Similar to the saccadic system, our results reveal that IOR can be implemented as a late-stage execution bias in the reaching control system. |
Sabine Born; Ulrich Ansorge; Dirk Kerzel Predictability of spatial and non-spatial target properties improves perception in the pre-saccadic interval Journal Article In: Vision Research, vol. 91, pp. 93–101, 2013. @article{Born2013, In a dual-task paradigm with a perceptual discrimination task and a concurrent saccade task, we examined participants' ability to make use of prior knowledge of a critical property of the perceptual target to improve discrimination. Previous research suggests that during a short time window before a saccade, covert attention is imperatively directed towards the saccade target location. Consequently, discrimination of perceptual targets at the saccade target location is better than at other locations. We asked whether the obligatory pre-saccadic attention shift prevents perceptual benefits arising for perceptual target stimuli with predictable as opposed to non-predictable properties. We compared conditions in which the color or location of the perceptual target was constant to conditions in which those properties varied randomly across trials. In addition to the expected improvements of perception at the saccade target location, we found perception to be better with constant than with random properties of the perceptual target. Thus, color or location information about an upcoming perceptual target facilitates perception even while spatial attention is shifted to the saccade target. The improvement occurred irrespective of the saccade target location, which suggests that the underlying mechanism is independent of the pre-saccadic attention shift, but alternative interpretations are discussed as well. |
Janet H. Bultitude; Stefan Van der Stigchel; Tanja C. W. Nijboer Prism adaptation alters spatial remapping in healthy individuals: Evidence from double-step saccades Journal Article In: Cortex, vol. 49, no. 3, pp. 759–770, 2013. @article{Bultitude2013, The visual system is able to represent and integrate large amounts of information as we move our gaze across a scene. This process, called spatial remapping, enables the construction of a stable representation of our visual environment despite constantly changing retinal images. Converging evidence implicates the parietal lobes in this process, with the right hemisphere having a dominant role. Indeed, lesions to the right parietal lobe (e.g., leading to hemispatial neglect) frequently result in deficits in spatial remapping. Research has demonstrated that recalibrating visual, proprioceptive and motor reference frames using prism adaptation ameliorates neglect symptoms and induces neglect-like performance in healthy people - one example of the capacity for rapid neural plasticity in response to new sensory demands. Because of the influence of prism adaptation on parietal functions, the present research investigates whether prism adaptation alters spatial remapping in healthy individuals. To this end, twenty-eight undergraduates completed blocks of a double-step saccade (DSS) task after sham adaptation and adaptation to leftward- or rightward-shifting prisms. The results were consistent with an impairment in spatial remapping for left visual field targets following adaptation to leftward-shifting prisms. These results suggest that temporarily realigning spatial representations using sensory-motor adaptation alters right-hemisphere remapping processes in healthy individuals. The implications for the possible mechanisms of the amelioration of hemispatial neglect after prism adaptation are discussed. |
Antimo Buonocore; Robert D. McIntosh Attention modulates saccadic inhibition magnitude Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 6, pp. 1051–1059, 2013. @article{Buonocore2013, Visual transient events during ongoing eye movement tasks inhibit saccades within a precise temporal window, spanning from around 60-120 ms after the event, having maximum effect at around 90 ms. It is not yet clear to what extent this saccadic inhibition phenomenon can be modulated by attention. We studied the saccadic inhibition induced by a bright flash above or below fixation, during the preparation of a saccade to a lateralized target, under two attentional manipulations. Experiment 1 demonstrated that exogenous precueing of a distractor's location reduced saccadic inhibition, consistent with inhibition of return. Experiment 2 manipulated the relative likelihood that a distractor would be presented above or below fixation. Saccadic inhibition magnitude was relatively reduced for distractors at the more likely location, implying that observers can endogenously suppress interference from specific locations within an oculomotor map. We discuss the implications of these results for models of saccade target selection in the superior colliculus. |
David J. Hancock; Diane M. Ste-Marie Gaze behaviors and decision making accuracy of higher- and lower-level ice hockey referees Journal Article In: Psychology of Sport & Exercise, vol. 14, no. 1, pp. 66–71, 2013. @article{Hancock2013, Background: Gaze behaviors are often studied in athletes, but infrequently for sport officials. There is a need to better understand gaze behavior in refereeing in order to improve training and education related to visual search patterns, which have been argued to be related to decision making (Abernethy & Russell, 1987a). Objective: To examine gaze behaviors, decision accuracy, and decision sensitivity (using signal detection analysis) of ice hockey referees of varying skill levels in a laboratory setting. Design: Using an experimental design, we conducted multiple t-tests. Method: Higher-level (N = 15) and lower-level ice hockey referees (N = 15) wore a head-mounted eye movement recorder and made penalty/no penalty decisions related to ice hockey video clips on a computer screen. We recorded gaze behaviors, decision accuracy, and decision sensitivity for each participant. Results: Results of the t-tests indicated no group differences in gaze behaviors; however, higher-level referees made significantly more accurate decisions (both accuracy and sensitivity) than lower-level referees. Conclusion: Higher-level ice hockey referees are superior to lower-level referees on decision making, but referees do not differ on gaze behaviors. Possibly, higher-level referees process relevant decision making information more effectively. |
Anthony M. Harris; Roger W. Remington; Stefanie I. Becker Feature specificity in attentional capture by size and color Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–15, 2013. @article{Harris2013, Top-down guidance of visual attention has classically been thought to operate in a feature-specific manner. However, recent studies have shown that top-down visual attention can also be guided by information about target-nontarget feature relations (e.g., larger, redder, brighter). Here we recommend a minimal set of cues for differentiating between relational and feature-specific attentional guidance and examine contrasting predictions for the guidance of attention by size and color stimuli in a spatial cueing paradigm. In Experiment 1 we demonstrate that in search for size, when both feature-specific and relational strategies are available, participants adopt a relational search strategy. Experiment 2 shows that when feature-specific information is the only reliable information to guide attention to the target, participants are able to adopt a feature-specific set for size information. Finally, in Experiment 3 we extend our paradigm to differentiate between feature-specific and relational strategies in search for color. Together, these experiments help to clarify the conditions under which different attentional guidance strategies will be employed, and demonstrate a useful minimum cue requirement for differentiating between these two forms of top-down guidance. Implications for current theories of attention are discussed. |
William J. Harrison; Jason B. Mattingley; Roger W. Remington Eye movement targets are released from visual crowding Journal Article In: Journal of Neuroscience, vol. 33, no. 7, pp. 2927–2933, 2013. @article{Harrison2013, Our ability to recognize objects in peripheral vision is impaired when other objects are nearby (Bouma, 1970). This phenomenon, known as crowding, is often linked to interactions in early visual processing that depend primarily on the retinal position of visual stimuli (Pelli, 2008; Pelli and Tillman, 2008). Here we tested a new account that suggests crowding is influenced by spatial information derived from an extraretinal signal involved in eye movement preparation. We had human observers execute eye movements to crowded targets and measured their ability to identify those targets just before the eyes began to move. Beginning ∼50 ms before a saccade toward a crowded object, we found that not only was there a dramatic reduction in the magnitude of crowding, but the spatial area within which crowding occurred was almost halved. These changes in crowding occurred despite no change in the retinal position of target or flanking stimuli. Contrary to the notion that crowding depends on retinal signals alone, our findings reveal an important role for eye movement signals. Eye movement preparation effectively enhances object discrimination in peripheral vision at the goal of the intended saccade. These presaccadic changes may enable enhanced recognition of visual objects in the periphery during active search of visually cluttered environments. |
Josephine Hartwig; Katharina M. Schnitzspahn; Matthias Kliegel; Boris M. Velichkovsky; Jens R. Helmert I see you remembering: What eye movements can reveal about process characteristics of prospective memory Journal Article In: International Journal of Psychophysiology, vol. 88, no. 2, pp. 193–199, 2013. @article{Hartwig2013, Prospective memory performance describes the delayed execution of an intended action. As this requires a mixture of memory and attentional control functions, current research aims at delineating the specific processes associated with solving a prospective memory task. Therefore, the current study measured, analysed and compared eye movements of participants who performed a prospective memory, a free viewing, and a visual search task. By keeping the prospective memory cue and the context of the tasks constant, we aimed at putting the processes of solving prospective memory tasks into context. The results show that when a prospective memory task is missed, the continuous gaze behaviour is rather similar to the gaze behaviour during free viewing. When the prospective memory task is successfully solved, on the other hand, average gaze behaviour lies between free viewing and visual search. Furthermore, individual differences in eye movements were found between low and high performers. Our data suggest that a prospective memory task can be solved in different ways, and therefore different processes can be observed. |
Craig Hedge; Ute Leonards Using eye movements to explore switch costs in working memory Journal Article In: Journal of Vision, vol. 13, no. 4, pp. 1–19, 2013. @article{Hedge2013, Updating object locations in working memory (WM) is faster when the same object is updated twice in a row compared to updating another object. In analogy to repetition priming effects in perceptual attention, this object-switch cost in WM is thought of as being due to the necessity to shift attention internally from one object to another. However, evidence for this hypothesis is only indirect. Here, we used eye tracking and a classic model of perceptual attention to get a more direct handle on the different processes underlying switch costs in spatial WM. Eye-movement data revealed three different contributors to switch costs. First, overt attention was attracted initially towards locations of the previously updated object. Second, longer fixation periods preceded eye movements between locations of different objects as compared to (previous and new) locations of the same object, most likely due to disengaging and reorienting focal attention between objects. Third, longer dwell times at the to-be-updated location preceded manual responses for switch updates as compared to repeats, probably indicating increased uncertainty between competing sources of activity after the actual attention shift. Results can easily be interpreted with existing (perceptual) attention models that propose competitive activation in an attention map for target objects. |
Jennifer J. Heisz; Molly M. Pottruff; David I. Shore Females scan more than males: A potential mechanism for sex differences in recognition memory Journal Article In: Psychological Science, vol. 24, no. 7, pp. 1157–1163, 2013. @article{Heisz2013, Recognition-memory tests reveal individual differences in episodic memory; however, by themselves, these tests provide little information regarding the stage (or stages) in memory processing at which differences are manifested. We used eye-tracking technology, together with a recognition paradigm, to achieve a more detailed analysis of visual processing during encoding and retrieval. Although this approach may be useful for assessing differences in memory across many different populations, we focused on sex differences in face memory. Females outperformed males on recognition-memory tests, and this advantage was directly related to females' scanning behavior at encoding. Moreover, additional exposures to the faces reduced sex differences in face recognition, which suggests that males may be able to improve their recognition memory by extracting more information at encoding through increased scanning. A strategy of increased scanning at encoding may prove to be a simple way to enhance memory performance in other populations with memory impairment. |
Clayton Hickey; Wieske Zoest Reward-associated stimuli capture the eyes in spite of strategic attentional set Journal Article In: Vision Research, vol. 92, pp. 67–74, 2013. @article{Hickey2013, Theories of reinforcement learning have proposed that the association of reward to visual stimuli may cause these objects to become fundamentally salient and thus attention-drawing. A number of recent studies have investigated the oculomotor correlates of this reward-priming effect, but there is some ambiguity in this literature regarding the involvement of top-down attentional set. Existing paradigms tend to create a situation where participants are actively looking for a reward-associated stimulus before subsequently showing that this selective bias sustains when it no longer has strategic purpose. This perseveration of attentional set is potentially different in nature than the direct impact of reward proposed by theory. Here we investigate the effect of reward on saccadic selection in a paradigm where strategic attentional set is decoupled from the effect of reward. We find that during search for a uniquely oriented target, the receipt of reward following selection of a target characterized by an irrelevant unique color causes subsequent stimuli characterized by this color to be preferentially selected. Importantly, this occurs regardless of whether the color characterizes the target or distractor. Other analyses demonstrate that only features associated with correct selection of the target prime the target representation, and that the magnitude of this effect can be predicted by variability in saccadic indices of feedback processing. These results add to a growing literature demonstrating that reward guides visual selection, often in spite of our strategic efforts otherwise. |
Matthew D. Hilchey; Jason Satel; Jason Ivanoff; Raymond M. Klein On the nature of the delayed "inhibitory" cueing effects generated by uninformative arrows at fixation Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 3, pp. 593–600, 2013. @article{Hilchey2013, When the interval between a spatially uninformative arrow and a visual target is short (<500 ms), response times (RTs) are fastest when the arrow points to the target. When this interval exceeds 500 ms, there is a near-universal absence of an effect of the arrow on RTs. Contrary to this expected pattern of results, Taylor and Klein (J Exp Psychol Hum Percept Perform 26:1639-1656, 2000) observed that RTs were slowest when a to-be-localized visual target occurred in the direction of a fixated arrow presented 1 s earlier (i.e., an "inhibitory" cueing effect; ICE). Here we examined which factor(s) may have allowed the arrow to generate an ICE. Our experiments indicated that the ICE was a side effect of subthreshold response activation attributable to a task-induced association between the arrow and a keypress response. Because the cause of this ICE was more closely related to subthreshold keypress activation than to oculomotor activation, we considered that the effect might be more similar to the negative compatibility effect (NCE) than to inhibition of return (IOR). This similarity raises the possibility that classical IOR, when caused by a spatially uninformative peripheral onset event and measured by a keypress response to a subsequent onset, might represent, in part, another instance of an NCE. Serendipitously, we discovered that context (i.e., whether an uninformative peripheral onset could occur at the time of an uninformative central arrow) ultimately determined whether the "inhibitory" aftermath of automatic response activation would affect output or input pathways. |
Rumi Hisakata; Masahiko Terao; Ikuya Murakami Illusory position shift induced by motion within a moving envelope during smooth-pursuit eye movements Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 1–12, 2013. @article{Hisakata2013, The static envelope of a Gabor patch with a moving carrier appears to shift in the direction of the carrier motion; this phenomenon is known as the motion-induced position shift (De Valois & De Valois, 1991; Ramachandran & Anstis, 1990). This conventional stimulus configuration contains at least three covarying factors: the retinal carrier velocity, the environmental carrier velocity, and the carrier velocity relative to the envelope velocity, which happens to be zero. We manipulated these velocities independently to identify which is critical, and we measured the perceived position of the moving Gabor patch relative to a reference stimulus moving in the same direction at the same speed. In the first experiment, the position of the moving envelope observed with fixation appeared to shift in the direction of the carrier velocity relative to the envelope velocity. Furthermore, the illusion was more pronounced when the carrier moved in a direction opposite to that of the envelope. In the second and third experiments, we measured the illusion during smooth-pursuit eye movement in which the envelope was either static or moving, thereby dissociating retinal and environmental velocities. Under all conditions, the illusion occurred according to the envelope-relative velocity of the carrier. Additionally, the illusion was more pronounced when the carrier and envelope moved in opposite directions. We conclude that the carrier's envelope-relative velocity is the primary determinant of the motion-induced position shift. |
Jillian Hobson; Gillian Bruce; Stephen H. Butler A flicker change blindness task employing eye tracking reveals an association with levels of craving not consumption Journal Article In: Journal of Psychopharmacology, vol. 27, no. 1, pp. 93–97, 2013. @article{Hobson2013, We investigated attentional biases with a flicker paradigm, examining the proportion of alcohol relative to neutral changes detected. Furthermore, we examined how measures of the participants' initial orienting of attention and of their maintained attention relate to levels of alcohol consumption and subjective craving in social drinkers. The eye movements of 58 participants (24 male) were monitored whilst they completed a flicker-induced change blindness task using both simple stimuli and real world scenes, with both an alcohol and a neutral change competing for detection. When examined in terms of consumption levels, we observed that heavier social drinkers detected a higher proportion of alcohol related changes in real world scenes only. However, we also observed that levels of craving were not indicative of levels of consumption in social drinkers. Furthermore, also in real world scenes only, higher cravers detected a greater proportion of alcohol related changes compared to lower cravers, and were also quicker to initially fixate on alcohol related stimuli. Thus we conclude that processing biases in the orienting of attention to alcohol related stimuli were demonstrated in higher craving compared to lower craving social users in real world scenes. However, this was not related to the level of consumption as would be expected. These results highlight various methodological and conceptual issues to be considered in future research. |
Michael Dorr; Peter J. Bex Peri-saccadic natural vision Journal Article In: Journal of Neuroscience, vol. 33, no. 3, pp. 1211–1217, 2013. @article{Dorr2013, The fundamental role of the visual system is to guide behavior in natural environments. To optimize information transmission, many animals have evolved a non-homogeneous retina and serially sample visual scenes by saccadic eye movements. Such eye movements, however, introduce high-speed retinal motion and decouple external and internal reference frames. Until now, these processes have only been studied with unnatural stimuli, eye movement behavior, and tasks. These experiments confound retinotopic and geotopic coordinate systems and may probe a non-representative functional range. Here we develop a real-time, gaze-contingent display with precise spatiotemporal control over high-definition natural movies. In an active condition, human observers freely watched nature documentaries and indicated the location of periodic narrow-band contrast increments relative to their gaze position. In a passive condition under central fixation, the same retinal input was replayed to each observer by updating the video's screen position. Comparison of visual sensitivity between conditions revealed three mechanisms that the visual system has adapted to compensate for peri-saccadic vision changes. Under natural conditions we show that reduced visual sensitivity during eye movements can be explained simply by the high retinal speed during a saccade without recourse to an extra-retinal mechanism of active suppression; we give evidence for enhanced sensitivity immediately after an eye movement indicative of visual receptive fields remapping in anticipation of forthcoming spatial structure; and we demonstrate that perceptual decisions can be made in world rather than retinal coordinates. |
Feng Du; Yue Qi; Xingshan Li; Kan Zhang Dual processes of oculomotor capture by abrupt onset: Rapid involuntary capture and sluggish voluntary prioritization Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e80678, 2013. @article{Du2013, The present study showed that there are two distinctive processes underlying oculomotor capture by abrupt onset. When a visual mask between the cue and the target eliminates the unique luminance transient of an onset, the onset still attracts attention in a top-down fashion. This memory-based prioritization of onset is voluntarily controlled by the knowledge of target location. But when there is no visual mask between the cue and the target, the onset captures attention mainly in a bottom-up manner. This transient-driven capture of onset is involuntary because it occurs even when the onset is completely irrelevant to the target location. In addition, the present study demonstrated distinctive temporal characteristics for these two processes. The involuntary capture driven by luminance transients is rapid and brief, whereas the memory-based voluntary prioritization of onset is more sluggish and long-lived. |