All EyeLink Publications
Listed below are all 12,000+ peer-reviewed EyeLink research publications up to 2023 (with early 2024 included). You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking research grouped by field can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2010
Stefanie I. Becker Oculomotor capture by colour singletons depends on intertrial priming Journal Article In: Vision Research, vol. 50, no. 21, pp. 2116–2126, 2010.
In visual search, an irrelevant colour singleton captures attention when the colour of the distractor changes across trials (e.g., from red to green), but not when the colour remains constant (Becker, 2007). The present study shows that intertrial changes of the distractor colour also modulate oculomotor capture: an irrelevant colour singleton distractor was only selected more frequently than the inconspicuous nontargets (1) when its features had switched (compared to the previous trial), or (2) when the distractor had been presented at the same position as the target on the previous trial. These results throw doubt on the notion that colour distractors capture attention and the eyes because of their high feature contrast, which is available at an earlier point in time than information about specific feature values. Instead, attention and eye movements are apparently controlled by a system that operates on feature-specific information, and gauges the informativity of nominally irrelevant features.
Stefanie I. Becker; Charles L. Folk; Roger W. Remington The role of relational information in contingent capture Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1460–1476, 2010.
On the contingent capture account, top-down attentional control settings restrict involuntary attentional capture to items that match the features of the search target. Attention capture is involuntary, but contingent on goals and intentions. The observation that only target-similar items can capture attention has usually been taken to show that the content of the attentional control settings consists of specific feature values. In contrast, the present study demonstrates that the top-down target template can include information about the relationship between the target and nontarget features (e.g., redder, darker, larger). Several spatial cuing experiments show that a singleton cue that is less similar to the target but that shares the same relational property that distinguishes targets from nontargets can capture attention to the same extent as cues that are similar to the target. Moreover, less similar cues can even capture attention more than cues that are identical to the target when they are relationally better than identical cues. The implications for current theories of attentional capture and attentional guidance are discussed.
Marius Blanke; Ludwig Harsch; Jonas Knöll; Frank Bremmer Spatial perception during pursuit initiation Journal Article In: Vision Research, vol. 50, no. 24, pp. 2714–2720, 2010.
Spatial perception is modulated by eye movements. During smooth pursuit, perceived locations are shifted in the direction of the eye movement. During active fixation, visual space is perceptually compressed towards the fovea. In our present study, we were interested to determine the time course of spatial localization during pursuit initiation, i.e. the transition period from fixation to steady-state pursuit. Human observers had to localize briefly flashed targets around the time of pursuit initiation. Our data clearly show that pursuit-like mislocalization starts well before the onset of the eye movement. Our results point towards corollary discharge as the neural source of the observed perceptual effect.
Jens Blechert; Ulrich Ansorge; Brunna Tuschen-Caffier A body-related dot-probe task reveals distinct attentional patterns for bulimia nervosa and anorexia nervosa Journal Article In: Journal of Abnormal Psychology, vol. 119, no. 3, pp. 575–585, 2010.
We investigated body-related attentional biases in eating disorders by testing whether individuals with anorexia nervosa (AN
Yoram S. Bonneh; Tobias H. Donner; Dov Sagi; Moshe Fried; Alexander Cooperman; Amos Arieli Motion-induced blindness and microsaccades: Cause and effect Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–15, 2010.
It has been suggested that subjective disappearance of visual stimuli results from a spontaneous reduction of microsaccade rate causing image stabilization, enhanced adaptation, and a consequent fading. In motion-induced blindness (MIB), salient visual targets disappear intermittently when surrounded by a moving pattern. We investigated whether changes in microsaccade rate can account for MIB. We first determined that the moving mask does not affect microsaccade metrics (rate, magnitude, and temporal distribution). We then compared the dynamics of microsaccades during reported illusory disappearance (MIB) and physical disappearance (Replay) of a salient peripheral target. We found large modulations of microsaccade rate following perceptual transitions, whether illusory (MIB) or real (Replay). For MIB, the rate also decreased prior to disappearance and increased prior to reappearance. Importantly, MIB persisted in the presence of microsaccades although sustained microsaccade rate was lower during invisible than visible periods. These results suggest that the microsaccade system reacts to changes in visibility, but microsaccades also modulate MIB. The latter modulation is well described by a Poisson model of the perceptual transitions assuming that the probability for reappearance and disappearance is modulated following a microsaccade. Our results show that microsaccades counteract disappearance but are neither necessary nor sufficient to account for MIB.
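The Poisson-style model described in the Bonneh et al. abstract, in which the probabilities of disappearance and reappearance are modulated following a microsaccade, can be illustrated with a simple discrete-time simulation. This is a generic sketch, not the authors' model code; the hazard rates, modulation gain, and post-microsaccade window below are all illustrative assumptions.

```python
import numpy as np

def invisible_fraction(ms_rate, seed=0, T=2000.0, dt=0.01,
                       h_dis=0.3, h_rea=0.3, gain=4.0, window=0.3):
    """Fraction of time a target is perceptually invisible, for a
    two-state (visible/invisible) process whose transition hazards are
    modulated for `window` seconds after each microsaccade: the
    reappearance hazard is multiplied by `gain` and the disappearance
    hazard divided by `gain`. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    # Homogeneous Poisson microsaccade train (per-step probability).
    ms = rng.random(n) < ms_rate * dt
    since_ms = np.inf          # time since the last microsaccade
    visible = True
    invisible_steps = 0
    for i in range(n):
        since_ms = 0.0 if ms[i] else since_ms + dt
        modulated = since_ms < window
        if visible:
            h = h_dis / gain if modulated else h_dis
            if rng.random() < h * dt:
                visible = False
        else:
            invisible_steps += 1
            h = h_rea * gain if modulated else h_rea
            if rng.random() < h * dt:
                visible = True
    return invisible_steps / n
```

With these assumed parameters, raising the microsaccade rate shortens the invisible periods, consistent with the abstract's conclusion that microsaccades counteract disappearance without being necessary for it.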
Walter R. Boot; James R. Brockmole Irrelevant features at fixation modulate saccadic latency and direction in visual search Journal Article In: Visual Cognition, vol. 18, no. 4, pp. 481–491, 2010.
Do irrelevant visual features at fixation influence saccadic latency and direction? In a novel search paradigm, we found that when the feature of an irrelevant item at fixation matched the feature defining the target, oculomotor disengagement was delayed, and when it matched a salient distractor more eye movements were directed to that distractor. Latency effects were short-lived; direction effects persisted for up to 200 ms. We replicated latency results and demonstrated facilitated eye movements to the target when the fixated item matched the target colour. Irrelevant features of fixated items influence saccadic latency and direction and may be important considerations in predicting search behaviour.
Douwe P. Bergsma; G. J. Wildt Visual training of cerebral blindness patients gradually enlarges the visual field Journal Article In: British Journal of Ophthalmology, vol. 94, no. 1, pp. 88–96, 2010.
BACKGROUND: Multiple studies on recovery of hemianopsia after cerebrovascular accident report visual-field enlargements after stimulation of the visual-field border area. These enlargements are made evident by the difference between pre- and post-training measurements of the visual field. Until now, it was not known how the visual-field enlargement develops. AIM: To study how the enlargement develops as a function of time. METHODS: 11 subjects were trained by stimulating the border area of their visual-field defect using a Goldmann perimeter. The visual-field border location was assessed using dynamic Goldmann perimetry before, after and during training (after each 10th training session). To monitor eye fixation, a video-based eye-tracker was used during each complete perimetry session. RESULTS: It was found that visual-field enlargement during training is actually a gradual shift of the visual-field border, which was independent of the type of stimulus-set used during training. The shift could be observed while eye fixation was accurate. CONCLUSION: Visual-detection training leads to a decrease in detection thresholds in the affected visual-field areas and to visual-field enlargement. Training effects can be generalised to important daily-life activities like reading.
Mario Bettenbühl; Claudia Paladini; Konstantin Mergenthaler; Reinhold Kliegl; Ralf Engbert; Matthias Holschneider Microsaccade characterization using the continuous wavelet transform and principal component analysis Journal Article In: Journal of Eye Movement Research, vol. 3, no. 5, pp. 1–14, 2010.
During visual fixation on a target, humans perform miniature (or fixational) eye movements consisting of three components, i.e., tremor, drift, and microsaccades. Microsaccades are high velocity components with small amplitudes within fixational eye movements. However, microsaccade shapes and statistical properties vary between individual observers. Here we show that microsaccades can be formally represented by two significant shapes, which we identified using the mathematical definition of singularities to detect microsaccades in real data with the continuous wavelet transform. For characterization and model selection, we carried out a principal component analysis, which identified a step shape with an overshoot as first and a bump which regulates the overshoot as second component. We conclude that microsaccades are singular events with an overshoot component which can be detected by the continuous wavelet transform.
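The principal component analysis step in the Bettenbühl et al. abstract can be sketched with plain numpy: microsaccade-like position traces are stacked into a matrix, mean-centred, and decomposed by SVD. The synthetic traces here (a sigmoid step plus a Gaussian overshoot bump) are a toy model assumed for illustration, not the paper's recorded data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)   # normalised time axis, 100 samples

def synth_trace(amp, overshoot):
    """Toy microsaccade position trace: smooth step + transient
    overshoot bump + additive sensor noise (illustrative model only)."""
    step = amp / (1.0 + np.exp(-(t - 0.5) / 0.04))
    bump = overshoot * np.exp(-0.5 * ((t - 0.55) / 0.03) ** 2)
    return step + bump + rng.normal(0.0, 0.01, t.size)

# 500 traces with random step amplitude and overshoot size
X = np.stack([synth_trace(rng.uniform(0.2, 1.0), rng.uniform(0.0, 0.3))
              for _ in range(500)])

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)   # variance explained per component
```

Because these toy traces vary along only two latent dimensions (step amplitude and overshoot size), the first two principal components account for nearly all the variance, mirroring the two-component description in the abstract.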
Torsten Betz Investigating task-dependent top-down effects on overt visual attention Journal Article In: Journal of Vision, vol. 10, no. 3, pp. 1–14, 2010.
Different tasks can induce different viewing behavior, yet it is still an open question how, or whether at all, high-level task information interacts with the bottom-up processing of stimulus-related information. Two possible causal routes are considered in this paper. Firstly, the weak top-down hypothesis, according to which top-down effects are mediated by changes of feature weights in the bottom-up system. Secondly, the strong top-down hypothesis, which proposes that top-down information acts independently of the bottom-up process. To clarify the influences of these different routes, viewing behavior was recorded on web pages for three different tasks: free viewing, content awareness, and information search. The data reveal significant task-dependent differences in viewing behavior that are accompanied by minor changes in feature-fixation correlations. Extensive computational modeling shows that these small but significant changes are insufficient to explain the observed differences in viewing behavior. Collectively, the results show that task-dependent differences in the current setting are not mediated by a reweighting of features in the bottom-up hierarchy, ruling out the weak top-down hypothesis. Consequently, the strong top-down hypothesis is the most viable explanation for the observed data.
Markus Bindemann Scene and screen center bias early eye movements in scene viewing Journal Article In: Vision Research, vol. 50, no. 23, pp. 2577–2587, 2010.
In laboratory studies of visual perception, images of natural scenes are routinely presented on a computer screen. Under these conditions, observers look at the center of scenes first, which might reflect an advantageous viewing position for extracting visual information. This study examined an alternative possibility, namely that initial eye movements are drawn towards the center of the screen. Observers searched visual scenes in a person detection task, while the scenes were aligned with the screen center or offset horizontally (Experiment 1). Two central viewing effects were observed, reflecting early visual biases to the scene and the screen center. The scene effect was modified by person content but is not specific to person detection tasks, while the screen bias cannot be explained by the low-level salience of a computer display (Experiment 2). These findings support the notion of a central viewing tendency in scene analysis, but also demonstrate a bias to the screen center that forms a potential artifact in visual perception experiments.
Markus Bindemann; Christoph Scheepers; Heather J. Ferguson; A. Mike Burton Face, body, and center of gravity mediate person detection in natural scenes Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1477–1485, 2010.
Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene, and only then to fixate on a person. When a person's face was rendered invisible in scenes, bodies were detected as quickly as faces without bodies, indicating that both are equally useful for person detection. Detection was optimized when face and body could be seen, but observers preferentially fixated faces, reinforcing the notion of a prominent role for the face in social perception. These findings have implications for claims of attention capture by faces in that they demonstrate a mediating influence of body cues and general scanning principles in natural scenes.
A. J. Austin; Theodora Duka Mechanisms of attention for appetitive and aversive outcomes in Pavlovian conditioning Journal Article In: Behavioural Brain Research, vol. 213, no. 1, pp. 19–26, 2010.
Different mechanisms of attention controlling learning have been proposed in appetitive and aversive conditioning. The aim of the present study was to compare attention and learning in a Pavlovian conditioning paradigm using visual stimuli of varying predictive value of either monetary reward (appetitive conditioning; 10p or 50p) or blast of white noise (aversive conditioning; 97 dB or 102 dB). Outcome values were matched across the two conditions with regard to their emotional significance. Sixty-four participants were allocated to one of the four conditions matched for age and gender. All participants underwent a discriminative learning task using pairs of visual stimuli that signalled a 100%, 50%, or 0% probability of receiving an outcome. Learning was measured using a 9-point Likert scale of expectancy of the outcome, while attention was measured using an eye tracker. Arousal and emotional conditioning were also evaluated. Dwell time was greatest for the full predictor in the noise groups, while in the money groups attention was greatest for the partial predictor over the other two predictors. The progression of learning was the same for both groups. These findings suggest that in aversive conditioning attention is driven by the predictive salience of the stimulus, while in appetitive conditioning attention is error-driven, when the emotional value of the outcome is comparable.
Ava-Ann Allman; Chawki Benkelfat; France Durand; Igor Sibon; Alain Dagher; Marco Leyton; Glen B. Baker; Gillian A. O'Driscoll Effect of D-amphetamine on inhibition and motor planning as a function of baseline performance. Journal Article In: Psychopharmacology, vol. 211, no. 4, pp. 423–33, 2010.
RATIONALE: Baseline performance has been reported to predict dopamine (DA) effects on working memory, following an inverted-U pattern. This pattern may hold true for other executive functions that are DA-sensitive. OBJECTIVES: The objective of this study is to investigate the effect of d-amphetamine, an indirect DA agonist, on two other putatively DA-sensitive executive functions, inhibition and motor planning, as a function of baseline performance. METHODS: Participants with no prior stimulant exposure participated in a double-blind crossover study of a single dose of 0.3 mg/kg, p.o. of d-amphetamine and placebo. Participants were divided into high and low groups, based on their performance on the antisaccade and predictive saccade tasks on the baseline day. Executive functions, mood states, heart rate and blood pressure were assessed before (T0) and after drug administration, at 1.5 (T1), 2.5 (T2) and 3.5 h (T3) post-drug. RESULTS: Antisaccade errors decreased with d-amphetamine irrespective of baseline performance (p = 0.025). For antisaccade latency, participants who generated short-latency antisaccades at baseline had longer latencies on d-amphetamine than placebo, while those with long-latency antisaccades at baseline had shorter latencies on d-amphetamine than placebo (drug x group
Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard Overlapping functional anatomy for working memory and visual search. Journal Article In: Experimental Brain Research, vol. 200, no. 1, pp. 91–107, 2010.
Recent behavioural findings using dual-task paradigms demonstrate the importance of both spatial and non-spatial working memory processes in inefficient visual search (Anderson et al. in Exp Psychol 55:301-312, 2008). Here, using functional magnetic resonance imaging (fMRI), we sought to determine whether brain areas recruited during visual search are also involved in working memory. Using visually matched spatial and non-spatial working memory tasks, we confirmed previous behavioural findings that show significant dual-task interference effects occur when inefficient visual search is performed concurrently with either working memory task. Furthermore, we find considerable overlap in the cortical network activated by inefficient search and both working memory tasks. Our findings suggest that the interference effects observed behaviourally may have arisen from competition for cortical processes subserved by these overlapping regions. Drawing on previous findings (Anderson et al. in Exp Brain Res 180:289-302, 2007), we propose that the most likely anatomical locus for these interference effects is the inferior and middle frontal cortex of the right hemisphere. These areas are associated with attentional selection from memory as well as manipulation of information in memory, and we propose that the visual search and working memory tasks used here compete for common processing resources underlying these mechanisms.
Jeremy B. Badler; Philippe Lefevre; Marcus Missal Causality attribution biases oculomotor responses Journal Article In: Journal of Neuroscience, vol. 30, no. 31, pp. 10517–10525, 2010.
When viewing one object move after being struck by another, humans perceive that the action of the first object "caused" the motion of the second, not that the two events occurred independently. Although established as a perceptual and linguistic concept, it is not yet known whether the notion of causality exists as a fundamental, preattentional "Gestalt" that can influence predictive motor processes. Therefore, eye movements of human observers were measured while viewing a display in which a launcher impacted a tool to trigger the motion of a second "reaction" target. The reaction target could move either in the direction predicted by transfer of momentum after the collision ("causal") or in a different direction ("noncausal"), with equal probability. Control trials were also performed with identical target motion, either with a 100 ms time delay between the collision and reactive motion, or without the interposed tool. Subjects made significantly more predictive movements (smooth pursuit and saccades) in the causal direction during standard trials, and smooth pursuit latencies were also shorter overall. These trends were reduced or absent in control trials. In addition, pursuit latencies in the noncausal direction were longer during standard trials than during control trials. The results show that causal context has a strong influence on predictive movements.
Daniel H. Baker; Erich W. Graf Extrinsic factors in the perception of bistable motion stimuli Journal Article In: Vision Research, vol. 50, no. 13, pp. 1257–1265, 2010.
When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception; specifically contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because (i) more saccades were directionally congruent with the currently reported percept than expected by chance, and (ii) when observers were asked to make deliberate eye movements along one motion axis, this increased percept reports in that direction. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions, and changes to the retinal image caused by blinks and saccades.
Kim Joris Boström; Anne Kathrin Warzecha Open-loop speed discrimination performance of ocular following response and perception Journal Article In: Vision Research, vol. 50, no. 9, pp. 870–882, 2010.
So far, it remains largely unresolved to what extent neuronal noise affects behavioral responses. Here, we investigate where in the human visual motion pathway the noise originates that limits the performance of the entire system. In particular, we ask whether perception and eye movements are limited by a common noise source, or whether processing stages after the separation into different streams limit their performance. We use the ocular following response of human subjects and a simultaneously performed psychophysical paradigm to directly compare the perceptual and oculomotor systems with respect to their speed discrimination ability. Our results show that under the open-loop condition the perceptual system is superior to the oculomotor system and that the responses of both systems are not correlated. Two alternative conclusions can be drawn from these findings. Either the perceptual and oculomotor pathways are effectively separate, or the amount of post-sensory (motor) noise is not negligible in comparison to the amount of sensory noise. In view of well-established experimental findings and due to plausibility considerations, we favor the latter conclusion.
Doris I. Braun; Alexander C. Schütz; Karl R. Gegenfurtner Localization of speed differences of context stimuli during fixation and smooth pursuit eye movements Journal Article In: Vision Research, vol. 50, no. 24, pp. 2740–2749, 2010.
The visual system can detect speed changes of moving objects only by means of alterations of retinal image motion, which is also subject to changes induced by head or eye movements. Here we investigated whether smooth pursuit eye movements affect the ability to localize short speed perturbations of large context stimuli. Psychophysical thresholds for localization, discrimination and detection of speed perturbations in one of two context stimuli were measured under two main conditions: in fixation trials subjects fixated a central stationary spot, in pursuit trials they followed a horizontally moving target with their eyes. Context stimuli were vertically oriented sine wave gratings moving simultaneously above and below the fixation or pursuit target for one second in the same direction at the same or a different speed as the pursuit target. During the movement one of the gratings suddenly changed its speed for 500 ms and returned to its original speed. Observers were asked to discern the location of the speed change (two-alternative spatial forced choice task). While detection (two-interval forced choice) and discrimination thresholds for the kind of speed perturbation were in the normal range of Weber fractions of 10-15%, thresholds for the location of the speed perturbation were dramatically increased to 30-50%. Localization thresholds were particularly high when the retinal motion was mainly due to the context movements, as during fixation or slow pursuit, and significantly reduced when the retinal motion was mainly due to pursuit. This result indicates that the origin of retinal motion, whether it is caused by object motion or by voluntary pursuit, is important. We conclude that the localization of speed perturbations affecting one of two peripheral moving objects is exceedingly complicated for the visual system, probably due to the dominance of relative motion. During smooth pursuit the ability to localize speed perturbations of non-foveated objects seems to be improved by additional information gained from pursuit, such as corollary discharge.
Holly Bridge; Stephen L. Hicks; Jingyi Xie; Thomas W. Okell; Sabira K. Mannan; Iona Alexander; Alan Cowey; Christopher Kennard Visual activation of extra-striate cortex in the absence of V1 activation Journal Article In: Neuropsychologia, vol. 48, no. 14, pp. 4148–4154, 2010.
When the primary visual cortex (V1) is damaged, there are a number of alternative pathways that can carry visual information from the eyes to extrastriate visual areas. Damage to the visual cortex from trauma or infarct is often unilateral, extensive and includes gray matter and white matter tracts, which can disrupt other routes to residual visual function. We report an unusual young patient, SBR, who has bilateral damage to the gray matter of V1, sparing the adjacent white matter and surrounding visual areas. Using functional magnetic resonance imaging (fMRI), we show that area MT+/V5 is activated bilaterally to visual stimulation, while no significant activity could be measured in V1. Additionally, the white matter tracts between the lateral geniculate nucleus (LGN) and V1 appear to show some degeneration, while the tracts between LGN and MT+/V5 do not differ from controls. Furthermore, the bilateral nature of the damage suggests that residual visual capacity does not result from strengthened interhemispheric connections. The very specific lesion in SBR suggests that the ipsilateral connection between LGN and MT+/V5 may be important for residual visual function in the presence of damage to V1.
James R. Brockmole; Melissa L. -H. Võ Semantic memory for contextual regularities within and across scene categories: Evidence from eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 7, pp. 1803–1813, 2010.
When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes.
Martin Corley Making predictions from speech with repairs: Evidence from eye movements Journal Article In: Language and Cognitive Processes, vol. 25, no. 5, pp. 706–727, 2010.
When listeners hear a spoken utterance, they are able to predict upcoming information on the basis of what they have already heard. But what happens when the speaker changes his or her mind mid-utterance? The present paper investigates the immediate effects of repairs on listeners' linguistic predictions. Participants listened to sentences like the boy will eat/move the cake while viewing scenes depicting the agent, the theme, and distractor objects (which were not edible). Over 25% of items included conjoined verbs (eat and move) and 25% included repairs (eat- uh, move). Participants were sensitive to repairs: where eat was overridden by move, fixations on the theme patterned with the plain move condition, but where there was a conjunct, fixations patterned with eat. However, once the theme had been heard, there were more fixations to the cake in all conditions including eat, showing that the first verb maintained an influence on prediction, even following a repair. The results are compatible with the view that prediction during comprehension is updated incrementally, but not completely, as the linguistic input unfolds.
Kenny R. Coventry; Dermot Lynott; Angelo Cangelosi; Lynn Monrouxe; Dan Joyce; Daniel C. Richardson Spatial language, visual attention, and perceptual simulation Journal Article In: Brain and Language, vol. 112, no. 3, pp. 202–213, 2010.
Spatial language descriptions, such as The bottle is over the glass, direct the attention of the hearer to particular aspects of the visual world. This paper asks how they do so, and what brain mechanisms underlie this process. In two experiments employing behavioural and eye tracking methodologies we examined the effects of spatial language on people's judgements and parsing of a visual scene. The results underscore previous claims regarding the importance of object function in spatial language, but also show how spatial language differentially directs attention during examination of a visual scene. We discuss implications for existing models of spatial language, with associated brain mechanisms.
Christopher D. Cowper-Smith; Esther Y. Y. Lau; Carl A. Helmick; Gail A. Eskes; David A. Westwood Neural coding of movement direction in the healthy human brain Journal Article In: PLoS ONE, vol. 5, no. 10, pp. e13330, 2010.
Neurophysiological studies in monkeys show that activity of neurons in primary motor cortex (M1), premotor cortex (PMC), and cerebellum varies systematically with the direction of reaching movements. These neurons exhibit preferred direction tuning, where the level of neural activity is highest when movements are made in the preferred direction (PD), and gets progressively lower as movements are made at increasing degrees of offset from the PD. Using a functional magnetic resonance imaging adaptation (fMRI-A) paradigm, we show that PD coding does exist in regions of the human motor system that are homologous to those observed in non-human primates. Consistent with predictions of the PD model, we show adaptation (i.e., a lower level) of the blood oxygen level dependent (BOLD) time-course signal in M1, PMC, SMA, and cerebellum when consecutive wrist movements were made in the same direction (0 degrees offset) relative to movements offset by 90 degrees or 180 degrees. The BOLD signal in dorsolateral prefrontal cortex adapted equally in all movement offset conditions, mitigating against the possibility that the present results are the consequence of differential task complexity or attention to action in each movement offset condition.
David P. Crabb; Nicholas D. Smith; Franziska G. Rauscher; Catharine M. Chisholm; John L. Barbur; David F. Edgar; David F. Garway-Heath Exploring eye movements in patients with glaucoma when viewing a driving scene Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9710, 2010. @article{Crabb2010, BACKGROUND: Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). METHODOLOGY/PRINCIPAL FINDINGS: The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%).
Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. CONCLUSIONS/SIGNIFICANCE: Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive. |
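The bivariate contour ellipse analysis used above summarizes point-of-regard spread as the area of an ellipse covering a chosen proportion of gaze samples. A minimal static sketch of the standard formula, BCEA = 2kπ·σx·σy·√(1−ρ²); the 68% proportion (k = 1.14) and the function name are assumptions for illustration, and the paper's dynamic, film-registered version is more involved:

```python
import math

def bcea(xs, ys, k=1.14):
    """Bivariate contour ellipse area from gaze samples (same units squared).

    k = 1.14 corresponds to an ellipse covering ~68% of samples under a
    bivariate normal assumption (an illustrative choice, not the paper's).
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Sample standard deviations of horizontal and vertical gaze position
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    # Pearson correlation between the two components
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    rho = cov / (sx * sy)
    return 2 * k * math.pi * sx * sy * math.sqrt(1 - rho ** 2)

# Toy gaze samples (degrees): an uncorrelated square of fixations
print(round(bcea([0, 2, 0, 2], [0, 0, 2, 2]), 2))  # ≈ 9.55 deg²
```

A smaller BCEA indicates more tightly clustered point-of-regard; comparing BCEA between patient and control groups is one way to quantify the "region of point-of-regard" mentioned in the abstract.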
Filipe Cristino; Sebastiaan Mathôt; Jan Theeuwes; Iain D. Gilchrist ScanMatch: A novel method for comparing fixation sequences Journal Article In: Behavior Research Methods, vol. 42, no. 3, pp. 692–700, 2010. @article{Cristino2010, We present a novel approach to comparing saccadic eye movement sequences based on the Needleman–Wunsch algorithm used in bioinformatics to compare DNA sequences. In the proposed method, the saccade sequence is spatially and temporally binned and then recoded to create a sequence of letters that retains fixation location, time, and order information. The comparison of two letter sequences is made by maximizing the similarity score computed from a substitution matrix that provides the score for all letter pair substitutions and a gap penalty. The substitution matrix provides a meaningful link between each location coded by the individual letters. This link could be distance but could also encode any useful dimension, including perceptual or semantic space. We show, by using synthetic and behavioral data, the benefits of this method over existing methods. The ScanMatch toolbox for MATLAB is freely available online (www.scanmatch.co.uk). |
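The alignment step at the heart of ScanMatch can be sketched in a few lines. The bin layout, gap penalty, and distance-based substitution scores below are illustrative stand-ins, not the toolbox's actual parameters:

```python
def needleman_wunsch(seq_a, seq_b, substitution, gap_penalty=-1):
    """Global alignment score between two letter sequences (Needleman-Wunsch)."""
    n, m = len(seq_a), len(seq_b)
    # score[i][j]: best score aligning seq_a[:i] with seq_b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap_penalty
    for j in range(1, m + 1):
        score[0][j] = j * gap_penalty
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + substitution[(seq_a[i - 1], seq_b[j - 1])],
                score[i - 1][j] + gap_penalty,  # gap in seq_b
                score[i][j - 1] + gap_penalty,  # gap in seq_a
            )
    return score[n][m]

# Substitution scores derived from spatial distance between bins: identical
# bins score highest, neighbouring bins less, distant bins lowest.
bins = "ABC"  # three spatial bins along one axis, for illustration
substitution = {(a, b): 2 - abs(bins.index(a) - bins.index(b))
                for a in bins for b in bins}

# Two fixation sequences recoded as bin letters
print(needleman_wunsch("AABC", "ABCC", substitution))  # 6
```

The distance-graded substitution matrix is what distinguishes this approach from plain string-edit methods: substituting a fixation for a spatially nearby one costs less than substituting a distant one.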
Norbert Bruggemann; Christine Klein; Christoph Helmchen Eye movement disorders in ATP13A2 mutation carriers (PARK9) Journal Article In: Movement Disorders, vol. 25, no. 15, pp. 2686–2687, 2010. @article{Bruggemann2010, Homozygous and compound-heterozygous mutations in the ATP13A2 gene (PARK9 locus on chromosome 1p36) cause a rare, atypical form of autosomal recessive, early-onset parkinsonism, known as Kufor-Rakeb disease (KRD) [1]. KRD patients present with additional pyramidal signs, dementia, and supranuclear gaze palsy [2,3]. Asymptomatic carriers of ATP13A2 mutations have been reported to show subtle parkinsonian signs and abnormalities in functional and structural neuroimaging [4]; however, the impact of a single ATP13A2 mutation remains a matter of debate. |
Simona Buetti; Dirk Kerzel Effects of saccades and response type on the Simon effect: If you look at the stimulus, the Simon effect may be gone Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 11, pp. 2172–2189, 2010. @article{Buetti2010, The Simon effect has most often been investigated with key-press responses and eye fixation. In the present study, we asked how the type of eye movement and the type of manual response affect response selection in a Simon task. We investigated three eye movement instructions (spontaneous, saccade, and fixation) while participants performed goal-directed (i.e., reaching) or symbolic (i.e., finger-lift) responses. Initially, no oculomotor constraints were imposed, and a Simon effect was present for both response types. Next, eye movements were constrained. Participants had to either make a saccade toward the stimulus or maintain gaze fixed in the screen centre. While a congruency effect was always observed in reaching responses, it disappeared in finger-lift responses. We suggest that the redirection of saccades from the stimulus to the correct response location in noncorresponding trials contributes to the Simon effect. Because of eye-hand coupling, this occurred in a mandatory manner with reaching responses but not with finger-lift responses. Thus, the Simon effect with key-presses disappears when participants do what they typically do: look at the stimulus. |
Patrick A. Byrne; David C. Cappadocia; J. Douglas Crawford Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating Journal Article In: Vision Research, vol. 50, no. 24, pp. 2661–2670, 2010. @article{Byrne2010, Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually-guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or if the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduces RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME only depended on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action. |
Eamon Caddigan; Alejandro Lleras Saccadic repulsion in pop-out search: How a target's dodgy history can push the eyes away from it Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–9, 2010. @article{Caddigan2010, Previous studies have shown that even in the context of fairly easy selection tasks, as is the case in a pop-out task, selection of the pop-out stimulus can be sped up (in terms of eye movements) when the target-defining feature repeats across trials. Here, we show that selection of a pop-out target can actually be delayed (in terms of saccadic latencies) and made less accurate (in terms of saccade accuracy) when the target-defining feature has recently been associated with distractor status. This effect was observed even though participants' task was to fixate color oddballs (when present) and simply press a button when their eyes reached the target to advance to the next trial. Importantly, the inter-trial effect was also observed in response time (time to advance to the next trial). In contrast, this response time effect was completely eliminated in a second experiment when eye movements were eliminated from the task. That is, when participants still had to press a button to advance to the next trial when an oddball target was present in the display (an oddball detection task experiment). This pattern of results closely links the "need for selection" in a task to the presence of an inter-trial bias of attention (and eye movements) in pop-out search. |
Roberto Caldara; Xinyue Zhou; Sébastien Miellet Putting culture under the 'Spotlight' reveals universal information use for face recognition Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9708, 2010. @article{Caldara2010, Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction. So the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique in face recognition that parametrically restricts information outside central vision. We used Spotlights with Gaussian apertures of 2°, 5° or 8° dynamically centered on observers' fixations. Strikingly, in constrained Spotlight conditions (2° and 5°) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both eyes and mouth was simultaneously available when fixating the nose (8°), as expected EA observers shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture. |
Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero Recognition advantage of happy faces in extrafoveal vision: Featural and affective processing Journal Article In: Visual Cognition, vol. 18, no. 9, pp. 1274–1297, 2010. @article{Calvo2010, Happy, surprised, disgusted, angry, sad, fearful, and neutral facial expressions were presented extrafoveally (2.5° away from fixation) for 150 ms, followed by a probe word for recognition (Experiment 1) or a probe scene for affective valence evaluation (Experiment 2). Eye movements were recorded and gaze-contingent masking prevented foveal viewing of the faces. Results showed that (a) happy expressions were recognized faster than others in the absence of fixations on the faces, (b) the same pattern emerged when the faces were presented upright or upside-down, (c) happy prime faces facilitated the affective evaluation of emotionally congruent probe scenes, and (d) such priming effects occurred at 750 but not at 250 ms prime-probe stimulus-onset asynchrony. This reveals an advantage in the recognition of happy faces outside of overt visual attention, and suggests that this recognition advantage relies initially on featural processing and involves processing of positive affect at a later stage. |
Karen L. Campbell; Naseem Al-Aidroos; Robert Fatt; Jay Pratt; Lynn Hasher The effects of multisensory targets on saccadic trajectory deviations: Eliminating age differences Journal Article In: Experimental Brain Research, vol. 201, no. 3, pp. 385–392, 2010. @article{Campbell2010a, The present study had two aims. First, to determine if bimodal audio-visual targets allow for greater inhibition of visual distractors, which in turn may lead to greater saccadic trajectory deviations away from those distractors. Second, to determine if bimodal targets can reduce age differences in the ability to generate deviations away, as older adults tend to benefit more from multisensory integration than younger adults. The results show that bimodal targets produced larger deviations away than unimodal targets, but only when the distractor preceded the target, and this effect was comparable across age groups. Furthermore, in contrast to previous research, older adults in this study showed similar deviations away from distractors to those of younger adults. These findings suggest that age differences in the production of trajectory deviations away are not inevitable and that multisensory integration may be an important means for increasing top-down inhibition of irrelevant distraction. |
Linda E. Campbell; Kathryn L. McCabe; Kate Leadbeater; Ulrich Schall; Carmel M. Loughland; Dominique Rich Visual scanning of faces in 22q11.2 deletion syndrome: Attention to the mouth or the eyes? Journal Article In: Psychiatry Research, vol. 177, no. 1-2, pp. 211–215, 2010. @article{Campbell2010, Previous research demonstrates that people with 22q11.2 deletion syndrome (22q11DS) have social and interpersonal skill deficits. However, the basis of this deficit is unknown. This study examined, for the first time, how people with 22q11DS process emotional face stimuli using visual scanpath technology. The visual scanpaths of 17 adolescents and age/gender matched healthy controls were recorded while they viewed face images depicting one of seven basic emotions (happy, sad, surprised, angry, fear, disgust and neutral). Recognition accuracy was measured concurrently. People with 22q11DS differed significantly from controls, displaying visual scanpath patterns that were characterised by fewer fixations and a shorter scanpath length. The 22q11DS group also spent significantly more time gazing at the mouth region and significantly less time looking at eye regions of the faces. Recognition accuracy was correspondingly impaired, with 22q11DS subjects displaying particular deficits for fear and disgust. These findings suggest that 22q11DS is associated with a maladaptive visual information processing strategy that may underlie affect recognition accuracy and social functioning deficits in this group. |
Matt Canham; Mary Hegarty Effects of knowledge and display design on comprehension of complex graphics Journal Article In: Learning and Instruction, vol. 20, no. 2, pp. 155–166, 2010. @article{Canham2010, In two experiments, participants made inferences from weather maps, before and after they received instruction about relevant meteorological principles. Different versions of the maps showed either task-relevant information alone, or both task-relevant and task-irrelevant information. Participants improved on the inference task after instruction, indicating that they could apply newly acquired declarative knowledge to make inferences from graphics. In Experiment 1, participants spent more time viewing task-relevant information and less time viewing task-irrelevant information after instruction, and in Experiment 2, the presence of task-irrelevant information impaired performance. These results show that domain knowledge can affect information selection and encoding from complex graphics as well as processes of interpreting and making inferences from the encoded information. They also provide validation of one principle for the design of effective graphical displays, namely that graphics should not display more information than is required for the task at hand. |
David Caplan Task effects on BOLD signal correlates of implicit syntactic processing Journal Article In: Language and Cognitive Processes, vol. 25, no. 6, pp. 866–901, 2010. @article{Caplan2010, BOLD signal was measured in sixteen participants who made timed font change detection judgments in visually presented sentences that varied in syntactic structure and the order of animate and inanimate nouns. Behavioral data indicated that sentences were processed to the level of syntactic structure. BOLD signal increased in visual association areas bilaterally and left supramarginal gyrus in the contrast of sentences with object- and subject-extracted relative clauses without font changes in which the animacy order of the nouns biased against the syntactically determined meaning of the sentence. This result differs from the findings in a non-word detection task (Caplan et al., 2008a), in which the same contrast led to increased BOLD signal in the left inferior frontal gyrus. The difference in areas of activation indicates that the sentences were processed differently in the two tasks. These differences were further explored in an eye tracking study using the materials in the two tasks. Issues pertaining to how parsing and interpretive operations are affected by a task that is being performed, and how this might affect BOLD signal correlates of syntactic contrasts, are discussed. |
Elena Carbone; Werner X. Schneider The control of stimulus-driven saccades is subject not to central, but to visual attention limitations Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 8, pp. 2168–2175, 2010. @article{Carbone2010, In three experiments, we investigated whether the control of reflexive saccades is subject to central attention limitations. In a dual-task procedure, Task 1 required either unspeeded reporting or ignoring of briefly presented masked stimuli, whereas Task 2 required a speeded saccade toward a visual target. The stimulus onset asynchrony (SOA) between the two tasks was varied. In Experiments 1 and 2, the Task 1 stimulus was one or three letters, and we asked how saccade target selection is influenced by the number of items. We found (1) longer saccade latencies at short than at long SOAs in the report condition, (2) a substantially larger latency increase for three letters than for one letter, and (3) a latency difference between SOAs in the ignore condition. Broadly, these results match the central interference theory. However, in Experiment 3, an auditory stimulus was used as the Task 1 stimulus, to test whether the interference effects in Experiments 1 and 2 were due to visual instead of central interference. Although there was a small saccade latency increase from short to long SOAs, this difference did not increase from the ignore to the report condition. To explain visual interference effects between letter encoding and stimulus-driven saccade control, we propose an extended theory of visual attention. |
Jean Carletta; Robin L. Hill; Craig Nicol; Tim Taylor; Jan Peter de Ruiter; Ellen Gurman Bard Eyetracking for two-person tasks with manipulation of a virtual world Journal Article In: Behavior Research Methods, vol. 42, no. 1, pp. 254–265, 2010. @article{Carletta2010, Eyetracking facilities are typically restricted to monitoring a single person viewing static images or pre-recorded video. In the present article, we describe a system that makes it possible to study visual attention in coordination with other activity during joint action. The software links two eyetracking systems in parallel and provides an on-screen task. By locating eye movements against dynamic screen regions, it permits automatic tracking of moving on-screen objects. Using existing SR technology, the system can also cross-project each participant's eyetrack and mouse location onto the other's on-screen work space. Keeping a complete record of eyetrack and on-screen events in the same format as subsequent human coding, the system permits the analysis of multiple modalities. The software offers new approaches to spontaneous multimodal communication: joint action and joint attention. These capacities are demonstrated using an experimental paradigm for cooperative on-screen assembly of a two-dimensional model. The software is available under an open source license. |
Rebecca A. Champion; Tom C. A. Freeman Discrimination contours for the perception of head-centered velocity Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–9, 2010. @article{Champion2010, There is little direct psychophysical evidence that the visual system contains mechanisms tuned to head-centered velocity when observers make a smooth pursuit eye movement. Much of the evidence is implicit, relying on measurements of bias (e.g., matching and nulling). We therefore measured discrimination contours in a space dimensioned by pursuit target motion and relative motion between target and background. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine individual components should produce contours with a similar orientation. Conversely, contours oriented parallel to the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity: discrimination ellipses were significantly oriented away from the cardinal axes, toward the main negative diagonal. However, ellipse orientation was considerably less steep than predicted by a pure combination of components. This suggests that observers used a mixture of two strategies across trials, one based on individual components and another based on their sum. We provide a model that simulates this type of behavior and is able to reproduce the ellipse orientations we found. |
Thérèse Collins Visual target selection and motor planning define attentional enhancement at perceptual processing stages Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 14, 2010. @article{Collins2010, Extracting information from the visual field can be achieved by covertly orienting attention to different regions, or by making saccades to bring areas of interest onto the fovea. While much research has shown a link between covert attention and saccade preparation, the nature of that link remains a matter of dispute. Covert presaccadic orienting could result from target selection or from planning a motor act toward an object. We examined the contribution of visual target selection and motor preparation to attentional orienting in humans by dissociating these two habitually aligned processes with saccadic adaptation. Adaptation introduces a discrepancy between the visual target evoking a saccade and the motor metrics of that saccade, which, unbeknownst to the participant, brings the eyes to a different spatial location. We examined attentional orienting by recording event-related potentials (ERPs) to task-irrelevant visual probes flashed during saccade preparation at four equidistant locations including the visual target location and the upcoming motor endpoint. ERPs as early as 130-170 ms post-probe were modulated by attention at both the visual target and motor endpoint locations. These results indicate that both target selection and motor preparation determine the focus of spatial attention, resulting in enhanced processing of stimuli at early visual-perceptual stages. |
Thérèse Collins Extraretinal signal metrics in multiple-saccade sequences Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–14, 2010. @article{Collins2010a, Executing sequences of memory-guided movements requires combining sensory information with information about previously made movements. In the oculomotor system, extraretinal information must be combined with stored visual information about target location. The use of extraretinal signals in oculomotor planning can be probed in the double-step task. Using this task and a multiple-step version, the present study examined whether an extraretinal signal was used on every trial, whether its metrics represented desired or actual eye displacement, and whether it was best characterized as a direct estimate of orbital eye position or a vector representation of eye displacement. The results show that accurate information, including saccadic adaptation, about the first saccade is used to plan the second saccade. Furthermore, with multiple saccades, endpoint variability increases with the number of saccades. Controls ruled out that this was due to the perceptual or memory requirements of storing several target locations. Instead, each memory-guided movement depends on an internal copy of an executed movement, which may present a small discrepancy with the actual movement. Increasing the number of estimates increases the variability because this small discrepancy accumulates over several saccades. Such accumulation is compatible with a corollary discharge signal carrying metric information about saccade vectors. |
Thérèse Collins; Tobias Heed; Karine Doré-Mazars; Brigitte Röder Presaccadic attention interferes with feature detection Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 111–117, 2010. @article{Collins2010b, Preparing a saccadic eye movement to a particular spatial location enhances the perception of visual targets at this location and decreases perception of nearby targets prior to movement onset. This effect has been termed the orientation of pre-saccadic attention. Here, we investigated whether pre-saccadic attention influenced the detection of a simple visual feature, a process that has been hypothesized to occur without the need for attention. Participants prepared a saccade to a cued location and detected the occurrence of a "pop-out" feature embedded in distracters at the same or different location. The results show that preparing a saccade to a given location decreased detection of features at non-aimed-for locations, suggesting that the selection of a location as the next saccade endpoint influences sensitivity to basic visual features across the visual field. |
Thérèse Collins; Tobias Heed; Brigitte Röder Eye-movement driven changes in the perception of auditory space Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 3, pp. 736–746, 2010. @article{Collins2010c, The perceptual localization of sensory stimuli often depends on body position, and, when action is required, sensory coordinates must be transformed into a motor reference frame. We investigated the role of such a reference frame change on visual and auditory spatial cognition. Participants had to make a saccade to a visual or auditory target and subsequently compare the location of a visual or auditory probe to the remembered location of the target. Neither visual nor auditory localization depended on trial-by-trial variability in saccade endpoint, suggesting that target locations are remapped across saccades in a manner allowing for oculomotor noise. We also compared visual and auditory localization performance before and after the systematic modification of saccade metrics by saccadic adaptation. Adaptation introduced systematic biases into transsaccadic visual and auditory localization behavior. These results show that information about eye movements is taken into account in both visual and auditory spatial cognition. We propose that auditory stimuli are remapped across saccades and that this eye-centered representation contributes to normal auditory localization. |
Sébastien Coppe; Jean-Jacques Orban de Xivry; Marcus Missal; Philippe Lefèvre Biological motion influences the visuomotor transformation for smooth pursuit eye movements Journal Article In: Vision Research, vol. 50, no. 24, pp. 2721–2728, 2010. @article{Coppe2010, Humans are very sensitive to the presence of other living persons or animals in their surrounding. Human actions can readily be perceived, even in a noisy environment. We recently demonstrated that biological motion, which schematically represents human motion, influences smooth pursuit eye movements during the initiation period (Orban de Xivry, Coppe, Lefèvre, & Missal, 2010). This smooth pursuit response is driven both by a visuomotor pathway, which transforms retinal inputs into motor commands, and by a memory pathway, which is directly related to the predictive properties of smooth pursuit. To date, it is unknown which of these pathways is influenced by biological motion. In the present study, we first use a theoretical model to demonstrate that an influence of biological motion on the visuomotor and memory pathways might both explain its influence on smooth pursuit initiation. In light of this model, we made theoretical predictions of the possible influence of biological motion on smooth pursuit during and after the transient blanking of the stimulus. These qualitative predictions were then compared with recordings of eye movements acquired before, during and after the transient blanking of the stimulus. The absence of difference in smooth pursuit eye movements during blanking of the stimuli and the stronger visually guided smooth pursuit reacceleration after reappearance of the biological motion stimuli in comparison with control stimuli suggests that biological motion influences the visuomotor pathway but not the memory pathway. |
Kirsten A. Dalrymple; Walter F. Bischof; David Cameron; Jason J. S. Barton; Alan Kingstone Simulating simultanagnosia: Spatially constricted vision mimics local capture and the global processing deficit Journal Article In: Experimental Brain Research, vol. 202, no. 2, pp. 445–455, 2010. @article{Dalrymple2010, Patients with simultanagnosia, which is a component of Bálint syndrome, have a restricted spatial window of visual attention and cannot see more than one object at a time. As a result, these patients see the world in a piecemeal fashion, seeing the local components of objects or scenes at the expense of the global picture. To directly test the relationship between the restriction of the attentional window in simultanagnosia and patients' difficulty with global-level processing, we used a gaze-contingent display to create a literal restriction of vision for healthy participants while they performed a global/local identification task. Participants in this viewing condition were instructed to identify the global and local aspects of hierarchical letter stimuli of different sizes and densities. They performed well at the local identification task, and their patterns of inaccuracies for the global level task were highly similar to the pattern of inaccuracies typically seen with simultanagnosic patients. This suggests that a restricted spatial area of visual processing, combined with normal limits to visual processing, can lead to difficulties with global-level perception. |
Rong-Fuh Day Examining the validity of the Needleman-Wunsch algorithm in identifying decision strategy with eye-movement data Journal Article In: Decision Support Systems, vol. 49, no. 4, pp. 396–403, 2010. @article{Day2010, A new generation of eye trackers shows us a promising alternative approach to tracing decision processes beyond the popular computerized-information-board approach. In order to exploit the eye-movement data, this study examined the validity of the Needleman-Wunsch algorithm (NWA) to characterize the decision process, and proposed an NWA-based classification method to predict which typical strategy an empirical search behavior might belong to. An eye-tracking based experiment was conducted. Our results showed that the resemblance score by NWA conformed to the assumption that the pair of information search behaviors based on the same strategy should have the closest resemblance. Moreover, with respect to our NWA-based classification method, our result showed that its overall prediction accuracy (hit ratio) in identifying underlying strategies reached 88%, significantly higher than chance. On the whole, the combination of eye-fixation data and our NWA-based classification method is well qualified for tracing decision strategies. |
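The nearest-template classification described here can be sketched as follows: score an observed search sequence against a template for each typical strategy and assign it to the strategy with the highest alignment score. The strategy templates, letter coding, and match/mismatch/gap scores below are invented for illustration, not taken from the paper:

```python
def nw_score(a, b, match=2, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score, row-by-row (O(len(b)) memory)."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(
                prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                prev[j] + gap,      # gap in b
                cur[j - 1] + gap,   # gap in a
            ))
        prev = cur
    return prev[-1]

# Hypothetical templates over four attributes A-D of two options:
templates = {
    "alternative-wise": "ABCDABCD",  # inspect all attributes of one option first
    "attribute-wise": "AABBCCDD",    # compare options within each attribute
}

def classify(sequence):
    """Assign a fixation sequence to the best-matching strategy template."""
    return max(templates, key=lambda s: nw_score(sequence, templates[s]))

print(classify("ABCDABDC"))  # closer to the alternative-wise template
```

In this sketch the resemblance score is the raw alignment score; the paper's method additionally validates that within-strategy sequences resemble each other more than between-strategy sequences before trusting the classification.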
Xiaomo Chen; Katherine Wilson Scangos; Veit Stuphorn Supplementary motor area exerts proactive and reactive control of arm movements Journal Article In: Journal of Neuroscience, vol. 30, no. 44, pp. 14657–14675, 2010. @article{Chen2010, Adaptive behavior requires the ability to flexibly control actions. This can occur either proactively to anticipate task requirements, or reactively in response to sudden changes. Here we report neuronal activity in the supplementary motor area (SMA) that is correlated with both forms of behavioral control. Single-unit and multiunit activity and intracranial local field potentials (LFPs) were recorded in macaque monkeys during a stop-signal task, which elicits both proactive and reactive behavioral control. The LFP power in high- (60-150 Hz) and low- (25-40 Hz) frequency bands was significantly correlated with arm movement reaction time, starting before target onset. Multiunit and single-unit activity also showed a significant regression with reaction time. In addition, LFPs and multiunit and single-unit activity changed their activity level depending on the trial history, mirroring adjustments on the behavioral level. Together, these findings indicate that neuronal activity in the SMA exerts proactive control of arm movements by adjusting the level of motor readiness. On trials when the monkeys successfully canceled arm movements in response to an unforeseen stop signal, the LFP power, particularly in a low (10-50 Hz) frequency range, increased early enough to be causally related to the inhibition of the arm movement on those trials. This indicated that neuronal activity in the SMA is also involved in response inhibition in reaction to sudden task changes. Our findings indicate, therefore, that SMA plays a role in the proactive control of motor readiness and the reactive inhibition of unwanted movements. |
Ana B. Chica; Raymond M. Klein; Robert D. Rafal; Joseph B. Hopfinger Endogenous saccade preparation does not produce inhibition of return: Failure to replicate Rafal, Calabresi, Brennan, & Sciolto (1989) Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 5, pp. 1193–1206, 2010. @article{Chica2010, Inhibition of Return (IOR, slower reaction times to previously cued or inspected locations) is observed both when eye movements are prohibited, and when the eyes move to the peripheral location and back to the centre before the target appears. It has been postulated that both effects are generated by a common mechanism, the activation of the oculomotor system. In strong support of this claim, IOR is not observed when attention is oriented endogenously and covertly, but it has been observed when eye movements are endogenously prepared, even when not executed. Here, we aimed to replicate and extend the finding that endogenous saccade preparation produces IOR. In five experiments using different paradigms, IOR was not observed when participants endogenously prepared an eye movement. These results lead us to conclude that endogenous saccade preparation is not sufficient to produce IOR. |
Ana B. Chica; Tracy L. Taylor; Juan Lupiáñez; Raymond M. Klein Two mechanisms underlying inhibition of return Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 25–35, 2010. @article{Chica2010a, Inhibition of return (IOR) refers to slower reaction times to targets presented at previously stimulated or inspected locations. Taylor and Klein (J Exp Psychol Hum Percept Perform 26(5):1639-1656, 2000) showed that IOR can affect either attentional/perceptual or motor processes, depending on whether the oculomotor system is in a quiescent or in an activated state, respectively. If the motoric flavour of IOR is truly non-perceptual and non-attentional, no IOR should be observed when the responses to targets are not based on spatial information. In the present experiments, we demonstrated that when the eyes moved to the peripheral cue and back to centre before the target appeared (to generate the motoric flavour), IOR was observed in detection tasks, for which the spatial location is an integral feature of the onset that is reported, but not in colour discrimination tasks, for which the outcome of a non-spatial perceptual discrimination is reported. When eye movements were prevented, both tasks showed robust IOR. We, therefore, conclude that the motoric flavour of IOR, elicited by oculomotor activation, does not affect attention or perceptual processing. |
Tjerk P. Gutteling; Helene M. Ettinger-Veenstra; J. Leon Kenemans; Sebastiaan F. W. Neggers In: Journal of Cognitive Neuroscience, vol. 22, no. 9, pp. 1931–1943, 2010. @article{Gutteling2010, When an eye movement is prepared, attention is shifted toward the saccade end-goal. This coupling of eye movements and spatial attention is thought to be mediated by cortical connections between the FEFs and the visual cortex. Here, we present evidence for the existence of these connections. A visual discrimination task was performed while recording the EEG. Discrimination performance was significantly improved when the discrimination target and the saccade target matched. EEG results show that frontal activity precedes occipital activity contralateral to saccade direction when the saccade is prepared but not yet executed; these effects were absent in fixation conditions. This is consistent with the idea that the FEF exerts a direct modulatory influence on the visual cortex and enhances perception at the saccade end-goal. |
Nathalie Guyader; Jennifer Malsert; Christian Marendaz Having to identify a target reduces latencies in prosaccades but not in antisaccades Journal Article In: Psychological Research, vol. 74, no. 1, pp. 12–20, 2010. @article{Guyader2010, In a seminal study, Trottier and Pratt (2005) showed that saccadic latencies were dramatically reduced when subjects were instructed not simply to look at a peripheral target (reflexive saccade) but to identify some of its properties. According to the authors, the shortening of saccadic reaction times may arise from a top-down disinhibition of the superior colliculus (SC), potentially mediated by the direct pathway connecting frontal/prefrontal cortex structures to the SC. Using a "cue paradigm" (a cue preceded the appearance of the target), the present study tests whether the task instruction (Identify vs. Glance) also reduces the latencies of antisaccades (AS), which involve prefrontal structures. We show that instruction reduces latencies for prosaccades but not for AS. An AS requires two processes: the inhibition of a reflexive saccade and the generation of a voluntary saccade. To separate these processes and to better understand the task effect, we also test the effect of the task instruction on voluntary saccades alone. The effect still exists but it is much weaker than for reflexive saccades. The instruction effect closely depends on the task's demands on executive resources. |
Norbert Hagemann; Jörg Schorer; R. Canal-Bruland; Simone Lotz; Bernd Strauss Visual perception in fencing: Do the eye movements of fencers represent their information pickup? Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 8, pp. 2204–2214, 2010. @article{Hagemann2010, The present study examined whether results of athletes' eye movements while they observe fencing attacks reflect their actual information pickup by comparing these results with others gained with temporal and spatial occlusion and cuing techniques. Fifteen top-ranking expert fencers, 15 advanced fencers, and 32 sport students predicted the target region of 405 fencing attacks on a computer monitor. Results of eye movement recordings showed a stronger foveal fixation on the opponent's trunk and weapon in the two fencer groups. Top-ranking expert fencers fixated particularly on the upper trunk. This matched their performance decrements in the spatial occlusion condition. However, when the upper trunk was occluded, participants also shifted eye movements to neighboring body regions. Adding cues to the video material had no positive effects on prediction performance. We conclude that gaze behavior does not necessarily represent information pickup, but that studies applying the spatial occlusion paradigm should also register eye movements to avoid underestimating the information contributed by occluded regions. |
Tuomo Häikiö; Raymond Bertram; Jukka Hyönä Development of parafoveal processing within and across words in reading: Evidence from the boundary paradigm Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 10, pp. 1982–1998, 2010. @article{Haeikioe2010, In this study we used the boundary paradigm to examine whether readers extract more parafoveal information within than across words. More specifically, we examined whether readers extract more parafoveal information from a compound word's second constituent than from the same word when it is the noun in an adjective-noun phrase (kummitustarina "ghost story" vs. lennokas tarina "vivid story"). We also examined whether the processing of compound word constituents is serial or parallel and how parafoveal word processing develops over the elementary school years. Participants were Finnish adults and 8-year-old second-, 10-year-old fourth-, and 12-year-old sixth-graders. The results showed that for all age groups more parafoveal information is extracted from the second constituent within compounds than from the noun in adjective-noun phrases. Moreover, for all age groups we found evidence for parallel processing of constituents within compounds, but only when the compounds were of high frequency. In sum, the present study shows that attentional allocation extends further to the right and is more simultaneous when words are linguistically and spatially unified, providing evidence that attention in text processing is flexible in nature. |
Jessica K. Hall; Samuel B. Hutton; Michael J. Morgan Sex differences in scanning faces: Does attention to the eyes explain female superiority in facial expression recognition? Journal Article In: Cognition and Emotion, vol. 24, no. 4, pp. 629–637, 2010. @article{Hall2010, Previous meta-analyses support a female advantage in decoding non-verbal emotion (Hall, 1978, 1984), yet the mechanisms underlying this advantage are not understood. The present study examined whether the female advantage is related to greater female attention to the eyes. Eye-tracking techniques were used to measure attention to the eyes in 19 males and 20 females during a facial expression recognition task. Women were faster and more accurate in their expression recognition compared with men, and women looked more at the eyes than men. Positive relationships were observed between dwell time and number of fixations to the eyes and both accuracy of facial expression recognition and speed of facial expression recognition. These results support the hypothesis that the female advantage in facial expression recognition is related to greater female attention to the eyes. |
S. N. Hamid; B. Stankiewicz; Mary Hayhoe Gaze patterns in navigation: Encoding information in large-scale environments Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–11, 2010. @article{Hamid2010, We investigated the role of gaze in encoding of object landmarks in navigation. Gaze behavior was measured while participants learnt to navigate in a virtual large-scale environment in order to understand the sampling strategies subjects use to select visual information during navigation. The results showed a consistent sampling pattern. Participants preferentially directed gaze at a subset of the available object landmarks with a preference for object landmarks at the end of hallways and T-junctions. In a subsequent test of knowledge of the environment, we removed landmarks depending on how frequently they had been viewed. Removal of infrequently viewed landmarks had little effect on performance, whereas removal of the most viewed landmarks impaired performance substantially. Thus, gaze location during learning reveals the information that is selectively encoded, and landmarks at choice points are selected in preference to less informative landmarks. |
Robert E. Hampson; Ioan Opris; S. A. Deadwyler Neural correlates of fast pupil dilation in nonhuman primates: Relation to behavioral performance and cognitive workload Journal Article In: Behavioural Brain Research, vol. 212, no. 1, pp. 1–11, 2010. @article{Hampson2010, Pupil dilation in humans has been previously shown to correlate with cognitive workload, whereby increased frequency of dilation is associated with increased degree of difficulty of a task. It has been suggested that frontal oculomotor brain areas control cognitively related pupil dilations, but this has not been confirmed due to lack of animal models of cognitive workload and task-related pupil dilation. This is the first report of a wavelet analysis applied to continuous measures of pupil size used to detect the onset of abrupt pupil dilations and the frequency of those dilations in nonhuman primates (NHPs) performing a trial-unique delayed-match-to-sample (DMS) task. A unique finding shows that electrophysiological recordings in the same animals revealed firing of neurons in frontal cortex correlated to different components of pupil dilation during task performance. It is further demonstrated that the frequency of fast pupil dilations (but not rate of eye movements) correlated with cognitive workload during task performance. Such correlations suggest that frontal neuron encoding of pupil dilation provides critical feedback to other brain areas involved in the processing of complex visual information. |
Ben M. Harvey; O. J. Braddick; A. Cowey In: Journal of Vision, vol. 10, no. 5, pp. 1–15, 2010. @article{Harvey2010, Our recent psychophysical experiments have identified differences in the spatial summation characteristics of pattern detection and position discrimination tasks performed with rotating, expanding, and contracting stimuli. Areas MT and MST are well established to be involved in processing these stimuli. fMRI results have shown retinotopic activation of area V3A depending on the location of the center of radial motion in vision. This suggests the possibility that V3A may be involved in position discrimination tasks with these motion patterns. Here we use repetitive transcranial magnetic stimulation (rTMS) over MT+ and a dorsomedial extrastriate region including V3A to try to distinguish between TMS effects on pattern detection and position discrimination tasks. If V3A were involved in position discrimination, we would expect to see effects on position discrimination tasks, but not pattern detection tasks, with rTMS over this dorsomedial extrastriate region. In fact, we could not dissociate TMS effects on the two tasks, suggesting that they are performed by the same extrastriate area, in MT+. |
Katharina Havermann; Markus Lappe The influence of the consistency of postsaccadic visual errors on saccadic adaptation Journal Article In: Journal of Neurophysiology, vol. 103, no. 6, pp. 3302–3310, 2010. @article{Havermann2010, The saccadic system is a prime example of motor control without continuous visual feedback. Such systems are particularly vulnerable to disturbances. The mechanism of saccadic adaptation allows adjustment of saccades to alterations arising not only from anatomical changes but also from external changes. The weighting of errors according to their reliability provides a strong benefit for an optimized control system. Thus the consistency of visual error should influence the characteristics of adaptation. In the typical adaptation paradigm a visual error is introduced by stepping the target during the saccade by a given amount. In this paradigm, the retinal error varies with the accuracy of the saccade and the step size. To study the influence of error consistency we used a variant of the adaptation paradigm which allows a constant error size to be specified. Intrasaccadic target step sizes were calculated with respect to the predicted landing position of each individual saccade. The consistency of the visual error was varied by introducing different levels of noise to the intrasaccadic target step. Different mean intrasaccadic target step sizes were examined: positive target step, negative target step, and a condition in which the mean of the error distribution was clamped to the fovea. In all three conditions saccadic adaptation was strongest when the error was consistent and became weaker as the error became more variable. These results show that saccadic adaptation takes not only the average error but also the consistency of the error into account. |
Stefan Hawelka; Benjamin Gagl; Heinz Wimmer A dual-route perspective on eye movements of dyslexic readers Journal Article In: Cognition, vol. 115, no. 3, pp. 367–379, 2010. @article{Hawelka2010, This study assessed eye movement abnormalities of adolescent dyslexic readers and interpreted the findings by linking the dual-route model of single word reading with the E-Z Reader model of eye movement control during silent sentence reading. A dysfunction of the lexical route was assumed to account for a reduced number of words which received only a single fixation or which were skipped and for the increased number of words with multiple fixations and a marked effect of word length on gaze duration. This pattern was interpreted as a frequent failure of orthographic whole-word recognition (based on orthographic lexicon entries) and on reliance on serial sublexical processing instead. Inefficiency of the lexical route was inferred from prolonged gaze durations for singly fixated words. These findings were related to the E-Z Reader model of eye movement control. Slow activation of word phonology accounted for the low skipping rate of dyslexic readers. Frequent reliance on sublexical decoding was inferred from a tendency to fixate word beginnings and from short forward saccades. Overall, the linkage of the dual-route model of single word reading and a model of eye movement control led to a useful framework for understanding eye movement abnormalities of dyslexic readers. |
Ryusuke Hayashi; Yuko Sugita; Shin'ya Nishida; Kenji Kawano How motion signals are integrated across frequencies: Study on motion perception and ocular following responses using multiple-slit stimuli Journal Article In: Journal of Neurophysiology, vol. 103, no. 1, pp. 230–243, 2010. @article{Hayashi2010, Visual motion signals, which are initially extracted in parallel at multiple spatial frequencies, are subsequently integrated into a unified motion percept. Cross-frequency integration plays a crucial role when directional information conflicts across frequencies due to such factors as occlusion. We investigated human observers' open-loop oculomotor tracking responses (ocular following responses, or OFRs) and the perceived motion direction in an idealized situation of occlusion—multiple-slits viewing (MSV)—in which a moving pattern is visible only through an array of slits. We also tested a more challenging viewing condition, contrast-alternating MSV (CA-MSV), in which the contrast polarity of the moving pattern alternates when it passes the slits. We found that changes in the distribution of the spectral content of the slit stimuli, introduced by variations of both the interval between the slits and the frame rate of the image stream, modulated the OFR and the reported motion direction in a rather complex manner. We show that those complex modulations could be explained by the weighted sum of the motion signal (motion contrast) of each spatiotemporal frequency. The estimated distribution of frequency weights (tuning maps) indicates that the cross-frequency integration of supra-threshold motion signals gives strong weight to low spatial frequency components (<0.25 cpd) for both OFR and motion perception. However, the tuning maps estimated with the MSV stimuli were significantly different from those estimated with the CA-MSV (and from those measured in a more direct manner using grating stimuli), suggesting that interfrequency interactions (e.g., interactions producing speed-dependent tuning) were involved. |
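The weighted-sum account in the entry above lends itself to a compact illustration: the predicted response (OFR amplitude or a perceived-direction signal) is modeled as the sum of per-frequency motion contrasts scaled by estimated frequency weights. The sketch below is a minimal assumption-laden illustration; the frequency bins, contrast values, and weights are invented for demonstration and are not the study's fitted parameters.

```python
# Minimal sketch of a weighted-sum model of cross-frequency motion
# integration. Keys are (spatial_freq_cpd, temporal_freq_hz) bins;
# all numeric values are illustrative, not from Hayashi et al. (2010).

def predicted_response(motion_contrast, weights):
    """Sum motion contrasts over frequency bins, scaled by their weights."""
    return sum(weights[f] * motion_contrast[f] for f in motion_contrast)

# Low spatial frequencies (< 0.25 cpd) carry the strongest weights,
# consistent with the tuning maps described in the abstract.
contrast = {(0.125, 10.0): 0.8, (0.25, 10.0): 0.5, (1.0, 10.0): 0.3}
weights = {(0.125, 10.0): 1.0, (0.25, 10.0): 0.6, (1.0, 10.0): 0.1}
print(round(predicted_response(contrast, weights), 3))
```

Fitting the weight map to observed responses across many slit configurations is what yields the "tuning maps" the abstract refers to.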
Benjamin Y. Hayden; Sarah R. Heilbronner; Michael L. Platt Ambiguity aversion in rhesus macaques Journal Article In: Frontiers in Neuroscience, vol. 4, pp. 166, 2010. @article{Hayden2010a, People generally prefer risky options, which have fully specified outcome probabilities, to ambiguous options, which have unspecified probabilities. This preference, formalized in economics, is strong enough that people will reliably prefer a risky option to an ambiguous option with a greater expected value. Explanations for ambiguity aversion often invoke uniquely human faculties like language, self-justification, or a desire to avoid public embarrassment. Challenging these ideas, here we demonstrate that a preference for unambiguous options is shared with rhesus macaques. We trained four monkeys to choose between pairs of options that both offered explicitly cued probabilities of large and small juice outcomes. We then introduced occasional trials where one of the options was obscured and examined their resulting preferences; we ran humans in a parallel experiment on a nearly identical task. We found that monkeys reliably preferred risky options to ambiguous ones, even when this bias was costly, closely matching the behavior of humans in the analogous task. Notably, ambiguity aversion varied parametrically with the extent of ambiguity. As expected, ambiguity aversion gradually declined as monkeys learned the underlying probability distribution of rewards. These data indicate that ambiguity aversion reflects fundamental cognitive biases shared with other animals rather than uniquely human factors guiding decisions. |
Benjamin Y. Hayden; Michael L. Platt Neurons in anterior cingulate cortex multiplex information about reward and action Journal Article In: Journal of Neuroscience, vol. 30, no. 9, pp. 3339–3346, 2010. @article{Hayden2010, The dorsal anterior cingulate cortex (dACC) is thought to play a critical role in forming associations between rewards and actions. Currently available physiological data, however, remain inconclusive regarding the question of whether dACC neurons carry information linking particular actions to reward or, instead, encode abstract reward information independent of specific actions. Here we show that firing rates of a majority of dACC neurons in a population studied in an eight-option variably rewarded choice task were sensitive to both saccade direction and reward value. Furthermore, the influences of reward and saccade direction on neuronal activity were approximately equal in magnitude over the range of rewards tested and were statistically independent. Our results indicate that dACC neurons multiplex information about both reward and action, endorsing the idea that this area links motivational outcomes to behavior and undermining the notion that its neurons solely contribute to reward processing in the abstract. |
Benjamin Y. Hayden; David V. Smith; Michael L. Platt Cognitive control signals in posterior cingulate cortex Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 223, 2010. @article{Hayden2010b, Efficiently shifting between tasks is a central function of cognitive control. The role of the default network - a constellation of areas with high baseline activity that declines during task performance - in cognitive control remains poorly understood. We hypothesized that task switching demands cognitive control to shift the balance of processing toward the external world, and therefore predicted that switching between the two tasks would require suppression of activity of neurons within the posterior cingulate cortex (CGp). To test this idea, we recorded the activity of single neurons in CGp, a central node in the default network, in monkeys performing two interleaved tasks. As predicted, we found that basal levels of neuronal activity were reduced following a switch from one task to another and gradually returned to pre-switch baseline on subsequent trials. We failed to observe these effects in lateral intraparietal cortex, part of the dorsal fronto-parietal cortical attention network directly connected to CGp. These findings indicate that suppression of neuronal activity in CGp facilitates cognitive control, and suggest that activity in the default network reflects processes that directly compete with control processes elsewhere in the brain. |
Frouke Hermens; Petroc Sumner; Robin Walker Inhibition of masked primes as revealed by saccade curvature Journal Article In: Vision Research, vol. 50, no. 1, pp. 46–56, 2010. @article{Hermens2010c, In masked priming, responses are often speeded when primes are similar to targets ('positive compatibility effect'). However, sometimes similarity of prime and target impairs responses ('negative compatibility effect'). A similar distinction has been found for the curvature of saccade trajectories. Here, we test whether the same inhibition processes are involved in the two phenomena, by directly comparing response times and saccade curvature within the same masked priming paradigm. Interestingly, we found a dissociation between the directions of masked priming and saccade curvature, which could indicate that multiple types of inhibition are involved in the suppression of unwanted responses. |
Frouke Hermens; Robin Walker What determines the direction of microsaccades? Journal Article In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–20, 2010. @article{Hermens2010, During visual fixation, our eyes are not entirely still. Instead, small eye movements, such as microsaccades, can be observed. We here investigate what determines the direction and frequency of these microsaccades, as this information might help to clarify what purpose they serve. The relative contribution of three possible factors was examined: (1) the orienting of covert attention, (2) the spatial distribution of possible target locations, and (3) whether monocular or binocular microsaccades are considered. The orienting of covert attention and the distribution of possible target locations had a relatively weak effect on microsaccade rates and directions. In contrast, the classification of microsaccades as binocular (occurring in both eyes simultaneously) or monocular (observed in one eye only) strongly affected both the rate and the direction of microsaccades. The results are discussed in the context of existing findings. |
Frouke Hermens; Robin Walker The Influence of Onsets and Offsets on Saccade Programming Journal Article In: i-Perception, vol. 1, no. 2, pp. 83–94, 2010. @article{Hermens2010a, When making a saccadic eye movement to a peripheral target, a simultaneous stimulus onset at central fixation generally increases saccadic latency, while offsets reduce latency (‘gap effect'). Visual onsets remote from fixation also increase latency (‘remote distractor effect'); however, the influence of remote visual offsets is less clear. Previous studies, which used a search task, found that remote offsets either facilitated, inhibited, or did nothing to saccade latencies towards a peripheral target. It cannot be excluded, however, that the target selection process in such search tasks influenced the results. We therefore simplified the task and asked participants to make eye movements to a predictable target. Simultaneously with target onset, either one or multiple remote stimulus onsets and offsets were presented. It was found that peripheral onsets increased saccade latencies, but offsets did not influence the initiation of a saccade to the target. Moreover, the number of onsets and offsets did not affect the results. These results suggest that earlier effects of remote stimulus offsets and of the number of remote distractor onsets reside in the target identification process of the visual search task rather than the competition between possible saccade goals. The results are discussed in the context of models of saccade target selection. |
Frouke Hermens; Robin Walker Gaze and arrow distractors influence saccade trajectories similarly Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 11, pp. 2120–2140, 2010. @article{Hermens2010b, Perceiving someone's averted eye-gaze is thought to result in an automatic shift of attention and in the preparation of an oculomotor response in the direction of perceived gaze. Although gaze cues have been regarded as being special in this respect, recent studies have found evidence for automatic attention shifts with nonsocial stimuli, such as arrow cues. Here, we directly compared the effects of social and nonsocial cues on eye movement preparation by examining the modulation of saccade trajectories made in the presence of eye-gaze, arrows, or peripheral distractors. At a short stimulus onset asynchrony (SOA) between the distractor and the target, saccades deviated towards the direction of centrally presented arrow distractors, but away from the peripheral distractors. No significant trajectory deviations were found for gaze distractors. At the longer SOA, saccades deviated away from the direction of the distractor for all three distractor types, but deviations were smaller for the centrally presented gaze and arrow distractors. These effects were independent of whether line-drawings or photos of faces were used and could not be explained by differences in the spatial properties of the peripheral distractor. The results suggest that all three types of distractors (gaze, arrow, peripheral) can induce the automatic programming of an eye movement. Moreover, the findings suggest that gaze and arrow distractors affect oculomotor preparation similarly, whereas peripheral distractors, which are classically regarded as eliciting an automatic shift of attention and an oculomotor response, induce a stronger and faster acting influence on response preparation and the corresponding inhibition of that response. |
Frouke Hermens; Johannes M. Zanker; Robin Walker Microsaccades and preparatory set: A comparison between delayed and immediate, exogenous and endogenous pro- and anti-saccades Journal Article In: Experimental Brain Research, vol. 201, no. 3, pp. 489–498, 2010. @article{Hermens2010d, When we fixate an object, our eyes are not entirely still, but undergo small displacements such as microsaccades. Here, we investigate whether these microsaccades are sensitive to the preparatory processes involved in programming a saccade. We show that the frequency of microsaccades depends in a specific manner on the intention where to move the eyes (towards a target location or away from it), when to move (immediately after the onset of the target or after a delay), and what type of cue is followed (a peripheral onset or a centrally presented symbolic cue). In particular, in the preparatory interval before and early after target onset, more microsaccades were found when a delayed saccade towards a peripheral target was prepared than when a saccade away was programmed. However, no such difference in the frequency of microsaccades was observed when saccades were initiated immediately after the onset of the target or when the saccades were programmed on the basis of a centrally presented arrow cue. The results are discussed in the context of the neural correlates of response preparation, known as preparatory set. |
Katrin Herrmann; Leila Montaser-Kouhsari; Marisa Carrasco; David J. Heeger When size matters: Attention affects performance by contrast or response gain Journal Article In: Nature Neuroscience, vol. 13, no. 12, pp. 1554–1561, 2010. @article{Herrmann2010, Covert attention, the selective processing of visual information in the absence of eye movements, improves behavioral performance. Here, we show that attention, both exogenous (involuntary) and endogenous (voluntary), can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. These two variables were manipulated in a cueing task while varying stimulus contrast. We observed a change in behavioral performance consonant with a change in contrast gain for small stimuli paired with spatial uncertainty, but a change in response gain for large stimuli presented at one location (no uncertainty) and surrounded by irrelevant flanking distracters. A complementary neuroimaging experiment revealed that observers' attention field was wider with than without spatial uncertainty. Our results support key predictions of the normalization model of attention, and reconcile previous, seemingly contradictory, findings on the effects of visual attention. |
Arvid Herwig; Miriam Beisert; Werner X. Schneider In: Journal of Vision, vol. 10, no. 5, pp. 1–10, 2010. @article{Herwig2010, Recent work indicates that covert visual attention and eye movements on the one hand, and covert visual attention and visual working memory on the other hand are closely interrelated. Two experiments address the question whether all three processes draw on the same spatial representations. Participants had to memorize a target location for a subsequent memory-guided saccade. During the memory interval, task-irrelevant distractors were briefly flashed on some trials either near or remote to the memory target. Results showed that the previously flashed distractors attract the saccade's landing position. However, attraction was only found if the distractor was presented within a sector of ±20° around the target axis, but not if the distractor was presented outside this sector. This effect strongly resembles the global effect in which saccades are directed to intermediate locations between a target and a simultaneously presented neighboring distractor stimulus. It is argued that covert visual attention, eye movements, and visual working memory recruit the same spatial mechanisms that can probably be ascribed to attentional priority maps. |
Constanze Hesse; Tristan T. Nakagawa; Heiner Deubel Bimanual movement control is moderated by fixation strategies Journal Article In: Experimental Brain Research, vol. 202, no. 4, pp. 837–850, 2010. @article{Hesse2010, Our study examined the effects of performing a pointing movement with the left hand on the kinematics of a simultaneous grasping movement executed with the right hand. We were especially interested in the question of whether both movements can be controlled independently or whether interference effects occur. Since previous studies suggested that eye movements may play a crucial role in bimanual movement control, the effects of different fixation strategies were also studied. Human participants were either free to move their eyes (Experiment 1) or they had to fixate (Experiment 2) while doing the task. The results show that bimanual movement control differed fundamentally depending on the fixation condition: if free viewing was allowed, participants tended to perform the task sequentially, as reflected in grasping kinematics by a delayed grip opening and a poor adaptation of the grip to the object properties for the duration of the pointing movement. This behavior was accompanied by a serial fixation of the targets for the pointing and grasping movements. In contrast, when central fixation was required, both movements were performed fast and with no obvious interference effects. The results support the notion that bimanual movement control is moderated by fixation strategies. By default, participants seem to prefer a sequential behavior in which the eyes monitor what the hands are doing. However, when forced to fixate, they do surprisingly well in performing both movements in parallel. |
J. Stephen Higgins; Ranxiao Frances Wang A landmark effect in the perceived displacement of objects Journal Article In: Vision Research, vol. 50, no. 2, pp. 242–248, 2010. @article{Higgins2010, Perceiving the displacement of an object after a visual distraction is an essential ability to interact with the world. Previous research has shown a bias to perceive the first object seen after a saccade as stable and the second one as moving (landmark effect). The present study examines the generality and nature of this phenomenon. The landmark effect was observed in the absence of eye movements, when the two objects were obscured by a blank screen, a moving-pattern mask, or simply disappeared briefly before reappearing one after the other. The first reappearing object was not required to remain visible while the second object reappeared to induce the bias. The perceived direction of the displacement was mainly determined by the relative displacement of the two objects, suggesting that the landmark effect is primarily due to a landmark calibration mechanism. |
Yoriko Hirose Perception and memory across viewpoint changes in moving images Journal Article In: Journal of Vision, vol. 10, no. 4, pp. 1–19, 2010. @article{Hirose2010, Current understanding of scene perception derives largely from experiments using static scenes and psychological understanding of how moving images are processed is under-developed. We examined eye movement patterns and recognition memory performance as observers looked at short movies involving a change in viewpoint (a cut). At the time of the cut, four types of object property (color, position, identity and shape) were manipulated. Results show differential sensitivity to object property changes, reflected in both eye movement behavior after the cut and memory performance when object properties are remembered after viewing. When object properties change across a cut, memory is generally biased towards information present after the cut, except for position information which showed no bias. Our findings suggest that spatial information is represented differently to other forms of object information when viewing movies that include changes in viewpoint. |
Aaron B. Hoffman; Bob Rehder The costs of supervised classification: The effect of learning task on conceptual flexibility Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 319–340, 2010. @article{Hoffman2010, Research has shown that learning a concept via standard supervised classification leads to a focus on diagnostic features, whereas learning by inferring missing features promotes the acquisition of within-category information. Accordingly, we predicted that classification learning would produce a deficit in people's ability to draw novel contrasts–distinctions that were not part of training–compared with feature inference learning. Two experiments confirmed that classification learners were at a disadvantage at making novel distinctions. Eye movement data indicated that this conceptual inflexibility was due to (a) a narrower attention profile that reduces the encoding of many category features and (b) learned inattention that inhibits the reallocation of attention to newly relevant information. Implications of these costs of supervised classification learning for views of conceptual structure are discussed. |
Lee Hogarth; Anthony Dickinson; Theodora Duka The associative basis of cue-elicited drug taking in humans Journal Article In: Psychopharmacology, vol. 208, no. 3, pp. 337–351, 2010. @article{Hogarth2010, RATIONALE: Drug cues play an important role in motivating human drug taking, lapse and relapse, but the psychological basis of this effect has not been fully specified. METHOD: To clarify these mechanisms, the study measured the extent to which pictorial and conditioned tobacco cues enhanced smoking topography in an ad libitum smoking session simultaneously with cue effects on subjective craving, pleasure and anxiety. RESULTS: Both cue types increased the number of puffs consumed and craving, but pleasure and anxiety responses were dissociated across cue type. Moreover, cue effects on puff number correlated with effects on craving but not pleasure or anxiety. Finally, whereas overall puff number and craving declined across the two blocks of consumption, consistent with burgeoning satiety, cue enhancement of puff number and craving were both unaffected by satiety. CONCLUSIONS: Overall, the data suggest that cue-elicited drug taking in humans is mediated by an expectancy-based associative learning architecture, which paradoxically is autonomous of the current incentive value of the drug. |
Mackenzie G. Glaholt; Mei-Chun Wu; Eyal M. Reingold Evidence for top-down control of eye movements during visual decision making Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–10, 2010. @article{Glaholt2010, Participants' eye movements were monitored while they viewed displays containing 6 exemplars from one of several categories of everyday items (belts, sunglasses, shirts, shoes), with a column of 3 items presented on the left and another column of 3 items presented on the right side of the display. Participants were either required to choose which of the two sets of 3 items was the most expensive (2-AFC) or which of the 6 items was the most expensive (6-AFC). Importantly, the stimulus display, and the relevant stimulus dimension, were held constant across conditions. Consistent with the hypothesis of top-down control of eye movements during visual decision making, we documented greater selectivity in the processing of stimulus information in the 6-AFC than the 2-AFC decision. In addition, strong spatial biases in looking behavior were demonstrated, but these biases were largely insensitive to the instructional manipulation, and did not substantially influence participants' choices. |
Jibo He; Jason S. McCarley Executive working memory load does not compromise perceptual processing during visual search: Evidence from additive factors analysis Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 2, pp. 308–316, 2010. @article{He2010, Executive working memory (WM) load reduces the efficiency of visual search, but the mechanisms by which this occurs are not fully known. In the present study, we assessed the effect of executive load on perceptual processing during search. Participants performed a serial oculomotor search task, looking for a circle target among gapped-circle distractors. The participants performed the task under high and low executive WM load, and the visual quality (Experiment 1) or discriminability of targets and distractors (Experiment 2) was manipulated across trials. By the logic of the additive factors method (Sternberg, 1969, 1998), if WM load compromises the quality of perceptual processing during visual search, manipulations of WM load and perceptual processing difficulty should produce nonadditive effects. Contrary to this prediction, the effects of WM load and perceptual difficulty were additive. The results imply that executive WM load does not degrade perceptual analysis during visual search. |
Mary Hegarty; Matt S. Canham; Sara I. Fabrikant Thinking about the weather: How display salience and knowledge affect performance in a graphic inference task Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 36, no. 1, pp. 37–53, 2010. @article{Hegarty2010, Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of domain knowledge were investigated by examining performance and eye fixations before and after participants learned relevant meteorological principles. Map design and knowledge interacted such that salience had no effect on performance before participants learned the meteorological principles; however, after learning, participants were more accurate if they viewed maps that made task-relevant information more visually salient. Effects of display design on task performance were somewhat dissociated from effects of display design on eye fixations. The results support a model in which eye fixations are directed primarily by top-down factors (task and domain knowledge). They suggest that good display design facilitates performance not just by guiding where viewers look in a complex display but also by facilitating processing of the visual features that represent task-relevant information at a given display location. |
Angela Heine; Verena Thaler; Sascha Tamm; Stefan Hawelka; Michael Schneider; Joke Torbeyns; Bert Smedt; Lieven Verschaffel; Elsbeth Stern; Arthur M. Jacobs What the eyes already 'know': Using eye movement measurement to tap into children's implicit numerical magnitude representations Journal Article In: Infant and Child Development, vol. 19, no. 2, pp. 175–186, 2010. @article{Heine2010, To date, a number of studies have demonstrated the existence of mismatches between children's implicit and explicit knowledge at certain points in development that become manifest by their gestures and gaze orientation in different problem solving contexts. Stimulated by this research, we used eye movement measurement to investigate the development of basic knowledge about numerical magnitude in primary school children. Sixty-six children from grades one to three (i.e. 6–9 years) were presented with two parallel versions of a number line estimation task of which one was restricted to behavioural measures, whereas the other included the recording of eye movement data. The results of the eye movement experiment indicate a quantitative increase as well as a qualitative change in children's implicit knowledge about numerical magnitudes in this age group that precedes the overt, that is, behavioural, demonstration of explicit numerical knowledge. The finding that children's eye movements reveal substantially more about the presence of implicit precursors of later explicit knowledge in the numerical domain than classical approaches suggests further exploration of eye movement measurement as a potential early assessment tool of individual achievement levels in numerical processing. |
Stephen J. Heinen; Aarlenne Zein Khan; Philippe Lefevre; G. Blohm The default allocation of attention is broadly ahead of smooth pursuit Journal Article In: Journal of Vision, vol. 10, no. 13, pp. 1–17, 2010. @article{Heinen2010, When moving through our environment, it is vital to preferentially process positions on our future path in order to react quickly to critical situations. During smooth pursuit, attention may be directed ahead with either a focused locus or a broad bias. We examined the 2D spatial extent of attention during a smooth pursuit task using both saccade (SRT) and manual (MRT) reaction times as measures of attentional allocation. Targets were flashed at various locations around current eye position while subjects pursued a moving target. Subjects made a saccade or pressed a button as soon as they perceived the target. Both SRTs and MRTs were shorter for targets flashed ahead of the direction of pursuit than for targets flashed behind it, across the half of the visual field ahead of the pursuit direction. Furthermore, we found an increase specific to SRTs at small target eccentricities directly ahead of pursuit, which may be related to an additional saccade trigger strategy; small saccades take longer to execute if smooth pursuit brings the eyes close to the target. In summary, both SRTs and MRTs revealed that attention is by default broadly allocated in the visual hemi-field ahead of the line of sight during smooth pursuit eye movements. This attentional bias may serve a predictive purpose for facilitating the processing of upcoming events. |
Robert D. Gordon; Sarah D. Vollmer Episodic representation of diagnostic and nondiagnostic object colour Journal Article In: Visual Cognition, vol. 18, no. 5, pp. 728–750, 2010. @article{Gordon2010, In three experiments, we investigated transsaccadic object file representations. In each experiment, participants moved their eyes from a central fixation cross to a saccade target located between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials in which the target identity matched one of the preview objects, its color either matched or did not match the previewed object color. The results indicated that color changes disrupt perceptual continuity, but only for the class of objects for which color is diagnostic of object identity. When the color is not integral to identifying an object (for example, when the object is a letter or an object without a characteristic color), object continuity is preserved regardless of changes to the object's color. These results suggest that object features that are important for defining the object are incorporated into its episodic representation. Furthermore, the results are consistent with previous work showing that the quality of a feature's representation determines its importance in preserving continuity. |
Harold H. Greene; Alexander Pollatsek; Kathleen M. Masserang; Yen Ju Lee; Keith Rayner Directional processing within the perceptual span during visual target localization Journal Article In: Vision Research, vol. 50, no. 13, pp. 1274–1282, 2010. @article{Greene2010, In order to understand how processing occurs within the effective field of vision (i.e. perceptual span) during visual target localization, a gaze-contingent moving mask procedure was used to disrupt parafoveal information pickup along the vertical and the horizontal visual fields. When the mask was present within the horizontal visual field, there was a relative increase in saccade probability along the nearby vertical field, but not along the opposite horizontal field. When the mask was present either above or below fixation, saccades downwards were reduced in magnitude. This pattern of data suggests that parafoveal information selection (indexed by probability of saccade direction) and the extent of spatial parafoveal processing in a given direction (indexed by saccade amplitude) may be controlled by somewhat different mechanisms. |
Daniel J. Grodner; Natalie M. Klein; Kathleen M. Carbary; Michael K. Tanenhaus "Some," and possibly all, scalar inferences are not delayed: Evidence for immediate pragmatic enrichment Journal Article In: Cognition, vol. 116, no. 1, pp. 42–55, 2010. @article{Grodner2010, Scalar inferences are commonly generated when a speaker uses a weaker expression rather than a stronger alternative, e.g., John ate some of the apples implies that he did not eat them all. This article describes a visual-world study investigating how and when perceivers compute these inferences. Participants followed spoken instructions containing the scalar quantifier some directing them to interact with one of several referential targets (e.g., Click on the girl who has some of the balloons). Participants fixated on the target compatible with the implicated meaning of some and avoided a competitor compatible with the literal meaning prior to a disambiguating noun. Further, convergence on the target was as fast for some as for the non-scalar quantifiers none and all. These findings indicate that the scalar inference is computed immediately and is not delayed relative to the literal interpretation of some. It is argued that previous demonstrations that scalar inferences increase processing time are not necessarily due to delays in generating the inference itself, but rather arise because integrating the interpretation of the inference with relevant information in the context may require additional time. With sufficient contextual support, processing delays disappear. |
Martin Groen; Jan Noyes Solving problems: How can guidance concerning task-relevancy be provided? Journal Article In: Computers in Human Behavior, vol. 26, no. 6, pp. 1318–1326, 2010. @article{Groen2010, The analysis of eye movements of people working on problem solving tasks has enabled a more thorough understanding than would have been possible with a traditional analysis of cognitive behavior. Recent studies report that influencing 'where we look' can affect task performance. However, some of the studies that reported these results have shortcomings, namely, it is unclear whether the reported effects are the result of 'attention guidance' or an effect of highlighting display elements alone; second, the selection of the highlighted display elements was based on subjective methods which could have introduced bias. In the study reported here, two experiments are described that attempt to address these shortcomings. Experiment 1 investigates the relative contribution of each display element to successful task realization and does so with an objective analysis method, namely signal detection analysis. Experiment 2 examines whether any performance effects of highlighting are due to foregrounding intrinsic task-relevant aspects or whether they are a result of the act of highlighting in itself. Results show that the chosen objective method is effective and that highlighting the display element thus identified improves task performance significantly. These findings are not an effect of the highlighting per se and thus indicate that the highlighted element is conveying task-relevant information. These findings improve on previous results as the objective selection and analysis methods reduce potential bias and provide a more reliable input to the design and provision of computer-based problem solving support. |
Amanda L. Gamble; Ronald M. Rapee The time-course of attention to emotional faces in social phobia Journal Article In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 41, no. 1, pp. 39–44, 2010. @article{Gamble2010, This study investigated the time-course of attentional bias in socially phobic (SP) and non-phobic (NP) adults. Participants viewed angry and happy faces paired with neutral faces (i.e., face-face pairs) and angry, happy and neutral faces paired with household objects (i.e., face-object pairs) for 5000 ms. Eye movement (EM) was measured throughout to assess biases in early and sustained attention. Attentional bias occurred only for face-face pairs. SP adults were vigilant for angry faces relative to neutral faces in the first 500 ms of the 5000 ms exposure, relative to NP adults. SP adults were also vigilant for happy faces over 500 ms, although there were no group-based differences in attention to happy-neutral face pairs. There were no group differences in attention to faces throughout the remainder of the exposure. Results suggest that social phobia is characterised by early vigilance for social cues with no bias in subsequent processing. |
Joy J. Geng; Nicholas E. DiQuattro Attentional capture by a perceptually salient non-target facilitates target processing through inhibition and rapid rejection Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–12, 2010. @article{Geng2010, Perceptually salient distractors typically interfere with target processing in visual search situations. Here we demonstrate that a perceptually salient distractor that captures attention can nevertheless facilitate task performance if the observer knows that it cannot be the target. Eye-position data indicate that facilitation is achieved by two strategies: inhibition when the first saccade was directed to the target, and rapid rejection when the first saccade was captured by the salient distractor. Both mechanisms relied on the distractor being perceptually salient and not just perceptually different. The results demonstrate how bottom-up attentional capture can play a critical role in constraining top-down attentional selection at multiple stages of processing throughout a single trial. |
Eckart Zimmermann; Markus Lappe Motor signals in visual localization Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–11, 2010. @article{Zimmermann2010, We demonstrate a strong sensory-motor coupling in visual localization in which experimental modification of the control of saccadic eye movements leads to an associated change in the perceived location of objects. Amplitudes of saccades to peripheral targets were altered by saccadic adaptation, induced by an artificial step of the saccade target during the eye movement, which leads the oculomotor system to recalibrate saccade parameters. Increasing saccade amplitudes induced concurrent shifts in perceived location of visual objects. The magnitude of perceptual shift depended on the size and persistence of errors between intended and actual saccade amplitudes. This tight agreement between the change of eye movement control and the change of localization shows that perceptual space is shaped by motor knowledge rather than simply constructed from visual input. |
Jan Zwickel; Melissa L.-H. Võ How the presence of persons biases eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 2, pp. 257–262, 2010. @article{Zwickel2010, We investigated modulation of gaze behavior of observers viewing complex scenes that included a person. To assess spontaneous orientation-following, and in contrast to earlier studies, we did not make the person salient via instruction or low-level saliency. Still, objects that were referred to by the orientation of the person were visited earlier, more often, and longer than when they were not referred to. Analysis of fixation sequences showed that the number of saccades to the cued and uncued objects differed only for saccades that started from the head region, but not for saccades starting from a control object or from a body region. We therefore argue that viewing a person leads to an increase in spontaneous following of the person's viewing direction even when the person plays no role in scene understanding and is not made prominent. |
Noriko Yamagishi; Stephen J. Anderson; Mitsuo Kawato The observant mind: Self-awareness of attentional status Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 277, no. 1699, pp. 3421–3426, 2010. @article{Yamagishi2010, Visual perception is dependent not only on low-level sensory input but also on high-level cognitive factors such as attention. In this paper, we sought to determine whether attentional processes can be internally monitored for the purpose of enhancing behavioural performance. To do so, we developed a novel paradigm involving an orientation discrimination task in which observers had the freedom to delay target presentation, by any amount required, until they judged their attentional focus to be complete. Our results show that discrimination performance is significantly improved when individuals self-monitor their level of visual attention and respond only when they perceive it to be maximal. Although target delay times varied widely from trial to trial (range 860 ms–12.84 s), we show that their distribution is Gaussian when plotted on a reciprocal latency scale. We further show that the neural basis of the delay times for judging attentional status is well explained by a linear rise-to-threshold model. We conclude that attentional mechanisms can be self-monitored for the purpose of enhancing human decision-making processes, and that the neural basis of such processes can be understood in terms of a simple, yet broadly applicable, linear rise-to-threshold model. |
Ming Yan; Reinhold Kliegl; Eike M. Richter; Antje Nuthmann; Hua Shu Flexible saccade-target selection in Chinese reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 4, pp. 705–725, 2010. @article{Yan2010, As Chinese is written without orthographical word boundaries (i.e., spaces), it is unclear whether saccade targets are selected on the basis of characters or words and whether saccades are aimed at the beginning or the centre of words. Here, we report an experiment where 30 Chinese readers read 150 sentences while their eye movements were monitored. They exhibited a strong tendency to fixate at the word centre in single-fixation cases and at the word beginning in multiple-fixation cases. Different from spaced alphabetic script, initial fixations falling at the end of words were no more likely to be followed by a refixation than initial fixations at word centre. Further, single fixations were shorter than first fixations in two-fixation cases, which is opposite to what is found in Roman script. We propose that Chinese readers dynamically select the beginning or centre of words as saccade targets depending on failure or success with segmentation of parafoveal word boundaries. |
Ming Yan; Reinhold Kliegl; Hua Shu; Jinger Pan; Xiaolin Zhou Parafoveal load of word n+1 modulates preprocessing effectiveness of word n+2 in Chinese reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1669–1676, 2010. @article{Yan2010a, Preview benefits (PBs) from two words to the right of the fixated one (i.e., word N + 2) and associated parafoveal-on-foveal effects are critical for proposals of distributed lexical processing during reading. This experiment examined parafoveal processing during reading of Chinese sentences, using a boundary manipulation of N + 2-word preview with low- and high-frequency words N + 1. The main findings were (a) an identity PB for word N + 2 that was (b) primarily observed when word N + 1 was of high frequency (i.e., an interaction between frequency of word N + 1 and PB for word N + 2), and (c) a parafoveal-on-foveal frequency effect of word N + 1 for fixation durations on word N. We discuss implications for theories of serial attention shifts and parallel distributed processing of words during reading. |
Qing Yang; Marine Vernet; Christophe Orssaud; Pierre Bonfils; Alain Londero; Zoï Kapoula Central crosstalk for somatic tinnitus: Abnormal vergence eye movements Journal Article In: PLoS ONE, vol. 5, no. 7, pp. e11845, 2010. @article{Yang2010, Background: Frequent oculomotor problems on orthoptic testing have been reported in patients with tinnitus. This study uses objective recordings to examine vergence eye movements in patients with somatic tinnitus who are able to modify their subjective tinnitus percept by various movements, such as jaw, neck, eye movements or skin pressure. Methods: Vergence eye movements were recorded with the EyeLink II video system in 15 (23–63 years) control adults and 19 (36–62 years) subjects with somatic tinnitus. Findings: 1) Accuracy of divergence but not of convergence was lower in subjects with somatic tinnitus than in control subjects. 2) Vergence duration was longer and peak velocity was lower in subjects with somatic tinnitus than in control subjects. 3) The number of embedded saccades and the amplitude of saccades coinciding with the peak velocity of vergence were higher for tinnitus subjects. Yet, saccades did not increase peak velocity of vergence for tinnitus subjects, but they did so for controls. 4) In contrast, there was no significant difference of vergence latency between these two groups. Interpretation: The results suggest dysfunction of vergence areas involving cortical-brainstem-cerebellar circuits. We hypothesize that central auditory dysfunction related to tinnitus percept could trigger mild cerebellar-brainstem dysfunction or that tinnitus and vergence dysfunction could both be manifestations of mild cortical-brainstem-cerebellar syndrome reflecting abnormal cross-modality interactions between vergence eye movements and auditory signals. |
Hang Zhang; Camille Morvan; Laurence T. Maloney Gambling in the visual periphery: A conjoint-measurement analysis of human ability to judge visual uncertainty Journal Article In: PLoS Computational Biology, vol. 6, no. 12, pp. e1001023, 2010. @article{Zhang2010a, Recent work in motor control demonstrates that humans take their own motor uncertainty into account, adjusting the timing and goals of movement so as to maximize expected gain. Visual sensitivity varies dramatically with retinal location and target, and models of optimal visual search typically assume that the visual system takes retinal inhomogeneity into account in planning eye movements. Such models can then use the entire retina rather than just the fovea to speed search. Using a simple decision task, we evaluated human ability to compensate for retinal inhomogeneity. We first measured observers' sensitivity for targets, varying contrast and eccentricity. Observers then repeatedly chose between targets differing in eccentricity and contrast, selecting the one they would prefer to attempt: e.g., a low contrast target at 2° versus a high contrast target at 10°. Observers knew they would later attempt some of their chosen targets and receive rewards for correct classifications. We evaluated performance in three ways. Equivalence: Do observers' judgments agree with their actual performance? Do they correctly trade off eccentricity and contrast and select the more discriminable target in each pair? Transitivity: Are observers' choices self-consistent? Dominance: Do observers understand that increased contrast improves performance? Decreased eccentricity? All observers exhibited patterned failures of equivalence, and seven out of eight observers failed transitivity. There were significant but small failures of dominance. All these failures together reduced their winnings by 10%–18%. |
Li Zhang; Wu Li Perceptual learning beyond retinotopic reference frame Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 36, pp. 15969–15974, 2010. @article{Zhang2010, Repetitive experience with the same visual stimulus and task can remarkably improve behavioral performance on the task. This well-known perceptual-learning phenomenon is usually specific to the trained retinal- or visual-field location, which is taken as an indication of plastic changes in retinotopic visual areas. In previous studies of perceptual learning, however, a change in stimulus location on the retina is accompanied by positional changes of the stimulus in nonretinotopic frames of reference, such as relative to the head and other objects. It is unclear, therefore, whether the putative location specificity is exclusively retinotopic or if it could also depend on nonretinotopic representation of the stimulus, which is particularly important for multisensory and sensorimotor integration as well as for maintenance of stable visual percepts. Here, by manipulating subjects' gaze direction to control spatial and retinal locations of stimuli independently, we found that, when the stimulated retinal regions were held constant, the improvement with training in motion-direction discrimination of two successively displayed stimuli was restricted to the relative spatial position of the stimuli but independent of their absolute locations in head- and world-centered frames. These findings indicate location specificity of perceptual learning beyond the retinotopic frame of reference, suggesting a pliable spatiotopic mechanism that can be specifically shaped by experience for better spatiotemporal integration of the learned stimuli. |
Ting Zhang; Lu Qi Xiao; Stanley A. Klein; Dennis M. Levi; Cong Yu Decoupling location specificity from perceptual learning of orientation discrimination Journal Article In: Vision Research, vol. 50, no. 4, pp. 368–374, 2010. @article{Zhang2010b, Perceptual learning of orientation discrimination is reported to be precisely specific to the trained retinal location. This specificity is often taken as evidence for localizing the site of orientation learning to retinotopic cortical areas V1/V2. However, the extant physiological evidence for training-improved orientation tuning in V1/V2 neurons is controversial and weak. Here we demonstrate substantial transfer of orientation learning across retinal locations, either from the fovea to the periphery or amongst peripheral locations. Most importantly, we found that a brief pretest at a peripheral location before foveal training enabled complete transfer of learning, so that additional practice at that peripheral location resulted in no further improvement. These results indicate that location specificity in orientation learning depends on the particular training procedures, and is not necessarily a genuine property of orientation learning. We suggest that non-retinotopic high brain areas may be responsible for orientation learning, consistent with the extant neurophysiological data. |
Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor Perisaccadic stereo depth with zero retinal disparity Journal Article In: Current Biology, vol. 20, no. 13, pp. 1176–1181, 2010. @article{Zhang2010c, When an object is viewed binocularly, unequal perspective projections of the two eyes' half images (binocular disparity) provide a cue for the sensation of stereo depth. For almost 200 years, binocular disparity has remained synonymous with retinal disparity [1], which is computed by subtracting the distance of each half image from its respective fovea [2]. However, binocular disparity could also be coded in headcentric instead of retinal coordinates, by combining eye position and retinal image position in each eye and representing disparity as differences between visual directions of half images relative to the head [3]. Although these two disparity-coding schemes suggest very different neural mechanisms, both offer identical predictions for stereopsis in almost every viewing condition, making it difficult to empirically distinguish between them. We designed a novel stimulus that uses perisaccadic spatial distortion [4] to generate inconsistency between headcentric and retinal disparity. Foveal half images flashed asynchronously just before a horizontal saccade have zero retinal disparity, yet they produce a sensation of depth consistent with a nonzero headcentric disparity. Furthermore, this headcentric disparity can cancel and reverse the perceived depth stimulated with nonzero retinal disparity. This is the first demonstration that a coding scheme other than retinal disparity has a role in human stereopsis. |
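The two disparity-coding schemes the abstract contrasts can be written out directly from its definitions: retinal disparity subtracts the two half-images' distances from their respective foveas, while headcentric disparity first adds each eye's position to its retinal image position to obtain a visual direction relative to the head. The numeric eye positions below are hypothetical, chosen only to mimic the perisaccadic situation described.

```python
def retinal_disparity(x_left_retina, x_right_retina):
    """Retinal disparity: difference between the half-images'
    retinal positions, each measured from its own fovea (deg)."""
    return x_left_retina - x_right_retina

def headcentric_disparity(x_left_retina, x_right_retina,
                          left_eye_pos, right_eye_pos):
    """Headcentric disparity: difference between the half-images'
    visual directions relative to the head, where each direction is
    the eye's position plus the retinal image position (deg)."""
    dir_left = x_left_retina + left_eye_pos
    dir_right = x_right_retina + right_eye_pos
    return dir_left - dir_right

# The perisaccadic stimulus: both half-images fall on the fovea
# (zero retinal disparity), but because the flashes are asynchronous
# around a saccade, the registered eye positions differ between the
# two flashes, yielding a nonzero headcentric disparity.
r = retinal_disparity(0.0, 0.0)
h = headcentric_disparity(0.0, 0.0, left_eye_pos=12.0, right_eye_pos=10.5)
```

Under retinal coding the stimulus predicts zero depth; under headcentric coding it predicts depth corresponding to the eye-position difference, which is the dissociation the experiment exploits.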
Keith J. Yoder; Matthew K. Belmonte Combining computer game-based behavioural experiments with high-density EEG and infrared gaze tracking Journal Article In: Journal of Visualized Experiments, vol. 46, pp. 1–10, 2010. @article{Yoder2010, Experimental paradigms are valuable insofar as the timing and other parameters of their stimuli are well specified and controlled, and insofar as they yield data relevant to the cognitive processing that occurs under ecologically valid conditions. These two goals often are at odds, since well controlled stimuli often are too repetitive to sustain subjects' motivation. Studies employing electroencephalography (EEG) are often especially sensitive to this dilemma between ecological validity and experimental control: attaining sufficient signal-to-noise in physiological averages demands large numbers of repeated trials within lengthy recording sessions, limiting the subject pool to individuals with the ability and patience to perform a set task over and over again. This constraint severely limits researchers' ability to investigate younger populations as well as clinical populations associated with heightened anxiety or attentional abnormalities. Even adult, non-clinical subjects may not be able to achieve their typical levels of performance or cognitive engagement: an unmotivated subject for whom an experimental task is little more than a chore is not the same, behaviourally, cognitively, or neurally, as a subject who is intrinsically motivated and engaged with the task. A growing body of literature demonstrates that embedding experiments within video games may provide a way between the horns of this dilemma between experimental control and ecological validity. The narrative of a game provides a more realistic context in which tasks occur, enhancing their ecological validity (Chaytor & Schmitter-Edgecombe, 2003). Moreover, this context provides motivation to complete tasks. 
In our game, subjects perform various missions to collect resources, fend off pirates, intercept communications or facilitate diplomatic relations. In so doing, they also perform an array of cognitive tasks, including a Posner attention-shifting paradigm (Posner, 1980), a go/no-go test of motor inhibition, a psychophysical motion coherence threshold task, the Embedded Figures Test (Witkin, 1950, 1954) and a theory-of-mind (Wimmer & Perner, 1983) task. The game software automatically registers game stimuli and subjects' actions and responses in a log file, and sends event codes to synchronise with physiological data recorders. Thus the game can be combined with physiological measures such as EEG or fMRI, and with moment-to-moment tracking of gaze. Gaze tracking can verify subjects' compliance with behavioural tasks (e.g. fixation) and overt attention to experimental stimuli, and also physiological arousal as reflected in pupil dilation (Bradley et al., 2008). At great enough sampling frequencies, gaze tracking may also help assess covert attention as reflected in microsaccades - eye movements that are too small to foveate a new object, but are as rapid in onset and have the same relationship between angular distance and peak velocity as do saccades that traverse greater distances. The distribution of directions of microsaccades correlates with the (otherwise) covert direction of attention (Hafed & Clark, 2002). |
Theodoros P. Zanos; Patrick J. Mineault; Christopher C. Pack Removal of spurious correlations between spikes and local field potentials Journal Article In: Journal of Neurophysiology, vol. 105, pp. 474–486, 2010. @article{Zanos2010, Single neurons carry out important sensory and motor functions related to the larger networks in which they are embedded. Understanding the relationships between single-neuron spiking and network activity is therefore of great importance and the latter can be readily estimated from low-frequency brain signals known as local field potentials (LFPs). In this work we examine a number of issues related to the estimation of spike and LFP signals. We show that spike trains and individual spikes contain power at the frequencies that are typically thought to be exclusively related to LFPs, such that simple frequency-domain filtering cannot be effectively used to separate the two signals. Ground-truth simulations indicate that the commonly used method of estimating the LFP signal by low-pass filtering the raw voltage signal leads to artifactual correlations between spikes and LFPs and that these correlations exert a powerful influence on popular metrics of spike–LFP synchronization. Similar artifactual results were seen in data obtained from electrophysiological recordings in macaque visual cortex, when low-pass filtering was used to estimate LFP signals. In contrast, LFP tuning curves in response to sensory stimuli do not appear to be affected by spike contamination, either in simulations or in real data. To address the issue of spike contamination, we devised a novel Bayesian spike removal algorithm and confirmed its effectiveness in simulations and by applying it to the electrophysiological data. The algorithm, based on a rigorous mathematical framework, outperforms other methods of spike removal on most metrics of spike–LFP correlations. 
Following application of this spike removal algorithm, many of our electrophysiological recordings continued to exhibit spike–LFP correlations, confirming previous reports that such relationships are a genuine aspect of neuronal activity. Overall, these results show that careful preprocessing is necessary to remove spikes from LFP signals, but that when effective spike removal is used, spike–LFP correlations can potentially yield novel insights about brain function. |
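The filtering artifact described above can be illustrated with a toy ground-truth simulation (this sketch is not the authors' Bayesian removal algorithm): a brief spike event has broadband power, so low-pass filtering the raw voltage leaves a residue of every spike in the estimated LFP, and that residue is by construction correlated with the spike train.

```python
import numpy as np

fs = 1000.0                               # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

lfp_true = np.sin(2 * np.pi * 5 * t)      # 5 Hz "true" LFP
spikes = np.zeros_like(t)
spikes[rng.choice(t.size, 20, replace=False)] = 50.0  # 1-ms spike events
raw = lfp_true + spikes                   # recorded voltage

def lowpass_fft(x, cutoff_hz, fs):
    """Ideal low-pass filter via the FFT (illustrative only)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=x.size)

lfp_est = lowpass_fft(raw, cutoff_hz=100.0, fs=fs)
# By linearity, the estimation error equals the low-pass-filtered
# spike train: the spikes' sub-100-Hz power leaking into the LFP.
residue = lfp_est - lowpass_fft(lfp_true, cutoff_hz=100.0, fs=fs)
```

The residue peaks at the spike times, so any spike–LFP synchronization metric computed on `lfp_est` inherits a spurious correlation with the spike train, which is the artifact the authors' preprocessing is designed to eliminate.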
Gregory J. Zelinsky; Andrei Todor The role of "rescue saccades" in tracking objects through occlusions Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 29–29, 2010. @article{Zelinsky2010, We hypothesize that our ability to track objects through occlusions is mediated by timely assistance from gaze in the form of "rescue saccades"-eye movements to tracked objects that are in danger of being lost due to impending occlusion. Observers tracked 2-4 target sharks (out of 9) for 20 s as they swam through a rendered 3D underwater scene. Targets were either allowed to enter into occlusions (occlusion trials) or not (no occlusion trials). Tracking accuracy with 2-3 targets was 92% regardless of target occlusion but dropped to 74% on occlusion trials with four targets (no-occlusion trials remained accurate: 83%). This pattern was mirrored in the frequency of rescue saccades. Rescue saccades accompanied approximately 50% of the Track 2-3 target occlusions, but only 34% of the Track 4 occlusions. Their frequency also decreased with increasing distance between a target and the nearest other object, suggesting that it is the potential for target confusion that summons a rescue saccade, not occlusion itself. These findings provide evidence for a tracking system that monitors for events that might cause track loss (e.g., occlusions) and requests help from the oculomotor system to resolve these momentary crises. As the number of crises increases with the number of targets, some requests for help go unsatisfied, resulting in degraded tracking. |
Chia-Chien Wu; Oh-Sang Kwon; Eileen Kowler Fitts's Law and speed/accuracy trade-offs during sequences of saccades: Implications for strategies of saccadic planning Journal Article In: Vision Research, vol. 50, no. 21, pp. 2142–2157, 2010. @article{Wu2010, Strategies of saccadic planning must take into account both the required level of accuracy of the saccades, and the time and resources needed to plan and execute the movements. To determine relationships between accuracy and time, we studied sequences of saccades made to scan a set of stationary targets located at the corners of an imaginary square. Target separation and size varied. The time taken to complete saccadic sequences increased with the required level of precision, in agreement with the classical Fitts's Law (1954) relationship. This was mainly due to the use of error-correcting secondary saccades, whose frequency increased with target separation and decreased with target size. Increases in the time spent fixating near each target did not increase the accuracy of the next primary saccade in the sequence. Instead, secondary saccades were the principal means of correcting landing errors of primary saccades. The results are consistent with a scanning strategy that discourages careful planning of individual saccades in favor of increasing the rate of saccadic production (i.e., exploration), using secondary saccades as needed to correct saccadic landing errors. |
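The Fitts's Law relationship the abstract refers to expresses completion time as a linear function of an index of difficulty that grows with target separation and shrinks with target size. A minimal sketch, with hypothetical intercept and slope coefficients rather than the study's fitted values:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Fitts's Law: MT = a + b * log2(2D / W).
    distance: center-to-center target separation
    width: target size (accuracy tolerance)
    a, b: empirical intercept and slope (hypothetical values here)."""
    index_of_difficulty = math.log2(2.0 * distance / width)  # bits
    return a + b * index_of_difficulty

# Increasing the separation or shrinking the target raises the
# index of difficulty and hence the predicted completion time.
t_easy = fitts_time(distance=4.0, width=2.0)   # ID = 2 bits
t_hard = fitts_time(distance=8.0, width=0.5)   # ID = 5 bits
```

In the saccade-sequence setting, the time cost of higher difficulty shows up not as slower individual saccades but as additional corrective secondary saccades, which is the mechanism the abstract identifies.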
Lester C. Loschky; Bruce C. Hansen; Amit Sethi; Tejaswi N. Pydimarri The role of higher order image statistics in masking scene gist recognition Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 2, pp. 427–444, 2010. @article{Loschky2010, In the present article, we investigated whether higher order image statistics, which are known to be carried by the Fourier phase spectrum, are sufficient to affect scene gist recognition. In Experiment 1, we compared the scene gist masking strength of four masking image types that varied in their degrees of second- and higher order relationships: normal scene images, scene textures, phase-randomized scene images, and white noise. Masking effects were the largest for masking images that possessed significant higher order image statistics (scene images and scene textures) as compared with masking images that did not (phase-randomized scenes and white noise), with scene image masks yielding the largest masking effects. In a control study, we eliminated all differences in the second-order statistics of the masks, while maintaining differences in their higher order statistics by comparing masking by scene textures rather than by their phase-randomized versions, and showed that the former produced significantly stronger gist masking. Experiments 2 and 3 were designed to test whether conceptual masking could account for the differences in the strength of the scene texture and phase-randomized masks used in Experiment 1, and revealed that the recognizability of scene texture masks explained just 1% of their masking variance. Together, the results suggest that (1) masks containing the higher order statistical structure of scenes are more effective at masking scene gist processing than are masks lacking such structure, and (2) much of the disruption of scene gist recognition that one might be tempted to attribute to conceptual masking is due to spatial masking. |