EyeLink Cognition Publications
All EyeLink cognition and perception research publications up to 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as visual search, scene perception, face processing, and so on. You can also search for individual author names. If we have missed any EyeLink cognition or perception article, please email us!
2010 |
Jessica K. Hall; Samuel B. Hutton; Michael J. Morgan Sex differences in scanning faces: Does attention to the eyes explain female superiority in facial expression recognition? Journal Article In: Cognition and Emotion, vol. 24, no. 4, pp. 629–637, 2010. @article{Hall2010, Previous meta-analyses support a female advantage in decoding non-verbal emotion (Hall, 1978, 1984), yet the mechanisms underlying this advantage are not understood. The present study examined whether the female advantage is related to greater female attention to the eyes. Eye-tracking techniques were used to measure attention to the eyes in 19 males and 20 females during a facial expression recognition task. Women were faster and more accurate in their expression recognition compared with men, and women looked more at the eyes than men. Positive relationships were observed between dwell time and number of fixations to the eyes and both accuracy of facial expression recognition and speed of facial expression recognition. These results support the hypothesis that the female advantage in facial expression recognition is related to greater female attention to the eyes. |
S. N. Hamid; B. Stankiewicz; Mary Hayhoe Gaze patterns in navigation: Encoding information in large-scale environments Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–11, 2010. @article{Hamid2010, We investigated the role of gaze in encoding of object landmarks in navigation. Gaze behavior was measured while participants learnt to navigate in a virtual large-scale environment in order to understand the sampling strategies subjects use to select visual information during navigation. The results showed a consistent sampling pattern. Participants preferentially directed gaze at a subset of the available object landmarks with a preference for object landmarks at the end of hallways and T-junctions. In a subsequent test of knowledge of the environment, we removed landmarks depending on how frequently they had been viewed. Removal of infrequently viewed landmarks had little effect on performance, whereas removal of the most viewed landmarks impaired performance substantially. Thus, gaze location during learning reveals the information that is selectively encoded, and landmarks at choice points are selected in preference to less informative landmarks. |
Ben M. Harvey; O. J. Braddick; A. Cowey In: Journal of Vision, vol. 10, no. 5, pp. 1–15, 2010. @article{Harvey2010, Our recent psychophysical experiments have identified differences in the spatial summation characteristics of pattern detection and position discrimination tasks performed with rotating, expanding, and contracting stimuli. Areas MT and MST are well established to be involved in processing these stimuli. fMRI results have shown retinotopic activation of area V3A depending on the location of the center of radial motion in vision. This suggests the possibility that V3A may be involved in position discrimination tasks with these motion patterns. Here we use repetitive transcranial magnetic stimulation (rTMS) over MT+ and a dorsomedial extrastriate region including V3A to try to distinguish between TMS effects on pattern detection and position discrimination tasks. If V3A were involved in position discrimination, we would expect to see effects on position discrimination tasks, but not pattern detection tasks, with rTMS over this dorsomedial extrastriate region. In fact, we could not dissociate TMS effects on the two tasks, suggesting that they are performed by the same extrastriate area, in MT+. |
Ryusuke Hayashi; Yuko Sugita; Shin'ya Nishida; Kenji Kawano How motion signals are integrated across frequencies: Study on motion perception and ocular following responses using multiple-slit stimuli Journal Article In: Journal of Neurophysiology, vol. 103, no. 1, pp. 230–243, 2010. @article{Hayashi2010, Visual motion signals, which are initially extracted in parallel at multiple spatial frequencies, are subsequently integrated into a unified motion percept. Cross-frequency integration plays a crucial role when directional information conflicts across frequencies due to such factors as occlusion. We investigated human observers' open-loop oculomotor tracking responses (ocular following responses, or OFRs) and the perceived motion direction in an idealized situation of occlusion—multiple-slits viewing (MSV)—in which a moving pattern is visible only through an array of slits. We also tested a more challenging viewing condition, contrast-alternating MSV (CA-MSV), in which the contrast polarity of the moving pattern alternates when it passes the slits. We found that changes in the distribution of the spectral content of the slit stimuli, introduced by variations of both the interval between the slits and the frame rate of the image stream, modulated the OFR and the reported motion direction in a rather complex manner. We show that those complex modulations could be explained by the weighted sum of the motion signal (motion contrast) of each spatiotemporal frequency. The estimated distribution of frequency weights (tuning maps) indicates that the cross-frequency integration of supra-threshold motion signals gives strong weight to low spatial frequency components (<0.25 cpd) for both OFR and motion perception. 
However, the tuning maps estimated with the MSV stimuli were significantly different from those estimated with the CA-MSV (and from those measured in a more direct manner using grating stimuli), suggesting that interfrequency interactions (e.g., an interaction producing speed-dependent tuning) were involved. |
Jibo He; Jason S. McCarley Executive working memory load does not compromise perceptual processing during visual search: Evidence from additive factors analysis Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 2, pp. 308–316, 2010. @article{He2010, Executive working memory (WM) load reduces the efficiency of visual search, but the mechanisms by which this occurs are not fully known. In the present study, we assessed the effect of executive load on perceptual processing during search. Participants performed a serial oculomotor search task, looking for a circle target among gapped-circle distractors. The participants performed the task under high and low executive WM load, and the visual quality (Experiment 1) or discriminability of targets and distractors (Experiment 2) was manipulated across trials. By the logic of the additive factors method (Sternberg, 1969, 1998), if WM load compromises the quality of perceptual processing during visual search, manipulations of WM load and perceptual processing difficulty should produce nonadditive effects. Contrary to this prediction, the effects of WM load and perceptual difficulty were additive. The results imply that executive WM load does not degrade perceptual analysis during visual search. |
Arvid Herwig; Miriam Beisert; Werner X. Schneider In: Journal of Vision, vol. 10, no. 5, pp. 1–10, 2010. @article{Herwig2010, Recent work indicates that covert visual attention and eye movements on the one hand, and covert visual attention and visual working memory on the other hand are closely interrelated. Two experiments address the question whether all three processes draw on the same spatial representations. Participants had to memorize a target location for a subsequent memory-guided saccade. During the memory interval, task-irrelevant distractors were briefly flashed on some trials either near or remote to the memory target. Results showed that the previously flashed distractors attract the saccade's landing position. However, attraction was found only if the distractor was presented within a sector of ±20° around the target axis, but not if the distractor was presented outside this sector. This effect strongly resembles the global effect in which saccades are directed to intermediate locations between a target and a simultaneously presented neighboring distractor stimulus. It is argued that covert visual attention, eye movements, and visual working memory recruit the same spatial mechanisms that can probably be ascribed to attentional priority maps. |
J. Stephen Higgins; Ranxiao Frances Wang A landmark effect in the perceived displacement of objects Journal Article In: Vision Research, vol. 50, no. 2, pp. 242–248, 2010. @article{Higgins2010, Perceiving the displacement of an object after a visual distraction is an essential ability to interact with the world. Previous research has shown a bias to perceive the first object seen after a saccade as stable and the second one as moving (landmark effect). The present study examines the generality and nature of this phenomenon. The landmark effect was observed in the absence of eye movements, when the two objects were obscured by a blank screen, a moving-pattern mask, or simply disappeared briefly before reappearing one after the other. The first reappearing object was not required to remain visible while the second object reappeared to induce the bias. The perceived direction of the displacement was mainly determined by the relative displacement of the two objects, suggesting that the landmark effect is primarily due to a landmark calibration mechanism. |
Yoriko Hirose Perception and memory across viewpoint changes in moving images Journal Article In: Journal of Vision, vol. 10, no. 4, pp. 1–19, 2010. @article{Hirose2010, Current understanding of scene perception derives largely from experiments using static scenes and psychological understanding of how moving images are processed is under-developed. We examined eye movement patterns and recognition memory performance as observers looked at short movies involving a change in viewpoint (a cut). At the time of the cut, four types of object property (color, position, identity and shape) were manipulated. Results show differential sensitivity to object property changes, reflected in both eye movement behavior after the cut and memory performance when object properties are remembered after viewing. When object properties change across a cut, memory is generally biased towards information present after the cut, except for position information which showed no bias. Our findings suggest that spatial information is represented differently to other forms of object information when viewing movies that include changes in viewpoint. |
Mieke Donk; Leroy Soesman Salience is only briefly represented: Evidence from probe-detection performance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 2, pp. 286–302, 2010. @article{Donk2010, Salient objects in the visual field tend to capture attention. The present study aimed to examine the time-course of salience effects using a probe-detection task. Eight experiments investigated how the salience of different orientation singletons affected probe reaction time as a function of stimulus onset asynchrony (SOA) between the presentation of a singleton display and a probe display. The results demonstrate that salience consistently affected probe reaction time at the shortest SOA. The effect of salience disappeared as SOA increased. These results suggest that contrary to the assumption of major theories on visual selection, salience is transiently represented in our visual system allowing the effects of salience on attentional selection to be only short-lived. |
Michael Dorr; T. Martinetz; Karl R. Gegenfurtner; E. Barth Variability of eye movements when viewing dynamic natural scenes Journal Article In: Journal of Vision, vol. 10, no. 10, pp. 1–17, 2010. @article{Dorr2010, How similar are the eye movement patterns of different subjects when free viewing dynamic natural scenes? We collected a large database of eye movements from 54 subjects on 18 high-resolution videos of outdoor scenes and measured their variability using the Normalized Scanpath Saliency, which we extended to the temporal domain. Even though up to about 80% of subjects looked at the same image region in some video parts, variability usually was much greater. Eye movements on natural movies were then compared with eye movements in several control conditions. "Stop-motion" movies had almost identical semantic content as the original videos but lacked continuous motion. Hollywood action movie trailers were used to probe the upper limit of eye movement coherence that can be achieved by deliberate camera work, scene cuts, etc. In a "repetitive" condition, subjects viewed the same movies ten times each over the course of 2 days. Results show several systematic differences between conditions both for general eye movement parameters such as saccade amplitude and fixation duration and for eye movement variability. Most importantly, eye movements on static images are initially driven by stimulus onset effects and later, more so than on continuous videos, by subject-specific idiosyncrasies; eye movements on Hollywood movies are significantly more coherent than those on natural movies. We conclude that the stimuli types often used in laboratory experiments, static images and professionally cut material, are not very representative of natural viewing behavior. All stimuli and gaze data are publicly available at http://www.inb.uni-luebeck.de/tools-demos/gaze. |
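For readers implementing the Normalized Scanpath Saliency metric named in the Dorr et al. abstract: NSS z-scores a saliency (or fixation-density) map and averages the standardized values at one subject's fixation locations, so that a score above zero means fixations land on above-average map regions. A minimal sketch follows; the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """NSS: z-score the map, then average its values at fixation points.

    saliency_map : 2-D array (e.g., a fixation-density map from other subjects)
    fixations    : iterable of (row, col) pixel coordinates of fixations
    """
    s = np.asarray(saliency_map, dtype=float)
    z = (s - s.mean()) / s.std()  # zero mean, unit standard deviation
    return float(np.mean([z[r, c] for r, c in fixations]))

# Toy example: a 5x5 map peaked at the centre; a fixation on the peak
# scores several standard deviations above the map mean.
smap = np.zeros((5, 5))
smap[2, 2] = 1.0
print(normalized_scanpath_saliency(smap, [(2, 2)]))  # ≈ 4.899
```

Extending the metric to the temporal domain, as the paper does, would amount to evaluating each fixation against the map for its own video frame rather than a single static map.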
Jacob Duijnhouwer; Bart Krekelberg; Albert V. Berg; Richard J. A. Wezel Temporal integration of focus position signal during compensation for pursuit in optic flow. Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–15, 2010. @article{Duijnhouwer2010, Observer translation results in optic flow that specifies heading. Concurrent smooth pursuit causes distortion of the retinal flow pattern for which the visual system compensates. The distortion and its perceptual compensation are usually modeled in terms of instantaneous velocities. However, apart from adding a velocity to the flow field, pursuit also incrementally changes the direction of gaze. The effect of gaze displacement on optic flow perception has received little attention. Here we separated the effects of velocity and gaze displacement by measuring the perceived two-dimensional focus position of rotating flow patterns during pursuit. Such stimuli are useful in the current context because the two effects work in orthogonal directions. As expected, the instantaneous pursuit velocity shifted the perceived focus orthogonally to the pursuit direction. Additionally, the focus was mislocalized in the direction of the pursuit. Experiments that manipulated the presentation duration, flow speed, and uncertainty of the focus location supported the idea that the latter component of mislocalization resulted from temporal integration of the retinal trajectory of the focus. Finally, a comparison of the shift magnitudes obtained in conditions with and without pursuit (but with similar retinal stimulation) suggested that the compensation for both effects uses extraretinal information. |
Wolfgang Einhäuser; Christof Koch; Olivia Carter Pupil dilation betrays the timing of decisions Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 18, 2010. @article{Einhaeuser2010, The notion of "mind-reading" by carefully observing another individual's physiological responses has recently become commonplace in popular culture, particularly in the context of brain imaging. The question remains, however, whether outwardly accessible physiological signals indeed betray a decision before a person voluntarily reports it. In one experiment we asked observers to push a button at any time during a 10-s period ("immediate overt response"). In a series of three additional experiments observers were asked to select one number from five sequentially presented digits but concealed their decision until the trial's end ("covert choice"). In these experiments observers either had to choose the digit themselves under conditions of reward and no reward, or were instructed which digit to select via an external cue provided at the time of the digit presentation. In all cases pupil dilation alone predicted the choice (timing of button response or chosen digit, respectively). Consideration of the average pupil-dilation responses, across all experiments, showed that this prediction of timing was distinct from a general arousal or reward-anticipation response. Furthermore, the pupil dilation appeared to reflect the post-decisional consolidation of the selected outcome rather than the pre-decisional cognitive appraisal component of the decision. Given the tight link between pupil dilation and norepinephrine levels during constant illumination, our results have implications beyond the tantalizing mind-reading speculations. These findings suggest that similar noradrenergic mechanisms may underlie the consolidation of both overt and covert decisions. |
Nick C. Ellis; Nuria Sagarra Learned attention effects in L2 temporal reference: The first hour and the next eight semesters Journal Article In: Language Learning, vol. 60, pp. 85–108, 2010. @article{Ellis2010, This article relates adults' difficulty acquiring foreign languages to the associative learning phenomena of cue salience, cue complexity, and the blocking of later experienced cues by earlier learned ones. It examines short- and long-term learned attention effects in adult acquisition of lexical (adverbs) and morphological cues (verbal inflections) for temporal reference in Latin (1 hr of controlled laboratory learning) and Spanish (three to eight semesters of classroom learning). Our experiments indicate that early adult learning is characterized by a general tendency to focus on lexical cues because of their physical salience in the input and their psychological salience resulting from their simplicity of form-function mapping and from learners' prior first language knowledge. Later, attention to verbal morphology is modulated by cue complexity and language experience: Acquisition is better in cases of cues of lesser complexity, speakers of morphologically rich native languages, and longer periods of study. Finally, instructional practices that emphasize morphological cues by means either of preexposure or typographical enhancement increase attention to inflections and thus block reliance on adverbial cues. |
David R. Evens; Casimir J. H. Ludwig Dual-task costs and benefits in anti-saccade performance Journal Article In: Experimental Brain Research, vol. 205, pp. 545–557, 2010. @article{Evens2010, It has been reported that anti-saccade performance is facilitated by diverting attention through a secondary task (Kristjánsson et al. in Nat Neurosci 4:1037–1042, 2001). This finding supports the idea that the withdrawal of resources that would be taken up by the erroneous movement plan makes it easier to overcome the tendency to look towards the imperative stimulus. We first report an attempt to replicate this finding. Four observers were extensively tested in an anti-saccade paradigm. The luminance of the fixation point or peripheral target was briefly increased or decreased. In the dual-task condition observers signalled the direction of the luminance change. In the single-task condition the discrimination stimulus was presented, but could be ignored as it required no response. We found an overall dual-task cost in anti-saccade latency, although some facilitation was observed in the accuracy. The discrepancy between the two studies was attributed to performance in the single-task condition. For latency facilitation to occur, performance should not be affected by the discrimination stimulus when it is task-irrelevant. We show that naive, untrained observers could not ignore this irrelevant visual event. If it occurred before the imperative movement signal, the event acted as a warning signal, speeding up anti-saccade generation. If it occurred after the imperative movement stimulus, it acted as a remote distractor and interfered with the generation of the correct movement. Under normal circumstances, these basic oculomotor effects operate in both single- and dual-task conditions. An overall dual-task cost rides on top of this latency modulation. 
This overall cost is best accounted for by an increase in the response criterion for saccade generation in the more demanding dual-task condition. |
Thérèse Collins; Tobias Heed; Karine Doré-Mazars; Brigitte Röder Presaccadic attention interferes with feature detection Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 111–117, 2010. @article{Collins2010b, Preparing a saccadic eye movement to a particular spatial location enhances the perception of visual targets at this location and decreases perception of nearby targets prior to movement onset. This effect has been termed the orientation of pre-saccadic attention. Here, we investigated whether pre-saccadic attention influenced the detection of a simple visual feature, a process that has been hypothesized to occur without the need for attention. Participants prepared a saccade to a cued location and detected the occurrence of a "pop-out" feature embedded in distracters at the same or different location. The results show that preparing a saccade to a given location decreased detection of features at non-aimed-for locations, suggesting that the selection of a location as the next saccade endpoint influences sensitivity to basic visual features across the visual field. |
Kenny R. Coventry; Dermot Lynott; Angelo Cangelosi; Lynn Monrouxe; Dan Joyce; Daniel C. Richardson Spatial language, visual attention, and perceptual simulation Journal Article In: Brain and Language, vol. 112, no. 3, pp. 202–213, 2010. @article{Coventry2010, Spatial language descriptions, such as The bottle is over the glass, direct the attention of the hearer to particular aspects of the visual world. This paper asks how they do so, and what brain mechanisms underlie this process. In two experiments employing behavioural and eye tracking methodologies we examined the effects of spatial language on people's judgements and parsing of a visual scene. The results underscore previous claims regarding the importance of object function in spatial language, but also show how spatial language differentially directs attention during examination of a visual scene. We discuss implications for existing models of spatial language, with associated brain mechanisms. |
Christopher D. Cowper-Smith; Esther Y. Y. Lau; Carl A. Helmick; Gail A. Eskes; David A. Westwood Neural coding of movement direction in the healthy human brain Journal Article In: PLoS ONE, vol. 5, no. 10, pp. e13330, 2010. @article{CowperSmith2010, Neurophysiological studies in monkeys show that activity of neurons in primary motor cortex (M1), pre-motor cortex (PMC), and cerebellum varies systematically with the direction of reaching movements. These neurons exhibit preferred direction tuning, where the level of neural activity is highest when movements are made in the preferred direction (PD), and gets progressively lower as movements are made at increasing degrees of offset from the PD. Using a functional magnetic resonance imaging adaptation (fMRI-A) paradigm, we show that PD coding does exist in regions of the human motor system that are homologous to those observed in non-human primates. Consistent with predictions of the PD model, we show adaptation (i.e., a lower level) of the blood oxygen level dependent (BOLD) time-course signal in M1, PMC, SMA, and cerebellum when consecutive wrist movements were made in the same direction (0 degrees offset) relative to movements offset by 90 degrees or 180 degrees. The BOLD signal in dorsolateral prefrontal cortex adapted equally in all movement offset conditions, militating against the possibility that the present results are the consequence of differential task complexity or attention to action in each movement offset condition. |
Kim Joris Boström; Anne Kathrin Warzecha Open-loop speed discrimination performance of ocular following response and perception Journal Article In: Vision Research, vol. 50, no. 9, pp. 870–882, 2010. @article{Bostroem2010, So far, it remains largely unresolved to what extent neuronal noise affects behavioral responses. Here, we investigate where in the human visual motion pathway noise originates that limits the performance of the entire system. In particular, we ask whether perception and eye movements are limited by a common noise source, or whether processing stages after the separation into different streams limit their performance. We use the ocular following response of human subjects and a simultaneously performed psychophysical paradigm to directly compare the perceptual and oculomotor systems with respect to their speed discrimination ability. Our results show that in the open-loop condition the perceptual system is superior to the oculomotor system and that the responses of both systems are not correlated. Two alternative conclusions can be drawn from these findings. Either the perceptual and oculomotor pathways are effectively separate, or the amount of post-sensory (motor) noise is not negligible in comparison to the amount of sensory noise. In view of well-established experimental findings and due to plausibility considerations, we favor the latter conclusion. |
Holly Bridge; Stephen L. Hicks; Jingyi Xie; Thomas W. Okell; Sabira K. Mannan; Iona Alexander; Alan Cowey; Christopher Kennard Visual activation of extra-striate cortex in the absence of V1 activation Journal Article In: Neuropsychologia, vol. 48, no. 14, pp. 4148–4154, 2010. @article{Bridge2010, When the primary visual cortex (V1) is damaged, there are a number of alternative pathways that can carry visual information from the eyes to extrastriate visual areas. Damage to the visual cortex from trauma or infarct is often unilateral, extensive and includes gray matter and white matter tracts, which can disrupt other routes to residual visual function. We report an unusual young patient, SBR, who has bilateral damage to the gray matter of V1, sparing the adjacent white matter and surrounding visual areas. Using functional magnetic resonance imaging (fMRI), we show that area MT+/V5 is activated bilaterally to visual stimulation, while no significant activity could be measured in V1. Additionally, the white matter tracts between the lateral geniculate nucleus (LGN) and V1 appear to show some degeneration, while the tracts between LGN and MT+/V5 do not differ from controls. Furthermore, the bilateral nature of the damage suggests that residual visual capacity does not result from strengthened interhemispheric connections. The very specific lesion in SBR suggests that the ipsilateral connection between LGN and MT+/V5 may be important for residual visual function in the presence of damage to V1. |
James R. Brockmole; Melissa L. -H. Võ Semantic memory for contextual regularities within and across scene categories: Evidence from eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 7, pp. 1803–1813, 2010. @article{Brockmole2010, When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes. |
Simona Buetti; Dirk Kerzel Effects of saccades and response type on the Simon effect: If you look at the stimulus, the Simon effect may be gone Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 11, pp. 2172–2189, 2010. @article{Buetti2010, The Simon effect has most often been investigated with key-press responses and eye fixation. In the present study, we asked how the type of eye movement and the type of manual response affect response selection in a Simon task. We investigated three eye movement instructions (spontaneous, saccade, and fixation) while participants performed goal-directed (i.e., reaching) or symbolic (i.e., finger-lift) responses. Initially, no oculomotor constraints were imposed, and a Simon effect was present for both response types. Next, eye movements were constrained. Participants had to either make a saccade toward the stimulus or maintain gaze fixed in the screen centre. While a congruency effect was always observed in reaching responses, it disappeared in finger-lift responses. We suggest that the redirection of saccades from the stimulus to the correct response location in noncorresponding trials contributes to the Simon effect. Because of eye-hand coupling, this occurred in a mandatory manner with reaching responses but not with finger-lift responses. Thus, the Simon effect with key-presses disappears when participants do what they typically do: look at the stimulus. |
Torsten Betz Investigating task-dependent top-down effects on overt visual attention Journal Article In: Journal of Vision, vol. 10, no. 3, pp. 1–14, 2010. @article{Betz2010, Different tasks can induce different viewing behavior, yet it is still an open question how, or whether at all, high-level task information interacts with the bottom-up processing of stimulus-related information. Two possible causal routes are considered in this paper. Firstly, the weak top-down hypothesis, according to which top-down effects are mediated by changes of feature weights in the bottom-up system. Secondly, the strong top-down hypothesis, which proposes that top-down information acts independently of the bottom-up process. To clarify the influences of these different routes, viewing behavior was recorded on web pages for three different tasks: free viewing, content awareness, and information search. The data reveal significant task-dependent differences in viewing behavior that are accompanied by minor changes in feature-fixation correlations. Extensive computational modeling shows that these small but significant changes are insufficient to explain the observed differences in viewing behavior. Collectively, the results show that task-dependent differences in the current setting are not mediated by a reweighting of features in the bottom-up hierarchy, ruling out the weak top-down hypothesis. Consequently, the strong top-down hypothesis is the most viable explanation for the observed data. |
Markus Bindemann Scene and screen center bias early eye movements in scene viewing Journal Article In: Vision Research, vol. 50, no. 23, pp. 2577–2587, 2010. @article{Bindemann2010, In laboratory studies of visual perception, images of natural scenes are routinely presented on a computer screen. Under these conditions, observers look at the center of scenes first, which might reflect an advantageous viewing position for extracting visual information. This study examined an alternative possibility, namely that initial eye movements are drawn towards the center of the screen. Observers searched visual scenes in a person detection task, while the scenes were aligned with the screen center or offset horizontally (Experiment 1). Two central viewing effects were observed, reflecting early visual biases to the scene and the screen center. The scene effect was modified by person content but is not specific to person detection tasks, while the screen bias cannot be explained by the low-level salience of a computer display (Experiment 2). These findings support the notion of a central viewing tendency in scene analysis, but also demonstrate a bias to the screen center that forms a potential artifact in visual perception experiments. |
Markus Bindemann; Christoph Scheepers; Heather J. Ferguson; A. Mike Burton Face, body, and center of gravity mediate person detection in natural scenes Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1477–1485, 2010. @article{Bindemann2010a, Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene, and only then to fixate on a person. When a person's face was rendered invisible in scenes, bodies were detected as quickly as faces without bodies, indicating that both are equally useful for person detection. Detection was optimized when face and body could be seen, but observers preferentially fixated faces, reinforcing the notion of a prominent role for the face in social perception. These findings have implications for claims of attention capture by faces in that they demonstrate a mediating influence of body cues and general scanning principles in natural scenes. |
Walter R. Boot; James R. Brockmole Irrelevant features at fixation modulate saccadic latency and direction in visual search Journal Article In: Visual Cognition, vol. 18, no. 4, pp. 481–491, 2010. @article{Boot2010, Do irrelevant visual features at fixation influence saccadic latency and direction? In a novel search paradigm, we found that when the feature of an irrelevant item at fixation matched the feature defining the target, oculomotor disengagement was delayed, and when it matched a salient distractor more eye movements were directed to that distractor. Latency effects were short-lived; direction effects persisted for up to 200 ms. We replicated latency results and demonstrated facilitated eye movements to the target when the fixated item matched the target colour. Irrelevant features of fixated items influence saccadic latency and direction and may be important considerations in predicting search behaviour. |
Ana B. Chica; Raymond M. Klein; Robert D. Rafal; Joseph B. Hopfinger Endogenous saccade preparation does not produce inhibition of return: Failure to replicate Rafal, Calabresi, Brennan, & Sciolto (1989) Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 5, pp. 1193–1206, 2010. @article{Chica2010, Inhibition of Return (IOR, slower reaction times to previously cued or inspected locations) is observed both when eye movements are prohibited, and when the eyes move to the peripheral location and back to the centre before the target appears. It has been postulated that both effects are generated by a common mechanism, the activation of the oculomotor system. In strong support of this claim, IOR is not observed when attention is oriented endogenously and covertly, but it has been observed when eye movements are endogenously prepared, even when not executed. Here, we aimed to replicate and extend the finding that endogenous saccade preparation produces IOR. In five experiments using different paradigms, IOR was not observed when participants endogenously prepared an eye movement. These results lead us to conclude that endogenous saccade preparation is not sufficient to produce IOR. |
Ana B. Chica; Tracy L. Taylor; Juan Lupiáñez; Raymond M. Klein Two mechanisms underlying inhibition of return Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 25–35, 2010. @article{Chica2010a, Inhibition of return (IOR) refers to slower reaction times to targets presented at previously stimulated or inspected locations. Taylor and Klein (J Exp Psychol Hum Percept Perform 26(5):1639-1656, 2000) showed that IOR can affect either attentional/perceptual or motor processes, depending on whether the oculomotor system is in a quiescent or in an activated state, respectively. If the motoric flavour of IOR is truly non-perceptual and non-attentional, no IOR should be observed when the responses to targets are not based on spatial information. In the present experiments, we demonstrated that when the eyes moved to the peripheral cue and back to centre before the target appeared (to generate the motoric flavour), IOR was observed in detection tasks, for which the spatial location is an integral feature of the onset that is reported, but not in colour discrimination tasks, for which the outcome of a non-spatial perceptual discrimination is reported. When eye movements were prevented, both tasks showed robust IOR. We, therefore, conclude that the motoric flavour of IOR, elicited by oculomotor activation, does not affect attention or perceptual processing. |
Kirsten A. Dalrymple; Walter F. Bischof; David Cameron; Jason J. S. Barton; Alan Kingstone Simulating simultanagnosia: Spatially constricted vision mimics local capture and the global processing deficit Journal Article In: Experimental Brain Research, vol. 202, no. 2, pp. 445–455, 2010. @article{Dalrymple2010, Patients with simultanagnosia, which is a component of Bálint syndrome, have a restricted spatial window of visual attention and cannot see more than one object at a time. As a result, these patients see the world in a piecemeal fashion, seeing the local components of objects or scenes at the expense of the global picture. To directly test the relationship between the restriction of the attentional window in simultanagnosia and patients' difficulty with global-level processing, we used a gaze-contingent display to create a literal restriction of vision for healthy participants while they performed a global/local identification task. Participants in this viewing condition were instructed to identify the global and local aspects of hierarchical letter stimuli of different sizes and densities. They performed well at the local identification task, and their patterns of inaccuracies for the global level task were highly similar to the pattern of inaccuracies typically seen with simultanagnosic patients. This suggests that a restricted spatial area of visual processing, combined with normal limits to visual processing, can lead to difficulties with global-level perception. |
Rong-Fuh Day Examining the validity of the Needleman-Wunsch algorithm in identifying decision strategy with eye-movement data Journal Article In: Decision Support Systems, vol. 49, no. 4, pp. 396–403, 2010. @article{Day2010, A new generation of eye trackers offers a promising alternative approach to tracing decision processes beyond the popular computerized-information-board approach. In order to exploit the eye-movement data, this study examined the validity of the Needleman-Wunsch algorithm (NWA) for characterizing the decision process, and proposed an NWA-based classification method to predict which typical strategy an empirical search behavior might belong to. An eye-tracking based experiment was conducted. Our results showed that the resemblance score produced by NWA conformed to the assumption that a pair of information search behaviors based on the same strategy should have the closest resemblance. Moreover, our NWA-based classification method achieved an overall prediction accuracy (hit ratio) of 88% in identifying underlying strategies, significantly higher than chance. On the whole, the combination of eye-fixation data and the NWA-based classification method is well qualified for identifying decision strategies. |
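The Needleman-Wunsch algorithm referenced in the entry above computes a global alignment score between two sequences, which Day uses as a resemblance measure between fixation sequences. A minimal sketch follows; the match/mismatch/gap values are generic placeholders, not the paper's actual scoring parameters.

```python
# Illustrative Needleman-Wunsch global alignment score for comparing two
# fixation sequences (e.g., strings of inspected-attribute labels).
# The scoring parameters below are assumptions for illustration only.

def nw_score(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Return the global alignment (resemblance) score of two sequences."""
    n, m = len(seq_a), len(seq_b)
    # score[i][j]: best alignment score of seq_a[:i] against seq_b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap              # leading gaps in seq_b
    for j in range(1, m + 1):
        score[0][j] = j * gap              # leading gaps in seq_a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,   # align/substitute
                              score[i - 1][j] + gap,       # gap in seq_b
                              score[i][j - 1] + gap)       # gap in seq_a
    return score[n][m]
```

In a classification setting of the kind the entry describes, an observed fixation sequence would be scored against prototype sequences for each candidate strategy and assigned to the strategy with the highest resemblance score.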
Denise D. J. Grave; Nicola Bruno The effect of the Müller-Lyer illusion on saccades is modulated by spatial predictability and saccadic latency Journal Article In: Experimental Brain Research, vol. 203, no. 4, pp. 671–679, 2010. @article{Grave2010, Studies investigating the effect of visual illusions on saccadic eye movements have provided a wide variety of results. In this study, we test three factors that might explain this variability: the spatial predictability of the stimulus, the duration of the stimulus and the latency of the saccades. Participants made a saccade from one end of a Müller-Lyer figure to the other end. By changing the spatial predictability of the stimulus, we find that the illusion has a clear effect on saccades (16%) when the stimulus is at a highly predictable location. Even stronger effects of the illusion are found when the stimulus location becomes more unpredictable (19-23%). Conversely, manipulating the duration of the stimulus fails to reveal a clear difference in illusion effect. Finally, by computing the illusion effect for different saccadic latencies, we find a maximum illusion effect (about 30%) for very short latencies, which decreases by 7% with every 100 ms latency increase. We conclude that spatial predictability of the stimulus and saccadic latency influence the effect of the Müller-Lyer illusion on saccades. |
Patrick A. Byrne; David C. Cappadocia; J. Douglas Crawford Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating Journal Article In: Vision Research, vol. 50, no. 24, pp. 2661–2670, 2010. @article{Byrne2010, Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually-guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or if the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduces RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME only depended on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action. |
Eamon Caddigan; Alejandro Lleras Saccadic repulsion in pop-out search: How a target's dodgy history can push the eyes away from it Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–9, 2010. @article{Caddigan2010, Previous studies have shown that even in the context of fairly easy selection tasks, as is the case in a pop-out task, selection of the pop-out stimulus can be sped up (in terms of eye movements) when the target-defining feature repeats across trials. Here, we show that selection of a pop-out target can actually be delayed (in terms of saccadic latencies) and made less accurate (in terms of saccade accuracy) when the target-defining feature has recently been associated with distractor status. This effect was observed even though participants' task was to fixate color oddballs (when present) and simply press a button when their eyes reached the target to advance to the next trial. Importantly, the inter-trial effect was also observed in response time (time to advance to the next trial). In contrast, this response time effect was completely eliminated in a second experiment when eye movements were eliminated from the task. That is, when participants still had to press a button to advance to the next trial when an oddball target was present in the display (an oddball detection task experiment). This pattern of results closely links the "need for selection" in a task to the presence of an inter-trial bias of attention (and eye movements) in pop-out search. |
Roberto Caldara; Xinyue Zhou; Sébastien Miellet Putting culture under the 'Spotlight' reveals universal information use for face recognition Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9708, 2010. @article{Caldara2010, Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more on the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction. So the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique in face recognition that parametrically restricts information outside central vision. We used Spotlights with Gaussian apertures of 2°, 5°, or 8° dynamically centered on observers' fixations. Strikingly, in constrained Spotlight conditions (2° and 5°) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both eyes and mouth was simultaneously available when fixating the nose (8°), as expected EA observers shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture. |
Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero Recognition advantage of happy faces in extrafoveal vision: Featural and affective processing Journal Article In: Visual Cognition, vol. 18, no. 9, pp. 1274–1297, 2010. @article{Calvo2010, Happy, surprised, disgusted, angry, sad, fearful, and neutral facial expressions were presented extrafoveally (2.5° away from fixation) for 150 ms, followed by a probe word for recognition (Experiment 1) or a probe scene for affective valence evaluation (Experiment 2). Eye movements were recorded and gaze-contingent masking prevented foveal viewing of the faces. Results showed that (a) happy expressions were recognized faster than others in the absence of fixations on the faces, (b) the same pattern emerged when the faces were presented upright or upside-down, (c) happy prime faces facilitated the affective evaluation of emotionally congruent probe scenes, and (d) such priming effects occurred at 750 but not at 250 ms prime-probe stimulus-onset asynchrony. This reveals an advantage in the recognition of happy faces outside of overt visual attention, and suggests that this recognition advantage relies initially on featural processing and involves processing of positive affect at a later stage. |
Linda E. Campbell; Kathryn L. McCabe; Kate Leadbeater; Ulrich Schall; Carmel M. Loughland; Dominique Rich Visual scanning of faces in 22q11.2 deletion syndrome: Attention to the mouth or the eyes? Journal Article In: Psychiatry Research, vol. 177, no. 1-2, pp. 211–215, 2010. @article{Campbell2010, Previous research demonstrates that people with 22q11.2 deletion syndrome (22q11DS) have social and interpersonal skill deficits. However, the basis of this deficit is unknown. This study examined, for the first time, how people with 22q11DS process emotional face stimuli using visual scanpath technology. The visual scanpaths of 17 adolescents and age/gender matched healthy controls were recorded while they viewed face images depicting one of seven basic emotions (happy, sad, surprised, angry, fear, disgust and neutral). Recognition accuracy was measured concurrently. People with 22q11DS differed significantly from controls, displaying visual scanpath patterns that were characterised by fewer fixations and a shorter scanpath length. The 22q11DS group also spent significantly more time gazing at the mouth region and significantly less time looking at eye regions of the faces. Recognition accuracy was correspondingly impaired, with 22q11DS subjects displaying particular deficits for fear and disgust. These findings suggest that 22q11DS is associated with a maladaptive visual information processing strategy that may underlie affect recognition accuracy and social functioning deficits in this group. |
Elena Carbone; Werner X. Schneider The control of stimulus-driven saccades is subject not to central, but to visual attention limitations Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 8, pp. 2168–2175, 2010. @article{Carbone2010, In three experiments, we investigated whether the control of reflexive saccades is subject to central attention limitations. In a dual-task procedure, Task 1 required either unspeeded reporting or ignoring of briefly presented masked stimuli, whereas Task 2 required a speeded saccade toward a visual target. The stimulus onset asynchrony (SOA) between the two tasks was varied. In Experiments 1 and 2, the Task 1 stimulus was one or three letters, and we asked how saccade target selection is influenced by the number of items. We found (1) longer saccade latencies at short than at long SOAs in the report condition, (2) a substantially larger latency increase for three letters than for one letter, and (3) a latency difference between SOAs in the ignore condition. Broadly, these results match the central interference theory. However, in Experiment 3, an auditory stimulus was used as the Task 1 stimulus, to test whether the interference effects in Experiments 1 and 2 were due to visual instead of central interference. Although there was a small saccade latency increase from short to long SOAs, this difference did not increase from the ignore to the report condition. To explain visual interference effects between letter encoding and stimulus-driven saccade control, we propose an extended theory of visual attention. |
Kurt Debono; Alexander C. Schütz; Miriam Spering; Karl R. Gegenfurtner Receptive fields for smooth pursuit eye movements and motion perception Journal Article In: Vision Research, vol. 50, no. 24, pp. 2729–2739, 2010. @article{Debono2010, Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). |
Adriana M. Degani; Alessander Danna-Dos-Santos; Thomas Robert; Mark L. Latash Kinematic synergies during saccades involving whole-body rotation: A study based on the uncontrolled manifold hypothesis Journal Article In: Human Movement Science, vol. 29, no. 2, pp. 243–258, 2010. @article{Degani2010, We used the framework of the uncontrolled manifold hypothesis to study the coordination of body segments and eye movements in standing persons during the task of shifting the gaze to a target positioned behind the body. The task was performed at a comfortable speed and fast. Multi-segment and head-eye synergies were quantified as co-varied changes in elemental variables (body segment rotations and eye rotation) that stabilized (reduced the across-trials variability of) head rotation in space and gaze trajectory. Head position in space was stabilized by co-varied rotations of body segments prior to the action, during its later stages, and after its completion. The synergy index showed a drop that started prior to the action initiation (anticipatory synergy adjustment) and continued during the phase of quick head rotation. Gaze direction was stabilized only at movement completion and immediately after the saccade at movement initiation under the "fast" instruction. The study documents for the first time anticipatory synergy adjustments during whole-body actions. It shows multi-joint synergies stabilizing head trajectory in space. In contrast, there was no synergy between head and eye rotations during saccades that would achieve a relatively invariant gaze trajectory. |
Steve Dipaola; Caitlin Riebe; James T. Enns Rembrandt's textural agency: A shared perspective in visual art and science Journal Article In: Leonardo, vol. 43, no. 2, pp. 145–151, 2010. @article{Dipaola2010, This interdisciplinary paper hypothesizes that Rembrandt developed new painterly techniques, novel to the early modern period, in order to engage and direct the gaze of the observer. Though these methods were not based on scientific evidence at the time, we show that they nonetheless are consistent with a contemporary understanding of human vision. Here we propose that artists in the late 'early modern' period developed the technique of textural agency, involving selective variation in image detail, to guide the observer's eye and thereby influence the viewing experience. The paper begins by establishing the well-known use of textural agency among modern portrait artists, before considering the possibility that Rembrandt developed these techniques in his late portraits in reaction to his Italian contemporaries. A final section brings the argument full circle, with the presentation of laboratory evidence that Rembrandt's techniques indeed guide the modern viewer's eye in the way we propose. |
Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson Watching the hourglass: Eye tracking reveals men's appreciation of the female form Journal Article In: Human Nature, vol. 21, no. 4, pp. 355–370, 2010. @article{Dixson2010, Eye-tracking techniques were used to measure men's attention to back-posed and front-posed images of women varying in waist-to-hip ratio (WHR). Irrespective of body pose, men rated images with a 0.7 WHR as most attractive. For back-posed images, initial visual fixations (occurring within 200 milliseconds of commencement of the eye-tracking session) most frequently involved the midriff. Numbers of fixations and dwell times throughout each of the five-second viewing sessions were greatest for the midriff and buttocks. By contrast, visual attention to front-posed images (first fixations, numbers of fixations, and dwell times) mainly involved the breasts, with attention shifting more to the midriff of images with a higher WHR. This report is the first to compare men's eye-tracking responses to back-posed and front-posed images of the female body. Results show the importance of the female midriff and of WHR upon men's attractiveness judgments, especially when viewing back-posed images. |
Matthew O. Kimble; Kevin Fleming; Carole Bandy; Julia Kim; Andrea Zambetti Eye tracking and visual attention to threatening stimuli in veterans of the Iraq war Journal Article In: Journal of Anxiety Disorders, vol. 24, no. 3, pp. 293–299, 2010. @article{Kimble2010, Theoretical and clinical characterizations of attention in PTSD acknowledge the possibility for both hypervigilance and avoidance of trauma-relevant stimuli. This study used eye tracking technology to investigate visual orientation and attention to traumatic and neutral stimuli in nineteen veterans of the Iraq war. Veterans saw slides in which half the screen had a negatively valenced image and half had a neutral image. Negatively valenced stimuli were further divided into stimuli that varied in trauma relevance (either Iraq war or civilian motor vehicle accidents). Veterans reporting relatively higher levels of PTSD symptoms had larger pupils to all negatively valenced pictures and spent more time looking at them than did veterans lower in PTSD symptoms. Veterans higher in PTSD symptoms also showed a trend towards looking first at Iraq images. The findings suggest that post-traumatic pathology is associated with vigilance rather than avoidance when visually processing negatively valenced and trauma-relevant stimuli. |
Yosuke Kita; Atsuko Gunji; Kotoe Sakihara; Masumi Inagaki; Makiko Kaga; Eiji Nakagawa; Toru Hosokawa Scanning strategies do not modulate face identification: Eye-tracking and near-infrared spectroscopy study Journal Article In: PLoS ONE, vol. 5, no. 6, pp. e11050, 2010. @article{Kita2010, BACKGROUND: During face identification in humans, facial information is sampled (seeing) and handled (processing) in ways that are influenced by the kind of facial image type, such as a self-image or an image of another face. However, the relationship between seeing and information processing is seldom considered. In this study, we aimed to reveal this relationship using simultaneous eye-tracking measurements and near-infrared spectroscopy (NIRS) in face identification tasks. METHODOLOGY/PRINCIPAL FINDINGS: 22 healthy adult subjects (8 males and 14 females) were shown facial morphing movies in which an initial facial image gradually changed into another facial image (that is, the subject's own face was changed to a familiar face). The fixation patterns on facial features were recorded, along with changes in oxyhemoglobin (oxyHb) levels in the frontal lobe, while the subjects identified several faces. In the self-face condition (self-face as the initial image), hemodynamic activity around the right inferior frontal gyrus (IFG) was significantly greater than in the familiar-face condition. On the other hand, the scanning strategy was similar in almost all conditions with more fixations on the eyes and nose than on other areas. Fixation time on the eye area did not correlate with changes in oxyHb levels, and none of the scanning strategy indices could estimate the hemodynamic changes. CONCLUSIONS/SIGNIFICANCE: We conclude that hemodynamic activity, i.e., the means of processing facial information, is not always modulated by the face-scanning strategy, i.e., the way of seeing, and that the right IFG plays important roles in both self-other facial discrimination and self-evaluation. |
Tomas Knapen; Martin Rolfs; Mark Wexler; Patrick Cavanagh The reference frame of the tilt aftereffect Journal Article In: Journal of Vision, vol. 10, no. 1, pp. 1–13, 2010. @article{Knapen2010, Perceptual aftereffects provide a sensitive tool to investigate the influence of eye and head position on visual processing. There have been recent indications that the tilt aftereffect (TAE) is remapped around the time of a saccade to remain aligned to the adapting location in the world. Here, we investigate the spatial frame of reference of the TAE by independently manipulating retinal position, gaze orientation, and head orientation between adaptation and test. The results show that the critical factor in the TAE is the correspondence between the adaptation and test locations in a retinotopic frame of reference, whereas world- and head-centric frames of reference do not play a significant role. Our results confirm that adaptation to orientation takes place at retinotopic levels of visual processing. We suggest that the remapping process that plays a role in visual stability does not transfer feature gain information around the time of eye (or head) movements. |
Peter Ko; Sepp Kollmorgen; Nora Nortmann; Sylvia Schröder; Peter König Influence of low-level stimulus features, task dependent factors, and spatial biases on overt visual attention Journal Article In: PLoS Computational Biology, vol. 6, no. 5, pp. e1000791, 2010. @article{Ko2010, Visual attention is thought to be driven by the interplay between low-level visual features and task dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task dependent information content derived from our subjects' classification responses and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant in different tasks. The contribution of task dependent information is a close runner-up. Specifically, in a standardized task of judging facial expressions it scores highly. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task, without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention. |
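The multivariate linear model described in the entry above can be sketched as ordinary least squares relating the three salience measures to the empirically measured salience of each bubble. The data below are synthetic and the variable names are illustrative; the paper's actual measures and preprocessing are not reproduced.

```python
import numpy as np

# Hypothetical data: one row per image patch ("bubble").
rng = np.random.default_rng(0)
n = 200
low_level = rng.random(n)      # salience from low-level stimulus features
task_info = rng.random(n)      # salience from task-dependent information content
spatial_bias = rng.random(n)   # salience from spatial viewing biases

# Synthetic empirical salience: a noisy linear mix of the three predictors.
empirical = (0.3 * low_level + 0.4 * task_info + 0.5 * spatial_bias
             + 0.05 * rng.standard_normal(n))

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), low_level, task_info, spatial_bias])
coef, *_ = np.linalg.lstsq(X, empirical, rcond=None)
print(coef)  # intercept followed by the three fitted salience weights
```

Comparing the fitted weights (and, as in the paper, semi-partial correlations) indicates how much each salience measure independently contributes to overt attention.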
Peter J. Kohler; G. P. Caplovitz; P.-J. Hsieh; J. Sun; P. U. Tse Motion fading is driven by perceived, not actual angular velocity Journal Article In: Vision Research, vol. 50, no. 11, pp. 1086–1094, 2010. @article{Kohler2010, After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. Here we examine the relationship between such 'motion fading' and perceived angular velocity. Using several different dot patterns that generate emergent virtual contours, we demonstrate that whenever there is a difference in the perceived angular velocity of two patterns of dots that are in fact rotating at the same angular velocity, there is also a difference in the time to undergo motion fading for those two patterns. Conversely, whenever two patterns show no difference in perceived angular velocity, even if in fact rotating at different angular velocities, we find no difference in the time to undergo motion fading. Thus, motion fading is driven by the perceived rather than actual angular velocity of a rotating stimulus. |
A. Kotowicz; Ueli Rutishauser; Christof Koch Time course of target recognition in visual search Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 31, 2010. @article{Kotowicz2010, Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation ( approximately 170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally and this event will trigger a saccade towards the target location. However this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. 
The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation for the subjective judgment of confidence rather than accuracy. |
Gesche M. Huebner; Karl R. Gegenfurtner Effects of viewing time, fixations, and viewing strategies on visual memory for briefly presented natural objects Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 7, pp. 1398–1413, 2010. @article{Huebner2010, We investigated the impact of viewing time and fixations on visual memory for briefly presented natural objects. Participants saw a display of eight natural objects arranged in a circle and used a partial report procedure to assign one object to the position it previously occupied during stimulus presentation. At the longest viewing time of 7,000 ms or 10 fixations, memory performance was significantly higher than at the shorter times. This increase was accompanied by a primacy effect, suggesting a contribution of another memory component, for example visual long-term memory (VLTM). We found a very limited beneficial effect of fixations on objects; fixated objects were only remembered better at the shortest viewing times. Our results revealed an intriguing difference between the use of a blocked versus an interleaved experimental design. When trial length was predictable, in the blocked design, target fixation durations increased with longer viewing times. When trial length was unpredictable, fixation durations stayed the same for all viewing lengths. Memory performance was not affected by this design manipulation, thus also supporting the idea that the number and duration of fixations are not closely coupled to memory performance. |
Lynn Huestegge Effects of vowel length on gaze durations in silent and oral reading Journal Article In: Journal of Eye Movement Research, vol. 3, no. 5, pp. 1–18, 2010. @article{Huestegge2010a, Vowel length is known to affect reaction times in single word reading. Eye movement studies involving silent sentence reading showed that phonological information of a word can be acquired even before it is fixated. However, it remained an open question whether vowel length directly influences oculomotor control in sentence reading. In the present eye tracking study, subjects read sentences that included target words of varying vowel length and frequency. In Experiment 1, subjects read silently for comprehension, whereas Experiment 2 involved oral reading. Experiments 3 and 4 additionally included an articulatory suppression task and a foot tapping task. Results indicated that in conditions that did not require additional articulation (Experiments 1 and 4) gaze durations were increased for words with long vowels compared to words with short vowels. Conditions that required simultaneous articulation (Experiments 2 and 3) did not yield a vowel length effect. The results point to an influence of phonetic properties on oculomotor control during silent reading around the time of the completion of lexical access. |
Lynn Huestegge; Iring Koch Crossmodal action selection: Evidence from dual-task compatibility Journal Article In: Memory & Cognition, vol. 38, no. 4, pp. 493–501, 2010. @article{Huestegge2010, Response-related mechanisms of multitasking were studied by analyzing simultaneous processing of responses in different modalities (i.e., crossmodal action). Participants responded to a single auditory stimulus with a saccade, a manual response (single-task conditions), or both (dual-task condition). We used a spatially incompatible stimulus-response mapping for one task, but not for the other. Critically, inverting these mappings varied temporal task overlap in dual-task conditions while keeping spatial incompatibility across responses constant. Unlike previous paradigms, temporal task overlap was manipulated without utilizing sequential stimulus presentation, which might induce strategic serial processing. The results revealed dual-task costs, but these were not affected by an increase of temporal task overlap. This finding is evidence for parallel response selection in multitasking. We propose that crossmodal action is processed by a central mapping-selection mechanism in working memory and that the dual-task costs are mainly caused by mapping-related crosstalk. |
Lynn Huestegge; Iring Koch Fixation disengagement enhances peripheral perceptual processing: Evidence for a perceptual gap effect Journal Article In: Experimental Brain Research, vol. 201, no. 4, pp. 631–640, 2010. @article{Huestegge2010c, Temporal gaps between the offset of a central fixation stimulus and the onset of an eccentric target typically reduce saccade latencies (saccadic gap effect). Here, we test whether temporal gaps also affect perceptual performance in peripheral vision. In Experiment 1, subjects executed saccades to briefly presented peripheral target letters and reported letter identity afterwards. A central fixation stimulus either remained visible throughout the trial (overlap) or disappeared 200 ms before letter onset (gap). Experiment 2 tested perceptual performance without saccade execution, whereas Experiment 3 tested saccade execution without perceptual demands. Peripheral letter perception performance was enhanced in gap as compared to overlap conditions (perceptual gap effect) irrespective of concurrent oculomotor demands. Furthermore, the saccadic gap effect was modulated by concurrent perceptual demands. Experiment 4 ruled out a general warning explanation of the perceptual gap effect. These findings extend recent theories assuming a strong coupling between the preparation of goal-directed saccades and shifts of visual attention from the spatial to the temporal domain. |
Lucica Iordanescu; Marcia Grabowecky; Steven L. Franconeri; Jan Theeuwes; Satoru Suzuki Characteristic sounds make you look at target objects more quickly Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 7, pp. 1736–1741, 2010. @article{Iordanescu2010, When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented “meow” sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target's characteristic sound speeded the saccadic search time within 215–220 msec and also guided the initial saccade toward the target, compared with presentation of a distractor's sound or with no sound. These results suggest that object-based auditory–visual interactions rapidly increase the target object's salience in visual search. |
Osman Iyilikci; Cordula Becker; Onur Güntürkün; Sonia Amado Visual processing asymmetries in change detection Journal Article In: Perception, vol. 39, no. 6, pp. 761–769, 2010. @article{Iyilikci2010, Change detection is critically dependent on attentional mechanisms. However, the relation between an asymmetrical distribution of visuo-spatial attention and the detection of changes in visual scenes is not clear. Spatial tasks are known to induce a stronger activation of the right hemisphere. The effects of such visual processing asymmetries induced by a spatial task on change detection were investigated. When required to detect changes in the left and in the right visual fields, participants were significantly faster in detecting changes on the left than on the right. Importantly, this left-side superiority in change detection is not influenced by inspection time, suggesting a critical role of visual processing benefit for the left visual field. |
Michal Jacob; Shaul Hochstein Graded recognition as a function of the number of target fixations Journal Article In: Vision Research, vol. 50, no. 1, pp. 107–117, 2010. @article{Jacob2010, Target recognition stages were studied by exposing observers to varying controlled numbers of target fixations. The target, present in half the displays, consisted of two identical cards (Identity Search Task; Jacob & Hochstein, 2009). Following more fixations, targets are better recognized, indicated by increased Hit-rate and detectability (according to Unequal Variance Signal Detection Theory), decreased Response Time and growing confidence, reflecting the current stage in the recognition process. Thus, gathering information over a specific scene region results from a growing number of fixations on that particular region. We conclude that several fixations on a scene location are necessary for achieving recognition. |
Richard H. A. H. Jacobs; Remco Renken; Stefan Thumfart; Frans W. Cornelissen Different judgments about visual textures invoke different eye movement patterns Journal Article In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–13, 2010. @article{Jacobs2010a, Top-down influences on the guidance of the eyes are generally modeled as modulating influences on bottom-up salience maps. Interested in task-driven influences on how, rather than where, the eyes are guided, we expected differences in eye movement parameters accompanying beauty and roughness judgments about visual textures. Participants judged textures for beauty and roughness, while their gaze-behavior was recorded. Eye movement parameters differed between the judgments, showing task effects on how people look at images. Similarity in the spatial distribution of attention suggests that differences in the guidance of attention are non-spatial, possibly feature-based. During the beauty judgment, participants fixated on patches that were richer in color information, further supporting the idea that differences in the guidance of attention are feature-based. A finding of shorter fixation durations during beauty judgments may indicate that extraction of the relevant features is easier during this judgment. This finding is consistent with a more ambient scanning mode during this judgment. The differences in eye movement parameters during different judgments about highly repetitive stimuli highlight the need for models of eye guidance to go beyond salience maps, to include the temporal dynamics of eye guidance. |
Anshul Jain; Stuart Fuller; Benjamin T. Backus Absence of cue-recruitment for extrinsic signals: Sounds, spots, and swirling dots fail to influence perceived 3D rotation direction after training Journal Article In: PLoS ONE, vol. 5, no. 10, pp. e13295, 2010. @article{Jain2010, The visual system can learn to use information in new ways to construct appearance. Thus, signals such as the location or translation direction of an ambiguously rotating wire frame cube, which are normally uninformative, can be learned as cues to determine the rotation direction. This perceptual learning occurs when the formerly uninformative signal is statistically associated with long-trusted visual cues (such as binocular disparity) that disambiguate appearance during training. In previous demonstrations, the newly learned cue was intrinsic to the perceived object, in that the signal was conveyed by the same image elements as the object itself. Here we used extrinsic new signals and observed no learning. We correlated three new signals with long-trusted cues in the rotating cube paradigm: one crossmodal (an auditory signal) and two within modality (visual). Cue recruitment did not occur in any of these conditions, either in single sessions or in ten sessions across as many days. These results suggest that the intrinsic/extrinsic distinction is important for the perceptual system in determining whether it can learn and use new information from the environment to construct appearance. Extrinsic cues do have perceptual effects (e.g. the "bounce-pass" illusion and McGurk effect), so we speculate that extrinsic signals must be recruited for perception, but only if certain conditions are met. These conditions might specify the age of the observer, the strength of the long-trusted cues, or the amount of exposure to the correlation. |
Karen Mortier; Wieske van Zoest; Martijn Meeter; Jan Theeuwes Word cues affect detection but not localization responses Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 1, pp. 65–75, 2010. @article{Karen2010, Many theories assume that pre-knowledge of an upcoming target helps visual selection. In those theories, a top-down set can alter the salience of the target, such that attention can be deployed to the target more efficiently and responses are faster. Evidence for this account stems from visual search studies in which the identity of the upcoming target is cued in advance. In five experiments, we show that top-down knowledge affects the speed with which a singleton target can be detected but not the speed with which it can be localized. Furthermore, we show that these results are independent of the mode of responding (manual or saccadic) and are not due to a ceiling effect. Our results suggest that in singleton search, top-down information does not affect visual selection but most likely does affect response selection. We argue that such an effect is found only when information from different dimensions needs to be integrated to generate a response and that this is the case in singleton detection tasks but not in other singleton search tasks. |
David J. Kelly; Sébastien Miellet; Roberto Caldara Culture shapes eye movements for visually homogeneous objects Journal Article In: Frontiers in Psychology, vol. 1, pp. 6, 2010. @article{Kelly2010, Culture affects the way people move their eyes to extract information in their visual world. Adults from Eastern societies (e.g., China) display a disposition to process information holistically, whereas individuals from Western societies (e.g., Britain) process information analytically. In terms of face processing, adults from Western cultures typically fixate the eyes and mouth, while adults from Eastern cultures fixate centrally on the nose region, yet face recognition accuracy is comparable across populations. A potential explanation for the observed differences relates to social norms concerning eye gaze avoidance/engagement when interacting with conspecifics. Furthermore, it has been argued that faces represent a 'special' stimulus category and are processed holistically, with the whole face processed as a single unit. The extent to which the holistic eye movement strategy deployed by East Asian observers is related to holistic processing for faces is undetermined. To investigate these hypotheses, we recorded eye movements of adults from Western and Eastern cultural backgrounds while learning and recognizing visually homogeneous objects: human faces, sheep faces and greebles. Both groups of observers recognized faces better than any other visual category, as predicted by the specificity of faces. However, East Asian participants deployed central fixations across all the visual categories. This cultural perceptual strategy was not specific to faces, discarding any parallel between the eye movements of Easterners with the holistic processing specific to faces. Cultural diversity in the eye movements used to extract information from visually homogeneous objects is rooted in more general and fundamental mechanisms. |
Aarlenne Zein Khan; Stephen J. Heinen; Robert M. McPeek Attentional cueing at the saccade goal, not at the target location, facilitates saccades Journal Article In: Journal of Neuroscience, vol. 30, no. 16, pp. 5481–5488, 2010. @article{Khan2010, Presenting a behaviorally irrelevant cue shortly before a target at the same location decreases the latencies of saccades to the target, a phenomenon known as exogenous attention facilitation. It remains unclear whether exogenous attention interacts with early, sensory stages or later, motor planning stages of saccade production. To distinguish between these alternatives, we used a saccadic adaptation paradigm to dissociate the location of the visual target from the saccade goal. Three male and four female human subjects performed both control trials, in which saccades were made to one of two target eccentricities, and adaptation trials, in which the target was shifted from one location to the other during the saccade. This manipulation adapted saccades so that they eventually were directed to the shifted location. In both conditions, a behaviorally irrelevant cue was flashed 66.7 ms before target appearance at a randomly selected one of seven positions that included the two target locations. In control trials, saccade latencies were shortest when the cue was presented at the target location and increased with cue-target distance. In contrast, adapted saccade latencies were shortest when the cue was presented at the adapted saccade goal, and not at the visual target location. The dynamics of adapted saccades were also altered, consistent with prior adaptation studies, except when the cue was flashed at the saccade goal. Overall, the results suggest that attentional cueing facilitates saccade planning rather than visual processing of the target. |
Aaron B. Hoffman; Bob Rehder The costs of supervised classification: The effect of learning task on conceptual flexibility Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 319–340, 2010. @article{Hoffman2010, Research has shown that learning a concept via standard supervised classification leads to a focus on diagnostic features, whereas learning by inferring missing features promotes the acquisition of within-category information. Accordingly, we predicted that classification learning would produce a deficit in people's ability to draw novel contrasts–distinctions that were not part of training–compared with feature inference learning. Two experiments confirmed that classification learners were at a disadvantage at making novel distinctions. Eye movement data indicated that this conceptual inflexibility was due to (a) a narrower attention profile that reduces the encoding of many category features and (b) learned inattention that inhibits the reallocation of attention to newly relevant information. Implications of these costs of supervised classification learning for views of conceptual structure are discussed. |
Lee Hogarth; Anthony Dickinson; Theodora Duka The associative basis of cue-elicited drug taking in humans Journal Article In: Psychopharmacology, vol. 208, no. 3, pp. 337–351, 2010. @article{Hogarth2010, RATIONALE: Drug cues play an important role in motivating human drug taking, lapse and relapse, but the psychological basis of this effect has not been fully specified. METHOD: To clarify these mechanisms, the study measured the extent to which pictorial and conditioned tobacco cues enhanced smoking topography in an ad libitum smoking session simultaneously with cue effects on subjective craving, pleasure and anxiety. RESULTS: Both cue types increased the number of puffs consumed and craving, but pleasure and anxiety responses were dissociated across cue type. Moreover, cue effects on puff number correlated with effects on craving but not pleasure or anxiety. Finally, whereas overall puff number and craving declined across the two blocks of consumption, consistent with burgeoning satiety, cue enhancement of puff number and craving were both unaffected by satiety. CONCLUSIONS: Overall, the data suggest that cue-elicited drug taking in humans is mediated by an expectancy-based associative learning architecture, which paradoxically is autonomous of the current incentive value of the drug. |
S. Lee Hong; Melissa R. Beck Uncertainty compensation in human attention: Evidence from response times and fixation durations Journal Article In: PLoS ONE, vol. 5, no. 7, pp. e11461, 2010. @article{Hong2010, BACKGROUND: Uncertainty and predictability have remained at the center of the study of human attention. Yet, studies have only examined whether response times (RT) or fixations were longer or shorter under levels of stimulus uncertainty. To date, no study has examined patterns of stimuli and responses through a unifying framework of uncertainty. METHODOLOGY/PRINCIPAL FINDINGS: We asked 29 college students to generate repeated responses to a continuous series of visual stimuli presented on a computer monitor. Subjects produced these responses by pressing on a keypad as soon as a target was detected (regardless of position) while the durations of their visual fixations were recorded. We manipulated the level of stimulus uncertainty in space and time by changing the number of potential stimulus locations and time intervals between stimulus presentations. To allow the analyses to be conducted using uncertainty as common description of stimulus and response we calculated the entropy of the RT and fixation durations. We tested the hypothesis of uncertainty compensation across space and time by fitting the RT and fixation duration entropy values to a quadratic surface. The quadratic surface accounted for 80% of the variance in the entropy values of both RT and fixation durations. RT entropy increased as a function of spatial and temporal uncertainty of the stimulus, alongside a symmetric, compensatory decrease in the entropy of fixation durations as the level of spatial and temporal uncertainty of the stimuli was increased. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that greater uncertainty in the stimulus leads to greater uncertainty in the response, and that the effects of spatial and temporal uncertainties are compensatory. 
We also observed a compensatory relationship across the entropies of fixation duration and RT, suggesting that a more predictable visual search strategy leads to more uncertain response patterns and vice versa. |
Tien Ho-Phuoc; Nathalie Guyader; Anne Guérin-Dugué A functional and statistical bottom-up saliency model to reveal the relative contributions of low-level visual guiding factors Journal Article In: Cognitive Computation, vol. 2, no. 4, pp. 344–359, 2010. @article{HoPhuoc2010, When looking at a scene, we frequently move our eyes to place consecutive interesting regions on the fovea, the retina centre. At each fixation, only this specific foveal region is analysed in detail by the visual system. The visual attention mechanisms control eye movements and depend on two types of factor: bottom-up and top-down factors. Bottom-up factors include different visual features such as colour, luminance, edges, and orientations. In this paper, we evaluate quantitatively the relative contribution of basic low-level features as candidate guiding factors to visual attention and hence to eye movements. We also study how these visual features can be combined in a bottom-up saliency model. Our work consists of three interactive parts: a functional saliency model, a statistical model and eye movement data recorded during free viewing of natural scenes. The functional saliency model, inspired by the primate visual system, decomposes a visual scene into different feature maps. The statistical model indicates which features best explain the recorded eye movements. We show an essential role of high frequency luminance and an important contribution of central fixation bias. The relative contribution of features, calculated by the statistical model, is then used to combine the different feature maps into a saliency map. Finally, the comparison between the saliency model and experimental data confirmed the influence of these contributions. |
Hyung Lee; Mathias Abegg; Amadeo Rodriguez; John D. Koehn; Jason J. S. Barton Why do humans make antisaccade errors? Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 65–73, 2010. @article{Lee2010, Antisaccade errors are attributed to failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response the participant has before the trial begins also contributes to error rate. Participants performed antisaccades in five conditions. The traditional design had two goals on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, but interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. Antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information, and a trend for stimulus-goal confusability. A control experiment for the effects of increasing angular separation between targets without changing these types of prior response information showed no effects on latency or error rate. We conclude that other factors besides prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation. |
Aaron P. Johnson; Rick Gurnsey Size scaling compensates for sensitivity loss produced by a simulated central scotoma in a shape-from-texture task Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–16, 2010. @article{Johnson2010, Studies of eccentricity-dependent sensitivity loss typically require participants to maintain fixation while making judgments about stimuli presented at a range of sizes and eccentricities. However, training participants to fixate can prove difficult, and as stimulus size increases, stimuli become poorly localized and may even encroach on the fovea. In the present experiment, we controlled eccentricity of stimulus presentation using a simulated central scotoma of variable size. Participants were asked to perform a 27-alternative forced-choice shape-from-texture task in the presence of a simulated scotoma, with stimulus size and scotoma radius as the independent variables. The resulting psychometric functions for each simulated scotoma were shifted versions of each other on a log size axis. Therefore, stimulus magnification was sufficient to equate sensitivity to shape from texture for all scotoma radii. Increasing scotoma radius also disrupts eye movements, producing increases in fixation frequency and duration, as well as saccade length. |
Stephanie A. H. Jones; Denise Y. P. Henriques Memory for proprioceptive and multisensory targets is partially coded relative to gaze Journal Article In: Neuropsychologia, vol. 48, no. 13, pp. 3782–3792, 2010. @article{Jones2010, We examined the effect of gaze direction relative to target location on reach endpoint errors made to proprioceptive and multisensory targets. We also explored if and how visual and proprioceptive information about target location are integrated to guide reaches. Participants reached to their unseen left hand in one of three target locations (left of body midline, body midline, or right of body midline), while it remained at a target site (online), or after it was removed from this location (remembered), and also after the target hand had been briefly lit before reaching (multisensory target). The target hand was guided to a target location using a robot-generated path. Reaches were made with the right hand in complete darkness, while gaze was varied in one of four eccentric directions. Horizontal reach errors systematically varied relative to gaze for all target modalities; not only for visually remembered and online proprioceptive targets as has been found in previous studies, but for the first time, also for remembered proprioceptive targets and proprioceptive targets that were briefly visible. These results suggest that the brain represents the locations of online and remembered proprioceptive reach targets, as well as visual-proprioceptive reach targets relative to gaze, along with other motor-related representations. Our results, however, do not suggest that visual and proprioceptive information are optimally integrated when coding the location of multisensory reach targets in this paradigm. |
Donatas Jonikaitis; Torsten Schubert; Heiner Deubel Preparing coordinated eye and hand movements: Dual-task costs are not attentional Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–17, 2010. @article{Jonikaitis2010, Dual-task costs are observed when people perform two tasks at the same time. It has been suggested that these costs arise from limitations of movement goal selection when multiple goal-directed movements are made simultaneously. To investigate this, we asked participants to reach and look at different locations while we varied the time between the cues to start the eye and the hand movement between 150 ms and 900 ms. In Experiment 1, participants executed the reach first, and the saccade second; in Experiment 2 the order of the movements was reversed. We observed dual-task costs: participants were slower to start the eye or hand movement if they were planning another movement at that time. In Experiment 3, we investigated whether these dual-task costs were due to limited attentional resources needed to select saccade and reach goal locations. We found that the discrimination of a probe improved at both saccade and reach locations, indicating that attention shifted to both movement goals. Importantly, while we again observed the expected dual-task costs as reflected in movement latencies, there was no apparent delay of the associated attention shifts. Our results rule out attentional goal selection as the causal factor leading to the dual-task costs occurring in eye-hand movements. |
Gustav Kuhn; John M. Findlay Misdirection, attention and awareness: Inattentional blindness reveals temporal relationship between eye movements and visual awareness Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 1, pp. 136–146, 2010. @article{Kuhn2010, We designed a magic trick that could be used to investigate how misdirection can prevent people from perceiving a visually salient event, thus offering a novel paradigm to examine inattentional blindness. We demonstrate that participants' verbal reports reflect what they saw rather than inferences about how they thought the trick was done and thus provide a reliable index of conscious perception. Eye movements revealed that for a subset of participants their conscious perception was not related to where they were looking at the time of the event and thus demonstrate how overt and covert attention can be spatially dissociated. However, detection of the event resulted in rapid shifts of eye movements towards the detected event, thus indicating a strong temporal link between overt and covert attention, and that covert attention can be allocated at least 2 or 3 saccade targets ahead of where people are fixating. |
Victor Kuperman; Raymond Bertram; R. Harald Baayen Processing trade-offs in the reading of Dutch derived words Journal Article In: Journal of Memory and Language, vol. 62, no. 2, pp. 83–97, 2010. @article{Kuperman2010, This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., plaats+ing "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter suffixes, we observe a stronger effect of full-forms (derived word frequency) on reading times than in words with longer suffixes. Also, processing times increase if the base word (plaats) and the suffix (-ing) differ in the amount of information carried by their morphological families (sets of words that share the base or the suffix). We model this imbalance of informativeness in the morphological families with the information-theoretical measure of relative entropy and demonstrate its predictivity for the processing times. The observed processing trade-offs are discussed in the context of current models of morphological processing. |
2009 |
Bernard Marius 't Hart; Johannes Vockeroth; Frank Schumann; Klaus Bartl; Erich Schneider; Peter König; Wolfgang Einhäuser Gaze allocation in natural stimuli: Comparing free exploration to head-fixed viewing conditions Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1132–1158, 2009. @article{tHart2009, "Natural" gaze is typically measured by tracking eye positions during scene presentation in laboratory settings. How informative are such investigations for real-world conditions? Using a mobile eyetracking setup ("EyeSeeCam"), we measure gaze during free exploration of various indoor and outdoor environments, while simultaneously recording head-centred videos. Here, we replay these videos in a laboratory setup. Half of the laboratory observers view the movies continuously, half as sequences of static 1-second frames. We find a bias of eye position to the stimulus centre, which is strongest in the 1 s frame replay condition. As a consequence, interobserver consistency is highest in this condition, though not fully explained by spatial bias alone. This leaves room for image specific bottom-up models to predict gaze beyond generic biases. Indeed, the "saliency map" predicts eye position in all conditions, and best for continuous replay. Continuous replay predicts real-world gaze better than 1 s frame replay does. In conclusion, experiments and models benefit from preserving the spatial statistics and temporal continuity of natural stimuli to improve their validity for real-world gaze behaviour. |
J. Stephen Higgins; David E. Irwin; Ranxiao Frances Wang; Laura E. Thomas Visual direction constancy across eyeblinks Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 7, pp. 1607–1617, 2009. @article{StephenHiggins2009, When a visual target is displaced during a saccade, the perception of its displacement is suppressed. Its movement can usually only be detected if the displacement is quite large. This suppression can be eliminated by introducing a short blank period after the saccade and before the target reappears in a new location. This has been termed the blanking effect and has been attributed to the use of otherwise ignored extraretinal information. We examined whether similar effects occur with eyeblinks and other visual distractions. We found that suppression of displacement perception can also occur due to a blink (both immediately prior to the blink and during the blink), and that introducing a blank period after a blink reduces the displacement suppression in much the same way as after a saccade. The blanking effect does not occur when other visual distractions are used. This provides further support for the conclusion that the blanking effect arises from extraretinal signals about eye position. |
Benjamin W. Tatler; Benjamin T. Vincent The prominence of behavioural biases in eye guidance Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1029–1054, 2009. @article{Tatler2009, When attempting to understand where people look during scene perception, researchers typically focus on the relative contributions of low- and high-level cues. Computational models of the contribution of low-level features to fixation selection, with modifications to incorporate top-down sources of information have been abundant in recent research. However, we are still some way from a model that can explain many of the complexities of eye movement behaviour. Here we show that understanding biases in how we move the eyes can provide powerful new insights into the decision about where to look in complex scenes. A model based solely on these biases and therefore blind to current visual information outperformed popular salience-based approaches. Our data show that incorporating an understanding of oculomotor behavioural biases into models of eye guidance is likely to significantly improve our understanding of where we choose to fixate in natural scenes. |
Alisdair J. G. Taylor; Samuel B. Hutton The effects of task instructions on pro and antisaccade performance Journal Article In: Experimental Brain Research, vol. 195, no. 1, pp. 5–14, 2009. @article{Taylor2009, In the antisaccade task participants are required to overcome the strong tendency to saccade towards a sudden onset target, and instead make a saccade to the mirror image location. The task thus provides a powerful tool with which to study the cognitive processes underlying goal directed behaviour, and has become a widely used index of "disinhibition" in a range of clinical populations. Across two experiments we explored the role of top-down strategic influences on antisaccade performance by varying the instructions that participants received. Instructions to delay making a response resulted in a significant increase in correct antisaccade latencies and reduction in erroneous prosaccades towards the target. Instructions to make antisaccades as quickly as possible resulted in faster correct responses, whereas instructions to be as spatially accurate as possible increased correct antisaccade latencies. Neither of these manipulations resulted in a significant change in error rate. In a second experiment, participants made fewer errors in delayed pro and antisaccade tasks than in a standard antisaccade task. The implications of these results for current models of antisaccade performance, and the interpretation of antisaccade deficits in clinical populations are discussed. |
Jan Theeuwes; Artem V. Belopolsky No functional role of attention-based rehearsal in maintenance of spatial working memory representations Journal Article In: Acta Psychologica, vol. 132, no. 2, pp. 124–135, 2009. @article{Theeuwes2009, The present study systematically examined the role of attention in maintenance of spatial representations in working memory as proposed by the attention-based rehearsal hypothesis [Awh, E., Jonides, J., & Reuter-Lorenz, P. A. (1998). Rehearsal in spatial working memory. Journal of Experimental Psychology–Human Perception and Performance, 24(3), 780-790]. Three main issues were examined. First, Experiments 1-3 demonstrated that inhibition and not facilitation of visual processing is often observed at the memorized location during the retention interval. This inhibition was caused by keeping a location in memory and not by the exogenous nature of the memory cue. Second, Experiment 4 showed that inhibition of the memorized location does not lead to any significant impairment in memory accuracy. Finally, Experiment 5 connected current results to the previous findings and demonstrated facilitation of processing at the memorized location. Importantly, facilitation of processing did not lead to more accurate memory performance. The present results challenge the functional role of attention in maintenance of spatial working memory representations. |
Jan Theeuwes; Stefan Van der Stigchel Saccade trajectory deviations and inhibition-of-return: Measuring the amount of attentional processing Journal Article In: Vision Research, vol. 49, no. 10, pp. 1307–1315, 2009. @article{Theeuwes2009a, This study used a classic exogenous cueing task in which an abrupt onset cue indicated the target location at chance level. When there was a delay between the cue and the target, observers responded more slowly and less accurately to the target presented at cued than at uncued locations, signifying the occurrence of inhibition-of-return (IOR). On some trials, instead of a manual response, participants had to move their eyes to a location in space. Our findings show no saccade deviation away from the location that was inhibited due to IOR unless participants had to process the target letter presented at the inhibited location. Our findings are consistent with the notion that inhibition resulting in IOR does not occur at the saccade map level but IOR seems to reduce the input of signals going into the saccade map. We show that the strength of saccade deviation is an important measure which can reveal the amount of attentional processing taking place at any particular location in time. |
Laura E. Thomas; Alejandro Lleras Covert shifts of attention function as an implicit aid to insight Journal Article In: Cognition, vol. 111, no. 2, pp. 168–174, 2009. @article{Thomas2009, Previous research shows that directed actions can unconsciously influence higher-order cognitive processing, helping learners to retain knowledge and guiding problem solvers to useful insights (e.g. Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2008). Gesturing makes learning last. Cognition, 106, 1047-1058; Thomas, L. E., & Lleras, A. (2007). Moving eyes and moving thought: on the spatial compatibility between eye movements and cognition. Psychonomic Bulletin and Review, 14, 663-668). We examined whether overt physical movement is necessary for these embodied effects on cognition, or whether covert shifts of attention are sufficient to influence cognition. We asked participants to try to solve Duncker's radiation problem while occasionally directing them, via an unrelated digit-tracking task, to shift their attention (while keeping their eyes fixed) in a pattern related to the problem's solution, to move their eyes in this pattern, or to keep their eyes and their attention fixed in the center of the display. Although they reported being unaware of any relationship between the digit-tracking task and the radiation problem, participants in both the eye-movement and attention-shift groups were more likely to solve the problem than were participants who maintained fixation. Our results show that by shifting attention in a pattern compatible with a problem's solution, we can aid participants' insight even in the absence of overt physical movements. |
Tim J. Smith; John M. Henderson Facilitation of return during scene viewing Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1083–1108, 2009. @article{Smith2009, Inhibition of Return (IOR) is a delay in initiating attentional shifts to previously attended locations. It is believed to facilitate attentional exploration of a scene. Computational models of attention have implemented IOR as a simple mechanism for driving attention through a scene. However, evidence for IOR during scene viewing is inconclusive. In this study IOR during scene memorization and in response to sudden onsets at the last (1-back) and penultimate (2-back) fixation location was measured. The results indicate that there is a tendency for saccades to continue the trajectory of the last saccade (Saccadic Momentum), but contrary to the “foraging facilitator” hypothesis of IOR, there is also a distinct population of saccades directed back to the last fixation location, especially in response to onsets. Voluntary return saccades to the 1-back location experience temporal delay but this does not affect their likelihood of occurrence. No localized temporal delay is exhibited at 2-back. These results suggest that IOR exists at the last fixation location during scene memorization but that this temporal delay is overridden by Facilitation of Return. Computational models of attention will fail to capture the pattern of saccadic eye movements during scene viewing unless they model the dynamics of visual encoding and can account for the interaction between Facilitation of Return, Saccadic Momentum, and Inhibition of Return. |
John F. Soechting; John Z. Juveli; Hrishikesh M. Rao Models for the extrapolation of target motion for manual interception Journal Article In: Journal of Neurophysiology, vol. 102, no. 3, pp. 1491–1502, 2009. @article{Soechting2009, Intercepting a moving target requires a prediction of the target's future motion. This extrapolation could be achieved using sensed parameters of the target motion, e.g., its position and velocity. However, the accuracy of the prediction would be improved if subjects were also able to incorporate the statistical properties of the target's motion, accumulated as they watched the target move. The present experiments were designed to test for this possibility. Subjects intercepted a target moving on the screen of a computer monitor by sliding their extended finger along the monitor's surface. Along any of the six possible target paths, target speed could be governed by one of three possible rules: constant speed, a power law relation between speed and curvature, or the trajectory resulting from a sum of sinusoids. A go signal was given to initiate interception and was always presented when the target had the same speed, irrespective of the law of motion. The dependence of the initial direction of finger motion on the target's law of motion was examined. This direction did not depend on the speed profile of the target, contrary to the hypothesis. However, finger direction could be well predicted by assuming that target location was extrapolated using target velocity and that the amount of extrapolation depended on the distance from the finger to the target. Subsequent analysis showed that the same model of target motion was also used for on-line, visually mediated corrections of finger movement when the motion was initially misdirected. |
Hiroyuki Sogo; Yuji Takeda Effect of spatial inhibition on saccade trajectory depends on location-based mechanisms Journal Article In: Japanese Psychological Research, vol. 51, no. 1, pp. 35–46, 2009. @article{Sogo2009, Saccade trajectory often curves away from a previously attended, inhibited location. A recent study of curved saccades showed that an inhibitory effect prevents ineffective reexamination during serial visual search. The time course of this effect differs from that of a similar inhibitory effect, known as inhibition of return (IOR). In the present study, we examined whether this saccade-related inhibitory effect can operate in an object-based manner (similar to IOR). Using a spatial cueing paradigm, we demonstrated that if a cue is presented on a placeholder that is then shifted from its original location, the saccade trajectory curves away from the original (cued) location (Experiment 1), yet the IOR effect is observed on the cued placeholder (Experiment 2). The inhibitory mechanism that causes curved saccades appears to operate in a location-based manner, whereas the mechanism underlying IOR appears to operate in an object-based manner. We propose that these inhibitory mechanisms work in a complementary fashion to guide eye movements efficiently under conditions of a dynamic visual environment. |
David Souto; Dirk Kerzel Evidence for an attentional component in saccadic inhibition of return Journal Article In: Experimental Brain Research, vol. 195, no. 4, pp. 531–540, 2009. @article{Souto2009, After presentation of a peripheral cue, facilitation at the cued location is followed by inhibition of return (IOR). It has been recently proposed that IOR may originate at different processing stages for manual and ocular responses, with manual IOR resulting from inhibited attentional orienting, and ocular IOR resulting from inhibited motor preparation. Contrary to this interpretation, we found an effect of target contrast on saccadic IOR. The effect of contrast decreased with increasing reaction times (RTs) for saccades, but not for manual key-press responses. This may have masked the effect of contrast on IOR with saccades in previous studies (Hunt and Kingstone in J Exp Psychol Hum Percept Perform 29:1068-1074, 2003) because only mean RTs were considered. We also found that background luminance strongly influenced the effects of gap and target contrast on IOR. |
Christian Starzynski; Ralf Engbert Noise-enhanced target discrimination under the influence of fixational eye movements and external noise Journal Article In: Chaos, vol. 19, no. 1, pp. 1–7, 2009. @article{Starzynski2009, Active motor processes are present in many sensory systems to enhance perception. In the human visual system, miniature eye movements are produced involuntarily and unconsciously when we fixate a stationary target. These fixational eye movements represent self-generated noise which serves important perceptual functions. Here we investigate fixational eye movements under the influence of external noise. In a two-choice discrimination task, the target stimulus performed a random walk with varying noise intensity. We observe noise-enhanced discrimination of the target stimulus characterized by a U-shaped curve of manual response times as a function of the diffusion constant of the stimulus. Based on the experiments, we develop a stochastic information-accumulator model for stimulus discrimination in a noisy environment. Our results provide a new explanation for the constructive role of fixational eye movements in visual perception. |
Joseph Schmidt; Gregory J. Zelinsky Search guidance is proportional to the categorical specificity of a target cue Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 10, pp. 1904–1914, 2009. @article{Schmidt2009, Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search. |
Franziska Schrammel; Sebastian Pannasch; Sven-Thomas Graupner; Andreas Mojzisch; Boris M. Velichkovsky Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience Journal Article In: Psychophysiology, vol. 46, no. 5, pp. 922–931, 2009. @article{Schrammel2009, The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Subjects' rapid facial reactions (RFRs) differentiated more clearly between the character's happy and angry expression in the condition of mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, thereby suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction. |
Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner Chromatic contrast sensitivity during optokinetic nystagmus, visually enhanced vestibulo-ocular reflex, and smooth pursuit eye movements Journal Article In: Journal of Neurophysiology, vol. 101, no. 5, pp. 2317–2327, 2009. @article{Schuetz2009, Recently we showed that sensitivity for chromatic- and high-spatial frequency luminance stimuli is enhanced during smooth-pursuit eye movements (SPEMs). Here we investigated whether this enhancement is a general property of slow eye movements. Besides SPEM there are two other classes of eye movements that operate in a similar range of eye velocities: the optokinetic nystagmus (OKN) is a reflexive pattern of alternating fast and slow eye movements elicited by wide-field visual motion and the vestibulo-ocular reflex (VOR) stabilizes the gaze during head movements. In a natural environment all three classes of eye movements act synergistically to allow clear central vision during self- and object motion. To test whether the same improvement of chromatic sensitivity occurs during all of these eye movements, we measured human detection performance of chromatic and luminance line stimuli during OKN and contrast sensitivity during VOR and SPEM at comparable velocities. For comparison, performance in the same tasks was tested during fixation. During the slow phase of OKN we found an enhancement of the chromatic detection rate similar to that during SPEM, whereas no enhancement was observable during VOR. This result indicates similarities between slow-phase OKN and SPEM, which are distinct from VOR. |
Keith Rayner; Monica S. Castelhano; Jinmian Yang Eye movements when looking at unusual/weird scenes: Are there cultural differences? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 35, no. 1, pp. 254–259, 2009. @article{Rayner2009b, Recent studies have suggested that eye movement patterns while viewing scenes differ for people from different cultural backgrounds and that these differences in how scenes are viewed are due to differences in the prioritization of information (background or foreground). The current study examined whether there are cultural differences in how quickly eye movements are drawn to highly unusual aspects of a scene. American and Chinese viewers examined photographic scenes while performing a preference rating task. For each scene, participants were presented with either a normal or an unusual/weird version. Even though there were differences between the normal and weird versions of the scenes, there was no evidence of any cultural differences while viewing either scene type. The present study, along with other recent reports, raises doubts about the notion that cultural differences can influence oculomotor control in scene perception. |
Keith Rayner; Tim J. Smith; George L. Malcolm; John M. Henderson Eye movements and visual encoding during scene perception Journal Article In: Psychological Science, vol. 20, no. 1, pp. 6–10, 2009. @article{Rayner2009, The amount of time viewers could process a scene during eye fixations was varied by a mask that appeared at a certain point in each eye fixation. The scene did not reappear until the viewer made an eye movement. The main finding in the studies was that in order to normally process a scene, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need only to view the words in the text for 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways. |
Bob Rehder; Robert M. Colner; Aaron B. Hoffman Feature inference learning and eyetracking Journal Article In: Journal of Memory and Language, vol. 60, no. 3, pp. 393–419, 2009. @article{Rehder2009, Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of diagnostic information. We tracked learners' eye movements and found in Experiment 1 that inference learners indeed fixated features that were unnecessary for inferring the missing feature, behavior consistent with acquiring the categories' internal structure. However, Experiments 3 and 4 showed that fixations were generally limited to features that needed to be predicted on future trials. We conclude that inference learning induces both supervised and unsupervised learning of category-to-feature associations rather than a general motivation to learn the internal structure of categories. |
Michael G. Reynolds; John D. Eastwood; Marita Partanen; Alexandra Frischen; Daniel Smilek Monitoring eye movements while searching for affective faces Journal Article In: Visual Cognition, vol. 17, no. 3, pp. 318–333, 2009. @article{Reynolds2009, A single experiment is reported in which we provide a novel analysis of eye movements during visual search to disentangle the contributions of unattended guidance and focal target processing to visual search performance. This technique is used to examine the controversial claim that unattended affective faces can guide attention during search. Results indicated that facial expression influences how efficiently the target was fixated for the first time as a function of set size. However, affective faces did not influence how efficiently the target was identified as a function of set size after it was first fixated. These findings suggest that, in the present context, facial expression can influence search before the target is attended and that the present measures are able to distinguish between the guidance of attention by targets and the processing of targets within the focus of attention. |
Paola Ricciardelli; Elena Betta; Sonia Pruner; Massimo Turatto Is there a direct link between gaze perception and joint attention behaviours? Effects of gaze contrast polarity on oculomotor behaviour Journal Article In: Experimental Brain Research, vol. 194, no. 3, pp. 347–357, 2009. @article{Ricciardelli2009, Previous studies have found that attention is oriented in the direction of other people's gaze suggesting that gaze perception is related to the mechanisms of joint attention. However, the role of the perception of gaze direction on joint attention has been challenged. We investigated the effects of disrupting gaze perception on the orienting of observers' attention, in particular, whether orienting to gaze direction is affected by the disruptive effect of negative contrast polarity on gaze perception. A dynamic distracting gaze was presented to observers performing an endogenous saccadic task. Gaze perception was manipulated by reversing the contrast polarity between the sclera and the iris. With positive display polarity, eye movement recordings showed shorter saccadic latencies when the direction of the instructed saccade matched the direction of the distracting gaze, and a substantial number of erroneous saccades towards the direction of the perceived gaze when the latter did not match the instruction. Crucially, such effects were not found when gaze contrast polarity was reversed and gaze perception was impaired. These results extend previous studies by demonstrating the existence of a direct link between joint attention and the perception of gaze direction, and show how orienting of attention to other people's gaze can be suppressed. |
M. Carmen Romano; Marco Thiel; Jürgen Kurths; Konstantin Mergenthaler; Ralf Engbert Hypothesis test for synchronization: Twin surrogates revisited Journal Article In: Chaos, vol. 19, no. 1, pp. 1–14, 2009. @article{Romano2009, The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes. |
Clive R. Rosenthal; Emma E. Roche-Kelly; Masud Husain; Christopher Kennard Response-dependent contributions of human primary motor cortex and angular gyrus to manual and perceptual sequence learning Journal Article In: Journal of Neuroscience, vol. 29, no. 48, pp. 15115–15125, 2009. @article{Rosenthal2009, Motor sequence learning on the serial reaction time task involves the integration of response-, stimulus-, and effector-based information. Human primary motor cortex (M1) and the inferior parietal lobule (IPL) have been identified with supporting the learning of effector-dependent and -independent information, respectively. Current neurocognitive data are, however, exclusively based on learning complex sequence information via perceptual-motor responses. Here, we investigated the effects of continuous theta-burst transcranial magnetic stimulation (cTBS)-induced disruption of M1 and the angular gyrus (AG) of the IPL on learning a probabilistic sequence via sequential perceptual-motor responses (experiment 1) or covert orienting of visuospatial attention (experiment 2). Functional effects on manual sequence learning were evident during 75% of training trials in the cTBS M1 condition, whereas cTBS over the AG resulted in interference confined to a midpoint during the training phase. Posttraining direct (declarative) tests of sequence knowledge revealed that cTBS over M1 modulated the availability of newly acquired sequence knowledge, whereby sequence knowledge was implicit in the cTBS M1 condition but was available to conscious awareness in the cTBS AG and control conditions. In contrast, perceptual sequence learning was abolished in the perceptual cTBS AG condition, whereas learning was intact and available to conscious awareness in the cTBS M1 and control conditions. These results show that the right AG had a critical role in perceptual sequence learning, whereas M1 had a causal role in developing experience-dependent functional attributes relevant to conscious knowledge on manual but not perceptual sequence learning. |
T. Roth; Alexander N. Sokolov; A. Messias; P. Roth; M. Weller; Susanne Trauzettel-Klosinski Comparing explorative saccade and flicker training in hemianopia: A randomized controlled study Journal Article In: Neurology, vol. 72, pp. 324–331, 2009. @article{Roth2009, Objective: Patients with homonymous hemianopia are disabled on everyday exploratory activities. We examined whether explorative saccade training (EST), compared with flicker-stimulation training (FT), would selectively improve saccadic behavior on the patients' blind side and benefit performance on natural exploratory tasks. Methods: Twenty-eight hemianopic patients were randomly assigned to distinct groups performing for 6 weeks either EST (a digit-search task) or FT (blind-hemifield stimulation by flickering letters). Outcome variables (response times [RTs] during natural search, number of fixations during natural scene exploration, fixation stability, visual fields, and quality-of-life scores) were collected before, directly after, and 6 weeks after training. Results: EST yielded a reduced (post/pre, 47%) digit-search RT for the blind side. Natural search RT decreased (post/pre, 23%) on the blind side but not on the seeing side. After FT, both sides' RT remained unchanged. Only with EST did the number of fixations during natural scene exploration increase toward the blind side and decrease on the seeing side (follow-up/pre difference, 238%). Even with the target located on the seeing side, after EST more fixations occurred toward the blind side. The EST group showed decreased (post/pre, 43%) fixation stability and increased (post/pre, 482%) asymmetry of fixations toward the blind side. Visual field size remained constant after both treatments. EST patients reported improvements in the social domain. Conclusions: Explorative saccade training selectively improves saccadic behavior, natural search, and scene exploration on the blind side. Flicker-stimulation training does not improve saccadic behavior or visual fields. The findings show substantial benefits of compensatory exploration training, including subjective improvements in mastering daily-life activities, in a randomized controlled trial. |
Jennifer D. Ryan; Christina Villate Building visual representations: The binding of relative spatial relations across time Journal Article In: Visual Cognition, vol. 17, no. 1-2, pp. 254–272, 2009. @article{Ryan2009, In this study, the construction of, and subsequent access to, representations regarding the relative spatial and temporal relations among sequentially presented objects was examined using eye movement monitoring. Participants were presented with a series of three objects, shown one at a time. Subsequently, a test display revealed all three objects simultaneously and participants judged whether the relative relations were maintained. Eye movements revealed the binding of relations across study images; eye movements transitioned between the location of the presented object and the locations that were previously occupied by objects in prior study images. For the test displays, changes in the relative relations were accurately detected. Eye movements distinguished intact displays from those in which the relations had been altered. Order of fixations to objects in test images mimicked the temporal order in which objects had been studied, but disruption of temporal order was observed for manipulated images. The present findings suggest that memory representations regarding the visual world include information about the relative spatial and temporal relations among objects. Eye movements may be the conduit by which information is integrated into a lasting representation, and by which current information is compared to stored representations. |
Gustav Kuhn; Alan Kingstone Look away! Eyes and arrows engage oculomotor responses automatically Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 2, pp. 314–327, 2009. @article{Kuhn2009, The present study investigates how people's voluntary saccades are influenced by where another person is looking, even when this is counterpredictive of the intended saccade direction. The color of a fixation point instructed participants to make saccades either to the left or right. These saccade directions were either congruent or incongruent with the eye gaze of a centrally presented schematic face. Participants were asked to ignore the eyes, which were congruent only 20% of the time. At short gaze–fixation-cue stimulus onset asynchronies (SOAs; 0 and 100 msec), participants made more directional errors on incongruent than on congruent trials. At a longer SOA (900 msec), the pattern tended to reverse. We demonstrate that a perceived eye gaze results in an automatic saccade following the gaze and that the gaze cue cannot be ignored, even when attending to it is detrimental to the task. Similar results were found for centrally presented arrow cues, suggesting that this interference is not unique to gazes. |
Gustav Kuhn; Benjamin W. Tatler; Geoff G. Cole You look where I look! Effect of gaze cues on overt and covert attention in misdirection Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 925–944, 2009. @article{Kuhn2009a, We designed a magic trick in which misdirection was used to orchestrate observers' attention in order to prevent them from detecting the to-be-concealed event. By experimentally manipulating the magician's gaze direction we investigated the role that gaze cues have in attentional orienting, independently of any low-level features. Participants were significantly less likely to detect the to-be-concealed event if the misdirection was supported by the magician's gaze, thus demonstrating that the gaze plays an important role in orienting people's attention. Moreover, participants spent less time looking at the critical hand when the magician's gaze was used to misdirect their attention away from the hand. Overall, the magician's face, and in particular the eyes, accounted for a large proportion of the fixations. The eyes were fixated most often when the magician was looking towards the observer; once he looked towards the actions and objects being manipulated, participants typically fixated the gazed-at areas. Using a highly naturalistic paradigm with a dynamic display, we demonstrate gaze following that is independent of the low-level features of the scene. |
Feng Yang Kuo; Chiung-Wen Hsu; Rong-Fuh Day An exploratory study of cognitive effort involved in decision under framing: An application of the eye-tracking technology Journal Article In: Decision Support Systems, vol. 48, no. 1, pp. 81–91, 2009. @article{Kuo2009, The framing effect, proposed by Tversky and Kahneman [A. Tversky, D. Kahneman, The framing of decisions and the psychology of choice, Science 211 (4481) (1981) 453-458.], refers to the phenomenon that varying the presentation of the same problem can systematically affect the choice one makes. In this research we reviewed the literature on the framing effect and on neurobiological studies of emotion. This review leads us to conceptualize that framing may induce emotion, which in turn impinges on the level of cognitive effort that subsequently shapes the framing effect. We then employ eye-tracking technology to explore the differences in cognitive effort under both positive and negative framing conditions. Among the four experimental problems, the disease and gambling problems are found to exhibit the framing effect, while the kittens' therapy and plant problems do not. In analyzing the level of eye movement for the four problems, we find that cognitive effort asymmetry plays a critical role in the production of the framing effect. That is, for the two problems that display the framing effect, subjects expend more effort in the negative framing condition than they do in the positive, yet the framing effect persists, indicating that they cannot change their cognitive inertia despite this increase in cognitive effort. The finding has potential implications for the design of information presentation to facilitate decision making. |
Wolfe Kienzle; Matthias O. Franz; Bernhard Schölkopf; Felix A. Wichmann Center-surround patterns emerge as optimal predictors for human saccade targets Journal Article In: Journal of Vision, vol. 9, no. 5, pp. 1–15, 2009. @article{Kienzle2009, The human visual system is foveated, that is, outside the central visual field, resolution and acuity drop rapidly. Nonetheless, much of a visual scene is perceived after only a few saccadic eye movements, suggesting an effective strategy for selecting saccade targets. It has been known for some time that local image structure at saccade targets influences the selection process. However, the question of what the most relevant visual features are is still under debate. Here we show that center-surround patterns emerge as the optimal solution for predicting saccade targets from their local image structure. The resulting model, a one-layer feed-forward network, is surprisingly simple compared to previously suggested models, which assume much more complex computations such as multi-scale processing and multiple feature channels. Nevertheless, our model is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically, as has been thought previously. |
Antonella C. Kis; Vaughan W. A. Singh; Matthias Niemeier Short- and long-term plasticity of eye position information: Examining perceptual, attentional, and motor influences on perisaccadic perception Journal Article In: Journal of Vision, vol. 9, no. 6, pp. 1–21, 2009. @article{Kis2009, Spatial vision requires information about eye position to account for eye movements. But integrating eye position information with information about objects in the world is imperfect and can lead to transient misperceptions around the time of saccadic eye movements, most likely because the signals are prone to temporal errors, making it difficult to tell when the retinas move relative to when retinal images move. To clarify where this uncertainty comes from, in four experiments we examined influences of eye posture, attentional cueing, and trial history on perisaccadic misperceptions. We found evidence for one longer-term modulation of perisaccadic shift that evolved over the time of the test session due to biased eye posture. Another, short-term influence on perisaccadic shift was related to eye posture during preceding trials or the direction of the preceding saccade. Both perceptual effects could not be explained with visual delays, influences of attention, or changes in saccade metrics. Our data are consistent with the idea that perisaccadic shift is caused by neural representations of eye position or space that are plastic and that arise from non-motor, extraretinal mechanisms. This suggests a perceptual system that continuously calibrates itself in response to changes in oculomotor and muscle systems to reconstruct a stable percept of the world. |
Steffen Klingenhoefer; Frank Bremmer Perisaccadic localization of auditory stimuli Journal Article In: Experimental Brain Research, vol. 198, no. 2-3, pp. 411–423, 2009. @article{Klingenhoefer2009, Interaction with the outside world requires knowledge about where objects are with respect to one's own body. Such spatial information is represented in various topographic maps in different sensory systems. From a computational point of view, however, a single, modality-invariant map of the incoming sensory signals appears to be a more efficient strategy for spatial representations. If such a single supra-modal map existed and were used for perceptual purposes, localization characteristics should be similar across modalities. Previous studies had shown mislocalization of brief visual stimuli presented in the temporal vicinity of saccadic eye movements. Here, we tested whether such mislocalizations could also be found for auditory stimuli. We presented brief noise bursts before, during, and after visually guided saccades. Indeed, we found localization errors for these auditory stimuli. The spatio-temporal pattern of this mislocalization, however, clearly differed from the one found for visual stimuli. The spatial error also depended on the exact type of eye movement (visually guided vs. memory guided saccades). Finally, results obtained in fixational control paradigms under different conditions suggest that auditory localization can be strongly influenced by both static and dynamic visual stimuli. Visual localization, on the other hand, is not influenced by distracting visual stimuli but can be inaccurate in the temporal vicinity of eye movements. Taken together, our results argue against a single, modality-independent spatial representation of sensory signals. |
Tomas Knapen; Jan Brascamp; Wendy J. Adams; Erich W. Graf The spatial scale of perceptual memory in ambiguous figure perception Journal Article In: Journal of Vision, vol. 9, no. 13, pp. 1–12, 2009. @article{Knapen2009, Ambiguous visual stimuli highlight the constructive nature of vision: perception alternates between two plausible interpretations of unchanging input. However, when a previously viewed ambiguous stimulus reappears, its earlier perception almost entirely determines the new interpretation; memory disambiguates the input. Here, we investigate the spatial properties of this perceptual memory, taking into account strong anisotropies in percept preference across the visual field. Countering previous findings, we show that perceptual memory is not confined to the location in which it was instilled. Rather, it spreads to noncontiguous regions of the visual field, falling off at larger distances. Furthermore, this spread of perceptual memory takes place in a frame of reference that is tied to the surface of the retina. These results place the neural locus of perceptual memory in retinotopically organized sensory cortical areas, with implications for the wider function of perceptual memory in facilitating stable vision in natural, dynamic environments. |
Tomas Knapen; Martin Rolfs; Patrick Cavanagh The reference frame of the motion aftereffect is retinotopic Journal Article In: Journal of Vision, vol. 9, no. 5, pp. 1–6, 2009. @article{Knapen2009a, Although eye-, head- and body-movements can produce large-scale translations of the visual input on the retina, perception is notable for its spatiotemporal continuity. The visual system might achieve this by the creation of a detailed map in world coordinates: a spatiotopic representation. We tested the coordinate system of the motion aftereffect by adapting observers to translational motion and then tested (1) at the same retinal and spatial location (full aftereffect condition), (2) at the same retinal location, but at a different spatial location (retinotopic condition), (3) at the same spatial, but at a different retinal location (spatiotopic condition), or (4) at a different spatial and retinal location (general transfer condition). We used large stimuli moving at high speed to maximize the likelihood of motion integration across space. In a second experiment, we added a contrast-decrement detection task to the motion stimulus to ensure attention was directed at the adapting location. Strong motion aftereffects were found when observers were tested in the full and retinotopic aftereffect conditions. We also found a smaller aftereffect at the spatiotopic location but it did not differ from that at the location that was neither spatiotopic nor retinotopic. This pattern of results did not change when attention was explicitly directed at the adapting stimulus. We conclude that motion adaptation took place at retinotopic levels of visual cortex and that no spatiotopic interaction of motion adaptation and test occurred across saccades. |