EyeLink Cognition Publications
All EyeLink cognition and perception research publications up to 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognition or perception article, please email us!
2008 |
Richard Godijn; Arthur F. Kramer Oculomotor capture by surprising onsets Journal Article In: Visual Cognition, vol. 16, no. 2-3, pp. 279–289, 2008. @article{Godijn2008b, The present study examined the effect of surprising onsets on oculomotor behaviour. Participants were required to execute a saccadic eye movement to a colour singleton target. After a series of trials an unexpected onset distractor was abruptly presented on the surprise trial. The presentation of the onset was repeated on subsequent trials. The results showed that the onset captured the eyes for 28% of the participants on the surprise trial, but this percentage decreased after repeated exposure to the onset. Furthermore, saccade latencies to the target were increased when a surprising onset was presented. After repeated exposure to the onset, latencies to the target decreased to the preonset level. The results suggest that when the onset is not part of participants' task set it has a strong effect on oculomotor behaviour. Once the task set has been updated and the onset no longer comes as a surprise its effect on oculomotor behaviour is dramatically reduced. |
Jennifer J. Heisz; David I. Shore More efficient scanning for familiar faces Journal Article In: Journal of Vision, vol. 8, no. 1, pp. 1–10, 2008. @article{Heisz2008, The present study reveals changes in eye movement patterns as newly learned faces become more familiar. Observers received multiple exposures to newly learned faces over four consecutive days. Recall tasks were performed on all 4 days, and a recognition task was performed on the fourth day. Eye movement behavior was compared across facial exposure and task type. Overall, the eyes were viewed for longer and more often than any other facial region, regardless of face familiarity. As a face became more familiar, observers made fewer fixations during recall and recognition. With increased exposure, observers sampled more from the eyes and sampled less from the nose, mouth, forehead, chin, and cheek regions. Interestingly, this change in scanning behavior was only observed for recall tasks, but not for recognition. |
John M. Henderson; Graham L. Pierce Eye movements during scene viewing: Evidence for mixed control of fixation durations Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 3, pp. 566–573, 2008. @article{Henderson2008, Recent behavioral and computational research on eye movement control during scene viewing has focused on where the eyes move. However, fixations also differ in their durations, and when the eyes move may be another important indicator of perceptual and cognitive activity. Here we used a scene onset delay paradigm to investigate the degree to which individual fixation durations are under direct moment-to-moment control of the viewer's current visual scene. During saccades just prior to critical fixations, the scene was removed from view so that when the eyes landed, no scene was present. Following a manipulated delay period, the scene was restored to view. We found that one population of fixations was under the direct control of the current scene, increasing in duration as delay increased. A second population of fixations was relatively constant across delay. The pattern of data did not change whether delay duration was random or blocked, suggesting that the effects were not under the strategic control of the viewer. The results support a mixed control model in which the durations of some fixations proceed regardless of scene presence, whereas others are under the direct moment-to-moment control of ongoing scene analysis. |
Mieke Donk; Wieske Zoest Effects of salience are short-lived Journal Article In: Psychological Science, vol. 19, no. 7, pp. 733–739, 2008. @article{Donk2008, A salient event in the visual field tends to attract attention and the eyes. To account for the effects of salience on visual selection, models generally assume that the human visual system continuously holds information concerning the relative salience of objects in the visual field. Here we show that salience in fact drives vision only during the short time interval immediately following the onset of a visual scene. In a saccadic target-selection task, human performance in making an eye movement to the most salient element in a display was accurate when response latencies were short, but was at chance when response latencies were long. In a manual discrimination task, performance in making a judgment of salience was more accurate with brief than with long display durations. These results suggest that salience is represented in the visual system only briefly after a visual image enters the brain. |
Jacob Duijnhouwer; Richard J. A. Wezel; Albert V. Van den Berg The role of motion capture in an illusory transformation of optic flow fields. Journal Article In: Journal of Vision, vol. 8, no. 4, pp. 1–18, 2008. @article{Duijnhouwer2008, In the optic flow illusion, the focus of an expanding optic flow field appears shifted when uniform flow is transparently superimposed. The shift is in the direction of the uniform flow, or "inducer." Current explanations relate the transformation of the expanding optic flow field to perceptual subtraction of the inducer signal. Alternatively, the shift might result from motion capture acting on the perceived focus position. To test this alternative, we replaced expanding target flow with contracting or rotating flow. Current explanations predict focus shifts in expanding and contracting flows that are opposite but of equal magnitude and parallel to the inducer. In rotary flow, the current explanations predict shifts that are perpendicular to the inducer. In contrast, we report larger shift for expansion than for contraction and a component of shift parallel to the inducer for rotary flow. The magnitude of this novel component of shift depended on the target flow speed, the inducer flow speed, and the presentation duration. These results support the idea that motion capture contributes substantially to the optic flow illusion. |
Kristie R. Dukewich; Raymond M. Klein; John Christie The effect of gaze on gaze direction while looking at art Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 6, pp. 1141–1147, 2008. @article{Dukewich2008, In highly controlled cuing experiments, conspecific gaze direction has powerful effects on an observer's attention. We explored the generality of this effect by using paintings in which the gaze direction of a key character had been carefully manipulated. Our observers looked at these paintings in one of three instructional states (neutral, social, or spatial) while we monitored their eye movements. Overt orienting was much less influenced by the critical gaze direction than what the cuing literature might suggest: An analysis of the direction of saccades following the first fixation of the critical gaze showed that observers were weakly biased to orient in the direction of the gaze. Over longer periods of viewing, however, this effect disappeared for all but the social condition. This restriction of gaze as an attentional cue to a social context is consistent with the idea that the evolution of gaze direction detection is rooted in social communication. The picture stimuli from this experiment can be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive. |
Frank H. Durgin; Erika Doyle; Louisa Egan Upper-left gaze bias reveals competing search strategies in a reverse Stroop task Journal Article In: Acta Psychologica, vol. 127, no. 2, pp. 428–448, 2008. @article{Durgin2008, Three experiments with a total of 87 human observers revealed an upper-left spatial bias in the initial movement of gaze during visual search. The bias was present whether or not the explicit control of gaze was required for the task. This bias may be part of a search strategy that competed with the fixed-gaze parallel search strategy hypothesized by Durgin [Durgin, F. H. (2003). Translation and competition among internal representations in a reverse Stroop effect. Perception & Psychophysics, 65, 367-378.] for this task. When the spatial probabilities of the search target were manipulated either in accord with or in opposition to the existing upper-left bias, two orthogonal factors of interference in the latency data were differentially affected. The two factors corresponded to two different forms of representation and search. Target probabilities consistent with the gaze bias encouraged opportunistic serial search (including gaze shifts), while symmetrically opposing target probabilities produced latency patterns more consistent with parallel search based on a sensory code. |
C. Ehresman; D. Saucier; Matthew Heath; G. Binsted Online corrections can produce illusory bias during closed-loop pointing Journal Article In: Experimental Brain Research, vol. 188, no. 3, pp. 371–378, 2008. @article{Ehresman2008, This experiment examined whether the impact of pictorial illusions during the execution of goal-directed reaching movements is attributable to ocular motor signaling. We analyzed eye and hand movements directed toward the vertex of the Müller-Lyer (ML) figure in a closed-loop procedure. Participants pointed to the right vertex of a visual stimulus in two conditions: a control condition wherein the figure (in-ML, neutral, out-ML) presented at response planning remained unchanged throughout the movement, and an experimental condition wherein a neutral figure presented at response planning was perturbed to an illusory figure (in-ML, out-ML) at movement onset. Consistent with previous work from our group (Heath et al. in Exp Brain Res 158:378-384, 2004; Heath et al. in J Mot Behav 37:179-185, 2005b), action-bias was present in both conditions; thus, illusory bias was introduced during online control. Although primary saccades were influenced by illusory configurations (control conditions; see Binsted and Elliott in Hum Mov Sci 18:103-117, 1999a), illusory bias developed within the secondary "corrective" saccades during experimental trials (i.e., following a veridical primary saccade). These results support the position that a unitary spatial representation underlies both action and perception and this representation is common to both the manual and oculomotor systems. |
Wolfgang Einhäuser; Ueli Rutishauser; Christof Koch Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli Journal Article In: Journal of Vision, vol. 8, no. 2, pp. 1–19, 2008. @article{Einhaeuser2008, In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven ("bottom-up") and task-dependent ("top-down") factors remains controversial: Can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye-movements in human observers, while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free-viewing, observers' eye-positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability to either side of the stimulus. When the target always occurred in the low-contrast side, observers' eye-positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands do not only override sensory-driven saliency but also actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Whereas the bias was less persistent in free viewing, the overriding and reversal took longer to deploy. Hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast ("oddity") instead of the bull's-eye ("template"). In contrast to the other conditions, a slight sensory-driven free-viewing bias prevails in this condition. 
In a fourth experiment, we demonstrate that at known locations template targets are detected faster than oddity targets, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency. |
Wolfgang Einhäuser; Merrielle Spain; Pietro Perona Objects predict fixations better than early saliency Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–26, 2008. @article{Einhaeuser2008a, Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as mere preprocessing step for object recognition, models of both need to be integrated. |
Wolfgang Einhäuser; James Stout; Christof Koch; Olivia Carter Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry Journal Article In: Proceedings of the National Academy of Sciences, vol. 105, no. 5, pp. 1704–1709, 2008. @article{Einhaeuser2008b, During sustained viewing of an ambiguous stimulus, an individual's perceptual experience will generally switch between the different possible alternatives rather than stay fixed on one interpretation (perceptual rivalry). Here, we measured pupil diameter while subjects viewed different ambiguous visual and auditory stimuli. For all stimuli tested, pupil diameter increased just before the reported perceptual switch and the relative amount of dilation before this switch was a significant predictor of the subsequent duration of perceptual stability. These results could not be explained by blink or eye-movement effects, the motor response or stimulus driven changes in retinal input. Because pupil dilation reflects levels of norepinephrine (NE) released from the locus coeruleus (LC), we interpret these results as suggestive that the LC-NE complex may play the same role in perceptual selection as in behavioral decision making. |
Robert D. Gordon; Sarah D. Vollmer; Megan L. Frankl Object continuity and the transsaccadic representation of form Journal Article In: Perception and Psychophysics, vol. 70, no. 4, pp. 667–679, 2008. @article{Gordon2008, Transsaccadic object file representations were investigated in three experiments. Subjects moved their eyes from a central fixation cross to a location between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials on which the target identity matched one of the preview objects, its orientation either matched or did not match the previewed orientation. The results of Experiments 1 and 2 revealed that orientation changes disrupt perceptual continuity for objects located near fixation, but not for objects located further from fixation. The results of Experiment 3 confirmed that orientation changes do not disrupt continuity for distant objects, while showing that subjects nevertheless maintain an object-specific representation of the orientation of such objects. Together, the results suggest that object files represent orientation but that whether or not orientation plays a role in the processes that determine continuity depends on the quality of the perceptual representation. |
Harold H. Greene Distance-from-target dynamics during visual search Journal Article In: Vision Research, vol. 48, no. 23-24, pp. 2476–2484, 2008. @article{Greene2008, Tseng and Li (2004; Oculomotor correlates of context-guided learning in visual search, Perception & Psychophysics, 66, 1368–1378) noted that visual search with eye movements may be characterized by a search phase in which fixations do not move towards the target, followed by a phase in which fixations move steadily towards the target. They speculated that the phases are related to memory and recognition processes. Human visual search and Monte Carlo simulations are described towards an explanation. Distance-from-target dynamics were demonstrated to be sensitive to geometric constraints and therefore do not provide a solution to the question of memory in visual search. Finally, it is concluded that the specific distance-from-target dynamics noted by Tseng and Li (2004) are parsimoniously explained by random walks that were initialized at the centre of their stimulus displays. |
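Greene's random-walk argument can be sketched in a few lines: a memoryless walk that starts at the display centre, once its distance-from-target trace is aligned to the moment of target acquisition, shows an apparent "steady approach" phase with no guidance or memory at all. The sketch below is illustrative only; all parameters (display size, step size, capture radius) are assumptions for the demonstration and are not those of the original study.

```python
import math
import random

def aligned_distance_curve(n_walks=3000, max_steps=200, half_width=10.0,
                           step=1.0, capture_radius=1.0, n_last=15, seed=0):
    """Monte Carlo sketch: fixations as a 2-D random walk initialised at the
    display centre, searching for a target placed uniformly in the display.
    Distances from the target are aligned to the first step that lands within
    capture_radius ("acquisition") and averaged backwards across walks.
    Index 0 of the result is the acquisition step; index j is j steps earlier.
    """
    rng = random.Random(seed)
    totals = [0.0] * n_last
    counts = [0] * n_last
    for _ in range(n_walks):
        # Target uniform in the square display; walk starts at the centre.
        tx = rng.uniform(-half_width, half_width)
        ty = rng.uniform(-half_width, half_width)
        x = y = 0.0
        trace = []
        for _ in range(max_steps):
            d = math.hypot(x - tx, y - ty)
            trace.append(d)
            if d <= capture_radius:  # target "acquired"
                break
            # Fixed-length step in a random direction, clamped to the display.
            angle = rng.uniform(0.0, 2.0 * math.pi)
            x = min(half_width, max(-half_width, x + step * math.cos(angle)))
            y = min(half_width, max(-half_width, y + step * math.sin(angle)))
        else:
            continue  # target never found within max_steps; discard this walk
        # Accumulate the last n_last distances, aligned to acquisition.
        for j, d in enumerate(reversed(trace[-n_last:])):
            totals[j] += d
            counts[j] += 1
    return [t / c for t, c in zip(totals, counts) if c > 0]
```

Because every pre-acquisition sample in a trace is by construction farther from the target than `capture_radius`, the averaged curve necessarily declines toward the capture radius as acquisition approaches — mimicking the "moving steadily towards the target" phase purely through alignment and display geometry.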
Tom Foulsham; Alan Kingstone; Geoffrey Underwood Turning the world around: Patterns in saccade direction vary with picture orientation Journal Article In: Vision Research, vol. 48, pp. 1777–1790, 2008. @article{Foulsham2008a, The eye movements made by viewers of natural images often feature a predominance of horizontal saccades. Can this behaviour be explained by the distribution of saliency around the horizon, low-level oculomotor factors, top-down control or laboratory artefacts? Two experiments explored this bias by recording saccades whilst subjects viewed photographs rotated to varying extents, but within a constant square frame. The findings show that the dominant saccade direction follows the orientation of the scene, though this pattern varies in interiors and during recognition of previously seen pictures. This demonstrates that a horizon bias is robust and affected by both the distribution of features and more global representations of the scene layout. |
Tom Foulsham; Geoffrey Underwood What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition Journal Article In: Journal of Vision, vol. 8, no. 2, pp. 1–17, 2008. @article{Foulsham2008, Saliency map models account for a small but significant amount of the variance in where people fixate, but evaluating these models with natural stimuli has led to mixed results. In the present study, the eye movements of participants were recorded while they viewed color photographs of natural scenes in preparation for a memory test (encoding) and when recognizing them later. These eye movements were then compared to the predictions of a well defined saliency map model (L. Itti & C. Koch, 2000), in terms of both individual fixation locations and fixation sequences (scanpaths). The saliency model is a significantly better predictor of fixation location than random models that take into account bias toward central fixations, and this is the case at both encoding and recognition. However, similarity between scanpaths made at multiple viewings of the same stimulus suggests that repetitive scanpaths also contribute to where people look. Top-down recapitulation of scanpaths is a key prediction of scanpath theory (D. Noton & L. Stark, 1971), but it might also be explained by bottom-up guidance. The present data suggest that saliency cannot account for scanpaths and that incorporating these sequences could improve model predictions. |
Hans Peter Frey; Christian Honey; Peter König What's color got to do with it? The influence of color on visual attention in different categories Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008. @article{Frey2008, Certain locations attract human gaze in natural visual scenes. Are there measurable features, which distinguish these locations from others? While there has been extensive research on luminance-defined features, only few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features for quantifying the influence of color on attention. |
Steffen Gais; Sabine Köster; Andreas Sprenger; Judith Bethke; Wolfgang Heide; Hubert Kimmig Sleep is required for improving reaction times after training on a procedural visuo-motor task Journal Article In: Neurobiology of Learning and Memory, vol. 90, no. 4, pp. 610–615, 2008. @article{Gais2008, Sleep has been found to enhance consolidation of many different forms of memory. However in most procedural tasks, a sleep-independent, fast learning component interacts with slow, sleep-dependent improvements. Here, we show that in humans a visuo-motor saccade learning task shows no improvements during training, but only during a delayed recall testing after a period of sleep. Subjects were trained in a prosaccade task (saccade to a visual target). Performance was tested in the prosaccade and the antisaccade task (saccade to opposite direction of the target) before training, after a night of sleep or sleep deprivation, after a night of recovery sleep, and finally in a follow-up test 4 weeks later. We found no immediate improvement in saccadic reaction time (SRT) during training, but a delayed reduction in SRT, indicating a slow-learning process. This reduction occurred only after a period of sleep, i.e. after the first night in the sleep group and after recovery sleep in the sleep deprivation group. This improvement was stable during the 4-week follow-up. Saccadic training can thus induce covert changes in the saccade generation pathway. During the following sleep period, these changes in turn bring about overt performance improvements, presuming a learning effect based on synaptic tagging. |
Ali Ezzati; Ashkan Golzar; Arash S. R. Afraz Topography of the motion aftereffect with and without eye movements Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008. @article{Ezzati2008, Although a lot is known about various properties of the motion aftereffect (MAE), there is no systematic study of the topographic organization of MAE. In the current study, first we provided a topographic map of the MAE to investigate its spatial properties in detail. To provide a fine topographic map, we measured MAE with small test stimuli presented at different loci after adaptation to motion in a large region within the visual field. We found that strength of MAE is highest on the internal edge of the adapted area. Our results show a sharper aftereffect boundary for the shearing motion compared to compression and expansion boundaries. In the second experiment, using a similar paradigm, we investigated topographic deformation of the MAE area after a single saccadic eye movement. Surprisingly, we found that topographic map of MAE splits into two separate regions after the saccade: one corresponds to the retinal location of the adapted stimulus and the other matches the spatial location of the adapted region on the display screen. The effect was stronger at the retinotopic location. The third experiment is basically replication of the second experiment in a smaller zone that confirms the results of previous experiments in individual subjects. The eccentricity of spatiotopic area is different from retinotopic area in the second experiment; Experiment 3 controls the effect of eccentricity and confirms the major results of the second experiment. |
Fred H. Hamker; Marc Zirnsak; Markus Lappe About the influence of post-saccadic mechanisms for visual stability on peri-saccadic compression of object location Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–13, 2008. @article{Hamker2008, Peri-saccadic perception experiments have revealed a multitude of mislocalization phenomena. For instance, a briefly flashed stimulus is perceived closer to the saccade target, whereas a displacement of the saccade target goes usually unnoticeable. This latter saccadic suppression of displacement has been explained by a built-in characteristic of the perceptual system: the assumption that during a saccade, the environment remains stable. We explored whether the mislocalization of a briefly flashed stimulus toward the saccade target also grounds in the built-in assumption of a stable environment. If the mislocalization of a peri-saccadically flashed stimulus originates from a post-saccadic alignment process, an additional location marker at the position of the upcoming flash should counteract compression. Alternatively, compression might be the result of peri-saccadic attentional phenomena. In this case, mislocalization should occur even if the position of the flashed stimulus is marked. When subjects were asked about their perceived location, they mislocalized the stimulus toward the saccade target, even though they were fully aware of the correct stimulus location. Thus, our results suggest that the uncertainty about the location of a flashed stimulus is not inherently relevant for compression. |
Benjamin Y. Hayden; Sarah R. Heilbronner; Amrita C. Nair; Michael L. Platt Cognitive influences on risk-seeking by rhesus macaques Journal Article In: Judgment and Decision Making, vol. 3, no. 5, pp. 389–395, 2008. @article{Hayden2008, Humans and other animals are idiosyncratically sensitive to risk, either preferring or avoiding options having the same value but differing in uncertainty. Many explanations for risk sensitivity rely on the non-linear shape of a hypothesized utility curve. Because such models do not place any importance on uncertainty per se, utility curve-based accounts predict indifference between risky and riskless options that offer the same distribution of rewards. Here we show that monkeys strongly prefer uncertain gambles to alternating rewards with the same payoffs, demonstrating that uncertainty itself contributes to the appeal of risky options. Based on prior observations, we hypothesized that the appeal of the risky option is enhanced by the salience of the potential jackpot. To test this, we subtly manipulated payoffs in a second gambling task. We found that monkeys are more sensitive to small changes in the size of the large reward than to equivalent changes in the size of the small reward, indicating that they attend preferentially to the jackpots. Together, these results challenge utility curve-based accounts of risk sensitivity, and suggest that psychological factors, such as outcome salience and uncertainty itself, contribute to risky decision-making. |
Gregory J. Zelinsky A theory of eye movements during target acquisition Journal Article In: Psychological Review, vol. 115, pp. 787–835, 2008. @article{Zelinsky2008, The gaze movements accompanying target localization were examined via human observers and a computational model (Target Acquisition Model, TAM). Search contexts ranged from fully realistic scenes, to toys in a crib, to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model always previewed the same targets and searched the identical displays. Behavioral and simulated eye movements were analyzed for acquisition accuracy, efficiency, and target guidance. TAM's behavior generally fell within the behavioral mean's 95% confidence interval for all measures in each experiment/condition. This agreement suggests that a fixed-parameter model using spatio-chromatic filters and a simulated retina, when driven by the correct visual routines, can be a good general purpose predictor of human target acquisition behavior. |
Gregory J. Zelinsky; Mark B. Neider An eye movement analysis of multiple object tracking in a realistic environment Journal Article In: Visual Cognition, vol. 16, no. 5, pp. 553–566, 2008. @article{Zelinsky2008a, To study multiple object tracking under naturalistic conditions, observers tracked 1–4 sharks (9 in total) swimming throughout an underwater scene. Accuracy was high in the Track 1–3 conditions (>92%), but declined when tracking 4 targets (78%). Gaze analyses revealed a dependency between tracking strategy and target number. Observers tracking 2 targets kept their gaze on the target centroid rather than individual objects; observers tracking 4 targets switched their gaze back-and-forth between sharks. Using an oculomotor method for identifying targets lost during tracking, we confirmed that this strategy shift was real and not an artifact of centroid definition. Moreover, we found that tracking errors increased with gaze time on targets, and decreased with time spent looking at the centroid. Depending on tracking load, both centroid and target-switching strategies are used, with accuracy improving with reliance on centroid tracking. An index juggling hypothesis is advanced to explain the suboptimal tendency to fixate tracked objects. |
Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor Effects of luminance and saccadic suppression on perisaccadic spatial distortions Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–18, 2008. @article{Zhang2008, Visual directions of foveal targets flashed just prior to the onset of a saccade are misperceived as shifted in the direction of the eye movement. We examined the effects of luminance level and temporal interactions on the amplitude of these perisaccadic spatial distortions (PSDs). PSDs were larger for both single and sequentially double-flashed stimuli with low than high luminance levels, and there was a reduction of PSDs for low luminance targets flashed immediately before the saccade. Significant temporal interactions were suggested by PSDs for a pair of sequentially presented flashes (ISI = 50 ms) that could not be predicted from the single-flash distortions: PSD increased for the first flash and decreased for the second compared to the single-flash distortions. We also found that when the flash pair was presented near saccade onset, the perceived distortion of the earlier flash overtook that of the later flash, even though the late flash occurred closer in time to the saccade. To explain these effects, we propose that stimulus-dependent nonlinearities (contrast gain control and saccadic suppression) influence the duration of the temporal impulse response of both single- and double-flashed stimuli. |
Katsumi Watanabe; Kenji Yokoi Dynamic distortion of visual position representation around moving objects Journal Article In: Journal of Vision, vol. 8, no. 3, pp. 1–11, 2008. @article{Watanabe2008, The relative visual positions of briefly flashed stimuli are systematically modified in the presence of motion signals (R. Nijhawan, 2002; D. Whitney, 2002). Previously, we investigated the two-dimensional distortion of relative-position representations between moving and flashed stimuli. The results showed that the perceived position of a flash is not uniformly displaced but shifted toward a single convergent point back along the trajectory of a moving object (K. Watanabe & K. Yokoi, 2006, 2007). In the present study, we examined the temporal dynamics of the anisotropic distortion of visual position representation. While observers fixated on a stationary cross, a black disk appeared, moved along a horizontal trajectory, and disappeared. A white dot was briefly flashed at various positions relative to the moving disk and at various timings relative to the motion onset/offset. The temporal emerging-waning pattern of anisotropic mislocalization indicated that position representation in the space ahead of a moving object differs qualitatively from that in the space behind it. Thus, anisotropic mislocalization cannot be explained by either a spatially or a temporally homogeneous process. Instead, visual position representation is anisotropically influenced by moving objects in both space and time. |
Mark Wexler; Nizar Ouarti Depth affects where we look Journal Article In: Current Biology, vol. 18, no. 23, pp. 1872–1876, 2008. @article{Wexler2008, Understanding how we spontaneously scan the visual world through eye movements is crucial for characterizing both the strategies and inputs of vision [1-27]. Despite the importance of the third or depth dimension for perception and action, little is known about how the specifically three-dimensional aspects of scenes affect looking behavior. Here we show that three-dimensional surface orientation has a surprisingly large effect on spontaneous exploration, and we demonstrate that a simple rule predicts eye movements given surface orientation in three dimensions: saccades tend to follow surface depth gradients. The rule proves to be quite robust: it generalizes across depth cues, holds in the presence or absence of a task, and applies to more complex three-dimensional objects. These results not only lead to a more accurate understanding of visuo-motor strategies, but also suggest a possible new oculomotor technique for studying three-dimensional vision from a variety of depth cues in subjects, such as animals or human infants, that cannot explicitly report their perceptions. |
Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde In: Spatial Vision, vol. 22, pp. 335–348, 2008. @article{Troncoso2008a, When corners are embedded in a luminance gradient, their perceived salience varies linearly with corner angle (Troncoso et al., 2005). Here we hypothesize that this relationship may hold true for all corners, not just corner gradients. To test this hypothesis, we developed a novel variant of the flicker-augmented contrast illusion (Anstis and Ho, 1998) that employs solid (non-gradient) corners of varying angles to modify perceived brightness. We flickered solid corners from dark to light grey (50% luminance over time) against a black or a white background. With this new stimulus, subjects compared the apparent brightness of corners, which did not vary in actual luminance, to non-illusory stimuli that varied in actual luminance. We found that the apparent brightness of corners was linearly related to the sharpness of corner angle. Thus this relationship is not solely an effect of corners embedded in gradients, but may be a general principle of corner perception. These findings may have important repercussions for brain mechanisms underlying the early visual processing of shape and brightness. A large fraction of Vasarely's art showcases the perceptual salience of corners, curvature and terminators. Several of these artworks and their implications for visual processing are discussed. |
Yuan-Chi Tseng; Chiang-Shan Ray Li The effects of response readiness and error monitoring on saccade countermanding Journal Article In: The Open Psychology Journal, vol. 1, no. 1, pp. 18–25, 2008. @article{Tseng2008, The stop-signal task (SST) and anti-saccade tasks are both widely used to explore cognitive inhibitory control. Our previous work on a manual SST showed that subjects' readiness to respond to the go signal and the extent to which subjects monitor their errors need to be considered in order to attribute impaired performance to deficits in response inhibition. Here we examine whether these same task-related variables similarly influence oculomotor SST and anti-saccade performance. Thirty-six and sixty healthy, adult subjects participated in an oculomotor SST and anti-saccade task, respectively, in which the fore-period (FP) of imperative stimulus varied randomly from trial to trial. We computed a FP effect to index response readiness to the imperative stimulus and a post-error slowing (PES) effect to index error monitoring. Contrary to what we had anticipated, other than a weak but negative association between the FP effect and anti-saccade errors, these behavioral variables did not correlate with SST or anti-saccade performance. |
Geoffrey Underwood; Emma Templeman; Laura Lamming; Tom Foulsham Is attention necessary for object identification? Evidence from eye movements during the inspection of real-world scenes Journal Article In: Consciousness and Cognition, vol. 17, no. 1, pp. 159–170, 2008. @article{Underwood2008, Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection. |
Ronald van den Berg; Frans W. Cornelissen; Jos B. T. M. Roerdink Perceptual dependencies in information visualization assessed by complex visual search Journal Article In: ACM Transactions on Applied Perception, vol. 4, no. 4, pp. 1–21, 2008. @article{Berg2008, A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of the end-user tasks performed on these images involve a form of visual search. Often, it is simply assumed that features can be judged independently of each other in such tasks. However, there is evidence for perceptual dependencies when simultaneously presenting multiple features. Such dependencies could potentially affect information visualizations that contain combinations of features for encoding information and, thereby, bias subjects into unequally weighting the relevance of different data dimensions. We experimentally assess (1) the presence of judgment dependencies in a visualization task (searching for a target node in a node-link diagram) and (2) how feature contrast relates to salience. From a visualization point of view, our most relevant findings are that (a) to equalize saliency (and thus bottom-up weighting) of size and color, color contrasts have to become very low. Moreover, orientation is less suitable for representing information that consists of a large range of data values, because it does not show a clear relationship between contrast and salience; (b) color and size are features that can be used independently to represent information, at least as far as the range of colors that were used in our study are concerned; (c) the concept of (static) feature salience hierarchies is wrong; how salient a feature is compared to another is not fixed, but a function of feature contrasts; (d) final decisions appear to be as good an indicator of perceptual performance as indicators based on measures obtained from individual fixations. Eye tracking, therefore, does not necessarily present a benefit for user studies that aim at evaluating performance in search tasks. |
Claudia Wilimzig; Naotsugu Tsuchiya; Manfred Fahle; Wolfgang Einhäuser; Christof Koch Spatial attention increases performance but not subjective confidence in a discrimination task Journal Article In: Journal of Vision, vol. 8, no. 5, article 7, pp. 1–10, 2008. @article{Wilimzig2008, Selective attention to a target yields faster and more accurate responses. Faster response times, in turn, are usually associated with increased subjective confidence. Could the decrease in reaction time in the presence of attention therefore simply reflect a shift toward more confident responses? We here addressed the extent to which attention modulates accuracy, processing speed, and confidence independently. To probe the effect of spatial attention on performance, we used two attentional manipulations of a visual orientation discrimination task. We demonstrate that spatial attention significantly increases accuracy, whereas subjective confidence measures reveal overconfidence in non-attended stimuli. At constant confidence levels, reaction times showed a significant decrease (by 15-49%, corresponding to 100-250 ms). This dissociation of objective performance and subjective confidence suggests that attention and awareness, as measured by confidence, are distinct, albeit related, phenomena. |
Amanda H. Wilson; Adam Wilson; Martin W. ten Hove; Martin Paré; Kevin G. Munhall Loss of central vision and audiovisual speech perception Journal Article In: Visual Impairment Research, vol. 10, no. 1, pp. 23–34, 2008. @article{Wilson2008, Communication impairments pose a major threat to an individual's quality of life. However, the impact of visual impairments on communication is not well understood, despite the important role that vision plays in the perception of speech. Here we present 2 experiments examining the impact of discrete central scotomas on speech perception. In the first experiment, 4 patients with central vision loss due to unilateral macular holes identified utterances with conflicting auditory-visual information, while simultaneously having their eye movements recorded. Each eye was tested individually. Three participants showed similar speech perception with both the impaired eye and the unaffected eye. For 1 participant, speech perception was disrupted by the scotoma because the participant did not shift gaze to avoid obscuring the talker's mouth with the scotoma. In the second experiment, 12 undergraduate students with gaze-contingent artificial scotomas (10 visual degrees in diameter) identified sentences in background noise. These larger scotomas disrupted speech perception, but some participants overcame this by adopting a gaze strategy whereby they shifted gaze to prevent obscuring important regions of the face such as the mouth. Participants who did not spontaneously adopt an adaptive gaze strategy did not learn to do so over the course of 5 days; however, participants who began with adaptive gaze strategies became more consistent in their gaze location. These findings confirm that peripheral vision is sufficient for perception of most visual information in speech, and suggest that training in gaze strategy may be worthwhile for individuals with communication deficits due to visual impairments. |
M. Wittenberg; Frank Bremmer; T. Wachtler Perceptual evidence for saccadic updating of color stimuli Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–9, 2008. @article{Wittenberg2008, In retinotopically organized areas of the macaque visual cortex, neurons have been found that shift their receptive fields before a saccade to their postsaccadic position. This saccadic remapping has been interpreted as a mechanism contributing to perceptual stability of space across eye movements. So far, there is only limited evidence for similar mechanisms that support perceptual stability of visual objects by remapping the representation of object features across saccades. In our present study, we investigated whether color stimuli presented before a saccade affected the perception of color stimuli at the same spatial position after the saccade. We found that the perceived hue of a postsaccadically flashed stimulus was systematically shifted toward the color of a presaccadically presented stimulus. This finding would be in accordance with a saccadic remapping process that preactivates, prior to a saccade, the neurons that represent a stimulus after the saccade at this very location. Such a remapping of visual object features could contribute to the stable perception of the visual world across saccades. |
Denise H. Wu; Anne Morganti; Anjan Chatterjee Neural substrates of processing path and manner information of a moving event Journal Article In: Neuropsychologia, vol. 46, no. 2, pp. 704–713, 2008. @article{Wu2008, Languages consistently distinguish the path and the manner of a moving event in different constituents, even if the specific constituents themselves vary across languages. Children also learn to categorize moving events according to their path and manner at different ages. Motivated by these linguistic and developmental observations, we employed fMRI to test the hypothesis that perception of and attention to path and manner of motion is segregated neurally. Moreover, we hypothesize that such segregation respects the "dorsal-where and ventral-what" organizational principle of vision. Consistent with this proposal, we found that attention to the path of a moving event was associated with greater activity within bilateral inferior/superior parietal lobules and the frontal eye-field, while attention to manner was associated with greater activity within bilateral postero-lateral inferior/middle temporal regions. Our data provide evidence that motion perception, traditionally considered as a dorsal "where" visual attribute, further segregates into dorsal path and ventral manner attributes. This neural segregation of the components of motion, which are linguistically tagged, points to a perceptual counterpart of the functional organization of concepts and language. |
Christian Vorstius; Ralph Radach; Alan R. Lang; Christina J. Riccardi Specific visuomotor deficits due to alcohol intoxication: evidence from the pro- and antisaccade paradigms. Journal Article In: Psychopharmacology, vol. 196, no. 2, pp. 201–210, 2008. @article{Vorstius2008, RATIONALE: Alcohol affects a variety of human behaviors, including visual perception and motor control. Although recent research has begun to explore mechanisms that mediate these changes, their exact nature is still not well understood. OBJECTIVES: The present study used two basic oculomotor tasks to examine the effect of alcohol on different levels of visual processing within the same individuals. A theoretical framework is offered to integrate findings across multiple levels of oculomotor control. MATERIALS AND METHODS: Twenty-four healthy participants were asked to perform eye movements in reflexive (pro-) and voluntary (anti-) saccade tasks. In one of two counterbalanced sessions, performance was measured after alcohol administration (mean BrAC=69 mg%); the other served as a within-subjects no-alcohol comparison condition. RESULTS: Error rates were not influenced by alcohol intoxication in either task. However, there were significant effects of alcohol on saccade latency and peak velocity in both tasks. Critically, a specific alcohol-induced impairment (hypermetria) in saccade amplitudes was observed exclusively in the anti-saccade task. CONCLUSIONS: The saccade latency data strongly suggest that alcohol intoxication impairs temporal aspects of saccade generation, irrespective of the level of processing triggering the saccade. The absence of effects on anti-saccade errors calls for further research into the notion of alcohol-induced impairment of the ability to inhibit prepotent responses. Furthermore, the specific impairment of saccade amplitude in the anti-saccade task under alcohol suggests that higher level processes involved in the spatial remapping of target location in the absence of a visually specified saccade goal are specifically affected by alcohol intoxication. |
Robin Walker; Eugene McSorley The influence of distractors on saccade target selection: Saccade trajectory effects Journal Article In: Journal of Eye Movement Research, vol. 2, no. 3, pp. 1–13, 2008. @article{Walker2008, It has long been known that the path (trajectory) taken by the eye to land on a target is rarely straight (Yarbus, 1967). Furthermore, the magnitude and direction of this natural tendency for curvature can be modulated by the presence of a competing distractor stimulus presented along with the saccade target. The distractor-related modulation of saccade trajectories provides a subtle measure of the underlying competitive processes involved in saccade target selection. Here we review some of our own studies into the effects distractors have on saccade trajectories, which can be regarded as a way of probing the competitive balance between target and distractor salience. |
Lu-Qian Xiao; Jun-Yun Zhang; Rui Wang; Stanley A. Klein; Dennis M. Levi; Cong Yu Complete transfer of perceptual learning across retinal locations enabled by double training Journal Article In: Current Biology, vol. 18, no. 24, pp. 1922–1926, 2008. @article{Xiao2008, Practice improves discrimination of many basic visual features, such as contrast, orientation, and positional offset [1-7]. Perceptual learning of many of these tasks is found to be retinal location specific, in that learning transfers little to an untrained retinal location [1, 6-8]. In most perceptual learning models, this location specificity is interpreted as a pointer to a retinotopic early visual cortical locus of learning [1, 6-11]. Alternatively, an untested hypothesis is that learning could occur in a central site, but it consists of two separate aspects: learning to discriminate a specific stimulus feature ("feature learning"), and learning to deal with stimulus-nonspecific factors like local noise at the stimulus location ("location learning") [12]. Therefore, learning is not transferable to a new location that has never been location trained. To test this hypothesis, we developed a novel double-training paradigm that employed conventional feature training (e.g., contrast) at one location, and additional training with an irrelevant feature/task (e.g., orientation) at a second location, either simultaneously or at a different time. Our results showed that this additional location training enabled a complete transfer of feature learning (e.g., contrast) to the second location. This finding challenges location specificity and its inferred cortical retinotopy as central concepts to many perceptual-learning models and suggests that perceptual learning involves higher nonretinotopic brain areas that enable location transfer. |
Stefan Van der Stigchel; Jan Theeuwes Differences in distractor-induced deviation between horizontal and vertical saccade trajectories Journal Article In: NeuroReport, vol. 19, no. 2, pp. 251–254, 2008. @article{VanderStigchel2008, The present study systematically investigated the influence of a distractor on horizontal and vertical eye movements. Results showed that both horizontal and vertical eye movements deviated away from the distractor but these deviations were stronger for vertical than for horizontal movements. As trajectory deviations away from a distractor are generally attributed to inhibition applied to the distractor, this suggests that this deviation is not only due to differences in activity between the two collicular motor maps, but can also be evoked by local application of inhibitory processes in the same map as the target. Nonetheless, deviations were more dominant for vertical movements which suggests that for these movements more inhibition is applied than for horizontal movements. |
Stefan Van der Stigchel; Wieske van Zoest; Jan Theeuwes; Jason J. S. Barton The influence of "blind" distractors on eye movement trajectories in visual hemifield defects Journal Article In: Journal of Cognitive Neuroscience, vol. 20, no. 11, pp. 2025–2036, 2008. @article{VanderStigchel2008a, There is evidence that some visual information in blind regions may still be processed in patients with hemifield defects after cerebral lesions ("blindsight"). We tested the hypothesis that, in the absence of retinogeniculostriate processing, residual retinotectal processing may still be detected as modifications of saccades to seen targets by irrelevant distractors in the blind hemifield. Six patients were presented with distractors in the blind and intact portions of their visual field and participants were instructed to make eye movements to targets in the intact field. Eye movements were recorded to determine if blind-field distractors caused deviation in saccadic trajectories. No deviation was found in one patient with an optic chiasm lesion, which affects both retinotectal and retinogeniculostriate pathways. In five patients with lesions of the optic radiations or the striate cortex, the results were mixed, with two of the five patients showing significant deviations of saccadic trajectory away from the "blind" distractor. In a second experiment, two of the five patients were tested with the target and the distractor more closely aligned. Both patients showed a "global effect," in that saccades deviated toward the distractor, but the effect was stronger in the patient who also showed significant trajectory deviation in the first experiment. Although our study confirms that distractor effects on saccadic trajectory can occur in patients with damage to the retinogeniculostriate visual pathway but preserved retinotectal projections, there remain questions regarding what additional factors are required for these effects to manifest themselves in a given patient. |
Wieske van Zoest; Mieke Donk Goal-driven modulation as a function of time in saccadic target selection Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 10, pp. 1553–1572, 2008. @article{Zoest2008, Four experiments were performed to investigate the contribution of goal-driven modulation in saccadic target selection as a function of time. Observers were required to make an eye movement to a prespecified target that was concurrently presented with multiple nontargets and possibly one distractor. Target and distractor were defined in different dimensions (orientation dimension and colour dimension in Experiment 1), or were both defined in the same dimension (i.e., both defined in the orientation dimension in Experiment 2, or both defined in the colour dimension in Experiments 3 and 4). The identities of target and distractor were switched over conditions. Speed-accuracy functions were computed to examine the full time course of selection in each condition. There were three major results. First, the ability to exert goal-driven control increased as a function of response latency. Second, this ability depended on the specific target-distractor combination, yet was not a function of whether target and distractor were defined within or across dimensions. Third, goal-driven control was available earlier when target and distractor were dissimilar than when they were similar. It was concluded that the influence of goal-driven control in visual selection is not all or none, but is of a continuous nature. |
Wieske van Zoest; Stefan Van der Stigchel; Jason J. S. Barton Distractor effects on saccade trajectories: A comparison of prosaccades, antisaccades, and memory-guided saccades Journal Article In: Experimental Brain Research, vol. 186, no. 3, pp. 431–442, 2008. @article{Zoest2008a, The present study investigated the contribution of the presence of a visual signal at the saccade goal on saccade trajectory deviations and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task where a visual target was present at the saccade goal was compared to performance in an anti- and memory-guided saccade task. In the latter two tasks no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2-4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display. |
André Vandierendonck; Maud Deschuyteneer; Ann Depoorter; Denis Drieghe Input monitoring and response selection as components of executive control in pro-saccades and anti-saccades Journal Article In: Psychological Research, vol. 72, no. 1, pp. 1–11, 2008. @article{Vandierendonck2008, Several studies have shown that anti-saccades, more than pro-saccades, are executed under executive control. It is argued that executive control subsumes a variety of controlled processes. The present study tested whether some of these underlying processes are involved in the execution of anti-saccades. An experiment is reported in which two such processes were parametrically varied, namely input monitoring and response selection. This resulted in four selective interference conditions obtained by factorially combining the degree of input monitoring and the presence of response selection in the interference task. The four tasks were combined with a primary task which required the participants to perform either pro-saccades or anti-saccades. By comparison of performance in these dual-task conditions and performance in single-task conditions, it was shown that anti-saccades, but not pro-saccades, were delayed when the secondary task required input monitoring or response selection. The results are discussed with respect to theoretical attempts to deconstruct the concept of executive control. |
Julius Verrel; Harold Bekkering; Bert Steenbergen Eye-hand coordination during manual object transport with the affected and less affected hand in adolescents with hemiparetic cerebral palsy Journal Article In: Experimental Brain Research, vol. 187, no. 1, pp. 107–116, 2008. @article{Verrel2008, In the present study we investigated eye-hand coordination in adolescents with hemiparetic cerebral palsy (CP) and neurologically healthy controls. Using an object prehension and transport task, we addressed two hypotheses, motivated by the question whether early brain damage and the ensuing limitations of motor activity lead to general and/or effector-specific effects in visuomotor control of manual actions. We hypothesized that individuals with hemiparetic CP would more closely visually monitor actions with their affected hand, compared to both their less affected hand and to control participants without a sensorimotor impairment. A second, more speculative hypothesis was that, in relation to previously established deficits in prospective action control in individuals with hemiparetic CP, gaze patterns might be less anticipatory in general, also during actions performed with the less affected hand. Analysis of the gaze and hand movement data revealed the increased visual monitoring of participants with CP when using their affected hand at the beginning as well as during object transport. In contrast, no general deficit in anticipatory gaze control in the participants with hemiparetic CP could be observed. Collectively, these findings are the first to directly show that individuals with hemiparetic CP adapt eye-hand coordination to the specific constraints of the moving limb, presumably to compensate for sensorimotor deficits. |
Matthew T. Crawford; John J. Skowronski; Chris Stiff; Ute Leonards Seeing, but not thinking: Limiting the spread of spontaneous trait transference II Journal Article In: Journal of Experimental Social Psychology, vol. 44, no. 3, pp. 840–847, 2008. @article{Crawford2008, When an informant describes trait-implicative behavior of a target, the informant is often associated with the trait implied by the behavior and can be assigned heightened ratings on that trait (STT effects). Presentation of a target photo along with the description seemingly eliminates these effects. Using three different measures of visual attention, the results of two studies show the elimination of STT effects by target photo presentation cannot be attributed to associative mechanisms linked to enhanced visual attention to targets. Instead, presentation of a target's photo likely prompts perceivers to spontaneously make target inferences in much the same way they make spontaneous inferences about self-describers. As argued by Todorov and Uleman [Todorov, A., & Uleman, J. S. (2004). The person reference process in spontaneous trait inferences. Journal of Personality & Social Psychology, 87, 482-493], such attributional processing can preclude the formation of trait associations to informants. |
Denise D. J. de Grave; Constanze Hesse; Anne-Marie Brouwer; Volker H. Franz Fixation locations when grasping partly occluded objects Journal Article In: Journal of Vision, vol. 8, no. 7, pp. 1–11, 2008. @article{Grave2008, When grasping an object, subjects tend to look at the contact positions of the digits (A. M. Brouwer, V. H. Franz, D. Kerzel, & K. R. Gegenfurtner, 2005; R. S. Johansson, G. Westling, A. Bäckström, & J. R. Flanagan, 2001). However, these contact positions are not always visible due to occlusion. Subjects might look at occluded parts to determine the location of the contact positions based on extrapolated information. On the other hand, subjects might avoid looking at occluded parts since no object information can be gathered there. To find out where subjects fixate when grasping occluded objects, we let them grasp flat shapes with the index finger and thumb at predefined contact positions. Either the contact position of the thumb or the finger or both was occluded. In a control condition, a part of the object that does not involve the contact positions was occluded. The results showed that subjects did look at occluded object parts, suggesting that they used extrapolated object information for grasping. Additionally, they preferred to look in the direction of the index finger. When the contact position of the index finger was occluded, this tendency was inhibited. Thus, an occluder does not prevent fixations on occluded object parts, but it does affect fixation locations especially in conditions where the preferred fixation location is occluded. |
Thérèse Collins; Tobias Schicke; Brigitte Röder Action goal selection and motor planning can be dissociated by tool use Journal Article In: Cognition, vol. 109, no. 3, pp. 363–371, 2008. @article{Collins2008, The preparation of eye or hand movements enhances visual perception at the upcoming movement end position. The spatial location of this influence of action on perception could be determined either by goal selection or by motor planning. We employed a tool use task to dissociate these two alternatives. The instructed goal location was a visual target to which participants pointed with the tip of a triangular hand-held tool. The motor endpoint was defined by the final fingertip position necessary to bring the tool tip onto the goal. We tested perceptual performance at both locations (tool tip endpoint, motor endpoint) with a visual discrimination task. Discrimination performance was enhanced in parallel at both spatial locations, but not at nearby and intermediate locations, suggesting that both action goal selection and motor planning contribute to visual perception. In addition, our results challenge the widely held view that tools extend the body schema and suggest instead that tool use enhances perception at those precise locations which are most relevant during tool action: the body part used to manipulate the tool, and the active tool tip. |
Marc H. E. de Lussanet; Luciano Fadiga; Lars Michels; Rüdiger J. Seitz; Raimund Kleiser; Markus Lappe Interaction of visual hemifield and body view in biological motion perception Journal Article In: European Journal of Neuroscience, vol. 27, no. 2, pp. 514–522, 2008. @article{Lussanet2008, The brain network for the recognition of biological motion includes visual areas and structures of the mirror-neuron system. The latter respond during action execution as well as during action recognition. As motor and somatosensory areas predominantly represent the contralateral side of the body and visual areas predominantly process stimuli from the contralateral hemifield, we were interested in interactions between visual hemifield and action recognition. In the present study, human participants detected the facing direction of profile views of biological motion stimuli presented in the visual periphery. They recognized a right-facing body view of human motion better in the right visual hemifield than in the left; and a left-facing body view better in the left visual hemifield than in the right. In a subsequent fMRI experiment, performed with a similar task, two cortical areas in the left and right hemispheres were significantly correlated with the behavioural facing effect: primary somatosensory cortex (BA 2) and inferior frontal gyrus (BA 44). These areas were activated specifically when point-light stimuli presented in the contralateral visual hemifield displayed the side view of their contralateral body side. Our results indicate that the hemispheric specialization of one's own body map extends to the visual representation of the bodies of others. |
Christopher A. Dickinson; Helene Intraub Transsaccadic representation of layout: What is the time course of boundary extension? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 3, pp. 543–555, 2008. @article{Dickinson2008, How rapidly does boundary extension occur? Across experiments, trials included a 3-scene sequence (325 ms/picture), masked interval, and repetition of 1 scene. The repetition was the same view or differed (more close-up or wide angle). Observers rated the repetition as same as, closer than, or more wide angle than the original view on a 5-point scale. Masked intervals were 100, 250, 625, or 1,000 ms in Experiment 1 and 42, 100, or 250 ms in Experiments 2 and 3. Boundary extension occurred in all cases: Identical views were rated as too "close-up," and distractor views elicited the rating asymmetry typical of boundary extension (wider angle distractors were rated as being more similar to the original than were closer up distractors). Most important, boundary extension was evident when only a 42-ms mask separated the original and test views. Experiments 1 and 3 included conditions eliciting a gaze shift prior to the rating test; this did not eliminate boundary extension. Results show that boundary extension is available soon enough and is robust enough to play an on-line role in view integration, perhaps supporting incorporation of views within a larger spatial framework. |
Gregory J. Digirolamo; Jason S. McCarley; Arthur F. Kramer; Harry J. Griffin Voluntary and reflexive eye movements to illusory lengths Journal Article In: Visual Cognition, vol. 16, no. 1, pp. 68–89, 2008. @article{Digirolamo2008, Considerable debate surrounds the extent and manner that motor control is, like perception, susceptible to visual illusions. Using the Brentano version of the Müller-Lyer illusion, we measured the accuracy of voluntary and reflexive eye movements to the endpoints of equal length line segments that appeared different (Experiment 1) and different length line segments that appeared equal (Experiment 3). Voluntary and reflexive saccades were both influenced by the illusion, but the former were more strongly biased and closer to the subjective percept. Experiment 2 demonstrated that these data were the results of the illusion and not centre-of-gravity effects. The representations underlying perception and action interact and this interaction produces biases for actions, particularly voluntary actions. |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Social attention and real-world scenes: The roles of action, competition and social content Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 7, pp. 986–998, 2008. @article{Birmingham2008, The present study examined how social attention is influenced by social content and the presence of items that are available for attention. We monitored observers' eye movements while they freely viewed real-world social scenes containing either 1 or 3 people situated among a variety of objects. Building from the work of Yarbus (1965/1967) we hypothesized that observers would demonstrate a preferential bias to fixate the eyes of the people in the scene, although other items would also receive attention. In addition, we hypothesized that fixations to the eyes would increase as the social content (i.e., number of people) increased. Both hypotheses were supported by the data, and we also found that the level of activity in the scene influenced attention to eyes when social content was high. The present results provide support for the notion that the eyes are selected by observers in order to extract social information. Our study also suggests a simple and surreptitious methodology for studying social attention to real-world stimuli in a range of populations, such as those with autism spectrum disorders. |
Elina Birmingham; Walter Bischof; Alan Kingstone Gaze selection in complex social scenes Journal Article In: Visual Cognition, vol. 16, no. 2-3, pp. 341–355, 2008. @article{Birmingham2008a, A great deal of recent research has sought to understand the factors and neural systems that mediate the orienting of spatial attention to a gazed-at location. What have rarely been examined, however, are the factors that are critical to the initial selection of gaze information from complex visual scenes. For instance, is gaze prioritized relative to other possible body parts and objects within a scene? The present study springboards from the seminal work of Yarbus (1965/1967), who had originally examined participants' scan paths while they viewed visual scenes containing one or more people. His work suggested to us that the selection of gaze information may depend on the task that is assigned to participants, the social content of the scene, and/or the activity level depicted within the scene. Our results show clearly that all of these factors can significantly modulate the selection of gaze information. Specifically, the selection of gaze was enhanced when the task was to describe the social attention within a scene, and when the social content and activity level in a scene were high. Nevertheless, it is also the case that participants always selected gaze information more than any other stimulus. Our study has broad implications for future investigations of social attention as well as resolving a number of longstanding issues that had undermined the classic original work of Yarbus. |
Caroline Blais; Rachael E. Jack; Christoph Scheepers; Daniel Fiset; Roberto Caldara Culture shapes how we look at faces Journal Article In: PLoS ONE, vol. 3, no. 8, pp. e3022, 2008. @article{Blais2008, Background: Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically-determined information extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures. |
Lizzy Bleumers; Peter De Graef; Karl Verfaillie; Johan Wagemans Eccentric grouping by proximity in multistable dot lattices Journal Article In: Vision Research, vol. 48, no. 2, pp. 179–192, 2008. @article{Bleumers2008, The Pure Distance Law predicts grouping by proximity in dot lattices that can be organised in four ways by grouping dots along parallel lines. It specifies a quantitative relationship between the relative probability of perceiving an organisation and the relative distance between the grouped dots. The current study was set up to investigate whether this principle holds both for centrally and for eccentrically displayed dot lattices. To this end, dot lattices were displayed either in central vision, or to the right of fixation with their closest border at 3° or 15°. We found that the Pure Distance Law adequately predicted grouping of centrally displayed dot lattices but did not capture the eccentric data well, even when the eccentric dot lattices were scaled. Specifically, a better fit was obtained when we included the possibility in the model that in some trials participants could not report an organisation and consequently responded randomly. A plausible interpretation for the occurrence of random responses in the eccentric conditions is that under these circumstances an attention shift is required from the locus of fixation towards the dot lattice, which occasionally fails to take place. When grouping could be reported, scale and eccentricity appeared to interact. The effect of the relative interdot distances on the perceptual organisation of the dot lattices was estimated to be stronger in peripheral vision than in central vision at the two largest scales, but this difference disappeared when the smallest scale was applied. |
Gary D. Bond Deception detection expertise Journal Article In: Law and Human Behavior, vol. 32, no. 4, pp. 339–351, 2008. @article{Bond2008, A lively debate between Bond and Uysal (2007, Law and Human Behavior, 31, 109-115) and O'Sullivan (2007, Law and Human Behavior, 31, 117-123) concerns whether there are experts in deception detection. Two experiments sought to (a) identify expert(s) in detection and assess them twice with four tests, and (b) study their detection behavior using eye tracking. Paroled felons produced videotaped statements that were presented to students and law enforcement personnel. Two experts were identified, both female Native American BIA correctional officers. Experts were over 80% accurate in the first assessment, and scored at 90% accuracy in the second assessment. In Signal Detection analyses, experts showed high discrimination, and did not evidence biased responding. They exploited nonverbal cues to make fast, accurate decisions. These highly accurate individuals can be characterized as experts in deception detection. |
Verena S. Bonitz; Robert D. Gordon Attention to smoking-related and incongruous objects during scene viewing Journal Article In: Acta Psychologica, vol. 129, no. 2, pp. 255–263, 2008. @article{Bonitz2008, This study examined the influences of semantic characteristics of objects in real-world scenes on allocation of attention as reflected in eye movement measures. Stimuli consisted of full-color photographic scenes, and within each scene, the semantic salience of two target objects was manipulated while the objects' perceptual salience was kept constant. One of the target objects was either inconsistent or consistent with the scene category. In addition, the second target object was either smoking-related or neutral. Two groups of college students, namely current cigarette smokers (N = 18) and non-smokers (N = 19), viewed each scene for 10 s while their eye movements were recorded. While both groups showed preferential allocation of attention to inconsistent objects, smokers also selectively attended to smoking-related objects. Theoretical implications of the results are discussed. |
Susan E. Brennan; Xin Chen; Christopher A. Dickinson; Mark B. Neider; Gregory J. Zelinsky Coordinating cognition: The costs and benefits of shared gaze during collaborative search Journal Article In: Cognition, vol. 106, no. 3, pp. 1465–1477, 2008. @article{Brennan2008, Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared gaze (with one searcher seeing a gaze-cursor indicating where the other was looking, and vice versa), shared-voice (by speaking to each other), and shared-gaze-plus-voice (by using both gaze-cursors and speech). Although collaborating pairs performed better than solitary searchers, search in the shared gaze condition was best of all: twice as fast and efficient as solitary search. People can successfully communicate and coordinate their searching labor using shared gaze alone. Strikingly, shared gaze search was even faster than shared-gaze-plus-voice search; speaking incurred substantial coordination costs. We conclude that shared gaze affords a highly efficient method of coordinating parallel activity in a time-critical spatial task. |
Julie N. Buchan; Martin Paré; Kevin G. Munhall The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception Journal Article In: Brain Research, vol. 1242, pp. 162–171, 2008. @article{Buchan2008, During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces. |
Manuel G. Calvo; Pedro Avero Affective priming of emotional pictures in parafoveal vision: Left visual field advantage Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 8, no. 1, pp. 41–53, 2008. @article{Calvo2008, This study investigated whether stimulus affective content can be extracted from visual scenes when these appear in parafoveal locations of the visual field and are foveally masked, and whether there is lateralization involved. Parafoveal prime pleasant or unpleasant scenes were presented for 150 msec 2.5° away from fixation and were followed by a foveal probe scene that was either congruent or incongruent in emotional valence with the prime. Participants responded whether the probe was emotionally positive or negative. Affective priming was demonstrated by shorter response latencies for congruent than for incongruent prime-probe pairs. This effect occurred when the prime was presented in the left visual field at a 300-msec prime-probe stimulus onset asynchrony, even when the prime and the probe were different in physical appearance and semantic category. This result reveals that the affective significance of emotional stimuli can be assessed early through covert attention mechanisms, in the absence of overt eye fixations on the stimuli, and suggests that right-hemisphere dominance is involved. |
Manuel G. Calvo; Michael W. Eysenck Affective significance enhances covert attention: Roles of anxiety and word familiarity Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 11, pp. 1669–1686, 2008. @article{Calvo2008a, To investigate the processing of emotional words by covert attention, threat-related, positive, and neutral word primes were presented parafoveally (2.2 degrees away from fixation) for 150 ms, under gaze-contingent foveal masking, to prevent eye fixations. The primes were followed by a probe word in a lexical-decision task. In Experiment 1, results showed a parafoveal threat-anxiety superiority: Parafoveal prime threat words facilitated responses to probe threat words for high-anxiety individuals, in comparison with neutral and positive words, and relative to low-anxiety individuals. This reveals an advantage in threat processing by covert attention, without differences in overt attention. However, anxiety was also associated with greater familiarity with threat words, and the parafoveal priming effects were significantly reduced when familiarity was covaried out. To further examine the role of word knowledge, in Experiment 2, vocabulary and word familiarity were equated for low- and high-anxiety groups. In these conditions, the parafoveal threat-anxiety advantage disappeared. This suggests that the enhanced covert-attention effect depends on familiarity with words. |
Manuel G. Calvo; Lauri Nummenmaa Detection of emotional faces: Salient physical features guide effective visual search Journal Article In: Journal of Experimental Psychology: General, vol. 137, no. 3, pp. 471–494, 2008. @article{Calvo2008b, In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features–especially the smiling mouth–is responsible for facilitated initial orienting, which thus shortens detection. |
Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero Visual search of emotional faces: Eye-movement assessment of component processes Journal Article In: Experimental Psychology, vol. 55, no. 6, pp. 359–370, 2008. @article{Calvo2008c, In a visual search task using photographs of real faces, a target emotional face was presented in an array of six neutral faces. Eye movements were monitored to assess attentional orienting and detection efficiency. Target faces with happy, surprised, and disgusted expressions were: (a) responded to more quickly and accurately, (b) localized and fixated earlier, and (c) detected as different faster and with fewer fixations, in comparison with fearful, angry, and sad target faces. This reveals a happy, surprised, and disgusted-face advantage in visual search, with earlier attentional orienting and more efficient detection. The pattern of findings remained equivalent across upright and inverted presentation conditions, which suggests that the search advantage involves processing of featural rather than configural information. Detection responses occurred generally after having fixated the target, which implies that detection of all facial expressions is post- rather than preattentional. |
Manuel G. Calvo; Lauri Nummenmaa; Jukka Hyönä Emotional scenes in peripheral vision: Selective orienting and gist processing, but not content identification Journal Article In: Emotion, vol. 8, no. 1, pp. 68–80, 2008. @article{Calvo2008d, Emotional-neutral pairs of visual scenes were presented peripherally (with their inner edges 5.2 degrees away from fixation) as primes for 150 to 900 ms, followed by a centrally presented recognition probe scene, which was either identical in specific content to one of the primes or related in general content and affective valence. Results indicated that (a) if no foveal fixations on the primes were allowed, the false alarm rate for emotional probes was increased; (b) hit rate and sensitivity (A') were higher for emotional than for neutral probes only when a fixation was possible on only one prime; and (c) emotional scenes were more likely to attract the first fixation than neutral scenes. It is concluded that the specific content of emotional or neutral scenes is not processed in peripheral vision. Nevertheless, a coarse impression of emotional scenes may be extracted, which then leads to selective attentional orienting or–in the absence of overt attention–causes false alarms for related probes. |
Gideon P. Caplovitz; Nora A. Paymer; Peter U. Tse The drifting edge illusion: A stationary edge abutting an oriented drifting grating appears to move because of the 'other aperture problem' Journal Article In: Vision Research, vol. 48, no. 22, pp. 2403–2414, 2008. @article{Caplovitz2008, We describe the Drifting Edge Illusion (DEI), in which a stationary edge appears to move when it abuts a drifting grating. Although a single edge is sufficient to perceive DEI, a particularly compelling version of DEI occurs when a drifting grating is viewed through an oriented and stationary aperture. The magnitude of the illusion depends crucially on the orientations of the grating and aperture. Using psychophysics, we describe the relationship between the magnitude of DEI and the relative angle between the grating and aperture. Results are discussed in the context of the roles of occlusion, component-motion, and contour relationships in the interpretation of motion information. In particular, we suggest that the visual system is posed with solving an ambiguity other than the traditionally acknowledged aperture problem of determining the direction of motion of the drifting grating. In this 'second aperture problem' or 'edge problem', a motion signal may belong to either the occluded or occluding contour. That is, the motion along the contour can arise either because the grating is drifting or because the edge is drifting over a stationary grating. DEI appears to result from a misattribution of motion information generated by the drifting grating to the stationary contours of the aperture, as if the edges are interpreted to travel over the grating, although they are in fact stationary. |
Jonathan S. A. Carriere; Daniel Eaton; Michael G. Reynolds; Mike J. Dixon; Daniel Smilek Grapheme–color synesthesia influences overt visual attention Journal Article In: Journal of Cognitive Neuroscience, vol. 21, no. 2, pp. 246–258, 2008. @article{Carriere2008, For individuals with grapheme–color synesthesia, achromatic letters and digits elicit vivid perceptual experiences of color. We report two experiments that evaluate whether synesthesia influences overt visual attention. In these experiments, two grapheme–color synesthetes viewed colored letters while their eye movements were monitored. Letters were presented in colors that were either congruent or incongruent with the synesthetes' colors. Eye tracking analysis showed that synesthetes exhibited a color congruity bias—a propensity to fixate congruently colored letters more often and for longer durations than incongruently colored letters—in a naturalistic free-viewing task. In a more structured visual search task, this congruity bias caused synesthetes to rapidly fixate and identify congruently colored target letters, but led to problems in identifying incongruently colored target letters. The results are discussed in terms of their implications for perception in synesthesia. |
Monica S. Castelhano; Alexander Pollatsek; Kyle R. Cave Typicality aids search for an unspecified target, but only in identification and not in attentional guidance Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 4, pp. 795–801, 2008. @article{Castelhano2008, Participants searched for a picture of an object, and the object was either a typical or an atypical category member. The object was cued by either the picture or its basic-level category name. Of greatest interest was whether it would be easier to search for typical objects than to search for atypical objects. The answer was "yes," but only in a qualified sense: There was a large typicality effect on response time only for name cues, and almost none of the effect was found in the time to locate (i.e., first fixate) the target. Instead, typicality influenced verification time–the time to respond to the target once it was fixated. Typicality is thus apparently irrelevant when the target is well specified by a picture cue; even when the target is underspecified (as with a name cue), it does not aid attentional guidance, but only facilitates categorization. |
2007 |
Dirk Calow; Markus Lappe Local statistics of retinal optic flow for self-motion through natural sceneries Journal Article In: Network: Computation in Neural Systems, vol. 18, no. 4, pp. 343–374, 2007. @article{Calow2007, Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems. |
Manuel G. Calvo; Lauri Nummenmaa Processing of unattended emotional visual scenes Journal Article In: Journal of Experimental Psychology: General, vol. 136, no. 3, pp. 347–369, 2007. @article{Calvo2007, Prime pictures of emotional scenes appeared in parafoveal vision, followed by probe pictures either congruent or incongruent in affective valence. Participants responded whether the probe was pleasant or unpleasant (or whether it portrayed people or animals). Shorter latencies for congruent than for incongruent prime-probe pairs revealed affective priming. This occurred even when visual attention was focused on a concurrent verbal task and when foveal gaze-contingent masking prevented overt attention to the primes but only if these had been preexposed and appeared in the left visual field. The preexposure and laterality patterns were different for affective priming and semantic category priming. Affective priming was independent of the nature of the task (i.e., affective or category judgment), whereas semantic priming was not. The authors conclude that affective processing occurs without overt attention–although it is dependent on resources available for covert attention–and that prior experience of the stimulus is required and right-hemisphere dominance is involved. |
Manuel G. Calvo; Lauri Nummenmaa; Jukka Hyönä Emotional and neutral scenes in competition: Orienting, efficiency, and identification Journal Article In: Quarterly Journal of Experimental Psychology, vol. 60, no. 12, pp. 1585–1593, 2007. @article{Calvo2007a, To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on–and shorter saccade latencies to–emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes. |
G. P. Caplovitz; P. U. Tse Rotating dotted ellipses: Motion perception driven by grouped figural rather than local dot motion signals Journal Article In: Vision Research, vol. 47, no. 15, pp. 1979–1991, 2007. @article{Caplovitz2007, Unlike the motion of a continuous contour, the motion of a single dot is unambiguous and immune to the aperture problem. Here we exploit this fact to explore the conditions under which unambiguous local motion signals are used to drive global percepts of an ellipse undergoing rotation. In previous work, we have shown that a thin, high aspect ratio ellipse will appear to rotate faster than a lower aspect ratio ellipse even when the two in fact rotate at the same angular velocity [Caplovitz, G. P., Hsieh, P. -J., & Tse, P. U. (2006) Mechanisms underlying the perceived angular velocity of a rigidly rotating object. Vision Research, 46(18), 2877-2893]. In this study we examined the perceived speed of rotation of ellipses defined by a virtual contour made up of evenly spaced dots. Results: Ellipses defined by closely spaced dots exhibit the speed illusion observed with continuous contours. That is, thin dotted ellipses appear to rotate faster than fat dotted ellipses when both rotate at the same angular velocity. This illusion is not observed if the dots defining the ellipse are spaced too widely apart. A control experiment ruled out low spatial frequency "blurring" as the source of the illusory percept. Conclusion: Even in the presence of local motion signals that are immune to the aperture problem, the global percept of an ellipse undergoing rotation can be driven by potentially ambiguous motion signals arising from the non-local form of the grouped ellipse itself. Here motion perception is driven by emergent motion signals such as those of virtual contours constructed by grouping procedures. Neither these contours nor their emergent motion signals are present in the image. |
Gideon P. Caplovitz; Peter U. Tse V3A processes contour curvature as a trackable feature for the perception of rotational motion Journal Article In: Cerebral Cortex, vol. 17, no. 5, pp. 1179–1189, 2007. @article{Caplovitz2007a, Contour curvature (CC) is a vital cue for the analysis of both form and motion. Using functional magnetic resonance imaging, we localized the neural correlates of CC for the processing and perception of rotational motion. We found that the blood oxygen level-dependent signal in retinotopic area V3A and possibly also lateral occipital cortex (LOC) varied parametrically with the degree of CC. Control experiments ruled out the possibility that these modulations resulted from either changes in the area of the stimuli, the velocity with which contour elements were actually translating, or perceived angular velocity. We conclude that neurons within V3A and perhaps also LOC process continuously moving CC as a trackable feature. These data are consistent with the hypothesis that V3A contains neural populations that process trackable form features such as CC, not to solve the "ventral problem" of determining object shape but in order to solve the "dorsal problem" of what is going where. |
Christopher A. Dickinson; Gregory J. Zelinsky Memory for the search path: Evidence for a high-capacity representation of search history Journal Article In: Vision Research, vol. 47, no. 13, pp. 1745–1755, 2007. @article{Dickinson2007, Using a gaze-contingent paradigm, we directly measured observers' memory capacity for fixated distractor locations during search. After approximately half of the search objects had been fixated, they were masked and a spatial probe appeared at either a previously fixated location or a non-fixated location; observers then rated their confidence that the target had appeared at the probed location. Observers were able to differentiate the 12 most recently fixated distractor locations from non-fixated locations, but analyses revealed that these locations were represented fairly coarsely. We conclude that there exists a high-capacity, but low-resolution, memory for a search path. |
Joan M. Dafoe; Irene T. Armstrong; Douglas P. Munoz The influence of stimulus direction and eccentricity on pro- and anti-saccades in humans Journal Article In: Experimental Brain Research, vol. 179, no. 4, pp. 563–570, 2007. @article{Dafoe2007, We examined the sensory and motor influences of stimulus eccentricity and direction on saccadic reaction times (SRTs), direction-of-movement errors, and saccade amplitude for stimulus-driven (prosaccade) and volitional (antisaccade) oculomotor responses in humans. Stimuli were presented at five eccentricities, ranging from 0.5 degrees to 8 degrees, and in eight radial directions around a central fixation point. At 0.5 degrees eccentricity, participants showed delayed SRT and increased direction-of-movement errors, consistent with misidentification of the target and fixation points. For the remaining eccentricities, horizontal saccades had shorter mean SRT than vertical saccades. Stimuli in the upper visual field triggered overt shifts in gaze more readily and more quickly than stimuli in the lower visual field: prosaccades to the upper hemifield had shorter SRT than to the lower hemifield, and more anti-saccade direction-of-movement errors were made into the upper hemifield. With the exception of the 0.5 degrees stimuli, SRT was independent of eccentricity. Saccade amplitude was dependent on target eccentricity for prosaccades, but not for antisaccades, within the range we tested. Performance matched behavioral measures described previously for monkeys performing the same tasks, confirming that the monkey is a good model for human oculomotor function. We conclude that an upper hemifield bias leads to a decrease in SRT and an increase in direction errors. |
Leanne Boucher; Veit Stuphorn; Gordon D. Logan; Jeffrey D. Schall; Thomas J. Palmeri Stopping eye and hand movements: Are the processes independent? Journal Article In: Perception and Psychophysics, vol. 69, no. 5, pp. 785–801, 2007. @article{Boucher2007, To explore how eye and hand movements are controlled in a stop task, we introduced effector uncertainty by instructing subjects to initiate and occasionally inhibit eye, hand, or eye + hand movements in response to a color-coded foveal or tone-coded auditory stop signal. Regardless of stop signal modality, stop signal reaction time was shorter for eye movements than for hand movements, but notably did not vary with knowledge about which movement to cancel. Most errors on eye + hand stopping trials were combined eye + hand movements. The probability and latency of signal-respond eye and hand movements corresponded to predictions of Logan and Cowan's (1984) race model applied to each effector independently. |
Eli Brenner; Jeroen B. J. Smeets Flexibility in intercepting moving objects. Journal Article In: Journal of Vision, vol. 7, no. 5, pp. 1–17, 2007. @article{Brenner2007, When hitting moving targets, the hand does not always move to the point of interception in the same manner as it would if the target were not moving. This could be because the point at which the target will be intercepted is initially misjudged, or even not judged at all, but it could also be because a different path is optimal for intercepting a moving target. Here we examine the extent to which performance is degraded if people have to follow a different path than their preferred one. Forcing people to make small adjustments to their path by placing obstacles near the path hardly influenced their performance. When the orientation of elongated targets was manipulated, people adjusted their paths, but not quite enough to avoid intercepting the targets at a sub-optimal angle, probably because following a more curved path would have reduced the spatial accuracy and taken more time. When the task was to hit targets in certain directions, people had to sometimes follow much more curved paths. This gave rise to larger errors and longer movement times. An asymmetry in performance between hitting moving targets further in the direction in which they were moving and hitting them back from where they came is consistent with the different consequences of timing errors for the two directions of target motion. We conclude that the path that people take to intercept moving targets depends on the precise constraints under the prevailing conditions rather than being a consequence of judgment errors or of limitations in the way in which movements can be controlled. |
G. J. Brouwer; Raymond Van Ee Visual cortex allows prediction of perceptual states during ambiguous structure-from-motion Journal Article In: Journal of Neuroscience, vol. 27, no. 5, pp. 1015–1023, 2007. @article{Brouwer2007, We investigated the role of retinotopic visual cortex and motion-sensitive areas in representing the content of visual awareness during ambiguous structure-from-motion (SFM), using functional magnetic resonance imaging (fMRI) and multivariate statistics (support vector machines). Our results indicate that prediction of perceptual states can be very accurate for data taken from dorsal visual areas V3A, V4D, V7, and MT+ and for parietal areas responsive to SFM, but to a lesser extent for other visual areas. Generalization of prediction was possible, because prediction accuracy was significantly better than chance for both an unambiguous stimulus and a different experimental design. Detailed analysis of eye movements revealed that strategic, and even encouraged beneficial, eye movements were not the cause of the prediction accuracy based on cortical activation. We conclude that during perceptual rivalry, neural correlates of visual awareness can be found in retinotopic visual cortex, MT+, and parietal cortex. We argue that the organization of specific motion-sensitive neurons creates detectable biases in the preferred direction selectivity of voxels, allowing prediction of perceptual states. During perceptual rivalry, retinotopic visual cortex, in particular higher-tier dorsal areas like V3A and V7, actively represents the content of visual awareness. |
Julie Buchan; Martin Paré; Kevin G. Munhall Spatial statistics of gaze fixations during dynamic face processing Journal Article In: Social Neuroscience, vol. 2, no. 1, pp. 1–13, 2007. @article{Buchan2007, Social interaction involves the active visual perception of facial expressions and communicative gestures. This study examines the distribution of gaze fixations while watching videos of expressive talking faces. The knowledge-driven factors that influence the selective visual processing of facial information were examined by using the same set of stimuli and assigning subjects to either a speech recognition task or an emotion judgment task. For half of the subjects assigned to each of the tasks, the intelligibility of the speech was manipulated by the addition of moderate masking noise. Both the task and the intelligibility of the speech signal influenced the spatial distribution of gaze. Gaze was concentrated more on the eyes when emotion was being judged than when words were being identified. When noise was added to the acoustic signal, gaze in both tasks was more centralized on the face. This shows that subjects' gaze is sensitive to the distribution of information on the face, but can also be influenced by strategies aimed at maximizing the amount of visual information processed. |
Ed H. Chi; Michelle Gumbrecht; Lichan Hong Visual foraging of highlighted text: An eye-tracking study Journal Article In: Human-Computer Interaction, pp. 589–598, 2007. @article{Chi2007, The wide availability of digital reading material online is causing a major shift in everyday reading activities. Readers are skimming instead of reading in depth [Nielsen 1997]. Highlights are increasingly used in digital interfaces to direct attention toward relevant passages within texts. In this paper, we study the eye-gaze behavior of subjects using both keyword highlighting and ScentHighlights [Chi et al. 2005]. In this first eye-tracking study of highlighting interfaces, we show that there is direct evidence of the von Restorff isolation effect [VonRestorff 1933] in the eye-tracking data, in that subjects focused on highlighted areas when highlighting cues are present. The results point to future design possibilities in highlighting interfaces. |
Lisa R. Betts; Allison B. Sekuler; Patrick J. Bennett The effects of aging on orientation discrimination Journal Article In: Vision Research, vol. 47, no. 13, pp. 1769–1780, 2007. @article{Betts2007, The current experiments measured orientation discrimination thresholds in younger (mean age ≈ 23 years) and older (mean age ≈ 66 years) subjects. In Experiment 1, the contrast needed to discriminate Gabor patterns (0.75, 1.5, and 3 c/deg) that differed in orientation by 12 deg was measured for different levels of external noise. At all three spatial frequencies, discrimination thresholds were significantly higher in older than younger subjects when external noise was low, but not when external noise was high. In Experiment 2, discrimination thresholds were measured as a function of stimulus contrast by varying orientation while contrast was fixed. The resulting threshold-vs-contrast curves had very similar shapes in the two age groups, although the curve obtained from older subjects was shifted to slightly higher contrasts. At contrasts greater than 0.05, thresholds in both older and younger subjects were approximately constant at 0.5 deg. The results from Experiments 1 and 2 suggest that age differences in orientation discrimination are due solely to differences in equivalent input noise. Using the same methods as Experiment 1, Experiment 3 measured thresholds in 6 younger observers as a function of external noise and retinal illuminance. Although reducing retinal illumination increased equivalent input noise, the effect was much smaller than the age difference found in Experiment 1. Therefore, it is unlikely that differences in orientation discrimination were due solely to differences in retinal illumination. Our findings are consistent with recent physiological experiments that have found elevated spontaneous activity and reduced orientation tuning in visual cortical neurons in senescent cats (Hua, T., Li, X., He, L., Zhou, Y., Wang, Y., & Leventhal, A. G., 2006, Functional degradation of visual cortical cells in old cats). |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Why do we look at people's eyes? Journal Article In: Journal of Eye Movement Research, vol. 1, no. 1, pp. 1–6, 2007. @article{Birmingham2007, We have previously shown that when observers are presented with complex natural scenes that contain a number of objects and people, observers look mostly at the eyes of the people. Why is this? It cannot be because eyes are merely the most salient area in a scene, as relative to other objects they are fairly inconspicuous. We hypothesized that people look at the eyes because they consider the eyes to be a rich source of information. To test this idea, we tested two groups of participants. One set of participants, called the Told Group, was informed that there would be a recognition test after they were shown the natural scenes. The second set, the Not Told Group, was not informed that there would be a subsequent recognition test. Our data showed that during the initial and test viewings, the Told Group fixated the eyes more frequently than the Not Told Group, supporting the idea that the eyes are considered an informative region in social scenes. Converging evidence for this interpretation is that the Not Told Group fixated the eyes more frequently in the test session than in the study session. |
Hiroyuki Sogo; Yuji Takeda Saccade trajectory under simultaneous inhibition for two locations Journal Article In: Vision Research, vol. 47, no. 11, pp. 1537–1549, 2007. @article{Sogo2007, A saccade trajectory often curves away from the location of a non-target stimulus that appears before saccade execution. Spatial inhibition may prevent the saccade from moving toward the non-target stimulus. However, little is known about how simultaneous inhibition for multiple locations affects saccade trajectories. In this study, we examined the effects from two inhibited locations on saccade trajectories. The results show that the saccade trajectories depend on the inhibited locations, and the effect of inhibiting two locations on the trajectory was a summation of the effect of inhibiting each location. A simulation study using the initial interference model also suggests that the effect of each inhibition was summed up to modulate the initial saccade direction. |
Joo-Hyun Song; Ken Nakayama Fixation offset facilitates saccades and manual reaching for single but not multiple target displays Journal Article In: Experimental Brain Research, vol. 177, no. 2, pp. 223–232, 2007. @article{Song2007, Turning off a fixation point, typically for 200 ms, before the onset of a peripheral target substantially reduces saccadic reaction times. This facilitatory effect generated by an inserted temporal gap between fixation offset and the target appearance is called the "gap" effect [J Opt Soc Am 57:1030-1033, 1967]. We show that the gap reduces the initial latency of both saccades and manual pointing in single and multiple target displays. Yet, in multiple target displays, the gap increased the movement duration because eye or hand movements were frequently misdirected toward distractors so that the trajectory had to be corrected. Thus, in spite of the shortened latency, the total time for trial completion was not shortened in multiple target displays, whereas it was reduced in single target displays. This selective gap effect for a single target was not restricted to goal-directed motor tasks because perceptual discrimination tasks, where no motor response is required, also demonstrated the gap effect only for single target displays. Our results suggest that the gap may facilitate attentional disengagement, but it does not help target selection in motor and perceptual discrimination tasks, where the allocation of attention to the target is required. |
Bert Steenbergen; Julius Verrel; Andrew M. Gordon Motor planning in congenital hemiplegia Journal Article In: Disability and Rehabilitation, vol. 29, no. 1, pp. 13–23, 2007. @article{Steenbergen2007, PURPOSE: Cerebral Palsy (CP) is a broad definition of a neurological condition in which disorders in movement execution and postural control limit the performance of activities of daily living. In this paper, we first review studies on motor planning in hemiplegic CP. Second, preliminary data of a recent study on eye-hand coordination in participants with hemiplegic CP are presented. Here, the potential role of vision for online and prospective control of action was examined. METHOD: Review and presentation of preliminary data of an eye- and hand movement registration experiment in hemiplegic CP. RESULTS: Deficits in motor planning in hemiplegic CP contribute to limitations of activities of daily living. In the second part, exemplary plots of eye-hand coordination are presented for the affected and unaffected hand in one participant with hemiplegic CP, and for the preferred hand in controls, both as an illustration of the research methodology and to give an impression of the observed gaze patterns. CONCLUSION: Research on CP should not solely focus on low-level aspects of action execution, but also take into account the more high-level aspects of motor control, such as planning. Possible deviations therein may be sought in altered gaze patterns as illustrated in the paper. |
Claudiu Simion; Shinsuke Shimojo Interrupting the cascade: Orienting contributes to decision making even in the absence of visual stimulation Journal Article In: Perception and Psychophysics, vol. 69, pp. 591–595, 2007. @article{Simion2007, Most systematic studies of human decision making approach the subject from a cost analysis point of view and assume that people make the highest utility choice. Very few articles investigate subjective decision making, such as that involving preference, although such decisions are very important for our daily functioning. We have argued (Shimojo, Simion, Shimojo, & Scheier, 2003) that an orienting bias effectively leads to the preference decision by means of a positive feedback loop involving mere exposure and preferential looking. This process manifests as a continually increasing gaze bias toward the eventual choice, which we call the gaze cascade effect. In the present study, we interrupted the natural process of preference selection and show that gaze behavior does not change even when the stimuli are removed from the observers' visual field. This demonstrates that once started, the involvement of orienting in decision making cannot be stopped and that orienting acts independently of the presence of visual stimuli. We also show that the cascade effect is intrinsically linked to the decision itself and is not triggered simply by a tendency to look at preferred targets. |
Ueli Rutishauser; Christof Koch Probabilistic modeling of eye movement data during conjunction search via feature-based attention Journal Article In: Journal of Vision, vol. 7, no. 6, pp. 1–20, 2007. @article{Rutishauser2007, Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. We here engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: Color seems to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings of V4 and frontal eye fields (FEF) neurons and predicts the gain modulation of these cells. |
Nicola Rycroft; Samuel B. Hutton; O. Clowry; C. Groomsbridge; A. Sierakowski; Jennifer M. Rusted Non-cholinergic modulation of antisaccade performance: A modafinil-nicotine comparison Journal Article In: Psychopharmacology, vol. 195, no. 2, pp. 245–253, 2007. @article{Rycroft2007, INTRODUCTION: The antisaccade task provides a powerful tool with which to investigate the cognitive and neural systems underlying goal-directed behaviour, particularly in situations when the correct behavioural response requires the suppression of a prepotent response. Antisaccade errors (failures to suppress reflexive prosaccades towards sudden-onset targets) are increased in patients with damage to the dorsolateral prefrontal cortex, and in patients with schizophrenia. Nicotine has been found to improve antisaccade performance in patients with schizophrenia and healthy controls. This performance enhancing effect may be due to direct effects on the cholinergic system, but there has been no test of this hypothesis. MATERIALS AND METHODS: In a double blind, double dummy, placebo-controlled design, we compared the effect of nicotine and modafinil, a putative indirect noradrenergic agonist, on antisaccade performance in healthy non-smokers. RESULTS AND DISCUSSION: Both compounds reduced latency for correct antisaccades, although neither reduced antisaccade errors. These findings are discussed with reference to the pharmacological route of performance enhancement on the antisaccade task and current models of antisaccade performance. |
Jean Saint-Aubin; Sébastien Tremblay; Annie Jalbert Eye movements and serial memory for visual-spatial information: Does time spent fixating contribute to recall? Journal Article In: Experimental Psychology, vol. 54, no. 4, pp. 264–272, 2007. @article{SaintAubin2007, This research investigated the nature of encoding and its contribution to serial recall for visual-spatial information. In order to do so, we examined the relationship between fixation duration and recall performance. Using the dot task–a series of seven dots spatially distributed on a monitor screen is presented sequentially for immediate recall–performance and eye-tracking data were recorded during the presentation of the to-be-remembered items. When participants were free to move their eyes at their will, both fixation durations and probability of correct recall decreased as a function of serial position. Furthermore, imposing constant durations of fixation across all serial positions had a beneficial impact (though relatively small) on item but not order recall. Great care was taken to isolate the effect of fixation duration from that of presentation duration. Although eye movement at encoding contributes to immediate memory, it is not decisive in shaping serial recall performance. Our results also provide further evidence that the distinction between item and order information, well-established in the verbal domain, extends to visual-spatial information. |
Benjamin W. Tatler The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions Journal Article In: Journal of Vision, vol. 7, no. 14, pp. 4, 2007. @article{Tatler2007, Observers show a marked tendency to fixate the center of the screen when viewing scenes on computer monitors. This is often assumed to arise because image features tend to be biased toward the center of natural images and fixations are correlated with image features. A common alternative explanation is that experiments typically use a central pre-trial fixation marker, and observers tend to make small amplitude saccades. In the present study, the central bias was explored by dividing images post hoc according to biases in their image feature distributions. Central biases could not be explained by motor biases for making small saccades and were found irrespective of the distribution of image features. When the scene appeared, the initial response was to orient to the center of the screen. Following this, fixation distributions did not vary with image feature distributions when freely viewing scenes. When searching the scenes, fixation distributions shifted slightly toward the distribution of features in the image, primarily during the first few fixations after the initial orienting response. The endurance of the central fixation bias irrespective of the distribution of image features, or the observer's task, implies one of three possible explanations: First, the center of the screen may be an optimal location for early information processing of the scene. Second, it may simply be that the center of the screen is a convenient location from which to start oculomotor exploration of the scene. Third, it may be that the central bias reflects a tendency to re-center the eye in its orbit. |
Benjamin W. Tatler; Samuel B. Hutton Trial by trial effects in the antisaccade task Journal Article In: Experimental Brain Research, vol. 179, no. 3, pp. 387–396, 2007. @article{Tatler2007a, The antisaccade task requires participants to inhibit the reflexive tendency to look at a sudden onset target and instead direct their gaze to the opposite hemifield. As such it provides a convenient tool with which to investigate the cognitive and neural systems that support goal-directed behaviour. Recent models of cognitive control suggest that antisaccade performance on a single trial should vary as a function of the outcome (correct antisaccade or erroneous prosaccade) of the previous trial. In addition, repetition priming effects suggest that the spatial location of the target on the previous trial may also influence current trial performance. Thus an analysis of contingency effects in antisaccade performance may provide new insights into the factors that influence the monitoring and modulation of the antisaccade task and other ongoing behaviours. Using a multilevel modelling analysis we explored previous trial effects on current trial performance in a large antisaccade dataset. We found (1) repetition priming effects following correct antisaccades; (2) contrary to models of cognitive control antisaccade error rates were increased on trials following an error, suggesting that failures to adequately maintain the task goal can persist across more than one trial; and (3) current trial latencies varied according to the previous trial outcome (correct antisaccade, slowly corrected error or rapidly corrected error). These results are discussed in terms of current models of antisaccade performance and cognitive control and further demonstrate the utility of multilevel modelling for analysing antisaccade data. |
Alisdair J. G. Taylor; Samuel B. Hutton The effects of individual differences on cued antisaccade performance Journal Article In: Journal of Eye Movement Research, vol. 1, no. 1, pp. 1–9, 2007. @article{Taylor2007, In the antisaccade task, pre-cueing the location of a correct response has the paradoxical effect of increasing errors. It has been suggested that this effect occurs because participants adopt an "antisaccade task set" and treat the cue as if it were a target - directing attention away from the precue and towards the location of the impending target. This hypothesis was tested using a mixed pro / antisaccade task. In addition, the effects of individual differences in working memory capacity and schizotypal personality traits on performance were examined. Whilst we observed some modest relationships between these individual differences and antisaccade performance, the strongest predictor of antisaccade error rate was uncued prosaccade latency. |
Laura E. Thomas; Alejandro Lleras Moving eyes and moving thought: On the spatial compatibility between eye movements and cognition Journal Article In: Psychonomic Bulletin & Review, vol. 14, no. 4, pp. 663–668, 2007. @article{Thomas2007, Grant and Spivey (2003) proposed that eye movement trajectories can influence spatial reasoning by way of an implicit eye-movement-to-cognition link. We tested this proposal and investigated the nature of this link by continuously monitoring eye movements and asking participants to perform a problem-solving task under free-viewing conditions while occasionally guiding their eye movements (via an unrelated tracking task), either in a pattern related to the problem's solution or in unrelated patterns. Although participants reported that they were not aware of any relationship between the tracking task and the problem, those who moved their eyes in a pattern related to the problem's solution were the most successful problem solvers. Our results support the existence of an implicit compatibility between spatial cognition and the eye movement patterns that people use to examine a scene. |
Neil W. D. Thomas; Martin Pare Temporal processing of saccade targets in parietal cortex area LIP during visual search Journal Article In: Journal of Neurophysiology, vol. 97, no. 1, pp. 942–947, 2007. @article{Thomas2007a, We studied whether the lateral intraparietal (LIP) area—a subdivision of parietal cortex anatomically interposed between visual cortical areas and saccade executive centers—contains neurons with activity patterns sufficient to contribute to the active process of selecting saccade targets in visual search. Visually responsive neurons were recorded while monkeys searched for a color-different target presented concurrently with seven distractors evenly distributed in a circular search array. We found that LIP neurons initially responded indiscriminately to the presentation of a visual stimulus in their response fields, regardless of its feature and identity. Their activation nevertheless evolved to signal the search target before saccade initiation: an ideal observer could reliably discriminate the target from the individual activation of 60% of neurons, on average, 138 ms after stimulus presentation and 26 ms before saccade initiation. Importantly, the timing of LIP neuronal discrimination varied proportionally with reaction times. These findings suggest that LIP activity reflects the selection of both the search target and the targeting saccade during active visual search. |
Aidan A. Thompson; David A. Westwood The hand knows something that the eye does not: Reaching movements resist the Müller-Lyer illusion whether or not the target is foveated Journal Article In: Neuroscience Letters, vol. 426, no. 2, pp. 111–116, 2007. @article{Thompson2007, Previous reports suggest that saccades are affected by the Müller-Lyer (ML) pictorial illusion, whereas reaching movements are not. It is unclear if the resistance of reaching to illusions depends on the concurrent engagement of the oculomotor system. Here we show that the endpoints and kinematics of reaching movements were unaffected by a peripherally viewed ML stimulus regardless of whether or not a concurrent saccade was carried out. Primary saccade endpoints were affected by the ML stimulus but secondary saccades were not. Perceptual judgments of target location were influenced by the ML stimulus in the expected direction. The resistance of reaching movements to pictorial illusions does not appear to depend on the concurrent engagement of the oculomotor system. Implications for models of oculomotor and upper limb control are discussed. |
Keith Rayner; Xingshan Li; Carrick C. Williams; Kyle R. Cave; Arnold D. Well Eye movements during information processing tasks: Individual differences and cultural effects Journal Article In: Vision Research, vol. 47, no. 21, pp. 2714–2726, 2007. @article{Rayner2007, The eye movements of native English speakers, native Chinese speakers, and bilingual Chinese/English speakers who were either born in China (and moved to the US at an early age) or in the US were recorded during six tasks: (1) reading, (2) face processing, (3) scene perception, (4) visual search, (5) counting Chinese characters in a passage of text, and (6) visual search for Chinese characters. Across the different groups, there was a strong tendency for consistency in eye movement behavior; if fixation durations of a given viewer were long on one task, they tended to be long on other tasks (and the same tended to be true for saccade size). Some tasks, notably reading, did not conform to this pattern. Furthermore, experience with a given writing system had a large impact on fixation durations and saccade lengths. With respect to cultural differences, there was little evidence that Chinese participants spent more time looking at the background information (and, conversely, less time looking at the foreground information) than the American participants. Also, Chinese participants' fixations were more numerous and of shorter duration than those of their American counterparts while viewing faces and scenes, and counting Chinese characters in text. |
Bobby B. Stojanoski; Matthias Niemeier Feature-based attention modulates the perception of object contours Journal Article In: Journal of Vision, vol. 7, no. 14, pp. 1–11, 2007. @article{Stojanoski2007, Feature-based attention is known to support perception of visual features associated with early and intermediate visual areas. Here we examined the role of feature-based attention in higher levels of object processing. We used a dual-task design to probe perception of poorly attended contour-defined or motion-defined loops while attention was occupied with congruent or incongruent feature detection tasks. Perception of the unattended task was better when it was concurrently presented with a congruent stimulus. However, this effect was eliminated when detection in the primary task was made easy, suggesting that the task demand in object perception is feature specific. Our results provide evidence for the contribution of feature-based attention to object perception. |
Raliza S. Stoyanova; Jay Pratt; Adam K. Anderson Inhibition of return to social signals of fear Journal Article In: Emotion, vol. 7, no. 1, pp. 49–56, 2007. @article{Stoyanova2007, The present study examined whether inhibition of return (IOR) is modulated by the fear relevance of the cue. Experiment 1 found that a similar magnitude of IOR was produced by neutral faces, fear faces, and luminance-matched cues. To allow a more sensitive measure of endogenously directed attention, Experiment 2 removed a central reorienting cue and more precisely measured the time course of IOR. At stimulus onset asynchronies (SOAs) of 500, 1,000, and 1,500 ms, fear face and luminance-matched cues resulted in similar IOR. These findings suggest that IOR is triggered by event onsets and disregards event value. Views of IOR as an adaptive "foraging facilitator," whereby attention is guided to promote optimal sampling of important environmental events, are discussed. |
Martin Stritzke; Julia Trommershäuser Eye movements during rapid pointing under risk Journal Article In: Vision Research, vol. 47, no. 15, pp. 2000–2009, 2007. @article{Stritzke2007, We recorded saccadic eye movements during visually-guided rapid pointing movements under risk. We intended to determine whether saccadic end points are necessarily tied to the goals of rapid pointing movements or whether, when the visual features of a display and the goals of a pointing movement are different, saccades are driven by low-level features of the visual stimulus. Subjects pointed at a stimulus configuration consisting of a target region and a penalty region. Each target hit yielded a gain of points; each penalty hit incurred a loss of points. Late responses were penalized. Either the target or the penalty region was indicated by a disk that differed significantly from the background in luminance, while the other region was indicated by a thin circle. In subsequent experiments, we varied the visual salience of the stimulus configuration and found that manual responses followed near-optimal strategies maximizing expected gain, independent of the salience of the target region. We suggest that the final eye position is partially pre-programmed prior to hand movement initiation. While we found that manipulations of the visual salience of the display determined the end point of the initial saccade, we also found that subsequent saccades are driven by the goal of the hand movement. |
Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner Contrast sensitivity during the initiation of smooth pursuit eye movements Journal Article In: Vision Research, vol. 47, no. 21, pp. 2767–2777, 2007. @article{Schuetz2007, Eye movements challenge the perception of a stable world by inducing retinal image displacement. During saccadic eye movements visual stability is accompanied by a remapping of visual receptive fields, a compression of visual space and perceptual suppression. Here we explore whether a similar suppression changes the perception of briefly presented low contrast targets during the initiation of smooth pursuit eye movements. In a 2AFC design we investigated the contrast sensitivity for threshold-level stimuli during the initiation of smooth pursuit and during saccades. Pursuit was elicited by horizontal step-ramp and ramp stimuli. At any time from 200 ms before to 500 ms after pursuit stimulus onset, a blurred 0.3 deg wide horizontal line with low contrast just above detection threshold appeared for 10 ms either 2 deg above or below the pursuit trajectory. Observers had to pursue the moving stimulus and to indicate whether the target line appeared above or below the pursuit trajectory. In contrast to perceptual suppression effects during saccades, no pronounced suppression was found at pursuit onset for step-ramp motion. When pursuit was elicited by a ramp stimulus, pursuit initiation was accompanied by catch-up saccades, which caused saccadic suppression. Additionally, contrast sensitivity was attenuated at the time of pursuit or saccade stimulus onset. This attenuation might be due to an attentional deficit, because the stimulus required the focus of attention during the programming of the following eye movement. |
Alexander C. Schütz; Elias Delipetkos; Doris I. Braun; Dirk Kerzel; Karl R. Gegenfurtner Temporal contrast sensitivity during smooth pursuit eye movements Journal Article In: Journal of Vision, vol. 7, no. 13, pp. 1–15, 2007. @article{Schuetz2007a, During smooth pursuit eye movements, stimuli other than the pursuit target move across the retina, and this might affect their detectability. We measured detection thresholds for vertically oriented Gabor stimuli with different temporal frequencies (1, 4, 8, 12, 16, 20, and 24 Hz) of the sinusoids. Observers kept fixation on a small target spot that was either stationary or moved horizontally at a speed of 8 deg/s. The sinusoid of the Gabor stimuli moved either in the same or in the opposite direction as the pursuit target. Observers had to indicate whether the Gabor stimuli were displayed 4 deg above or below the target spot. Results show that contrast sensitivity was mainly determined by retinal-image motion but was slightly reduced during smooth pursuit eye movements. Moreover, sensitivity for motion opposite to pursuit direction was reduced in comparison to motion in pursuit direction. The loss in sensitivity for peripheral targets during pursuit can be interpreted in terms of space-based attention to the pursuit target. The loss of sensitivity for motion opposite to pursuit direction can be interpreted as feature-based attention to the pursuit direction. |
Matthew S. Peterson; Melissa R. Beck; Miroslava Vomela Visual search is guided by prospective and retrospective memory Journal Article In: Perception and Psychophysics, vol. 69, no. 1, pp. 123–135, 2007. @article{Peterson2007, Although there has been some controversy as to whether attention is guided by memory during visual search, recent findings have suggested that memory helps to prevent attention from needlessly reinspecting examined items. Until now, it has been assumed that some form of retrospective memory is responsible for keeping track of examined items and preventing revisitations. Alternatively, some form of prospective memory, such as strategic scanpath planning, could be responsible for guiding attention away from examined items. We used a new technique that allowed us to selectively prevent retrospective or prospective memory from contributing to search. We demonstrated that both retrospective and prospective memory guide attention during visual search. |
Tobias Pflugshaupt; Urs P. Mosimann; Wolfgang J. Schmitt; Roman Wartburg; Pascal Wurtz; Mathias Lüthi; Thomas Nyffeler; Christian W. Hess; René M. Müri To look or not to look at threat? Scanpath differences within a group of spider phobics Journal Article In: Journal of Anxiety Disorders, vol. 21, no. 3, pp. 353–366, 2007. @article{Pflugshaupt2007, Predicting the behavior of phobic patients in a confrontational situation is challenging. While avoidance as a major clinical component of phobias suggests that patients orient away from threat, findings based on cognitive paradigms indicate an attentional bias towards threat. Here we present eye movement data from 21 spider phobics and 21 control subjects, based on 3 basic oculomotor tasks and a visual exploration task that included close-up views of spiders. Relative to the control group, patients showed accelerated reflexive saccades in one of the basic oculomotor tasks, while the fear-relevant exploration task evoked a general slowing in their scanning behavior and pronounced oculomotor avoidance. However, this avoidance strongly varied within the patient group and was not associated with the scores from spider avoidance-sensitive questionnaire scales. We suggest that variation of oculomotor avoidance between phobics reflects different strategies of how they cope with threat in confrontational situations. |
Tobias Pflugshaupt; Thomas Nyffeler; Roman Wartburg; Pascal Wurtz; Mathias Lüthi; Daniela Hubl; Klemens Gutbrod; Freimut D. Juengling; Christian W. Hess; René M. Müri When left becomes right and vice versa: Mirrored vision after cerebral hypoxia Journal Article In: Neuropsychologia, vol. 45, no. 9, pp. 2078–2091, 2007. @article{Pflugshaupt2007a, The combination of acquired mirror writing and reading is an extremely rare neurological disorder. It is encountered when brain damaged patients prefer horizontally mirrored over normal script in writing and reading. Previous theories have related this pathology to a disinhibition of mirrored engrams in the non-dominant hemisphere, possibly accompanied by a reversal of the preferred scanning direction. Here, we report the experimental investigation of PR, a patient who developed pronounced mirror writing and reading following septic shock that caused hypoxic brain damage. A series of five oculomotor experiments revealed that the patient's preferred scanning direction was indeed reversed. However, PR showed striking scanpath abnormalities and mirror reversals that cannot be explained by previous theories. Considered together with mirror phenomena she displayed in neuropsychological tasks and everyday activities, our findings suggest a horizontal reversal of visual information on a perceptual level. In addition, a systematic manipulation of visual variables within two further experiments had dramatic effects on her mirror phenomena. When confronted with moving, flickering, or briefly presented stimuli, PR showed hardly any left-right reversals. Not only do these findings underline the perceptual nature of her disorder, but they also allow interpretation of the pathology in terms of a dissociation between visual subsystems. We speculate that early visual cortices are crucially involved in this dissociation. More generally, her mirrored vision may represent an extreme clinical manifestation of the relative instability of the horizontal axis in spatial vision. |