EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2005 |
Igor Riečanský; Alexander Thiele; Claudia Distler; Klaus-Peter Hoffmann Chromatic sensitivity of neurones in area MT of the anaesthetised macaque monkey compared to human motion perception Journal Article In: Experimental Brain Research, vol. 167, no. 4, pp. 504–525, 2005. @article{Riecansky2005, We recorded activity from neurones in cortical motion-processing areas, middle temporal area (MT) and middle posterior superior temporal sulcus (MST), of anaesthetised and paralysed macaque monkeys in response to moving sinewave gratings modulated in luminance and chrominance. The activity of MT and MST neurones was highly dependent on luminance contrast. In three of four animals isoluminant chromatic modulations failed to activate MT/MST neurones significantly. At low luminance contrast a systematic dependence on chromaticity was revealed, attributable mostly to residual activity of the magnocellular pathway. Additionally, we found indications for a weak S-cone input, but rod intrusion could also have made a contribution. In contrast to the activity of MT and MST neurones, speed judgments and onset amplitude of evoked optokinetic eye movements in human subjects confronted with equivalent visual stimuli were largely independent of luminance modulation. Motion of every grating (including isoluminant) was readily visible for all but one observer. Similarity with the activity of MT/MST cells was found only for motion-nulling equivalent luminance contrast judgments at isoluminance. Our results suggest that areas MT and MST may not be involved in the processing of chromatic motion, but effects of central anaesthesia and/or the existence of intra- and inter-species differences must also be considered. |
Mike Rinck; Andrea Reinecke; Thomas Ellwart; Kathrin Heuer; Eni S. Becker Speeded detection and increased distraction in fear of spiders: Evidence from eye movements Journal Article In: Journal of Abnormal Psychology, vol. 114, no. 2, pp. 235–248, 2005. @article{Rinck2005, Anxiety patients exhibit attentional biases toward threat, which have often been demonstrated as increased distractibility by threatening stimuli. In contrast, speeded detection of threat has rarely been shown. Therefore, the authors studied both phenomena in 3 versions of a visual search task while eye movements were recorded continuously. Spider-fearful individuals and nonanxious control participants participated in a target search task, an odd-one-out search task, and a category search task. Evidence for disorder-specific increased distraction by threat was found in all tasks, whereas speeded threat detection did not occur in the target search task. The implications of these findings for cognitive theories of anxiety are discussed, particularly in relation to the concept of disengagement from threat. |
Martin Rolfs; Ralf Engbert; Reinhold Kliegl Crossmodal coupling of oculomotor control and spatial attention in vision and audition Journal Article In: Experimental Brain Research, vol. 166, no. 3-4, pp. 427–439, 2005. @article{Rolfs2005, Fixational eye movements occur involuntarily during visual fixation of stationary scenes. The fastest components of these miniature eye movements are microsaccades, which can be observed about once per second. Recent studies demonstrated that microsaccades are linked to covert shifts of visual attention. Here, we generalized this finding in two ways. First, we used peripheral cues, rather than the centrally presented cues of earlier studies. Second, we spatially cued attention in vision and audition to visual and auditory targets. An analysis of microsaccade responses revealed an equivalent impact of visual and auditory cues on the microsaccade-rate signature (i.e. an initial inhibition followed by an overshoot and a final return to the pre-cue baseline rate). With visual cues or visual targets, microsaccades were briefly aligned with cue direction and then opposite to cue direction during the overshoot epoch, probably as a result of an inhibition of an automatic saccade to the peripheral cue. With left auditory cues and auditory targets, microsaccades oriented in cue direction. We argue that microsaccades can be used to study crossmodal integration of sensory information and to map the time course of saccade preparation during covert shifts of visual and auditory attention. |
Martin Lemay; George E. Stelmach Multiple frames of reference for pointing to a remembered target Journal Article In: Experimental Brain Research, vol. 164, no. 3, pp. 301–310, 2005. @article{Lemay2005, Pointing with an unseen hand to a visual target that disappears prior to movement requires maintaining a memory representation about the target location. The target location can be transformed either into a hand-centered frame of reference during target presentation and remembered under that form, or remembered in terms of retinal and extra-retinal cues and transformed into a body-centered frame of reference before movement initiation. The main goal of the present study was to investigate whether the target is stored in memory in an eye-centered frame, a hand-centered frame or in both frames of reference concomitantly. The task was to locate, memorize, and point to a target in a dark environment. Hand movement was not visible. During the recall delay, participants were asked to move their hand or their eyes in order to disrupt the memory representation of the target. Movement of the eyes during the recall delay was expected to disrupt an eye-centered memory representation whereas movement of the hand was expected to disrupt a hand-centered memory representation by increasing movement variability to the target. Variability of movement amplitude and direction was examined. Results showed that participants were more variable on the directional component of the movement when required to move their hand during recall delay. On the contrary, moving the eyes caused an increase in variability only in the amplitude component of the pointing movement. Taken together, these results suggest that the direction of the movement is coded and remembered in a frame of reference linked to the arm, whereas the amplitude of the movement is remembered in an eye-centered frame of reference. |
Ute Leonards; Nicholas E. Scott-Samuel Idiosyncratic initiation of saccadic face exploration in humans Journal Article In: Vision Research, vol. 45, no. 20, pp. 2677–2684, 2005. @article{Leonards2005, Visual processing and subsequent action are limited by the effectiveness of eye movement control: where the eyes fixate determines what part of the visual environment is seen in detail. Visual exploration consists of stereotypical sequences of saccadic eye movements which are known to depend upon both external factors, such as visual stimulus features, and internal cognition-related factors, such as attention and memory. However, how these two factors are balanced is unknown. One determinant might be the familiarity or ecological importance of the visual stimulus being explored. Recordings of saccades for human face stimuli revealed that their exploration was subject to strong individual biases for the initial saccade direction: subjects tended to look first to one particular side. We attribute this to internal factors. In contrast, exploration of landscapes, fractals or inverted faces showed no significant direction bias for initial saccades, suggesting more externally driven exploration patterns. Thus the balance between external and internal factors in scene exploration depends on stimulus type. An analysis of saccade latencies suggested that this individual preference for first saccade direction during face exploration leads to higher effectiveness through automation. The findings have implications for the understanding of both normal and abnormal eye movements. |
B. Krekelberg Implied motion from form in the human visual cortex Journal Article In: Journal of Neurophysiology, vol. 94, no. 6, pp. 4373–4386, 2005. @article{Krekelberg2005, When cartoonists use speed lines - also called motion streaks - to suggest the speed of a stationary object, they use form to imply motion. The goal of this study was to investigate the mechanisms that mediate the percept of implied motion in the human visual cortex. In an adaptation functional imaging paradigm we presented Glass patterns that, just like speed lines, imply motion but do not on average contain coherent motion energy. We found selective adaptation to these patterns in the human motion complex, the lateral occipital complex (LOC), and earlier visual areas. Glass patterns contain both local orientation features and global structure. To disentangle these aspects we performed a control experiment using Glass patterns with minimal local orientation differences but large global structure differences. This experiment showed that selectivity for Glass patterns arises in part in areas beyond V1 and V2. Interestingly, the selective adaptation transferred from implied motion stimuli to similar real motion patterns in dorsal but not ventral areas. This suggests that the same subpopulations of cells in dorsal areas that are selective for implied motion are also selective for real motion. In other words, these cells are invariant with respect to the cue (implied or real) that generates the motion. We conclude that the human motion complex responds to Glass patterns as if they contain coherent motion. This, presumably, is the reason why these patterns appear to move coherently. The LOC, however, has different cells that respond to the structure of real motion patterns versus implied motion patterns. Such a differential response may allow ventral areas to further analyze the structure of global patterns. |
Timothée Jost; Nabil Ouerhani; Roman Von Wartburg; René M. Müri; Heinz Hügli Assessing the contribution of color in visual attention Journal Article In: Computer Vision and Image Understanding, vol. 100, no. 1-2, pp. 107–123, 2005. @article{Jost2005, Visual attention is the ability of a vision system, be it biological or artificial, to rapidly detect potentially relevant parts of a visual scene, on which higher level vision tasks, such as object recognition, can focus. The saliency-based model of visual attention represents one of the main attempts to simulate this visual mechanism on computers. Though biologically inspired, this model has only been partially assessed in comparison with human behavior. Our methodology consists of comparing the computational saliency map with human eye movement patterns. This paper presents an in-depth analysis of the model by assessing the contribution of different cues to visual attention. It reports the results of a quantitative comparison of human visual attention derived from fixation patterns with visual attention as modeled by different versions of the computer model. More specifically, a one-cue gray-level model is compared to a two-cue color model. The experiments, conducted with over 40 images of varied content and involving 20 human subjects, assess the quantitative contribution of chromatic features in visual attention. |
Anne Lise V. Holahan; Gillian A. O'Driscoll Antisaccade and smooth pursuit performance in positive- and negative-symptom schizotypy Journal Article In: Schizophrenia Research, vol. 76, no. 1, pp. 43–54, 2005. @article{Holahan2005, Schizophrenic patients have well-documented abnormalities in smooth pursuit eye movements and antisaccade performance. In populations at risk for schizophrenia, smooth pursuit abnormalities are also well documented. Antisaccade deficits have been replicated in high-risk populations as well, but the findings are more variable and the reasons for the variability are not clear. Some evidence suggests that antisaccade deficits increase in high-risk populations in relation to the presence of positive symptoms. Whether antisaccade deficits increase in relation to negative symptoms in high-risk populations is relatively uninvestigated. We evaluated antisaccade and pursuit performance in "psychometric schizotypes" who had elevated scores on either the Perceptual Aberration Scale (PerAb; i.e., positive symptoms) or the Physical Anhedonia Scale (PhysAnh; i.e., negative symptoms) but not both, and in normal controls. We used the standard version of the antisaccade task, for which results in positive-symptom schizotypes have previously been reported, and investigated performance on a gap and overlap version. We replicated the finding that a significantly larger percentage of positive-symptom schizotypes than controls have elevated antisaccade error rates on the standard antisaccade task (P=0.03); the percentage of negative-symptom schizotypes with elevated antisaccade error rates did not differ from that of control subjects. Neither schizotypal group was impaired on the gap or overlap versions of the task. On the pursuit task, a higher percentage of positive- and negative-symptom schizotypes were classified as having deviant performance than control subjects (both Ps<0.04). 
These findings suggest that antisaccade deficits may be better at identifying high-risk subjects with positive symptoms. Pursuit deficits identified both positive- and negative-symptom schizotypes, but were better at identifying the latter. |
Annette Horstmann; Klaus-Peter Hoffmann Target selection in eye-hand coordination: Do we reach to where we look or do we look to where we reach? Journal Article In: Experimental Brain Research, vol. 167, no. 2, pp. 187–195, 2005. @article{Horstmann2005, During a goal-directed movement of the hand to a visual target the controlling nervous system depends on information provided by the visual system. This suggests that a coupling between these two systems is crucial. In a choice condition with two or more equivalent objects present at the same time the question arises whether we (a) reach for the object we have selected to look at or (b) look to the object we have selected to grasp. Therefore, we examined the preference of human subjects selecting the left or the right target and its correlation to the action to be performed (eye-, arm- or coordinated eye-arm movement) as well as the horizontal position of the target. Two targets were presented at the same distance to the left and right of a fixation point and the stimulus onset asynchrony (SOA) was adjusted until both targets were selected equally often. This balanced SOA was then taken as a quantitative measure of selection preference. We compared these preferences at three horizontal positions for the different movement types (eye, arm, both). The preferences of the 'arm' and 'coordinated eye-arm' movement types were correlated more strongly than the preferences of the other movement types. Thus, we look to where we have already selected to grasp. These findings provide evidence that in a coordinated movement of eyes and arm the control of gaze is a means to an end, namely a tool to conduct the arm movement properly. |
Sarah Howlett; John Hamill; Carol O'Sullivan Predicting and evaluating saliency for simplified polygonal models Journal Article In: ACM Transactions on Applied Perception, vol. 2, no. 3, pp. 286–308, 2005. @article{Howlett2005, In this paper, we consider the problem of determining feature saliency for three-dimensional (3D) objects and describe a series of experiments that examined if salient features exist and can be predicted in advance. We attempt to determine salient features by using an eye-tracking device to capture human gaze data and then investigate if the visual fidelity of simplified polygonal models can be improved by emphasizing the detail of salient features identified in this way. To try to evaluate the visual fidelity of the simplified models, a set of naming time, matching time, and forced-choice preference experiments were carried out. We found that perceptually weighted simplification led to a significant increase in visual fidelity for the lower levels of detail (LOD) of the natural objects, but that for the man-made artifacts the opposite was true. We, therefore, conclude that visually prominent features may be predicted in this way for natural objects, but our results show that saliency prediction for synthetic objects is more difficult, perhaps because it is more strongly affected by task. As a further step we carried out some confirmation experiments to examine if the prominent features found during the saliency experiment were actually the features focused upon during the naming, matching, and forced-choice preference tasks. Results demonstrated that the heads of natural objects received a significant amount of attention, especially during the naming task. We hope that our results will lead to new insights into the nature of saliency in 3D graphics. |
P.-J. Hsieh; G. P. Caplovitz; P. U. Tse Illusory rebound motion and the motion continuity heuristic Journal Article In: Vision Research, vol. 45, no. 23, pp. 2972–2985, 2005. @article{Hsieh2005, A new motion illusion, "illusory rebound motion" (IRM), is described. IRM is qualitatively similar to illusory line motion (ILM). ILM occurs when a bar is presented shortly after an initial stimulus such that the bar appears to move continuously away from the initial stimulus. IRM occurs when a second bar of a different color is presented at the same location as the first bar within a certain delay after ILM, making this second bar appear to move in the opposite direction relative to the preceding direction of ILM. Three plausible accounts of IRM are considered: a shifting attentional gradient model, a motion aftereffect (MAE) model, and a heuristic model. Results imply that IRM arises because of a heuristic about how objects move in the environment: In the absence of countervailing evidence, motion trajectories are assumed to continue away from the location where an object was last seen to move. |
Dirk Kerzel; Karl R. Gegenfurtner Motion-induced illusory displacement reexamined: Differences between perception and action? Journal Article In: Experimental Brain Research, vol. 162, no. 2, pp. 191–201, 2005. @article{Kerzel2005, The position of a drifting sine-wave grating enveloped by a stationary Gaussian is misperceived in the direction of motion. Previous research indicated that the illusion was larger when observers pointed to the center of the stimulus than when they indicated the stimulus position on a ruler. This conclusion was reexamined. Observers pointed to the position of a small Gabor patch on the screen or compared its position to moving patches, stationary lines, or flashed lines. With moving patches, the illusion was larger with probe than with motor judgments; with stationary lines, the illusion was about the same size; and with flashed lines, the illusion was smaller with probe than with motor judgments. Thus, the comparison between perceptual and motor measures depended strongly on the methods used. Further, the target was mislocalized toward the fovea with motor judgments, whereas the target was displaced away from the fovea relative to line probes. |
Raymond M. Klein; John Christie; Eric P. Morris Vector averaging of inhibition of return Journal Article In: Psychonomic Bulletin & Review, vol. 12, no. 2, pp. 295–300, 2005. @article{Klein2005, Observers detected targets presented 400 msec after a display containing one cue or two to four cues displayed simultaneously in randomly selected locations on a virtual circle around fixation. The cue arrangement was completely uninformative about the upcoming target's location, and eye position was monitored to ensure that the participants maintained fixation between the cue and their manual detection response. Reflecting inhibition of return (IOR), there was a gradient of performance following single cues, with reaction time decreasing monotonically as the target's angular distance from the cued direction increased. An equivalent gradient of IOR was found following multiple cues whose center of gravity fell outside the parafoveal region and, thus, whose net vector would activate an orienting response. Moreover, on these trials, whether or not the targeted location had been stimulated by a cue had little effect on this gradient. Finally, when the array of cues was balanced so that its center of gravity was at fixation, there was no IOR. These findings, which suggest that IOR is an aftermath of orienting elicited by the cue, are compatible with population coding of the entire cue (as a grouped array for multiple cues) as the generator of IOR. |
Zoe Kourtzi; Lisa R. Betts; Pegah Sarkheil; Andrew E. Welchman Distributed neural plasticity for shape learning in the human visual cortex Journal Article In: PLoS Biology, vol. 3, no. 7, pp. 1317–1327, 2005. @article{Kourtzi2005, Expertise in recognizing objects in cluttered scenes is a critical skill for our interactions in complex environments and is thought to develop with learning. However, the neural implementation of object learning across stages of visual analysis in the human brain remains largely unknown. Using combined psychophysics and functional magnetic resonance imaging (fMRI), we show a link between shape-specific learning in cluttered scenes and distributed neuronal plasticity in the human visual cortex. We report stronger fMRI responses for trained than untrained shapes across early and higher visual areas when observers learned to detect low-salience shapes in noisy backgrounds. However, training with high-salience pop-out targets resulted in lower fMRI responses for trained than untrained shapes in higher occipitotemporal areas. These findings suggest that learning of camouflaged shapes is mediated by increasing neural sensitivity across visual areas to bolster target segmentation and feature integration. In contrast, learning of prominent pop-out shapes is mediated by associations at higher occipitotemporal areas that support sparser coding of the critical features for target recognition. We propose that the human brain learns novel objects in complex scenes by reorganizing shape processing across visual areas, while taking advantage of natural image correlations that determine the distinctiveness of target shapes. |
Zoe Kourtzi; Elisabeth Huberle Spatiotemporal characteristics of form analysis in the human visual cortex revealed by rapid event-related fMRI adaptation Journal Article In: NeuroImage, vol. 28, pp. 440–452, 2005. @article{Kourtzi2005a, The integration of local elements to coherent forms is at the core of understanding visual perception. Accumulating evidence suggests that both early retinotopic and higher occipitotemporal areas contribute to the integration of local elements to global forms. However, the spatiotemporal characteristics of form analysis in the human visual cortex remain largely unknown. The aim of this study was to investigate form analysis at different spatial (global vs. local structure) and temporal (different stimulus presentation rates) scales across stages of visual analysis (from V1 to the lateral occipital complex, LOC) in the human brain. We used closed contours rendered by Gabor elements and manipulated either the global contour structure or the orientation of the local Gabor elements. Our rapid event-related fMRI adaptation studies suggest that contour integration and form processing in early visual areas is transient and limited within the local neighborhood of their cells' receptive field. In contrast, higher visual areas appear to process the perceived global form in a more sustained manner. Finally, we demonstrate that these spatiotemporal properties of form processing in the visual cortex are modulated by attention. Attention to the global form maintains sustained processing in occipitotemporal areas, whereas attention to local elements enhances their integration in early visual areas. These findings provide novel neuroimaging evidence for form analysis at different spatiotemporal scales across human visual areas and validate the use of rapid event-related fMRI adaptation for investigating processing across stages of visual analysis in the human brain. |
Heinz Hügli; Timothée Jost; Nabil Ouerhani Model performance for visual attention in real 3D color scenes Journal Article In: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, pp. 1–10, 2005. @article{Huegli2005, Visual attention is the ability of a vision system, be it biological or artificial, to rapidly detect potentially relevant parts of a visual scene. The saliency-based model of visual attention is widely used to simulate this visual mechanism on computers. Though biologically inspired, this model has been only partially assessed in comparison with human behavior. The research described in this paper aims at assessing its performance in the case of natural scenes, i.e. real 3D color scenes. The evaluation is based on the comparison of computer saliency maps with human visual attention derived from fixation patterns while subjects are looking at the scenes. The paper presents a number of experiments involving natural scenes and computer models differing by their capacity to deal with color and depth. The results point to a large range of scene-specific performance variations and provide typical quantitative performance values for models of different complexity. |
Hanako Ikeda; Randolph Blake; Katsumi Watanabe Eccentric perception of biological motion is unscalably poor Journal Article In: Vision Research, vol. 45, no. 15, pp. 1935–1943, 2005. @article{Ikeda2005, Accurately perceiving the activities of other people is a crucially important social skill of obvious survival value. Human vision is equipped with highly sensitive mechanisms for recognizing activities performed by others [Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception and Psychophysics, 14, 201; Johansson, G. (1976). Spatio-temporal differentiation and integration in visual motion perception: An experimental and theoretical analysis of calculus-like functions in visual data processing. Psychological Research, 38, 379]. One putative functional role of biological motion perception is to register the presence of biological events anywhere within the visual field, not just within central vision. To assess the salience of biological motion throughout the visual field, we compared the detectability performances of biological motion animations imaged in central vision and in peripheral vision. To compensate for the poorer spatial resolution within the periphery, we spatially magnified the motion tokens defining biological motion. Normal and scrambled biological motion sequences were embedded in motion noise and presented in two successively viewed intervals on each trial (2AFC). Subjects indicated which of the two intervals contained normal biological motion. A staircase procedure varied the number of noise dots to produce a criterion level of discrimination performance. For both foveal and peripheral viewing, performance increased but saturated with stimulus size. Foveal and peripheral performance could not be equated by any magnitude of size scaling. Moreover, the inversion effect - superiority of upright over inverted biological motion [Sumi, S. (1984). 
Upside-down presentation of the Johansson moving light-spot pattern. Perception, 13, 283] - was found only when animations were viewed within the central visual field. Evidently, the neural resources responsible for biological motion perception are embodied within neural mechanisms focused on central vision. |
Iole Indovina; Vincenzo Maffei; Gianfranco Bosco; Myrka Zago; Emiliano Macaluso; Francesco Lacquaniti Representation of visual gravitational motion in the human vestibular cortex Journal Article In: Science, vol. 308, no. 5720, pp. 416–419, 2005. @article{Indovina2005, How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain. |
2004 |
Marcus Kaiser; Markus Lappe Perisaccadic mislocalization orthogonal to saccade direction. Journal Article In: Neuron, vol. 41, pp. 293–300, 2004. @article{Kaiser2004, Saccadic eye movements transiently distort perceptual space. Visual objects flashed shortly before or during a saccade are mislocalized along the saccade direction, resembling a compression of space around the saccade target. These mislocalizations reflect transient errors of processes that construct spatial stability across eye movements. They may arise from errors of reference signals associated with saccade direction and amplitude or from visual or visuomotor remapping processes focused on the saccade target's position. The second case would predict apparent position shifts toward the target also in directions orthogonal to the saccade. We report that such orthogonal mislocalization indeed occurs. Surprisingly, however, the orthogonal mislocalization is restricted to only part of the visual field. This part comprises distant positions in saccade direction but does not depend on the target's position. Our findings can be explained by a combination of directional and positional reference signals that varies in time course across the visual field. |
Mika Koivisto; Jukka Hyönä; Antti Revonsuo The effects of eye movements, spatial attention, and stimulus features on inattentional blindness Journal Article In: Vision Research, vol. 44, no. 27, pp. 3211–3221, 2004. @article{Koivisto2004, Observers often fail to detect the appearance of an unexpected visual object ("inattentional blindness"). Experiment 1 studied the effects of fixation position and spatial attention on inattentional blindness. Eye movements were measured. We found strong inattentional blindness to the unexpected stimulus even when it was fixated and appeared in one of the expected positions. The results suggest that spatial attention is not sufficient for attentional capture and awareness. Experiment 2 showed that the stimulus was easier to consciously detect when it was colored, but the relation of the color to the color of the attended objects had no effect on detection. The unexpected stimulus was easiest to detect when it represented the same category as the attended objects. |
Christof Körner; Iain D. Gilchrist Eye movements in a simple spatial reasoning task Journal Article In: Perception, vol. 33, no. 4, pp. 485–494, 2004. @article{Koerner2004a, We report two experiments in which participants read a question about the spatial relationship between two letters, then viewed a visual display containing the letters and were required to respond to the question. The format of the question influenced the nature of the eye movements generated to the visual display. Participants also had a tendency to make additional eye movements in order to generate a fixation sequence that corresponded to the order of the letters in the question. This demonstrates an influence of stored information on eye movement generation, and suggests that the scanpath plays a role in structuring the visual information to facilitate reasoning. |
Timothy L. Hodgson; Charlotte Golding; Dimitra Molyva; Clive R. Rosenthal; Christopher Kennard Eye movements during task switching: Reflexive, symbolic, and affective contributions to response selection Journal Article In: Journal of Cognitive Neuroscience, vol. 16, no. 2, pp. 318–330, 2004. @article{Hodgson2004, Active vision is a dynamic process involving the flexible coordination of different gaze strategies to achieve behavioral goals. Although many complex behaviors rely on an ability to efficiently switch between gaze-control strategies, few studies to date have examined mechanisms of task-level oculomotor control in detail. Here, we report five experiments in which subjects alternated between conflicting stimulus-saccade mappings within a block of trials. The first experiment showed that there is no performance cost associated with switching between pro- and antisaccades. However, follow-up experiments demonstrate that whenever subjects alternate between arbitrary stimulus-saccade mappings, latency costs are apparent on the first trial after a task change. More detailed analysis of switch costs showed that latencies were particularly elevated for saccades directed toward the same location that had been the target for a saccade on the preceding trial. This saccade "inhibition of return" effect was most marked when unexpected error feedback cued task switches, suggesting that saccade selection processes are modulated by reward. We conclude that there are two systems for saccade control that differ in their characteristics following a task switch. The "reflexive" control system can be enabled/disabled in advance of saccade execution without incurring any performance cost. Switch costs are only observed when two or more arbitrary stimulus-saccade mappings have to be coordinated by a "symbolic" control system. |
Dirk Kerzel Attentional load modulates mislocalization of moving stimuli, but does not eliminate the error Journal Article In: Psychonomic Bulletin & Review, vol. 11, no. 5, pp. 848–853, 2004. @article{Kerzel2004, Localization of the onset and offset of a moving target is subject to a number of errors that have to be attributed to events following or preceding the target event. Apparently, observers are unable to ignore the spatiotemporal context surrounding the target event. In two experiments, observers' attention was directed toward a single position along a trajectory, two positions along a single trajectory, or two positions along two different trajectories. In the latter condition, attention to details of a single trajectory was reduced. At the same time, motion type was manipulated by varying the temporal interval between successive target presentations. The localization error was not affected by attentional load; however, effects of motion type were eliminated when two trajectories had to be attended to. It may be sufficient to notice that the target has moved for localization errors to occur, while specifics of the trajectory are ignored. |
A. Caspi; B. R. Beutter; Miguel P. Eckstein The time course of visual information accrual guiding eye movement decisions Journal Article In: Proceedings of the National Academy of Sciences, vol. 101, no. 35, pp. 13086–13090, 2004. @article{Caspi2004, Saccadic eye movements are the result of neural decisions about where to move the eyes. These decisions are based on visual information accumulated before the saccade; however, during an approximately 100-ms interval immediately before the initiation of an eye movement, new visual information cannot influence the decision. Does the brain simply ignore information presented during this brief interval or is the information used for the subsequent saccade? Our study examines how and when the brain integrates visual information through time to drive saccades during visual search. We introduce a new technique, saccade-contingent reverse correlation, that measures the time course of visual information accrual driving the first and second saccades. Observers searched for a contrast-defined target among distractors. Independent contrast noise was added to the target and distractors every 25 ms. Only noise presented in the time interval in which the brain accumulates information will influence the saccadic decisions. Therefore, we can retrieve the time course of saccadic information accrual by averaging the time course of the noise, aligned to saccade initiation, across all trials with saccades to distractors. Results show that before the first saccade, visual information is being accumulated simultaneously for the first and second saccades. Furthermore, information presented immediately before the first saccade is not used in making the first saccadic decision but instead is stored and used by the neural processes driving the second saccade. |
Paul Dassonville; Jagdeep Kaur Bala Perception, action, and the Roelofs effect: A mere illusion of dissociation. Journal Article In: PLoS Biology, vol. 2, pp. 1936–1945, 2004. @article{Dassonville2004, A prominent and influential hypothesis of vision suggests the existence of two separate visual systems within the brain, one creating our perception of the world and another guiding our actions within it. The induced Roelofs effect has been described as providing strong evidence for this perception/action dissociation: When a small visual target is surrounded by a large frame positioned so that the frame's center is offset from the observer's midline, the perceived location of the target is shifted in the direction opposite the frame's offset. In spite of this perceptual mislocalization, however, the observer can accurately guide movements to the target location. Thus, perception is prone to the illusion while actions seem immune. Here we demonstrate that the Roelofs illusion is caused by a frame-induced transient distortion of the observer's apparent midline. We further demonstrate that actions guided to targets within this same distorted egocentric reference frame are fully expected to be accurate, since the errors of target localization will exactly cancel the errors of motor guidance. These findings provide a mechanistic explanation for the various perceptual and motor effects of the induced Roelofs illusion without requiring the existence of separate neural systems for perception and action. Given this, the behavioral dissociation that accompanies the Roelofs effect cannot be considered evidence of a dissociation of perception and action. This indicates a general need to re-evaluate the broad class of evidence purported to support this hypothesized dissociation. |
Paul Dassonville; Bruce Bridgeman; Jagdeep Kaur Bala; Paul Thiem; Anthony Sampanes The induced Roelofs effect: Two visual systems or the shift of a single reference frame? Journal Article In: Vision Research, vol. 44, pp. 603–611, 2004. @article{Dassonville2004a, Cognitive judgments about an object's location are distorted by the presence of a large frame offset left or right of an observer's midline. Sensorimotor responses, however, seem immune to this induced Roelofs illusion, with observers able to accurately point to the target's location. These findings have traditionally been used as evidence for a dissociation of the visual processing required for cognitive judgments and sensorimotor responses. However, a recent alternative hypothesis suggests that the behavioral dissociation is expected if the visual system uses a single frame of reference whose origin (the apparent midline) is biased toward the offset frame. The two theories make qualitatively distinct predictions in a paradigm in which observers are asked to indicate the direction symmetrically opposite the target's position. The collaborative findings of two laboratories clearly support the biased-midline hypothesis. |
Agnieszka Bojko; Arthur F. Kramer; Matthew S. Peterson Age equivalence in switch costs for prosaccade and antisaccade tasks Journal Article In: Psychology and Aging, vol. 19, no. 1, pp. 226–234, 2004. @article{Bojko2004, This study examined age differences in task switching using prosaccade and antisaccade tasks. Significant specific and general switch costs were found for both young and old adults, suggesting the existence of 2 types of processes: those responsible for activation of the currently relevant task set and deactivation of the previously relevant task set and those responsible for maintaining more than 1 task active in working memory. Contrary to the findings of previous research, which used manual response tasks with arbitrary stimulus-response mappings to study task-switching performance, no age-related deficits in either type of switch costs were found. These data suggest age-related sparing of task-switching processes in situations in which memory load is low and stimulus-response mappings are well learned. |
Walter R. Boot; Jason S. McCarley; Arthur F. Kramer; Matthew S. Peterson Automatic and intentional memory processes in visual search Journal Article In: Psychonomic Bulletin & Review, vol. 11, no. 5, pp. 854–861, 2004. @article{Boot2004, Previous research has indicated that saccade target selection during visual search is influenced by scanning history. Already inspected items are less likely to be chosen as saccade targets as long as the number of intervening saccades is small. Here, we adapted Jacoby's (1991) process dissociation procedure to assess the role of intentional and automatic processes in saccade target selection. Results indicate a large automatic component biasing participants to move their eyes to unexamined locations. However, an intentional component allowed participants to both reinspect old items and aid their selection of new items. A second experiment examined inhibition of return (IOR) as a candidate for the observed automatic component. IOR was found for items that had been previously examined. It is concluded that both automatic and intentional memory traces are available to guide the eyes during search. |
Diane C. Gooding; L. Mohapatra; H. B. Shea Temporal stability of saccadic task performance in schizophrenia and bipolar patients Journal Article In: Psychological Medicine, vol. 34, no. 5, pp. 921–932, 2004. @article{Gooding2004, Background. Identifying endophenotypes of schizophrenia will assist in the identification of individuals who are at heightened risk for the disorder. Investigators have proposed antisaccade task deficits as an endophenotypic marker of schizophrenia. However, the diagnostic specificity and the temporal stability of the task deficit are unresolved issues. To date, there are few published reports of test-retest stability of antisaccade task performance in psychiatric patients. Method. Twenty-three schizophrenia out-patients and 10 bipolar out-patients were administered two saccadic (antisaccade and refixation) tasks at two separate assessments, with an average test-retest interval of 33 months. Results. The schizophrenia patients displayed high test-retest reliabilities of antisaccade task accuracy, despite changes in medication and clinical status. Additionally, the schizophrenia group's saccadic reaction times for antisaccade correct responses and task errors were moderately stable over time. In contrast, the bipolar patients did not show temporal stability in their antisaccade task accuracy or in their response latencies to either correct or incorrect antisaccade responses. Conclusions. The results are supportive of the trait-like characteristics of antisaccade task deficits in schizophrenia patients. These findings also suggest that antisaccade task deficits may serve as an endophenotypic marker of schizophrenia. © 2004 Cambridge University Press. |
Steven L. Franconeri; Daniel J. Simons; Justin A. Junge Searching for stimulus-driven shifts of attention Journal Article In: Psychonomic Bulletin & Review, vol. 11, no. 5, pp. 876–881, 2004. @article{Franconeri2004, Several types of dynamic cues (e.g., abrupt onsets, motion) draw attention in visual search tasks even when they are irrelevant. Although these stimuli appear to capture attention in a stimulus-driven fashion, typical visual search tasks might induce an intentional strategy to focus on dynamic events. Because observers can only begin their search when the search display suddenly appears, they might orient to any dynamic display change (Folk, Remington, & Johnston, 1992; Gibson & Kelsey, 1998). If so, the appearance of capture might result from task-induced biases rather than from the properties of the stimulus. In fact, such biases can even create the appearance of stimulus-driven capture by stimuli that typically do not capture attention (Gibson & Kelsey, 1998). The possibility of task-induced, top-down biases plagues the interpretation of all previous studies claiming stimulus-driven attention capture by dynamic stimuli. In two experiments, we attempt to eliminate potential task-induced biases by removing any need to monitor for display changes. In the first experiment, search displays did not change on most trials. In the second experiment, although new search displays appeared on each trial, we ensured that observers never saw the changes, by making them during large saccades. In both cases, dynamic events still received search priority, suggesting that some dynamic stimuli capture attention in a stimulus-driven fashion. |
Alexandra Frischen; Steven P. Tipper Orienting attention via observed gaze shift evokes longer term inhibitory effects: Implications for social interactions, attention, and memory Journal Article In: Journal of Experimental Psychology: General, vol. 133, no. 4, pp. 516–533, 2004. @article{Frischen2004, One component of successful social interactions is joint attention. It is now well established that when a gaze shift is observed, the observer's attention rapidly and automatically orients to the same location in space. It is also established that such attention shifts via gaze are relatively transient and do not evoke subsequent inhibition processes. In contrast to this conventional view, the authors conducted a series of studies showing that these properties of gaze-cued attention shifts do not hold in all situations. The article demonstrates (a) gaze cuing over longer intervals than previously observed, (b) that these longer term effects can be inhibitory, and (c) that the longer term gaze cuing effects do not appear to be contingent on retrieval associated with a particular face identity. |
Richard Godijn; Jan Theeuwes The relationship between inhibition of return and saccade trajectory deviations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 30, no. 3, pp. 538–554, 2004. @article{Godijn2004, After presentation of a peripheral cue, a subsequent saccade to the cued location is delayed (inhibition of return: IOR). Furthermore, saccades typically deviate away from the cued location. The present study examined the relationship between these inhibitory effects. IOR and saccade trajectory deviations were found after central (endogenous) and peripheral (exogenous) cuing of attention, and both effects were larger with an onset cue than with a color singleton cue. However, a dissociation in time course was found between IOR and saccade trajectory deviations. Saccade trajectory deviations occurred at short delays between the cue and the saccade, but IOR was found at longer delays. A model is proposed in which IOR is caused by inhibition applied to a preoculomotor attentional map, whereas saccade trajectory deviations are caused by inhibition applied to the saccade map, in which the final stage of oculomotor programming takes place. |
Jillian H. Fecteau; Crystal Au; Irene T. Armstrong; Douglas P. Munoz Sensory biases produce alternation advantage found in sequential saccadic eye movement tasks Journal Article In: Experimental Brain Research, vol. 159, no. 1, pp. 84–91, 2004. @article{Fecteau2004, In two-choice reaction time tasks, participants respond faster when the correct decision switches across consecutive trials. This alternation advantage has been interpreted as the guessing strategies of participants. Because the participants expect that the correct decision will switch across consecutive trials, they respond faster when this expectation is confirmed and they respond more slowly when it is disconfirmed. In this study, we evaluated the veracity of this expectancy interpretation. After replicating a long-lasting alternation advantage in saccadic reaction times (Experiment 1), we show that reducing the participants' ability to guess with a challenging mental rotation task does not change the alternation advantage, which suggests that expectancy is not responsible for the effect (Experiment 2). Next, we used prosaccade and antisaccade responses to dissociate between the sensory and motor contributions of the alternation advantage (Experiment 3) and we found that the alternation advantage originates from sensory processing. The implications of these findings are discussed with regard to guessing strategies, sensory processing, and how these findings may relate to inhibition of return. |
Wieske van Zoest; Mieke Donk; Jan Theeuwes The role of stimulus-driven and goal-driven control in saccadic visual selection Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 30, no. 4, pp. 746–759, 2004. @article{Zoest2004, Four experiments were conducted to investigate the role of stimulus-driven and goal-driven control in saccadic eye movements. Participants were required to make a speeded saccade toward a predefined target presented concurrently with multiple nontargets and possibly 1 distractor. Target and distractor were either equally salient (Experiments 1 and 2) or not (Experiments 3 and 4). The results uniformly demonstrated that fast eye movements were completely stimulus driven, whereas slower eye movements were goal driven. These results are in line with neither a bottom-up account nor a top-down notion of visual selection. Instead, they indicate that visual selection is the outcome of 2 independent processes, one stimulus driven and the other goal driven, operating in different time windows. |
Michael Varsori; Angelica Perez-Fornos; Avinoam B. Safran; Andrew R. Whatham Development of a viewing strategy during adaptation to an artificial central scotoma Journal Article In: Vision Research, vol. 44, no. 23, pp. 2691–2705, 2004. @article{Varsori2004, Although many individuals with a central scotoma develop eccentric fixation, most often beneath or left of the scotoma, little is known about how they come to develop a particular viewing strategy. We investigated this by asking eight subjects with normal vision to read isolated letters, words and text passages while an artificial scotoma covered a central portion of the visual field. We quantified viewing strategy and analysed changes in their viewing behaviour over 8–10 sessions within a two-week period. Subjects read while either a horizontal (n = 4) or vertical bar scotoma (n = 4), 10° wide, covered the entire horizontal or vertical meridian of the stimulus field. For the horizontal scotoma group: (1) there was an increasing preference to use the inferior visual field for isolated letters/words and text passages, which was essentially complete within the test period; (2) the superior visual field was preferred when reading letters/words initially presented in upper visual space and the inferior visual field when reading letters/words initially presented in lower visual space; (3) in general, variation in viewing strategy according to stimulus position diminished over the sessions for all stimuli. For the vertical scotoma group: (1) two subjects used the left and right visual fields in approximately equal proportion to view isolated letters/words, one subject showed a weak preference to use the left visual field and one subject developed a strong preference for using the right visual field; (2) the text passages could be read with combined use of left and right visual fields in a specific manner; (3) the left visual field was preferred to view stimuli initially presented in left visual space while the right visual field was preferred for words initially presented in right visual space. This effect diminished across sessions. Overall, these findings indicate that (1) a specific viewing strategy can be developed through as little as 5 hours of reading experience without guided training; (2) two distinctly separate retinal areas can be used in an integrated manner during reading; (3) stimulus position in visual space can influence viewing strategy; (4) in general, reading encourages a preference for the inferior over the superior visual field, but not the left over the right visual field. Letter/word/text recognition and reading speeds increased progressively across sessions, even after scotoma lateralisation appeared stabilised, suggesting that multiple mechanisms are involved in adaptive changes. |
Yuan-Chi Tseng; Chiang-Shan Ray Li Oculomotor correlates of context-guided learning in visual search Journal Article In: Perception and Psychophysics, vol. 66, no. 8, pp. 1363–1378, 2004. @article{Tseng2004, Previous studies have shown that context-facilitated visual search can occur through implicit learning. In the present study, we have explored its oculomotor correlates as a step toward unraveling the mechanisms that underlie such learning. Specifically, we examined a number of oculomotor parameters that might accompany the learning of context-guided search. The results showed that a decrease in the number of saccades occurred along with a fall in search time. Furthermore, we identified an effective search period in which each saccade monotonically brought the fixation closer to the target. Most important, the speed with which eye fixation approached the target did not change as a result of learning. We discuss the general implications of these results for visual search. |
Geoffrey Underwood; Lorraine Jebbett; Katharine Roberts Inspecting pictures for information to verify a sentence: Eye movements in general encoding and in focused search Journal Article In: Quarterly Journal of Experimental Psychology, vol. 57, no. 1, pp. 165–182, 2004. @article{Underwood2004, When we see combinations of text and graphics, such as photographs and their captions in printed media, how do we compare the information in the two components? Two experiments used a sentence-picture verification task in which statements about photographs of natural scenes were read in order to make a true/false decision about the validity of the sentence, and in which eye movements were recorded. In experiment 1, the sentence and the picture were presented concurrently, and objects and words could be inspected in any order. In experiment 2, the two components were presented one after the other, either picture first or sentence first. Fixation durations on pictures were characteristically longer than those on sentences in both experiments, and fixations on sentences varied according to whether they were being encoded as abstract propositions or as coreferents of objects depicted in a previously inspected picture. The decision time data present a difficulty for existing models of sentence verification tasks, with an inconsistent pattern of differences between true and false trials. |
Wolf Schwarz; Inge M. Keus Moving the eyes along the mental number line: Comparing SNARC effects with saccadic and manual responses Journal Article In: Perception and Psychophysics, vol. 66, no. 4, pp. 651–664, 2004. @article{Schwarz2004, Bimanual parity judgments about numerically small (large) digits are faster with the left (right) hand, even though parity is unrelated to numerical magnitude per se (the SNARC effect; Dehaene, Bossini, & Giraux, 1993). According to one model, this effect reflects a space-related representation of numerical magnitudes (mental number line) with a genuine left-to-right orientation. Alternatively, it may simply reflect an overlearned motor association between numbers and manual responses (as, for example, on typewriters or computer keyboards), in which case it should be weaker or absent with effectors whose horizontal response component is less systematically associated with individual numbers. Two experiments involving comparisons of saccadic and manual parity judgment tasks clearly support the first view; they also establish a vertical SNARC effect, suggesting that our magnitude representation resembles a number map, rather than a number line. |
Jan Theeuwes; Richard Godijn Inhibition of return and oculomotor interference Journal Article In: Vision Research, vol. 44, pp. 1485–1492, 2004. @article{Theeuwes2004, The present study shows that inhibition of return reduces competition for selection within the oculomotor system. We examined the effect of a distractor when it was presented at an inhibited location (IOR). The results show that due to IOR distractors cause less interference. This was evident in all three measures. First, there was less oculomotor capture when a distractor was presented at an inhibited location. Second, the saccade latency to the target was shorter when a distractor appeared at an inhibited location than when it appeared at a non-inhibited location. Third, there was less curvature towards the distractor when it was presented at an inhibited location relative to a non-inhibited location. The observation that there is less interference for a distractor presented at an inhibited location suggests that IOR reduces the exogenous activation of the distractor within the saccade map. |
Jan Theeuwes; Richard Godijn; Jay Pratt A new estimation of the duration of attentional dwell time Journal Article In: Psychonomic Bulletin & Review, vol. 11, no. 1, pp. 60–64, 2004. @article{Theeuwes2004a, How rapidly can attention move from one object to the next? Previous studies in which the dwell time paradigm was used have estimated attentional switch times of 200-500 msec, results incompatible with the search rate estimates of 25-50 msec shown in numerous visual search studies. It has been argued that dwell times are so long in the dwell time paradigm because the attentional shifts measured are unlike those used in visual search. In the present experiment, a variation of a visual search task was used, in which serial endogenous (volitional) deployments of attention were measured directly by means of a probe reaction time task. The experiment revealed a dwell time of about 250 msec, consistent with the faster estimates from other dwell time studies. This result suggests that endogenous shifts of attention may be relatively slow and that the faster attentional shifts estimated from visual search tasks may be due to the involvement of bottom-up processes. |
John T. Serences; Jens Schwarzbach; Susan M. Courtney; Xavier Golay; Steven Yantis Control of object-based attention in human cortex Journal Article In: Cerebral Cortex, vol. 14, no. 12, pp. 1346–1357, 2004. @article{Serences2004, Visual attention is a mechanism by which observers select relevant or important information from the current visual array. Previous investigations have focused primarily on the ability to select a region of space for further visual analysis. These studies have revealed a distributed frontoparietal circuit that is responsible for the control of spatial attention. However, vision must ultimately represent objects and in real scenes objects often overlap spatially; thus attention must be capable of selecting objects and their properties nonspatially. Little is known about the neural basis of object-based attentional control. In two experiments, human observers shifted attention between spatially superimposed faces and houses. Event-related functional magnetic resonance imaging (fMRI) revealed attentional modulation of activity in face- and house-selective cortical regions. Posterior parietal and frontal regions were transiently active when attention was shifted between spatially superimposed perceptual objects. The timecourse of activity provides insight into the functional role that these brain regions play in attentional control processes. |
Bhavin R. Sheth; Shinsuke Shimojo Sound-aided recovery from and persistence against visual filling-in Journal Article In: Vision Research, vol. 44, no. 16, pp. 1907–1917, 2004. @article{Sheth2004, Disappearance phenomena, in which salient visual stimuli do not register consciously, have been known to occur. Recovery from such phenomena typically occurs through change in some visual attribute, such as increase in luminance contrast or stimulus duration. Thus far, there have been no reports of cross-modal modulation of disappearance phenomena. In particular, what effect a cross-modal attentional cue has on sensory suppression is unknown. Here, we show that an adapted, flickered visual target that is synchronous with a brief sound appears more vivid than a similarly adapted, otherwise identical, visual target that is offset in time by more than 200 ms from the auditory cue. We argue that the brief auditory stimuli momentarily boost the concurrent signal of the adapted visual stimulus at a site downstream of the visual adaptation, thus causing the transient recovery from the visual adaptation. Repetitive visual cues cause significantly less recovery from visual adaptation than repetitive auditory cues, implying that there are functions a cross-modal cue can perform that a cue of the same modality cannot. Moreover repetitive auditory cues selectively prevent synchronous visual targets from undergoing visual adaptation. Ours is the first report of cross-modal modulation of a disappearance phenomenon. |
Matthew S. Peterson; Walter Boot; Arthur F. Kramer; Jason S. McCarley Landmarks help guide attention during visual search Journal Article In: Spatial Vision, vol. 17, no. 4-5, pp. 497–510, 2004. @article{Peterson2004, Using a novel visual search paradigm McCarley et al. (2003) concluded that the oculomotor system keeps a history of 3-4 previously attended objects. However, their displays were visually sparse, denying participants structural information which might be used during normal search. This might have underestimated memory capacity. To examine this possibility, we included landmarks in the same search paradigm. Previously examined items were re-examined less frequently when landmarks were present compared to when they were absent. Results indicate that objects in the environment that share no features with search items are used as external support to aid memory in guiding visual search. |
Matthew S. Peterson; Arthur F. Kramer; David E. Irwin Covert shifts of attention precede involuntary eye movements Journal Article In: Perception and Psychophysics, vol. 66, no. 3, pp. 398–405, 2004. @article{Peterson2004a, There is considerable evidence that covert visual attention precedes voluntary eye movements to an intended location. What happens to covert attention when an involuntary saccadic eye movement is made? In agreement with other researchers, we found that attention and voluntary eye movements are tightly coupled in such a way that attention always shifts to the intended location before the eyes begin to move. However, we found that when an involuntary eye movement is made, attention first precedes the eyes to the unintended location and then switches to the intended location, with the eyes following this pattern a short time later. These results support the notion that attention and saccade programming are tightly coupled. |
Gijs Plomp; Chie Nakatani; Valérie Bonnardel; Cees Van Leeuwen Amodal completion as reflected by gaze durations Journal Article In: Perception, vol. 33, no. 10, pp. 1185–1200, 2004. @article{Plomp2004, In two experiments amodal completion of partly occluded shapes was investigated by recording eye movements in a directed visual-search task. Participants searched arrays of shapes in a prescribed order for target figures that could partly be occluded. Longer gaze durations were found on occlusion patterns than on truncated control patterns for targets but not for non-targets. This effect of occlusion was restricted to a subset of the stimuli. A second experiment was carried out to establish whether this restriction resulted from structural properties of the stimuli or their familiarity. Occlusion patterns in this experiment were ambiguous with respect to structure, allowing both local and global completions. One of the completions was always less familiar than the other. The results showed longer gazes only for the less familiar completions, irrespective of whether they were local or global. |
A. Panagopoulos; Michael W. von Grünau; C. Galera Attentive mechanisms in visual search Journal Article In: Spatial Vision, vol. 17, no. 4-5, pp. 353–371, 2004. @article{Panagopoulos2004, Selective attention can be deployed to a restricted region in space or to specific objects. Many properties of this attentional window or spotlight are not well understood. In the present study, we examined the question whether the putative shape of the attentional spotlight can be determined by endogenous cueing within a visual search paradigm. Participants searched for a target among distractors, which were arranged within a vertical or horizontal rectangle. The shape of this rectangle was cued endogenously in a valid or invalid way. Response times (RTs) to correct identification of target orientation were recorded. In Experiment 1, the difference between valid and invalid RTs demonstrated that cueing resulted in elongated attentional areas. This was true only for a group of experienced psychophysical participants, whereas a group of inexperienced participants were not able to use cueing in this way. In Experiment 2, the line motion illusion was used to examine the spatial properties of the attended area. The results confirmed for both experienced and inexperienced participants that attention was confined to the cued elongated area only. We present converging evidence for an attentional spotlight whose shape can be adjusted flexibly by appropriate endogenous cueing. |
Casimir J. H. Ludwig; Iain D. Gilchrist; Eugene McSorley The influence of spatial frequency and contrast on saccade latencies Journal Article In: Vision Research, vol. 44, no. 22, pp. 2597–2604, 2004. @article{Ludwig2004, We characterised the impact of spatial frequency and contrast on saccade latencies to single Gabor patches. Saccade latencies decreased as a function of contrast, and increased with spatial frequency. The observed latency variations are qualitatively similar to those observed for manual reaction times. For single target detection, our findings highlight the similarity in the visual processes that support both saccadic and manual responses. |
Fumiko Maeda; Ryota Kanai; Shinsuke Shimojo Changing pitch induced visual motion illusion Journal Article In: Current Biology, vol. 14, no. 23, pp. 1–2, 2004. @article{Maeda2004, We often associate moving objects and changing pitch, e.g., falling stones with descending, and launching rockets with ascending pitch, even when these sounds do not happen in the real world. The reason for this is unknown. Here we report an illusion in which auditory stimuli with no apparent spatial and motion information [1–3] alter human visual motion perception. |
Paresh Malhotra; Sabira K. Mannan; Jon Driver; Masud Husain Impaired spatial working memory: One component of the visual neglect syndrome? Journal Article In: Cortex, vol. 40, no. 4-5, pp. 667–676, 2004. @article{Malhotra2004, Both impaired spatial working memory (SWM) and unilateral neglect may follow damage to the right parietal lobe. We propose that impaired SWM can exacerbate visual neglect, due to failures in remembering locations that have already been searched. When combined with an attentional bias to the ipsilesional right side, such a SWM impairment should induce recursive search of ipsilesional locations. Here we studied a left neglect patient with a right temporoparietal haemorrhage. On a nonlateralised, purely vertical SWM task, he was impaired in retaining spatial locations. In a visual search task, his eye position was monitored while his spatial memory was probed. He recursively searched through right stimuli, re-fixating previously inspected items, and critically treated them as if they were new discoveries, consistent with the SWM deficit. When his recovery was tracked over several months, his SWM deficit and left neglect showed concurrent improvements. We argue that impaired SWM may be one important component of the visual neglect syndrome. |
Andrew P. Bayliss; Giuseppe Di Pellegrino; Steven P. Tipper Orienting of attention via observed eye gaze is head-centred Journal Article In: Cognition, vol. 94, no. 1, pp. B1–B10, 2004. @article{Bayliss2004, Observing averted eye gaze results in the automatic allocation of attention to the gazed-at location. The role of the orientation of the face that produces the gaze cue was investigated. The eyes in the face could look left or right in a head-centred frame, but the face itself could be oriented 90 degrees clockwise or anticlockwise such that the eyes were gazing up or down. Significant cueing effects to targets presented to the left or right of the screen were found in these head orientation conditions. This suggests that attention was directed to the side to which the eyes would have been looking towards, had the face been presented upright. This finding provides evidence that head orientation can affect gaze following, even when the head orientation alone is not a social cue. It also shows that the mechanism responsible for the allocation of attention following a gaze cue can be influenced by intrinsic object-based (i.e. head-centred) properties of the task-irrelevant cue. |
Nabil Ouerhani; Roman Von Wartburg; Heinz Hügli; René M. Müri Empirical validation of the saliency-based model of visual attention Journal Article In: Electronic Letters on Computer Vision and Image Analysis, vol. 3, no. 1, pp. 13–24, 2004. @article{Ouerhani2004, Visual attention is the ability of the human vision system to detect salient parts of the scene, on which higher vision tasks, such as recognition, can focus. In human vision, it is believed that visual attention is intimately linked to the eye movements and that the fixation points correspond to the location of the salient scene parts. In computer vision, the paradigm of visual attention has been widely investigated and a saliency-based model of visual attention is now available that is commonly accepted and used in the field, despite the fact that its biological grounding has not been fully assessed. This work proposes a new method for quantitatively assessing the plausibility of this model by comparing its performance with human behavior. The basic idea is to compare the map of attention - the saliency map - produced by the computational model with a fixation density map derived from eye movement experiments. This human attention map can be constructed as an integral of single impulses located at the positions of the successive fixation points. The resulting map has the same format as the computer-generated map, and can easily be compared by qualitative and quantitative methods. Some illustrative examples using a set of natural and synthetic color images show the potential of the validation method to assess the plausibility of the attention model. |
Leigh A. Mrotek; Martha Flanders; John F. Soechting Interception of targets using brief directional cues Journal Article In: Experimental Brain Research, vol. 156, no. 1, pp. 94–103, 2004. @article{Mrotek2004, There are time delays in visuomanual and oculomotor pathways, and some of these time delays may be due to the finite time required to process visual motion signals and to extract accurate information about the speed and direction of the motion. The present experiments were designed to ascertain the time required to obtain a reliable estimate of the direction of target motion. Subjects were asked to indicate the final direction of a moving target, which abruptly changed direction and shortly thereafter disappeared, by pointing to its expected emergence at the boundary of an occlusion. Subjects made small but consistent errors that overestimated the target's change in direction. These errors depended little on the amount of time the target was visible (ranging from 50 to 400 ms) after it changed direction. Pointing direction was strongly correlated with gaze, which was dominated by a saccade initiated shortly after the target changed direction. The pointing errors were explained by the fact that the saccade always intercepted the (occluded) target, but then continued in the same direction toward the boundary of the occlusion. The analysis reveals that target direction was estimated accurately even at the shortest viewing time. |
Chie Nakatani; Alexander Pollatsek An eye movement analysis of "mental rotation" of simple scenes Journal Article In: Perception and Psychophysics, vol. 66, no. 7, pp. 1227–1245, 2004. @article{Nakatani2004, Participants saw a standard scene of three objects on a desktop and then judged whether a comparison scene was either the same, except for the viewpoint of the scene, or different, when one or more of the objects either exchanged places or were rotated around their center. As in Nakatani, Pollatsek, and Johnson (2002), judgment times were longer when the rotation angles of the comparison scene increased, and the size of the rotation effect varied for different axes and was larger for same judgments than for different judgments. A second experiment, which included trials without the desktop, indicated that removing the desktop frame of reference mainly affected the y-axis rotation conditions (the axis going vertically through the desktop plane). In addition, eye movement analyses indicated that the process was far more than a simple analogue rotation of the standard scene. The total response latency was divided into three components: the initial eye movement latency, the first-pass time, and the second-pass time. The only indication of a rotation effect in the time to execute the first two components was for z-axis (plane of sight) rotations. Thus, for x- and y-axis rotations, rotation effects occurred only in the probability of there being a second pass and the time to execute it. The data are inconsistent either with an initial rotation of the memory representation of the standard scene to the orientation of the comparison scene or with a holistic alignment of the comparison scene prior to comparing it with the memory representation of the standard scene. 
Indeed, the eye movement analysis suggests that little of the increased response time for rotated comparison scenes is due to something like a time-consuming analogue process but is, instead, due to more comparisons on individual objects being made (possibly more double checking). |
Risto Näsänen; Helena Ojanpää How many faces can be processed during a single eye fixation? Journal Article In: Perception, vol. 33, no. 1, pp. 67–77, 2004. @article{Naesaenen2004, The purpose of our study was to estimate the perceptual span for facial information: how many faces can be processed during a single eye fixation. We used a visual-search task, in which the targets and distractors were facial photographs. The task of the observer was to search for and identify a target face in an array of faces. We measured the time needed for one search (threshold search time) by using a multiple-alternative staircase method. The threshold represents the duration of stimulus presentation at which the probability of correct responses was 79%. The array size was varied from 2 x 2 to 8 x 8 faces. Simultaneously with the performance measurements we measured eye movements with a video eye tracker. We found that threshold search time increased with increasing set size nearly linearly. The number of fixations also increased linearly from unity at the smallest set size to about fifteen at the largest set size. The finding that a 2 x 2 array could be searched within a single fixation gave an estimate of 4 faces for the perceptual span. If, on average, only half of the elements had to be scanned to find the target, the 15 fixations at the largest set size (8 x 8) gave another estimate of 2.13 faces per fixation. The mean fixation duration was around 200 ms. Thus, the results suggest that 2-4 faces can be processed during one fixation of about 200 ms. |
Desmond C. Adler; Tony H. Ko; Paul R. Herz; James G. Fujimoto Optical coherence tomography contrast enhancement using spectroscopic analysis with spectral autocorrelation Journal Article In: Optics Express, vol. 12, no. 22, pp. 5487, 2004. @article{Adler2004, The allocation of overt visual attention while viewing photographs of natural scenes is commonly thought to involve both bottom-up feature cues, such as luminance contrast, and top-down factors such as behavioural relevance and scene understanding. Profiting from the fact that light sources are highly visible but uninformative in visual scenes, we develop a mixture model approach that estimates the relative contribution of various low and high-level factors to patterns of eye movements whilst viewing natural scenes containing light sources. Low-level salience accounts predicted fixations at luminance contrast and at lights, whereas these factors played only a minor role in the observed human fixations. Conversely, human data were mostly explicable in terms of a central bias and a foreground preference. Moreover, observers were more likely to look near lights rather than directly at them, an effect that cannot be explained by low-level stimulus factors such as luminance or contrast. These and other results support the idea that the visual system neglects highly visible cues in favour of less visible object information. Mixture modelling might be a good way forward in understanding visual scene exploration, since it makes it possible to measure the extent that low-level or high-level cues act as drivers of eye movements. |
Christian F. Altmann; Arne Deubelius; Zoe Kourtzi Shape saliency modulates contextual processing in the human lateral occipital complex Journal Article In: Journal of Cognitive Neuroscience, vol. 16, no. 5, pp. 794–804, 2004. @article{Altmann2004, Visual context influences our perception of target objects in natural scenes. However, little is known about the analysis of context information and its role in shape perception in the human brain. We investigated whether the human lateral occipital complex (LOC), known to be involved in the visual analysis of shapes, also processes information about the context of shapes within cluttered scenes. We employed an fMRI adaptation paradigm in which fMRI responses are lower for two identical than for two different stimuli presented consecutively. The stimuli consisted of closed target contours defined by aligned Gabor elements embedded in a background of randomly oriented Gabors. We measured fMRI adaptation in the LOC across changes in the context of the target shapes by manipulating the position and orientation of the background elements. No adaptation was observed across context changes when the background elements were presented in the same plane as the target elements. However, adaptation was observed when the grouping of the target elements was enhanced in a bottom-up (i.e., grouping by disparity or motion) or top-down (i.e., shape priming) manner and thus the saliency of the target shape increased. These findings suggest that the LOC processes information not only about shapes, but also about their context. This processing of context information in the LOC is modulated by figure-ground segmentation and grouping processes. That is, neural populations in the LOC encode context information when relevant to the perception of target shapes, but represent salient targets independent of context changes. |
2003 |
Dimitris Agrafiotis; Nishan Canagarajah; David R. Bull; Matthew Dye Perceptually optimised sign language video coding based on eye tracking analysis Journal Article In: Electronics Letters, vol. 39, pp. 1–2, 2003. @article{Agrafiotis2003, A perceptually optimised approach to sign language video coding is presented. The proposed approach is based on the results (included) of an eye-tracking study of the visual attention of sign language viewers. Results show reductions in bit rate of over 30% with very good subjective quality. |
Helena Ojanpää; Risto Näsänen Utilisation of spatial frequency information in face search Journal Article In: Vision Research, vol. 43, no. 24, pp. 2505–2515, 2003. @article{Ojanpaeae2003a, In previous studies the utilisation of spatial frequency information in face perception has been investigated by using static recognition tasks. In this study we used a visual search task, which requires eye movements and fast identification of previously learned facial photographs. Using Fourier phase randomisation, spatial information was selectively removed without changing the amplitude spectrum of the image. Fourier phase was randomised within one-octave wide bands of nine different centre spatial frequencies (2-32 c/face width, 0.63-10.1 c/deg). In a control condition no randomisation was used. All stimuli had similar contrast. Search times and eye movements during the search were measured. The removal of spatial information by phase randomisation at medium spatial frequencies resulted in a considerable increase of search times. In the main experiment the maximum of the search times occurred between 8 and 11 c/face width. The number of eye fixations behaved similarly. In an additional experiment with a threefold viewing distance the search times increased and the maximum of the search times shifted slightly to lower object spatial frequencies (5.6-8 c/face width). This suggests that the band of spatial frequencies used in face search is not completely scale invariant. The results show that information most important to face search is located at a limited band of mid spatial frequencies. This is consistent with earlier studies, in which non-dynamical face recognition tasks and low-contrast stimuli have been used. |
Dominic J. Mort; Richard J. Perry; Sabira K. Mannan; Timothy L. Hodgson; Elaine Anderson; Rebecca Quest; Donald McRobbie; Alan McBride; Masud Husain; Christopher Kennard Differential cortical activation during voluntary and reflexive saccades in man Journal Article In: NeuroImage, vol. 18, no. 2, pp. 231–246, 2003. @article{Mort2003, A saccade involves both a step in eye position and an obligatory shift in spatial attention. The traditional division of saccades into two types, the "reflexive" saccade made in response to an exogenous stimulus change in the visual periphery and the "voluntary" saccade based on an endogenous judgement to move gaze, is supported by lines of evidence which include the longer onset latency of the latter and the differential effects of lesions in humans and primates on each. It has been supposed that differences between the two types of saccade derive from differences in how the spatial attention shifts involved in each are processed. However, while functional imaging studies have affirmed the close link between saccades and attentional shifts by showing they activate overlapping cortical networks, attempts to contrast exogenous with endogenous ("covert") attentional shifts directly have not revealed separate patterns of cortical activation. We took the "overt" approach, contrasting whole reflexive and voluntary saccades using event-related fMRI. This demonstrated that, relative to reflexive saccades, voluntary saccades produced greater activation within the frontal eye fields and the saccade-related area of the intraparietal sulci. The reverse contrast showed reflexive saccades to be associated with relative activation of the angular gyrus of the inferior parietal lobule, strongest in the right hemisphere. The frequent involvement of the right inferior parietal lobule in lesions causing hemispatial neglect has long implicated this parietal region in an important, though as yet uncertain, role in the awareness and exploration of space. 
This is the first study to demonstrate preferential activation of an area in its posterior part, the right angular gyrus, during production of exogenously triggered rather than endogenously generated saccades, a finding which we propose is consistent with an important role for the angular gyrus in exogenous saccadic orienting. |
Karen Mortier; Mieke Donk; Jan Theeuwes Attentional capture within and between objects Journal Article In: Acta Psychologica, vol. 113, pp. 133–145, 2003. @article{Mortier2003, The present study addressed the question whether attentional capture by abrupt onsets is affected by object-like properties of the stimulus field. Observers searched for a target circle at one of four ends of two solid rectangles. In the focused attention condition the location of the upcoming target was cued by means of a central arrowhead, whereas in the divided attention condition, the target location was not cued. Irrelevant abrupt onsets could appear either within the attended or within the non-attended object. The results showed that in the focused attention condition, onsets ceased to capture attention irrespective of whether the onset appeared within an attended object or within a non-attended object. |
I. T. Armstrong; Douglas P. Munoz Inhibitory control of eye movements during oculomotor countermanding in adults with attention-deficit hyperactivity disorder Journal Article In: Experimental Brain Research, vol. 152, no. 4, pp. 444–452, 2003. @article{Armstrong2003a, Children with attention-deficit hyperactivity disorder (ADHD) are impulsive, and that impulsiveness can be measured using a countermanding task. Although the overt behaviors of ADHD attenuate with age, it is not clear how well impulsiveness is controlled in adults with ADHD. We tested ADHD adults with an oculomotor countermanding task. The task included two conditions: on 75% of the trials, participants viewed a central fixation marker and then looked to an eccentric target that appeared simultaneously with the disappearance of the fixation marker; on 25% of the trials, a signal was presented at variable delays after target appearance. The signal instructed subjects to stop, or countermand, an eye movement to the target. A correct movement in this case would be to hold gaze at the central fixation location. We expected ADHD participants to be impulsive in their countermanding performance. Additionally, we expected that a visual stop signal at the central fixation location would assist oculomotor countermanding because the signal is presented in the "stop" location, at fixation. To test whether a central stop signal positively biased countermanding, we used three types of stop signal to instruct the stop: a central visual marker, a peripheral visual signal, and a non-localized sound. All subjects performed best with the central visual stop signal. Subjects with ADHD were less able to countermand eye movements and were influenced more negatively by the non-central signals. Oculomotor countermanding may be useful for quantifying impulsive dysfunction in adults with ADHD especially if a non-central stop signal is applied. |
Monika Harvey; Iain D. Gilchrist; Bettina Olk; Keith Muir Eye-movement patterns do not mediate size distortion effects in hemispatial neglect: Looking without seeing Journal Article In: Neuropsychologia, vol. 41, no. 8, pp. 1114–1121, 2003. @article{Harvey2003, Over the last decade a range of studies have shown that some patients with hemispatial neglect subjectively underestimate the size of objects presented in their contralesional hemispace. Recently, it has been suggested that the effect is simply due to either hemianopia [Brain 124 (2001) 527], or the combination of neglect and hemianopia [Neurology 52 (1999) 1845]. In the current study we asked right hemisphere lesioned patients with and without neglect and hemianopia as well as healthy controls to judge either two horizontal or vertical lines presented simultaneously in right and left hemispace and monitored their eye movements. Three out of the six patients showed the predicted size distortion effect for horizontal lines. We found no evidence that the effect was mediated by eye movements. The two neglect patients who showed the strongest left side underestimation showed symmetrical (left, right) scanning of the lines both in terms of number of fixations and fixation time, yet they still failed to judge the relative size veridically. In addition, we did not find strong evidence for a link with hemianopia. We therefore propose that the effect reflects a computational/representational failure of processing for horizontal extent. |
Katia Duscherer; Daniel Holender Semantic priming from flanker words: Some limitations to automaticity Journal Article In: Psychologica Belgica, vol. 43, no. 3, pp. 153–179, 2003. @article{Duscherer2003, We explore under which conditions words flanking a centrally presented digit in the prime display can elicit semantic priming on the lexical decision to a subsequent letter string appearing at fixation about 1 sec later. No significant priming is found when the prime display requires an immediate odd/even classification of a digit (Experiment 1), a delayed recall of a digit (Experiment 3), or the detection of an infrequent change from the digit 4 to the letter A (Experiment 4). It is only in Experiment 2, in which nothing is presented at fixation during the prime display in positive lexical decision trials, that a positive semantic priming effect is found. These results are discussed in the framework of quantitative and qualitative limitations to processing automaticity. |
Elizabeth Gilman; Geoffrey Underwood Restricting the field of view to investigate the perceptual spans of pianists Journal Article In: Visual Cognition, vol. 10, no. 2, pp. 201–232, 2003. @article{Gilman2003, An experiment is reported, which was designed to determine how the perceptual span of pianists varies with developing skill and cognitive load. Eye movements were recorded as musical phrases were presented through a gaze-contingent window, which contained one beat, two beats, or four beats. In a control condition, the music was presented without a window. The pianists were required to perform three tasks of varying cognitive load: An error-detection task (low load); a sight-reading task (medium load); and a transposition task (high load). Measures taken comprised fixation duration, fixation frequency, saccade length, fixation locations, performance duration, note duration, position of first error, number of errors, and eye–hand span. The results indicate that good and poor sight-readers do not differ in terms of perceptual span. However, good sight-readers were found to have larger eye–hand spans. Furthermore, the results show that increasing cognitive load decreases eye–hand span, but has little effect on perceptual span. |
Richard Godijn; Jan Theeuwes Parallel allocation of attention prior to the execution of saccade sequences Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 29, no. 5, pp. 882–896, 2003. @article{Godijn2003, In a series of 5 experiments, the allocation of attention prior to the execution of saccade sequences was examined by using a dual-task paradigm. In the primary task, participants were required to execute a sequence of 2 endogenous saccades. The secondary task was a forced-choice letter identification task. During the programming of the saccade sequences, letters were briefly presented at the saccade goals and at no-saccade locations. The results showed that performance was better for letters presented at any of the saccade goals than for letters presented at any of the no-saccade locations. The results support a spatial model that assumes that prior to the execution of a saccade sequence, attention is allocated in parallel to all saccade goals. |
Dirk Kerzel Mental extrapolation of target position is strongest with weak motion signals and motor responses Journal Article In: Vision Research, vol. 43, no. 25, pp. 2623–2635, 2003. @article{Kerzel2003, Some accounts hold that the position of moving objects is extrapolated either in visual perception or visual short-term memory ("representational momentum"). However, some studies did not find forward displacement of the final position when smooth motion was used, whereas reliable displacement was observed with implied motion. To resolve this conflict, the frequency of position changes was varied to sample motion types between the extreme cases of implied and smooth motion. A continuous function relating frequency of target change and displacement was found: Displacement increased when the frequency of position changes was reduced. Further, the response mode was varied. Probe judgments produced less forward displacement than motor judgments such as mouse or natural pointing movements. Also, localization judgments were susceptible to motion context, but not to variations of probe shape or expectancy about trajectory length. It is suggested that forward displacement results from the extrapolation of the next step in the observed motion sequence. |
Dirk Kerzel; Karl R. Gegenfurtner Neuronal processing delays are compensated in the sensorimotor branch of the visual system Journal Article In: Current Biology, vol. 13, no. 22, pp. 1975–1978, 2003. @article{Kerzel2003a, Moving objects change their position until signals from the photoreceptors arrive in the visual cortex. Nonetheless, motor responses to moving objects are accurate and do not lag behind the real-world position. The questions are how and where neural delays are compensated for. It was suggested that compensation is achieved within the visual system by extrapolating the position of moving objects. A visual illusion supports this idea: when a briefly flashed object is presented in the same position as a moving object, it appears to lag behind. However, moving objects do not appear ahead of their final or reversal points. We investigated a situation where participants localized the final position of a moving stimulus. Visual perception and short-term memory of the final target position were accurate, but reaching movements were directed toward future positions of the target beyond the vanishing point. Our results show that neuronal latencies are not compensated for at early stages of visual processing, but at a late stage when retinotopic information is transformed into egocentric space used for motor responses. The sensorimotor system extrapolates the position of moving targets to allow for precise localization of moving targets despite neuronal latencies. |
Masud Husain; Andrew Parton; Timothy L. Hodgson; Dominic J. Mort; Geraint Rees Self-control during response conflict by human supplementary eye field Journal Article In: Nature Neuroscience, vol. 6, no. 2, pp. 117–118, 2003. @article{Husain2003, Although medial frontal cortex is considered to have an important role in planning behavior and monitoring errors, the specific contributions of regions within it are poorly understood. Here we report that a patient with a highly selective lesion of a medial frontal motor area—the supplementary eye field (SEF)—lacked control in changing the direction of his eye movement from either a previous intention or behavioral 'set'; however, he monitored his errors well and corrected them quickly. The results indicate a key new role for the SEF and show that medial frontal mechanisms for self-control of action may be highly specific, with the SEF critically involved in implementing oculomotor control during response conflict, but not in error monitoring. |
Toshihide Imaruoka; Toshio Yanagida; Satoru Miyauchi Attentional set for external information activates the right intraparietal area Journal Article In: Cognitive Brain Research, vol. 16, no. 2, pp. 199–209, 2003. @article{Imaruoka2003, Visual attention can be allocated to a location or an object by using two different types of information: internal information and external information. The results of recent psychological studies [Bacon and Egeth, Percept. Psychophys. 55 (1994) 485] suggest that an observer's attentional set determines how these two kinds of information are used in visual tasks. In this study, we measured brain activities during two modes of visual search; one is the feature search mode, in which an attentional set for knowledge of a target item (internal information) is used, and the other is the singleton detection mode, in which an attentional set for oddness in the visual scene (external information) is used. We found extended activation in the frontal and parietal areas for both search modes. In addition, a direct comparison of brain activity during the singleton detection mode and the feature search mode revealed that the areas around the right intraparietal sulcus were more involved in the attentional set for oddness. These results suggest that the human right intraparietal cortex is related to the attentional set for external information. |
Junji Ito; Andrey R. Nikolaev; Marjolein Luman; Maartje F. Aukes; Chie Nakatani; Cees Van Leeuwen Perceptual switching, eye movements, and the bus paradox Journal Article In: Perception, vol. 32, no. 6, pp. 681–698, 2003. @article{Ito2003, According to a widely cited finding by Ellis and Stark (1978 Perception 7 575-581), the duration of eye fixations is longer at the instant of perceptual reversal of an ambiguous figure than before or after the reversal. However, long fixations are more likely to include samples of an independent random event than are short fixations. This sampling bias would produce the pattern of results also when no correlation exists between fixation duration and perceptual reversals. When an appropriate correction is applied to the measurement of fixation durations, the effect disappears. In fact, there are fewer actual button-presses during the long intervals than would be expected by chance. Moving-window analyses performed on eye-fixation data reveal that no unique eye event is associated with switching behaviour. However, several indicators, such as blink frequency, saccade frequency, and the direction of the saccade, are each differentially sensitive to perceptual and response-related aspects of the switching process. The time course of these indicators depicts switching behaviour as a process of cascaded stages. |
Charissa R. Lansing; George W. McConkie Word identification and eye fixation locations in visual and visual-plus-auditory presentations of spoken sentences Journal Article In: Perception and Psychophysics, vol. 65, no. 4, pp. 536–552, 2003. @article{Lansing2003, In this study, we investigated where people look on talkers' faces as they try to understand what is being said. Sixteen young adults with normal hearing and demonstrated average speechreading proficiency were evaluated under two modality presentation conditions: vision only versus vision plus low-intensity sound. They were scored for the number of words correctly identified from 80 unconnected sentences spoken by two talkers. The results showed two competing tendencies: an eye primacy effect that draws the gaze to the talker's eyes during silence and an information source attraction effect that draws the gaze to the talker's mouth during speech periods. Dynamic shifts occur between eyes and mouth prior to speech onset and following the offset of speech, and saccades tend to be suppressed during speech periods. The degree to which the gaze is drawn to the mouth during speech and the degree to which saccadic activity is suppressed depend on the difficulty of the speech identification task. Under the most difficult modality presentation condition, vision only, accuracy was related to average sentence difficulty and individual proficiency in visual speech perception, but not to the proportion of gaze time directed toward the talker's mouth or toward other parts of the talker's face. |
Yuki Kamide; Gerry T. M. Altmann; Sarah L. Haywood The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements Journal Article In: Journal of Memory and Language, vol. 49, no. 1, pp. 133–156, 2003. @article{Kamide2003, Three eye-tracking experiments using the 'visual-world' paradigm are described that explore the basis by which thematic dependencies can be evaluated in advance of linguistic input that unambiguously signals those dependencies. Following Altmann and Kamide (1999), who found that selectional information conveyed by a verb can be used to anticipate an upcoming Theme, we attempt to draw here a more precise picture of the basis for such anticipatory processing. Our data from two studies in English and one in Japanese suggest that (a) verb-based information is not limited to anticipating the immediately following (grammatical) object, but can also anticipate later occurring objects (e.g., Goals), (b) in combination with information conveyed by the verb, a pre-verbal argument (Agent) can constrain the anticipation of a subsequent Theme, and (c) in a head-final construction such as that typically found in Japanese, both syntactic and semantic constraints extracted from pre-verbal arguments can enable the anticipation, in effect, of a further forthcoming argument in the absence of their head (the verb). We suggest that such processing is the hallmark of an incremental processor that is able to draw on different sources of information (some non-linguistic) at the earliest possible opportunity to establish the fullest possible interpretation of the input at each moment in time. |
Kathryn Bock; David E. Irwin; Douglas J. Davidson; Willem J. M. Levelt Minding the clock Journal Article In: Journal of Memory and Language, vol. 48, no. 4, pp. 653–685, 2003. @article{Bock2003, Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying - or thinking and speaking - that must be bridged during the creation of even the most prosaic utterances of a language. |
Leonardo Chelazzi; Elisabeth Moores; Liana Laiti Associative knowledge controls deployment of visual selective attention Journal Article In: Nature Neuroscience, vol. 6, no. 2, pp. 182–189, 2003. @article{Chelazzi2003, According to some models of visual selective attention, objects in a scene activate corresponding neural representations, which compete for perceptual awareness and motor behavior. During a visual search for a target object, top-down control exerted by working memory representations of the target's defining properties resolves competition in favor of the target. These models, however, ignore the existence of associative links among object representations. Here we show that such associations can strongly influence deployment of attention in humans. In the context of visual search, objects associated with the target were both recalled more often and recognized more accurately than unrelated distractors. Notably, both target and associated objects competitively weakened recognition of unrelated distractors and slowed responses to a luminance probe. Moreover, in a speeded search protocol, associated objects rendered search both slower and less accurate. Finally, the first saccades after onset of the stimulus array were more often directed toward associated than control items. |
Heiner Deubel; Werner X. Schneider Delayed saccades, but not delayed manual aiming movements, require visual attention shifts Journal Article In: Annals of the New York Academy of Sciences, vol. 1004, pp. 289–296, 2003. @article{Deubel2003, Several studies have shown that during the preparation of a goal-directed movement, perceptual selection (i.e., visual attention) and action selection (the selection of the movement target) are closely coupled. Here, we study attentional selection in situations in which delayed saccadic eye movements and delayed manual movements are prepared. A dual-task paradigm was used which combined the movement preparation with a perceptual discrimination task. The results demonstrate a fundamental difference between the preparation of saccades and of manual reaching. For delayed saccades, attention is pinned to the saccade target until the onset of the response. This does not hold for manual reaching, however. Although fast reaching movements require attention, reaches delayed more than 300 ms after movement cue onset can already be performed "off-line"; that is, attention can be withdrawn from the movement target. |
Hendrik Chris Dijkerman; Robert D. McIntosh; David Milner; Yves Rossetti; Caroline Tilikete; Richard C. Roberts Ocular scanning and perceptual size distortion in hemispatial neglect: Effects of prism adaptation and sequential stimulus presentation Journal Article In: Experimental Brain Research, vol. 153, no. 2, pp. 220–230, 2003. @article{Dijkerman2003, When asked to compare two lateralized shapes for horizontal size, neglect patients often indicate the left stimulus to be smaller. Gainotti and Tiacci (1971) hypothesized that this phenomenon might be related to a rightward bias in the patients' gaze. This study aimed to assess the relation between this size underestimation and oculomotor asymmetries. Eye movements were recorded while three neglect patients judged the horizontal extent of two rectangles. Two experimental manipulations were performed to increase the likelihood of symmetrical scanning of the stimulus display. The first manipulation entailed a sequential, rather than simultaneous presentation of the two rectangles. The second required adaptation to rightward displacing prisms, which is known to reduce many manifestations of neglect. All patients consistently underestimated the left rectangle, but the pattern of verbal responses and eye movements suggested different underlying causes. These include a distortion of space perception without ocular asymmetry, a failure to view the full leftward extent of the left stimulus, and a high-level response bias. Sequential presentation of the rectangles and prism adaptation reduced ocular asymmetries without affecting size underestimation. Overall, the results suggest that leftward size underestimation in neglect can arise for a number of different reasons. Incomplete leftward scanning may perhaps be sufficient to induce perceptual size distortion, but it is not a necessary prerequisite. |
Nicholas D. Cassavaugh; Arthur F. Kramer; David E. Irwin Influence of task-irrelevant onset distractors on the visual search performance of young and old adults Journal Article In: Aging, Neuropsychology, and Cognition, vol. 10, no. 1, pp. 44–60, 2003. @article{Cassavaugh2003, We examined potential age-related differences in attentional and oculomotor capture by single and multiple abrupt onsets in a singleton search paradigm. 24 participants were instructed to move their eyes as quickly as possible to a color singleton target and to identify a small letter located inside it. Either single or dual onset task-irrelevant distractors were presented simultaneously with the color change that defined the target, or one onset distractor was presented prior to and another onset distractor was presented during the participant's initial eye movement away from fixation. Young and old adults misdirected their eyes to the single and dual onset task-irrelevant distractors, on an equivalent proportion of trials, relative to control trials. However, older adults' saccade latencies and RTs were influenced to a greater extent by onsets compared to younger adults'. These data are discussed in terms of age-related differences in attentional control and oculomotor capture. |
Junghyun Park; Madeleine Schlag-Rey; John Schlag Spatial localization precedes temporal determination in visual perception Journal Article In: Vision Research, vol. 43, no. 15, pp. 1667–1674, 2003. @article{Park2003a, The temporal order of two spots of light successively appearing in the dark, just before a saccade, influences their perceived spatial relation. Both spots are mislocalized in the saccade direction - the second more so than the first - because mislocalization grows as time elapses from stimulus to saccade onset. On the other hand, the perceived order of the two spots may be altered if the second spot is at the focus of spatial attention. How would these illusory perceptions of space and time interact when they are brought to play together? Could they be independent or could one perception depend on the other? Here we show that perceived location of stimuli is not affected by illusory temporal order, whereas perceived temporal order is affected by misperceived location. The results suggest that the brain processes spatial location of visual stimuli before processing their temporal order. |
Junghyun Park; Madeleine Schlag-Rey; John Schlag Voluntary action expands perceived duration of its sensory consequence Journal Article In: Experimental Brain Research, vol. 149, no. 4, pp. 527–529, 2003. @article{Park2003, When we look at a clock with a hand showing seconds, the hand sometimes appears to stay longer at its first-seen position than at the following positions, evoking an illusion of chronostasis. This illusory extension of perceived duration has been shown to be coupled to saccadic eye movement and it has been suggested to serve as a mechanism of maintaining spatial stability across the saccade. Here, we examined the effects of three kinds of voluntary movements on the illusion of chronostasis: key press, voice command, and saccadic eye movement. We found that the illusion can occur with all three kinds of voluntary movements if such movements start the clock immediately. When a delay is introduced between the voluntary movement and the start of the clock, the delay itself is overestimated. These results indicate that the illusion of chronostasis is not specific to saccadic eye movement, and may therefore involve a more general mechanism of how voluntary action influences time perception. |
Taosheng Liu; Scott D. Slotnick; John T. Serences; Steven Yantis Cortical mechanisms of feature-based attentional control Journal Article In: Cerebral Cortex, vol. 13, no. 12, pp. 1334–1343, 2003. @article{Liu2003, A network of fronto-parietal cortical areas is known to be involved in the control of visual attention, but the representational scope and specific function of these areas remains unclear. Recent neuroimaging evidence has revealed the existence of both transient (attention-shift) and sustained (attention-maintenance) mechanisms of space-based and object-based attentional control. Here we investigate the neural mechanisms of feature-based attentional control in human cortex using rapid event-related functional magnetic resonance imaging (fMRI). Subjects viewed an aperture containing moving dots in which dot color and direction of motion changed once per second. At any given moment, observers attended to either motion or color. Two of six motion directions and two of six colors embedded in the stimulus stream cued subjects either to shift attention from the currently attended to the unattended feature or to maintain attention on the currently attended feature. Attentional modulation of the blood oxygenation level dependent (BOLD) fMRI signal was observed in early visual areas that are selective for motion and color. More importantly, both transient and sustained BOLD activity patterns were observed in different fronto-parietal cortical areas during shifts of attention. We suggest these differing temporal profiles reflect complementary roles in the control of attention to perceptual features. |
Jason S. McCarley; Arthur F. Kramer; Gregory J. DiGirolamo Differential effects of the Müller-Lyer illusion on reflexive and voluntary saccades Journal Article In: Journal of Vision, vol. 3, no. 11, pp. 751–760, 2003. @article{McCarley2003, Research has produced conflicting evidence as to whether saccade programming is or is not biased by perceptual illusions. However, previous studies have generally not distinguished between effects of illusory percepts on reflexive saccades, programmed automatically in response to an external visual signal, and voluntary saccades, programmed purposively to a location where no signal has occurred. Here we find that voluntary and reflexive saccades are differentially susceptible to the Müller-Lyer illusion; reflexive movements are reliably but modestly affected by the illusion, whereas voluntary movements show an effect similar to that of perceptual judgments. Results suggest that voluntary saccade programming occurs within a non-retinotopic spatial representation similar to that of visual consciousness, whereas reflexive saccade programming occurs within a representation integrating retinotopic and higher level spatial frames. The effects of the illusion on reflexive saccades are not subject to endogenous control, nor are they modulated by the strength of an exogenous target signal. |
Jason S. McCarley; Ranxiao F. Wang; Arthur F. Kramer; David E. Irwin; Matthew S. Peterson How much memory does oculomotor search have? Journal Article In: Psychological Science, vol. 14, no. 5, pp. 422–426, 2003. @article{Mccarley2003b, Research has demonstrated that oculomotor visual search is guided by memory for which items or locations within a display have already been inspected. In the study reported here, we used a gaze-contingent search paradigm to examine properties of this memory. Data revealed a memory buffer for search history of three to four items. This buffer was effected in part by a space-based trace attached to a location independently of whether the object that had been seen at that position remained visible, and was subject to interference from other stimuli seen in the course of a trial. |
Casimir J. H. Ludwig; Iain D. Gilchrist Target similarity affects saccade curvature away from irrelevant onsets Journal Article In: Experimental Brain Research, vol. 152, no. 1, pp. 60–69, 2003. @article{Ludwig2003, Saccade curvature away from visual distractors is a measure of the salience of these distractors for the oculomotor system. Three experiments are reported in which the integration of luminance onset signals and target similarity signals is examined, using a saccade curvature paradigm. Observers made saccades to a no-onset colour target in one of two positions on the vertical meridian. On most trials, an abrupt onset distractor that was either similar or dissimilar to the target appeared left or right on the horizontal midline. Saccades curved away from the irrelevant onsets; however, the amount of curvature was modulated by target similarity only when the onset appeared before the target (experiment 2) or when saccade initiation was delayed (experiment 3). These results suggest that the initial response to the onset is stimulus-driven and mediated by its transient component. Over time, the response is integrated with and augmented by top-down inputs. Visual and non-visual signals converge onto a common motor map to determine an item's salience. |
Casimir J. H. Ludwig; Iain D. Gilchrist Goal-driven modulation of oculomotor capture Journal Article In: Perception and Psychophysics, vol. 65, no. 8, pp. 1243–1251, 2003. @article{Ludwig2003a, In a recent study, Ludwig and Gilchrist (2002) showed that stimulus-driven oculomotor capture by abrupt onset distractors was modulated by distractor-target similarity: Participants were more likely to fixate an irrelevant onset when it shared the target color. Here we test whether this pattern of performance is the result of (1) inhibition of all items in the distractor color, (2) a response bias to local color discontinuities, or (3) the integration of stimulus-driven abrupt onset signals with goal-driven information about the target features. The results of two experiments clearly support the third explanation. We conclude that oculomotor capture is modulated by, but not contingent upon, top-down control, and our findings argue for an integrative view of the saccadic system. |
Lars Lünenburger; Klaus-Peter Hoffmann Arm movement and gap as factors influencing the reaction time of the second saccade in a double-step task Journal Article In: European Journal of Neuroscience, vol. 17, no. 11, pp. 2481–2491, 2003. @article{Luenenburger2003, To guide our hand for reaching, we explore our visual environment by sequences of saccades. In the present paper, we studied the eye and hand movements of human subjects looking or looking and pointing at a target that is instantaneously displaced two times (double-step task). It was previously shown that the second saccade has a much longer reaction time than the first one [Feinstein & Williams (1972) Vision Res., 12, 33-44]. The second reaction time is even longer if the subject also has to point to the target with the hand [Lünenburger et al. (2000) Eur. J. Neurosci., 12, 4107-4116]. The conditions and objective for these effects are further examined in the present paper. It is shown that vision of the hand reduces the first and second saccadic reaction times in parallel. The second reaction time is prolonged for shorter delays between both target steps as well as for larger amplitudes of the second saccade. However, the long second reaction time does not reflect an absolute saccadic refractory period, because a gap before the second target step reduces the second reaction time to a value similar to the first. Hand response time and average hand velocity were increased when the second target step was larger. The response time for the eyes was about 30% of the response time of the hand. We argue that the observed effects reflect the coordination of eye and hand movement to allow a precise and efficient reaching behaviour. |
Ervin Poljac; Albert V. van den Berg Representation of heading direction in far and near head space Journal Article In: Experimental Brain Research, vol. 151, no. 2, pp. 501–513, 2003. @article{Poljac2003, Manipulation of objects around the head requires an accurate and stable internal representation of their locations in space, also during movements such as that of the eye or head. For far space, the representation of visual stimuli for goal-directed arm movements relies on retinal updating, if eye movements are involved. Recent neurophysiological studies led us to infer that a transformation of visual space from retinocentric to a head-centric representation may be involved for visual objects in close proximity to the head. The first aim of this study was to investigate if there is indeed such a representation for remembered visual targets of goal-directed arm movements. Participants had to point toward an initially foveated central target after an intervening saccade. Participants made errors that reflect a bias in the visuomotor transformation that depends on eye displacement rather than any head-centred variable. The second issue addressed was if pointing toward the centre of a wide-field expanding motion pattern involves a retinal updating mechanism or a transformation to a head-centric map and if that process is distance dependent. The same pattern of pointing errors in relation to gaze displacement was found independent of depth. We conclude that for goal-directed arm movements, representation of the remembered visual targets is updated in a retinal frame, a mechanism that is actively used regardless of target distance, stimulus characteristics or the requirements of the task. |
Marc Pomplun; Eyal M. Reingold; Jiye Shen Area activation: A computational model of saccadic selectivity in visual search Journal Article In: Cognitive Science, vol. 27, no. 2, pp. 299–312, 2003. @article{Pomplun2003, The Area Activation Model (Pomplun, Reingold, Shen, & Williams, 2000) is a computational model predicting the statistical distribution of saccadic endpoints in visual search tasks. Its basic assumption is that saccades in visual search tend to foveate display areas that provide a maximum amount of task-relevant information for processing during the subsequent fixation. In the present study, a counterintuitive prediction by the model is empirically tested, namely that saccadic selectivity towards stimulus features depends on the spatial arrangement of search items. We find good correspondence between simulated and empirically observed selectivity patterns, providing strong support for the Area Activation Model. |
Johannes M. Zanker; Melanie Doyle; Robin Walker Gaze stability of observers watching Op Art pictures Journal Article In: Perception, vol. 32, no. 9, pp. 1037–1049, 2003. @article{Zanker2003, It has been the matter of some debate why we can experience vivid dynamic illusions when looking at static pictures composed from simple black and white patterns. The impression of illusory motion is particularly strong when viewing some of the works of 'Op Artists', such as Bridget Riley's painting Fall. Explanations of the illusory motion have ranged from retinal to cortical mechanisms, and an important role has been attributed to eye movements. To assess the possible contribution of eye movements to the illusory-motion percept we studied the strength of the illusion under different viewing conditions, and analysed the gaze stability of observers viewing the Riley painting and control patterns that do not produce the illusion. Whereas the illusion was reduced, but not abolished, when watching the painting through a pinhole, which reduces the effects of accommodation, it was not perceived in flash afterimages, suggesting an important role for eye movements in generating the illusion for this image. Recordings of eye movements revealed an abundance of small involuntary saccades when looking at the Riley pattern, despite the fact that gaze was kept within the dedicated fixation region. The frequency and particular characteristics of these rapid eye movements can vary considerably between different observers, but, although there was a tendency for gaze stability to deteriorate while viewing a Riley painting, there was no significant difference in saccade frequency between the stimulus and control patterns. Theoretical considerations indicate that such small image displacements can generate patterns of motion signals in a motion-detector network, which may serve as a simple and sufficient, but not necessarily exclusive, explanation for the illusion. Why such image displacements lead to perceptual results with a group of Op Art and similar patterns, but remain invisible for other stimuli, is discussed. |
Thomas Wynn; Frederick Coolidge The role of working memory in the evolution of managed foraging Journal Article In: Before Farming, no. 2, pp. 1–16, 2003. @article{Wynn2003, This article proposes that a relatively simple evolutionary development in human cognition enabled the development of managed foraging systems and, ultimately, agriculture. This development, an increase in the capacity of working memory, resulted in an enhancement of such specific cognitive abilities as response inhibition, response preparation, resistance to interference, and the ability to integrate action across space and time. All are required for modern managed foraging systems, including hunting and gathering and agriculture. Archaeological evidence provides strong evidence for managed foraging by the middle of the European Upper Palaeolithic and South African Later Stone Age, and independent evidence for enhanced working memory capacity slightly earlier. This fits the hypothesis that enhanced working memory capacity was a relatively recent development in human evolution, and one that enabled not just managed foraging, but perhaps modern culture itself. |
Jiye Shen; Eyal M. Reingold; Marc Pomplun Guidance of eye movements during conjunctive visual search: The distractor-ratio effect Journal Article In: Canadian Journal of Experimental Psychology, vol. 57, no. 2, pp. 76–96, 2003. @article{Shen2003, The distractor-ratio effect refers to the finding that search performance in a conjunctive visual search task depends on the relative frequency of two types or subsets of distractors when the total number of items in a display is fixed. Previously, Shen, Reingold, and Pomplun (2000) examined participants' patterns of eye movements in a distractor-ratio paradigm and demonstrated that on any given trial saccadic endpoints were biased towards the smaller subset of distractors and participants flexibly switched between different subsets across trials. The current study explored the boundary conditions of this tendency to flexibly search through a smaller subset of distractors by examining the influence of several manipulations known to modulate search efficiency, including stimulus discriminability (Experiment 1), within-dimension versus cross-dimension conjunction search and distractor heterogeneity (Experiment 2). The results indicated that the flexibility of visual guidance and saccadic bias exemplified by the distractor-ratio effect is a robust phenomenon that mediates search efficiency by adapting to changes in the relative informativeness of stimulus dimensions and features. |
Shinsuke Shimojo; Claudiu Simion; Eiko Shimojo; Christian Scheier Gaze bias both reflects and influences preference Journal Article In: Nature Neuroscience, vol. 6, no. 12, pp. 1317–1322, 2003. @article{Shimojo2003, Emotions operate along the dimension of approach and aversion, and it is reasonable to assume that orienting behavior is intrinsically linked to emotionally involved processes such as preference decisions. Here we describe a gaze 'cascade effect' that was present when human observers were shown pairs of human faces and instructed to decide which face was more attractive. Their gaze was initially distributed evenly between the two stimuli, but then gradually shifted toward the face that they eventually chose. Gaze bias was significantly weaker in a face shape discrimination task. In a second series of experiments, manipulation of gaze duration, but not exposure duration alone, biased observers' preference decisions. We thus conclude that gaze is actively involved in preference formation. The gaze cascade effect was also present when participants compared abstract, unfamiliar shapes for attractiveness, suggesting that orienting and preference for objects in general are intrinsically linked in a positive feedback loop leading to the conscious choice. |
Andreas Schiegg; Heiner Deubel; Werner X. Schneider Attentional selection during preparation of prehension movements Journal Article In: Visual Cognition, vol. 10, no. 4, pp. 409–431, 2003. @article{Schiegg2003, In two experiments coupling between dorsal attentional selection for action and ventral attentional selection for perception during preparation of prehension movements was examined. In a dual-task paradigm subjects had to grasp an "X"-shaped object with either the left or the right hand's thumb and index finger. Simultaneously a discrimination task was used to measure visual attention prior to the execution of the prehension movements: Mask items transiently changed into distractors or discrimination targets. There was exactly one discrimination target per trial, which appeared at one of the four branch ends of the object. In Experiment 1 target position varied randomly while in Experiment 2 it was constant and known to subjects in each block of trials. In both experiments discrimination performance was significantly better for discrimination target positions at to-be-grasped branch ends than for not-to-be-grasped branch ends. We conclude that during preparation of prehension movements visual attention is largely confined to those parts of an object that will be grasped. |
Scott D. Slotnick; Jens Schwarzbach; Steven Yantis Attentional inhibition of visual processing in human striate and extrastriate cortex Journal Article In: NeuroImage, vol. 19, no. 4, pp. 1602–1611, 2003. @article{Slotnick2003, Allocating attention to a spatial location in the visual field is associated with an increase in the cortical response evoked by a stimulus at that location, compared to when the same stimulus is unattended. We used event-related functional magnetic resonance imaging to investigate attentional modulation of the cortical response to a stimulus probe at an attended location and to multiple probes at unattended locations. A localizer task and retinotopic mapping were used to precisely identify the cortical representations of each probe within striate (V1) and extrastriate cortex (V2, VP, V3, V4v, and V3A). The magnitude and polarity of attentional modulation were assessed through analysis of event-related activity time-locked to shifts in spatial attention. Attentional facilitation at the attended location was observed in striate and extrastriate cortex, corroborating earlier findings. Attentional inhibition of visual stimuli near the attended location was observed in striate cortex, and attentional inhibition of more distant stimuli occurred in both striate and extrastriate cortex. These findings indicate that visual attention operates both through facilitation of visual processing at the attended location and through inhibition of unattended stimulus representations in striate and extrastriate cortex. |
Mike Rinck; Elena Gámez; José M. Díaz; Manuel De Vega Processing of temporal information: Evidence from eye movements Journal Article In: Memory & Cognition, vol. 31, no. 1, pp. 77–86, 2003. @article{Rinck2003, In two experiments, we recorded eye movements to study how readers monitor temporal order information contained in narrative texts. Participants read short texts containing critical temporal information in the sixth sentence, which could be either consistent or inconsistent with temporal order information given in the second sentence. In Experiment 1, inconsistent sentences yielded more regressions to the second sentence and longer refixations of it. In Experiment 2, this pattern of eye movements was shown only by readers who noticed the inconsistency and were able to report it. Theoretical and methodological implications of the results for research on text comprehension are discussed. |
Jan Theeuwes; Giel Jan De Vries; Richard Godijn Attentional and oculomotor capture with static singletons Journal Article In: Perception & Psychophysics, vol. 65, no. 5, pp. 735–746, 2003. @article{Theeuwes2003, Previous research has shown that in visual search static singletons have the ability to capture attention (Theeuwes, 1991a, 1992). The present study investigated whether these singletons also have the ability to capture the eyes. Participants had to make an eye movement and respond manually to a shape singleton while a color singleton was present. When participants searched for a unique shape while a unique color singleton was present, there was strong attentional and oculomotor capture (Experiment 1). However, when participants searched for a specific-shape singleton (a green circle) while a specific-color singleton (a red element) had to be ignored, there was attentional capture but no oculomotor capture (Experiment 2). The results suggest that an attentional set for a specific feature value defining both the target and the distractor (as in Experiment 2) allows such a fast disengagement of attention from the location of the distractor that a saccade execution to that location is prevented. |
Yutaka Sakaguchi Visual field anisotropy revealed by perceptual filling-in Journal Article In: Vision Research, vol. 43, no. 19, pp. 2029–2038, 2003. @article{Sakaguchi2003, Four experiments were performed to investigate how the time required for perceptual filling-in varies with the position of the target in the visual field. Conventional studies have revealed that filling-in is facilitated by a target with greater eccentricity, while no systematic studies have examined the effect of polar angle. Experiment 1 examined the effect of polar angle when the target and surround differed in luminance. Filling-in was facilitated as the target position changed from the horizontal to the vertical meridian. This dependency was more prominent in the upper field than in the lower, although no asymmetry was found between the left and right visual fields. These features were observed in both monocular and binocular viewing. These results were replicated in a modified stimulus configuration, in which the surround was a circular region concentric with the target (Experiment 2). Moreover, it was confirmed that the asymmetry was not due to fluctuation in the retinal image (i.e., eye movement) (Experiment 3). Finally, Experiment 4 examined whether this anisotropy was observed when two differently oriented gratings were presented in the target and surround regions. Again, filling-in was facilitated for a target close to the vertical meridian, irrespective of the relationship between the target and surround orientations. The underlying mechanism of this anisotropy is discussed from the viewpoints of cortical magnification and neural connections in the visual cortex. |
2002 |
Scott D. Slotnick; Joseph B. Hopfinger; Stanley A. Klein; Erich E. Sutter Darkness beyond the light: Attentional inhibition surrounding the classic spotlight Journal Article In: NeuroReport, vol. 13, no. 6, pp. 773–778, 2002. @article{Slotnick2002, The aim of the present investigation was to determine the nature and spatial distribution of selective visual attention. Using cortical source localization of ERP data corresponding to 60 task-irrelevant stimuli across the visual field, we assessed attention effects on visual processing. Consistent with previous findings, visual processing was enhanced at the attended spatial location. In addition, this facilitation of processing extended from the attended location to the point of fixation resulting in a region of facilitation. Furthermore, a large region of inhibition was found surrounding this region of facilitation. The latter result is inconsistent with a simple facilitative spotlight model of attention and indicates that attention effects can be both facilitatory and inhibitory. |
Eyal M. Reingold On the perceptual specificity of memory representations Journal Article In: Memory, vol. 10, no. 5-6, pp. 365–379, 2002. @article{Reingold2002, The present paradigm involved manipulating the congruency of the perceptual processing during the study and test phases of a recognition memory task. During each trial, a gaze-contingent window was used to limit the stimulus display to a region either inside or outside a 10 degrees square centred on the participant's point of gaze, constituting the Central and Peripheral viewing modes respectively. The window position changed in real time in concert with changes in gaze position. Four experiments documented better task performance when viewing modes at encoding and retrieval matched than when they mismatched (i.e., perceptual specificity effects). Viewing mode congruency effects were demonstrated with both verbal and non-verbal stimuli. The present research is motivated and discussed in terms of theoretical views proposed in the 1970s including the levels-of-processing framework and the proceduralist viewpoint. In addition, implications for current processing and multiple systems views of memory are outlined. |
Eyal M. Reingold; Lester C. Loschky Saliency of peripheral targets in gaze-contingent multiresolutional displays Journal Article In: Behavior Research Methods, Instruments & Computers, vol. 34, no. 4, pp. 491–499, 2002. @article{Reingold2002a, Gaze-contingent multiresolutional displays (GCMRDs) have been proposed to solve the processing and bandwidth bottleneck in many single-user displays by dynamically placing high-resolution information in a window at the center of gaze, with lower resolution everywhere else. The three experiments reported here document a slowing of peripheral target acquisition associated with the presence of a gaze-contingent window. This window effect was shown for displays using either moving video or still images. The window effect was similar across a resolution-defined window condition and a luminance-defined window condition, suggesting that peripheral image degradation is not a prerequisite of this effect. The window effect was also unaffected by the type of window boundary used (sharp or blended). These results are interpreted in terms of an attentional bias resulting in a reduced saliency of peripheral targets due to increased competition from items within the window. We discuss the implications of the window effect for the study of natural scene perception and for human factors research related to GCMRDs. |
Paola Ricciardelli; Emanuela Bricolo; Salvatore M. Aglioti; Leonardo Chelazzi My eyes want to look where your eyes are looking: Exploring the tendency to imitate another individual's gaze Journal Article In: NeuroReport, vol. 13, no. 17, pp. 2259–2264, 2002. @article{Ricciardelli2002, In this study we investigated the tendency of humans to imitate the gaze direction of other individuals. Distracting gaze stimuli or non-biological directional cues (arrows) were presented to observers performing an instructed saccadic eye movement task. Eye movement recordings showed that observers performed less accurately when the distracting gaze and the instructed saccade had opposite directions, with a substantial number of saccades matching the direction of the distracting gaze. Static (Experiment 1) and dynamic (Experiment 2) gaze distracters, but not pointing arrows (Experiment 3), produced the effect. Results show a strong predisposition of humans to imitate somebody else's oculomotor behaviour, even when detrimental to task performance. This is likely linked to a strong tendency to share attentional states of other individuals, known as joint attention. |