EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up to 2023 (along with some early 2024 articles) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2006 |
Jan L. Souman; Ignace T. C. Hooge; Alexander H. Wertheim Frame of reference transformations in motion perception during smooth pursuit eye movements Journal Article In: Journal of Computational Neuroscience, vol. 20, no. 1, pp. 61–76, 2006. @article{Souman2006, Smooth pursuit eye movements change the retinal image velocity of objects in the visual field. In order to change from a retinocentric frame of reference into a head-centric one, the visual system has to take the eye movements into account. Studies on motion perception during smooth pursuit eye movements have measured either perceived speed or perceived direction during smooth pursuit to investigate this frame of reference transformation, but never both at the same time. We devised a new velocity matching task, in which participants matched both perceived speed and direction during fixation to that during pursuit. In Experiment 1, the velocity matches were determined for a range of stimulus directions, with the head-centric stimulus speed kept constant. In Experiment 2, the retinal stimulus speed was kept approximately constant, with the same range of stimulus directions. In both experiments, the velocity matches for all directions were shifted against the pursuit direction, suggesting an incomplete transformation of the frame of reference. The degree of compensation was approximately constant across stimulus direction. We fitted the classical linear model, the model of Turano and Massof (2001) and that of Freeman (2001) to the velocity matches. The model of Turano and Massof fitted the velocity matches best, but the differences between the model fits were quite small. Evaluation of the models and comparison to a few alternatives suggests that further specification of the potential effect of retinal image characteristics on the eye movement signal is needed. |
Claudiu Simion; Shinsuke Shimojo Early interactions between orienting, visual sampling and decision making in facial preference Journal Article In: Vision Research, vol. 46, no. 20, pp. 3331–3335, 2006. @article{Simion2006, Decision making has been regarded as the last stage before action in the human information processing, certainly subsequent to sensory sampling and perceptual integration. Our latest study showed that orienting contributes to preference decision making, by integrating preferential looking and mere exposure in a positive feedback loop leading to the conscious choice. Here, we introduce a gaze-contingent window method of stimulus presentation into our experimental paradigm, to completely block holistic stimulus processing while preserving piecemeal sampling through the gaze-contingent "peephole". This effectively zooms the visual processing in time domain, allowing us to show that orienting and decision making can interact long before the actual conscious choice. The finding also suggests that this interaction is independent of holistic properties of face stimuli and can be totally memory-driven. |
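The gaze-contingent "peephole" presentation used in the study above can be sketched in a few lines: everything on the display is masked except a small window centred on the latest gaze sample, so only piecemeal sampling survives. A minimal illustration, assuming a grayscale image stored as nested lists; the function name, circular window shape, and pixel representation are assumptions, not the authors' implementation:

```python
def apply_gaze_window(image, gaze_xy, radius):
    """Return a copy of a 2-D grayscale image (list of rows) with every
    pixel outside a circular window around the gaze position set to 0.
    In a real gaze-contingent display this runs once per screen refresh."""
    gx, gy = gaze_xy
    r2 = radius * radius
    return [[px if (x - gx) ** 2 + (y - gy) ** 2 <= r2 else 0
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]
```

In practice the mask must be updated within one display refresh of the eye-tracker sample to keep the window locked to gaze.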
Mike Rinck; Eni S. Becker Spider fearful individuals attend to threat, then quickly avoid it: Evidence from eye movements Journal Article In: Journal of Abnormal Psychology, vol. 115, no. 2, pp. 231–238, 2006. @article{Rinck2006, According to cognitive models of anxiety, anxiety patients exhibit an early reflexive attentional bias toward threat stimuli, which may be followed by intentional avoidance of these stimuli. To determine the time course of attentional vigilance and avoidance, the authors conducted an eye-tracking study in which 22 highly spider fearful participants (SFs) and 23 nonanxious control participants (NACs) studied groups of 4 pictures (spider, butterfly, dog, and cat). The authors found that the very first fixation was on a spider picture more often in SFs than in NACs. However, SFs quickly moved their eyes away from the spider they had fixated first, yielding shorter gaze durations than NACs. Afterward, SFs exhibited shorter gaze durations on spiders than NACs for the rest of the 1-min presentation time. This early reflexive attentional bias toward threat followed by avoidance of threat may explain earlier failures to find attentional biases in anxiety. |
Tyler M. Rolheiser; Gordon Binsted; Kyle J. Brownell Visuomotor representation decay: Influence on motor systems Journal Article In: Experimental Brain Research, vol. 173, no. 4, pp. 698–707, 2006. @article{Rolheiser2006, The contribution of ventral stream information to the variability of movement has been the focus of much attention, and has provided numerous researchers with conflicting results. These results have been obtained through the use of discrete pointing movements, and as such, do not offer any explanation regarding how ventral stream information contributes to movement variability over time. The present study examined the contribution of ventral stream information to movement variability in three tasks: Hand-only movement, eye-only movement, and an eye-hand coordinated task. Participants performed a continuous reciprocal tapping task to two point-of-light targets for 10 s. The targets were visible for the first 5 s, at which point vision of the targets was removed. Movement variability was similar in all conditions for the initial 5-s interval. The no-vision condition (final 5 s) can be summarized as follows: ventral stream information contributed to an initial significant increase in variability across motor systems, though the different motor systems were able to preserve ventral information integrity differently. The results of these studies can be attributed to the behavioral and cortical networks that underlie the saccadic and manual motor systems. |
Jan Theeuwes; Stefan Van der Stigchel Faces capture attention: Evidence from inhibition of return Journal Article In: Visual Cognition, vol. 13, no. 6, pp. 657–665, 2006. @article{Theeuwes2006, The human face is a visual pattern of great social and biological importance. While previous studies have shown that attention may be preferentially directed and engaged longer by faces, the current study presents a new methodology to test the notion that faces can capture attention. The present study uses the occurrence of inhibition of return (IOR) as a diagnostic tool to determine the allocation of attention in visual space. Because previous research suggested that IOR at a location in space only occurs after attention has been reflexively moved to that location, the current finding of IOR at the location of the face provides converging support for the claim that faces do have the ability to summon attention. |
Jan Theeuwes; Stefan Van Der Stigchel; Christian N. L. Olivers Spatial working memory and inhibition of return Journal Article In: Psychonomic Bulletin & Review, vol. 13, no. 4, pp. 608–613, 2006. @article{Theeuwes2006a, Recently we showed that maintaining a location in spatial working memory affects saccadic eye movement trajectories, in that the eyes deviate away from the remembered location (Theeuwes, Olivers, and Chizk, 2005). Such saccade deviations are assumed to be the result of inhibitory processes within the oculomotor system. The present study investigated whether this inhibition is related to the phenomenon of inhibition of return (IOR), the relatively slow selection of previously attended locations as compared with new locations. The results show that the size of IOR to a location was not affected by whether or not the location was kept in working memory, but the size of the saccade trajectory deviation was affected. We conclude that inhibiting working memory-related eye movement activity is not the same as inhibiting a previously attended location in space. |
Laura E. Thomas; David E. Irwin Voluntary eyeblinks disrupt iconic memory Journal Article In: Perception and Psychophysics, vol. 68, no. 3, pp. 475–488, 2006. @article{Thomas2006, In the present research, we investigated whether eyeblinks interfere with cognitive processing. In Experiment 1, the participants performed a partial-report iconic memory task in which a letter array was presented for 106 msec, followed 50, 150, or 750 msec later by a tone that cued recall of one row of the array. At a cue delay of 50 msec between array offset and cue onset, letter report accuracy was lower when the participants blinked following array presentation than under no-blink conditions; the participants made more mislocation errors under blink conditions. This result suggests that blinking interferes with the binding of object identity and object position in iconic memory. Experiment 2 demonstrated that interference due to blinks was not due merely to changes in light intensity. Experiments 3 and 4 demonstrated that other motor responses did not interfere with iconic memory. We propose a new phenomenon, cognitive blink suppression, in which blinking inhibits cognitive processing. This phenomenon may be due to neural interference. Blinks reduce activation in area V1, which may interfere with the representation of information in iconic memory. |
Lee Hogarth; Anthony Dickinson; Samuel B. Hutton; Helen Bamborough; Theodora Duka Contingency knowledge is necessary for learned motivated behaviour in humans: Relevance for addictive behaviour Journal Article In: Addiction, vol. 101, no. 8, pp. 1153–1166, 2006. @article{Hogarth2006, AIMS: Many forms of human conditioned behaviour depend upon explicit knowledge of the predictive contingency between stimuli, responses and the reinforcer. However, it remains uncertain whether the conditioning of three key behaviours in drug addiction (selective attention, instrumental drug-seeking behaviour and emotional state) are dependent upon contingency knowledge. To test this possibility, we employed an avoidance procedure to generate rapidly these three forms of conditioned behaviour without incurring the methodological problems of drug conditioning. DESIGN: In two experiments, participants (16 students) were trained on a schedule in which one stimulus (S +) predicted the occurrence of a startling noise, which could be cancelled by performing an instrumental avoidance response. MEASUREMENTS: The allocation of attention to the S + and the rate and probability of the avoidance response in the presence of S + were measured. Following training, participants were tested for their knowledge of the stimulus-noise contingencies arranged in the study and rated the emotional qualities of the stimuli. FINDINGS: Both experiments showed that S + gained control of selective attention, instrumental avoidance behaviour and subjective anxiety, but only in participants who reported explicit knowledge of the Pavlovian contingency between the S + and the startling noise. CONCLUSIONS: The implication of the present findings is that the control of selective attention, instrumental drug-seeking behaviour and emotional state by drug-paired stimuli is mediated by cognitive knowledge of the predictive contingency between the stimulus and the drug. |
Lee Hogarth; Anthony Dickinson; Samuel B. Hutton; Nieke Elbers; Theodora Duka Drug expectancy is necessary for stimulus control of human attention, instrumental drug-seeking behaviour and subjective pleasure Journal Article In: Psychopharmacology, vol. 185, no. 4, pp. 495–504, 2006. @article{Hogarth2006a, BACKGROUND: It has been suggested that drug-paired stimuli (S+) control addictive behaviour by eliciting an explicit mental representation or expectation of drug availability. AIMS: The aim of the present study was to test this hypothesis by determining whether the behavioural control exerted by a tobacco-paired S+ in human smokers would depend upon the S+ eliciting an explicit expectation of tobacco. DESIGN: In each trial, human smokers (n=16) were presented with stimuli for which attention was measured with an eyetracker. Participants then reported their cigarette reward expectancy before performing, or not, an instrumental tobacco-seeking response that was rewarded with cigarette gains if the S+ had been presented or punished with cigarette losses if the S- had been presented. Following training, participants rated the pleasantness of stimuli. RESULTS: The S+ only brought about conditioned behaviour in an aware group (those who expected the cigarette reward outcome when presented with the S+). This aware group allocated attention to the S+, performed the instrumental tobacco-seeking response selectively in the presence of the S+ and rated the S+ as pleasant. No conditioned behaviour was seen in the unaware group (those who did not expect the cigarette reward outcome in the presence of the S+). CONCLUSIONS: Drug-paired stimuli control selective attention, instrumental drug-seeking behaviour and positive emotional state by eliciting an explicit expectation of drug availability. |
R. Houtkamp; P. R. Roelfsema The effect of items in working memory on the deployment of attention and the eyes during visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 32, no. 2, pp. 423–442, 2006. @article{Houtkamp2006, Paying attention to an object facilitates its storage in working memory. The authors investigate whether the opposite is also true: whether items in working memory influence the deployment of attention. Participants performed a search for a prespecified target while they held another item in working memory. In some trials this memory item was present in the search display as a distractor. Such a distractor has no effect on search time if the search target is in the display. In that case, the item in working memory is unlikely to be selected as a target for an eye movement, and if the eyes do land on it, fixation duration is short. In the absence of the target, however, there is a small but significant effect of the memory item on search time. The authors conclude that the target for visual search has a special status in working memory that allows it to guide attention. Guidance of attention by other items in working memory is much weaker and can be observed only if the search target is not present in the display. |
P. -J. Hsieh; G. P. Caplovitz; P. U. Tse Illusory motion induced by the offset of stationary luminance-defined gradients Journal Article In: Vision Research, vol. 46, no. 6-7, pp. 970–978, 2006. @article{Hsieh2006, An illusory motion induced by the offset of a stationary gradient stimulus is characterized. When a gradient stimulus, whose luminance contrast ranges gradually from white on one side to black on the other, is made to disappear all at once so that only the uniform white background remains visible, illusory motion is perceived. This motion lasts ∼700 ms, as if the stimulus moves from the low to the high luminance contrast side. This gradient-offset induced motion does not occur for equiluminant color-defined gradient offsets, suggesting that it relies mainly on the magnocellular pathway. Our data are consistent with the hypothesis that this illusion is caused by the decay of the gradient afterimage. |
P. -J. Hsieh; P. U. Tse Illusory color mixing upon perceptual fading and filling-in does not result in 'forbidden colors' Journal Article In: Vision Research, vol. 46, no. 14, pp. 2251–2258, 2006. @article{Hsieh2006a, A retinally stabilized object readily undergoes perceptual fading. It is commonly believed that the color of the apparently vanished object is filled in with the color of the background because the features of the filled-in area are determined by features located outside the stabilized boundary. Crane, H. D., & Piantanida, T. P. (1983) (On seeing reddish green and yellowish blue. Science, 221, 1078-1080) reported that the colors that are perceived upon full or partial perceptual fading can be 'forbidden' in the sense that they violate color opponency theory. For example, they claimed that their subjects could perceive "reddish greens" and "yellowish blues." Here we use visual stimuli composed of spatially alternating stripes of two different colors to investigate the characteristics of color mixing during perceptual filling-in, and to determine whether 'forbidden colors' really occur. Our results show that (1) the filled-in color is not solely determined by the background color, but can be the mixture of the background and the foreground color; (2) apparent color mixing can occur even when the two colors are presented to different eyes, implying that color mixing during filling-in is in part a cortical phenomenon; and (3) perceived colors are not 'forbidden colors' at all, but rather intermediate colors. |
P. -J. Hsieh; P. U. Tse Stimulus factors affecting illusory rebound motion Journal Article In: Vision Research, vol. 46, no. 12, pp. 1924–1933, 2006. @article{Hsieh2006b, Stimulus attributes that influence a recently reported illusion called "illusory rebound motion" (IRM; [Hsieh, P.-J., Caplovitz, G. P., & Tse, P. U. (2005). Illusory rebound motion and the motion continuity heuristic. Vision Research, 45, 2972-2985.]) are described. When a bar alternates between two different colors, IRM can be observed to traverse the bar as if the color were shooting back and forth like the opening and closing of a zipper, even though each color appears in fact all at once. Here, we tested IRM over dynamic squares or disks defined by random dot or checkerboard textures to show that (1) IRM can be perceived in the absence of first-order motion-energy (or when the direction of net first-order motion-energy is ambiguous); (2) the direction of IRM is multistable and can change spontaneously or be changed volitionally; and (3) the perceived frequency of IRM is affected by several factors such as the contours of the stimulus, stimulus texture, and motion-energy. |
Arthur F. Kramer; Walter R. Boot; Jason S. McCarley; Matthew S. Peterson; Angela M. Colcombe; Charles T. Scialfa Aging, memory and visual search Journal Article In: Acta Psychologica, vol. 122, no. 3, pp. 288–304, 2006. @article{Kramer2006, Potential age-related differences in the memory processes that underlie visual search are examined in the present study. Using a dynamic, gaze-contingent search paradigm developed to assess memory for previously examined distractors, older adults demonstrated no memory deficit. Surprisingly, older adults made fewer refixations compared to their younger counterparts, indicating better memory for previously inspected objects. This improved memory was not the result of a speed-accuracy trade-off or larger Inhibition-of-Return effects for older than for younger adults. Additional analyses suggested that older adults may derive their benefit from finer spatial encoding of search items. These findings suggest that some of the memory processes that support visual search are relatively age invariant. |
Tomas Knapen; Raymond van Ee Slant perception, and its voluntary control, do not govern the slant aftereffect: Multiple slant signals adapt independently Journal Article In: Vision Research, vol. 46, no. 20, pp. 3381–3392, 2006. @article{Knapen2006, Although it is known that high-level spatial attention affects adaptation for a variety of stimulus features (including binocular disparity), the influence of voluntary attentional control-and the associated awareness-on adaptation has remained unexplored. We developed an ambiguous surface slant adaptation stimulus with conflicting monocular and binocular slant signals that instigated two mutually exclusive surface percepts with opposite slants. Using intermittent stimulus removal, subjects were able to voluntarily select one of the two rivaling slant percepts for extended adaptation periods, enabling us to dissociate slant adaptation due to awareness from stimulus-induced slant adaptation. We found that slant aftereffects (SAE) for monocular and binocular test patterns had opposite signs when measured simultaneously. There was no significant influence of voluntarily controlled perceptual state during adaptation on SAEs of monocular or binocular signals. In addition, the magnitude of the binocular SAE did not correlate with the magnitude of perceived slant. Using adaptation to one slant cue, and testing with the other cue, we demonstrated that multiple slant signals adapt independently. We conclude that slant adaptation occurs before the level of slant awareness. Our findings place the site of stereoscopic slant adaptation after disparity and eye posture are interpreted for slant [as demonstrated by Berends et al. (Berends, E. M., Liu, B., & Schor, C. M. (2005). Stereo-slant adaptation is high level and does not involve disparity coding. Journal of Vision 5 (1), 71-80), using the fact that disparity scales with distance], but before other slant signals are integrated for the resulting awareness of the presented slant stimulus. |
Aulikki Hyrskykari Utilizing eye movements: Overcoming inaccuracy while tracking the focus of attention during reading Journal Article In: Computers in Human Behavior, vol. 22, no. 4, pp. 657–671, 2006. @article{Hyrskykari2006, Even though eye movements during reading have been studied intensively for decades, applications that track the reading of longer passages of text in real time are rare. The problems encountered in developing such an application (a reading aid, iDict), and the solutions to the problems are described. Some of the issues are general and concern the broad family of Attention Aware Systems. Others are specific to the modality of interest: eye gaze. One of the most difficult problems when using eye tracking to identify the focus of visual attention is the inaccuracy of the eye trackers used to measure the point of gaze. The inaccuracy inevitably affects the design decisions of any application exploiting the point of gaze for localizing the point of visual attention. The problem is demonstrated with examples from our experiments. The principles of the drift correction algorithms that automatically correct the vertical inaccuracy are presented and the performance of the algorithms is evaluated. |
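The core idea behind automatic vertical drift correction in reading trackers of this kind can be sketched simply: each fixation is snapped to the nearest text line, and the running vertical offset between raw gaze and the snapped line is treated as drift and subtracted from subsequent samples. A minimal sketch under those assumptions; the function name, smoothing constant, and snap threshold are illustrative, not the iDict algorithm itself:

```python
def correct_vertical_drift(fixations, line_ys, max_snap=30.0):
    """fixations: list of (x, y) gaze fixations in pixels;
    line_ys: y-centres of the text lines on screen.
    Returns fixations with an incrementally estimated vertical drift removed."""
    drift = 0.0
    corrected = []
    for x, y in fixations:
        y_adj = y - drift
        nearest = min(line_ys, key=lambda ly: abs(ly - y_adj))
        if abs(nearest - y_adj) <= max_snap:
            # update the drift estimate as an exponential moving average
            # of the residual between the corrected sample and the line
            drift += 0.3 * (y_adj - nearest)
            y_adj = y - drift
        corrected.append((x, y_adj))
    return corrected
```

With a constant offset in the input, the estimate converges geometrically toward the true drift, which is the behaviour a reading aid needs when calibration degrades over a session.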
F. Jakel; F. A. Wichmann Spatial four-alternative forced-choice method is the preferred psychophysical method for naive observers Journal Article In: Journal of Vision, vol. 6, no. 11, pp. 1307–1322, 2006. @article{Jakel2006, H. R. Blackwell (1952) investigated the influence of different psychophysical methods and procedures on detection thresholds. He found that the temporal two-interval forced-choice method (2-IFC) combined with feedback, blocked constant stimulus presentation with few different stimulus intensities, and highly trained observers resulted in the "best" threshold estimates. This recommendation is in current practice in many psychophysical laboratories and has entered the psychophysicists' "folk wisdom" of how to run proper psychophysical experiments. However, Blackwell's recommendations explicitly require experienced observers, whereas many psychophysical studies, particularly with children or within a clinical setting, are performed with naïve observers. In a series of psychophysical experiments, we find a striking and consistent discrepancy between naïve observers' behavior and that reported for experienced observers by Blackwell: Naïve observers show the "best" threshold estimates for the spatial four-alternative forced-choice method (4-AFC) and the worst for the commonly employed temporal 2-IFC. We repeated our study with a highly experienced psychophysical observer, and he replicated Blackwell's findings exactly, thus suggesting that it is indeed the difference in psychophysical experience that causes the discrepancy between our findings and those of Blackwell. In addition, we explore the efficiency of different methods and show 4-AFC to be more than 3.5 times more efficient than 2-IFC under realistic conditions. While we have found that 4-AFC consistently gives lower thresholds than 2-IFC in detection tasks, we have found the opposite for discrimination tasks. 
This discrepancy suggests that there are large extrasensory influences on thresholds (sensory memory for IFC methods and spatial attention for spatial forced-choice methods) that are critical but, alas, not part of theoretical approaches to psychophysics such as signal detection theory. |
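The methods compared above differ in chance level (50% for 2-IFC, 25% for 4-AFC), so comparisons are usually made on a common d′ scale via the standard equal-variance Gaussian observer: a trial is correct when the signal-interval sample exceeds all noise samples. A small Monte Carlo sketch of that textbook model (illustrative only, not the authors' analysis code):

```python
import random

def pc_mafc(d_prime, m, n_trials=200_000, seed=1):
    """Monte Carlo estimate of proportion correct in an m-alternative
    forced-choice task for an equal-variance Gaussian observer:
    the response is correct when the signal sample N(d', 1) exceeds
    all m-1 independent noise samples N(0, 1)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        signal = rng.gauss(d_prime, 1.0)
        if all(signal > rng.gauss(0.0, 1.0) for _ in range(m - 1)):
            correct += 1
    return correct / n_trials
```

At d′ = 0 this reproduces the chance levels (0.5 for m = 2, 0.25 for m = 4), which is why the same percent-correct criterion corresponds to different sensitivities across methods.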
Caroline J. Ketcham; Natalia V. Dounskaia; George E. Stelmach The role of vision in the control of continuous multijoint movements Journal Article In: Journal of Motor Behavior, vol. 38, no. 1, pp. 29–44, 2006. @article{Ketcham2006, The authors investigated whether visual fixations during a continuous graphical task were related to arm endpoint kinematics, joint motions, or joint control. The pattern of visual fixations across various shapes and the relationship between temporal and spatial events of the moving limb and visual fixations were assessed. Participants (N=16) performed movements of varying shapes by rotating the shoulder and elbow joints in the transverse plane at a comfortable pace. Across shapes, eye movements consisted of a series of fixations, with the eyes leading the hand. Fixations were spatially related to modulation of joint motion and were temporally related to the portions of the movement where curvature was the highest. Gathering of information related to modulation of interactive torques arising from passive forces from movement of a linked system occurred when the velocity of the movement (a) was the lowest and (b) was ahead of the moving limb, suggesting that that information is used in a feedforward manner. |
Markus Lappe; Simone Kuhlmann; Britta Oerke; Marcus Kaiser The fate of object features during perisaccadic mislocalization Journal Article In: Journal of Vision, vol. 6, pp. 1282–1293, 2006. @article{Lappe2006, Visual objects flashed before a saccade appear compressed toward the saccade target. Simultaneously flashed objects merge perceptually into one. To better understand cortical interactions in perisaccadic processing, we study the perception of features of mislocalized objects. We report four new findings: First, when multiple objects of different colors are compressed onto a single position, their color attributes remain distinguishable. Second, color attributes of objects compressed onto the same position compete for access to visual awareness. Third, objects presaccadically mislocalized onto a static background of identical color and luminance appear visible on top of that background. Object shape can be determined. Fourth, objects flashed during a saccade become invisible when a larger object is present at the mislocalized position. Thus, perisaccadic mislocalization affects the position of objects but retains other object features. Mislocalization must either occur in parallel to color and shape processing or at late stages of the visual pathway. |
Melissa R. Beck; Matthew S. Peterson; Walter R. Boot; Miroslava Vomela; Arthur F. Kramer Explicit memory for rejected distractors during visual search Journal Article In: Visual Cognition, vol. 14, no. 2, pp. 150–174, 2006. @article{Beck2006, Although memory for the identities of examined items is not used to guide visual search, identity memory may be acquired during visual search. In all experiments reported here, search was occasionally terminated and a memory test was presented for the identity of a previously examined item. Participants demonstrated memory for the locations of the examined items by avoiding revisits to these items and memory performance for the items' identities was above chance but lower than expected based on performance in intentional memory tests. Memory performance improved when the foil was not from the search set, suggesting that explicit identity memory is not bound to memory for location. Providing context information during test improved memory for the most recently examined item. Memory for the identities of previously examined items was best when the most recently examined item was tested, contextual information was provided, and location memory was not required. |
Melissa R. Beck; Matthew S. Peterson; Miroslava Vomela Memory for where, but not what, is used during visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 32, no. 2, pp. 235–250, 2006. @article{Beck2006a, Although the role of memory in visual search is debatable, most researchers agree with a limited-capacity model of memory in visual search. The authors demonstrate the role of memory by replicating previous findings showing that visual search is biased away from old items (previously examined items) and toward new items (nonexamined items). Furthermore, the authors examined the type of memory representations used to bias search by changing an item's individuating feature or location during search. Changing the individuating feature of an item did not disrupt normal search biases. However, when the location of an item changed, normal search biases were disrupted. These results suggest that memory used in visual search is based on items' locations rather than their identity. |
Andreas Mojzisch; Leonhard Schilbach; Jens R. Helmert; Sebastian Pannasch; Boris M. Velichkovsky; Kai Vogeley In: Social Neuroscience, vol. 1, no. 3-4, pp. 184–195, 2006. @article{Mojzisch2006, Social neuroscience has shed light on the underpinnings of understanding other minds. The current study investigated the effect of self-involvement during social interaction on attention, arousal, and facial expression. Specifically, we sought to disentangle the effect of being personally addressed from the effect of decoding the meaning of another person's facial expression. To this end, eye movements, pupil size, and facial electromyographic (EMG) activity were recorded while participants observed virtual characters gazing at them or looking at someone else. In dynamic animations, the virtual characters then displayed either socially relevant facial expressions (similar to those used in everyday life situations to establish interpersonal contact) or arbitrary facial movements. The results show that attention allocation, as assessed by eye-tracking measurements, was specifically related to self-involvement regardless of the social meaning being conveyed. Arousal, as measured by pupil size, was primarily related to perceiving the virtual character's gender. In contrast, facial EMG activity was determined by the perception of socially relevant facial expressions irrespective of whom these were directed towards. |
Sharon Morein-Zamir; Alan Kingstone Fixation offset and stop signal intensity effects on saccadic countermanding: A crossmodal investigation Journal Article In: Experimental Brain Research, vol. 175, no. 3, pp. 453–462, 2006. @article{MoreinZamir2006, Two experiments utilized the stop signal paradigm to examine whether fixation offset and stop signal intensity influenced saccadic inhibition. There was a robust fixation offset effect on saccadic latencies. However, contrary to expectations, fixation offset did not influence saccadic inhibition latencies. Importantly, saccadic inhibition latencies were found to be influenced by stop signal salience, with a more intense signal leading to faster stopping. This pattern of results was observed whether the stop signal was presented in the visual or auditory modality. The results provide new insights into the mechanisms of inhibition and help resolve previous inconsistencies in the literature. |
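Stop-signal studies like the one above typically summarize inhibition latency as the stop-signal reaction time (SSRT). A common textbook estimate is the integration method: find the go-RT quantile equal to the probability of responding despite the signal, then subtract the stop-signal delay. A generic sketch of that method (not the authors' analysis code; names are illustrative):

```python
def ssrt_integration(go_rts, stop_signal_delay, p_respond):
    """Integration-method SSRT estimate.
    go_rts: reaction times (ms) on go trials;
    stop_signal_delay: delay (ms) between go stimulus and stop signal;
    p_respond: observed P(response | stop signal).
    The go-RT at the p_respond quantile marks the finishing time the
    stop process just failed to beat; SSRT is that time minus the delay."""
    rts = sorted(go_rts)
    idx = min(len(rts) - 1, int(p_respond * len(rts)))
    return rts[idx] - stop_signal_delay
```

A more salient stop signal, as in the experiments above, shows up in this framework as a shorter estimated SSRT at matched delays.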
Mark B. Neider; Gregory J. Zelinsky Scene context guides eye movements during visual search Journal Article In: Vision Research, vol. 46, pp. 614–621, 2006. @article{Neider2006, How does scene context guide search behavior to likely target locations? We had observers search for scene-constrained and scene-unconstrained targets, and found that scene-constrained targets were detected faster and with fewer eye movements. Observers also directed more initial saccades to target-consistent scene regions and devoted more time to searching these regions. However, final checking fixations on target-inconsistent regions were common in target-absent trials, suggesting that scene context does not strictly confine search to likely target locations. We interpret these data as evidence for a rapid top-down biasing of search behavior by scene context to the target-consistent regions of a scene. |
Mark B. Neider; Gregory J. Zelinsky Searching for camouflaged targets: Effects of target-background similarity on visual search Journal Article In: Vision Research, vol. 46, no. 14, pp. 2217–2235, 2006. @article{Neider2006a, Do observers search for camouflaged targets by looking through the distractors or by scrutinizing the target-similar background? In four experiments observers searched for toy targets among distractors under varying set size and target-background similarity (TBS) conditions. Manual errors and RTs increased with TBS, although search slopes did not significantly differ. Eye movement analyses revealed that the majority of fixations fell on discrete distractors rather than on the target-similar background, even under high TBS conditions. These data suggest a biased search process; salient patterns segmented from a background are preferred while more target-similar unsegmented regions of the background are relatively neglected. |
Yu-Qiong Niu; Qian Xiao; Rui-Feng Liu; Le-Qing Wu; Shu-Rong Wang Response characteristics of the pigeon's pretectal neurons to illusory contours and motion Journal Article In: Journal of Physiology, vol. 577, pp. 805–813, 2006. @article{Niu2006, Misinterpretations of visual information received by the retina are called visual illusions, which are known to occur in higher brain areas. However, whether they are also processed in lower brain structures remains unknown, and how to explain the neuronal mechanisms underlying the motion after-effect is intensely debated. We show by extracellular recording that all motion-sensitive neurons in the pigeon's pretectum respond similarly to real and illusory contours, and their preferred directions are identical for both contours in unidirectional cells, whereas these directions are changed by 90 deg for real versus illusory contours in bidirectional cells. On the other hand, some pretectal neurons produce inhibitory (excitatory) after-responses to cessation of prolonged motion in the preferred (null) directions, whose time course is similar to that of the motion after-effect reported by humans. Because excitatory and inhibitory receptive fields of a pretectal cell overlap in visual space and possess opposite directionalities, after-responses to cessation of prolonged motion in one direction may create illusory motion in the opposite direction. It appears that illusory contours and motion could be detected at the earliest stage of central information processing and processed in bottom-up streams, and that the motion after-effect may result from functional interactions of excitatory and inhibitory receptive fields with opposite directionalities. |
Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo Eye movement assessment of selective attentional capture by emotional pictures Journal Article In: Emotion, vol. 6, no. 2, pp. 257–268, 2006. @article{Nummenmaa2006a, The eye-tracking method was used to assess attentional orienting to and engagement on emotional visual scenes. In Experiment 1, unpleasant, neutral, or pleasant target pictures were presented simultaneously with neutral control pictures in peripheral vision under instruction to compare pleasantness of the pictures. The probability of first fixating an emotional picture, and the frequency of subsequent fixations, were greater than those for neutral pictures. In Experiment 2, participants were instructed to avoid looking at the emotional pictures, but these were still more likely to be fixated first and gazed longer during the first-pass viewing than neutral pictures. Low-level visual features cannot explain the results. It is concluded that overt visual attention is captured by both unpleasant and pleasant emotional content. |
Annika Åkerfelt; Hans Colonius; Adele Diederich Visual-tactile saccadic inhibition Journal Article In: Experimental Brain Research, vol. 169, no. 4, pp. 554–563, 2006. @article{Aakerfelt2006, In an eye movement countermanding paradigm it is demonstrated for the first time that a tactile stimulus can be an effective stop signal when human participants are to inhibit saccades to a visual target. Estimated stop signal processing times were 90-140 ms, comparable to results with auditory stop signals, but shorter than those commonly found for manual responses. Two of the three participants significantly slowed their reactions in expectation of the stop signal as revealed by a control experiment without stop signals. All participants produced slower responses in the shortest stop signal delay condition than predicted by the race model (Logan and Cowan 1984) along with hypometric saccades on stop failure trials, suggesting that the race model may need to be elaborated to include some component of interaction of stop and go signal processing. |
Richard Amlôt; Robin Walker Are somatosensory saccades voluntary or reflexive? Journal Article In: Experimental Brain Research, vol. 168, no. 4, pp. 557–565, 2006. @article{Amlot2006, The present study examines whether the distinction between voluntary (endogenous) and reflexive (stimulus-elicited) saccades made in the visual modality can be applied to the somatosensory modality. The behavioural characteristics of putative reflexive pro-saccades and voluntary anti-saccades made to visual and somatosensory stimuli were examined. Both visual and somatosensory pro-saccades had much shorter latency than voluntary anti-saccades made in the direction opposite to a peripheral stimulus. Furthermore, erroneous pro-saccades were made towards both visual and somatosensory stimuli on approximately 11-13% of anti-saccade trials. The observed difference in pro- and anti-saccade latency and the presence of pro-saccade errors in the anti-saccade task indicates that a somatosensory stimulus can elicit a form of reflexive saccade comparable to pro-saccades made in the visual modality. It is proposed that a peripheral somatosensory stimulus can elicit a form of reflexive saccade and that somatosensory saccades do not depend exclusively on higher level endogenous control processes for their generation. However, a comparison of the underlying latency distributions and of peak-velocity profiles of saccades made to visual and somatosensory stimuli showed that this distinction may be less clearly defined for the somatosensory modality and that modality-specific differences (such as differences in neural conduction rates) in the underlying oculomotor structures involved in saccade target selection also need to be considered. It is further suggested that a broader conceptualisation of saccades and saccade programming beyond the simple voluntary and reflexive dichotomy, that takes into account the control processes involved in saccade generation for both modalities, may be required. |
Daniel Baldauf; Martin Wolf; Heiner Deubel Deployment of visual attention before sequences of goal-directed hand movements Journal Article In: Vision Research, vol. 46, no. 26, pp. 4355–4374, 2006. @article{Baldauf2006, We examined the allocation of attention during the preparation of sequences of manual pointing movements in a dual task paradigm. As the primary task, the participants had to perform a sequence of two or three reaching movements to targets arranged on a clock face. The secondary task was a 2AFC discrimination task in which a discrimination target (digital 'E' or '3') was presented among distractors either at one of the movement goals or at any other position. The data show that discrimination performance is superior at the location of all movement targets while it is close to chance at the positions that were not relevant for the movement. Moreover, our findings demonstrate that all movement-relevant locations are selected in parallel rather than serially in time, and that selection involves spatially distinct, non-contiguous foci of visual attention. We conclude that during movement preparation-well before the actual execution of the hand movement-attention is allocated in parallel to each of the individual movement targets. |
Christian N. L. Olivers; Frank Meijer; Jan Theeuwes Feature-based memory-driven attentional capture: Visual working memory content affects visual attention Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 32, no. 5, pp. 1243–1265, 2006. @article{Olivers2006, In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. |
Nabil Ouerhani; Alexandre Bur; Heinz Hügli Linear vs. nonlinear feature combination for saliency computation: A comparison with human vision Journal Article In: Pattern Recognition, pp. 314–323, 2006. @article{Ouerhani2006, In the heart of the computer model of visual attention, an interest or saliency map is derived from an input image in a process that encompasses several data combination steps. While several combination strategies are possible and the choice of a method influences the final saliency substantially, there is a real need for a performance comparison for the purpose of model improvement. This paper presents contributing work in which model performances are measured by comparing saliency maps with human eye fixations. Four combination methods are compared in experiments involving the viewing of 40 images by 20 observers. Similarity is evaluated qualitatively by visual tests and quantitatively by use of a similarity score. With similarity scores lying 100% higher, non-linear combinations outperform linear methods. The comparison with human vision thus shows the superiority of non-linear over linear combination schemes and speaks for their preferred use in computer models. |
Björn N. S. Vlaskamp; Ignace T. C. Hooge Crowding degrades saccadic search performance Journal Article In: Vision Research, vol. 46, no. 3, pp. 417–425, 2006. @article{Vlaskamp2006, The identity of a target is more difficult to acquire when it is surrounded by distracters. The purpose of the present experiments was to investigate the implications of this crowding phenomenon for performance and eye movements in a real-life task such as search with eye movements. The participants searched for a target in a one-dimensional search strip. Above and below this search strip additional elements were added. In three conditions, the similarity of these mask elements to the search elements was varied. The spatial extent of crowding is known to increase with target-mask similarity [Nazir, T. A. (1992). Effects of lateral masking and spatial precueing on gap-resolution in central and peripheral vision. Vision Research, 32, 771-777, Kooi, F. L., Toet, A., Tripathy, S. P., & Levi, D. M. (1994). The effect of similarity and duration on spatial interaction in peripheral vision. Spatial Vision, 8(2), 255-279]. One condition did not contain masks. In a visibility experiment, we first validated this crowding manipulation. In the search experiment, we subsequently found that with increasing crowding search times were up to 76% longer. Eye movements were also affected. The number of fixations and fixation duration increased and saccade amplitude decreased with increasing crowding. We conclude that in order to understand eye movements in (everyday) tasks that require active exploration of the visual scene, crowding should be taken into account. |
Guy Wallis The temporal and spatial limits of compensation for fixational eye movements Journal Article In: Vision Research, vol. 46, no. 18, pp. 2848–2858, 2006. @article{Wallis2006, High-fidelity eye tracking is combined with a perceptual grouping task to provide insight into the likely mechanisms underlying the compensation of retinal image motion caused by movement of the eyes. The experiments describe the covert detection of minute temporal and spatial offsets incorporated into a test stimulus. Analysis of eye motion on individual trials indicates that the temporal offset sensitivity is actually due to motion of the eye inducing artificial spatial offsets in the briefly presented stimuli. The results have strong implications for two popular models of compensation for fixational eye movements, namely efference copy and image-based models. If an efference copy model is assumed, the results place constraints on the spatial accuracy and source of compensation. If an image-based model is assumed then limitations are placed on the integration time window over which motion estimates are calculated. |
Carol Walthew; Iain D. Gilchrist Target location probability effects in visual search: An effect of sequential dependencies Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 32, no. 5, pp. 1294–1301, 2006. @article{Walthew2006, Target location probability was manipulated in a visual search task. When the target was twice as likely to appear on 1 side of the display as the other, manual button-press response times were faster (Experiment 1A) and first saccades were more frequently directed (Experiment 1B) to the more probable locations. When the target appeared with equal probability at each location in this search task, performance benefited from repetition of target location in the preceding trials (Experiment 2). When the trial sequence was constrained so that target location did not repeat within a series of 4 trials, there was no longer an advantage for more probable locations (Experiment 3). The authors conclude that the search benefits for more probable locations resulted from short-term target location repetitions. |
Sébastien Tremblay; Jean Saint-Aubin; Annie Jalbert Rehearsal in serial memory for visual-spatial information: Evidence from eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 13, no. 3, pp. 452–457, 2006. @article{Tremblay2006, It is well established that rote rehearsal plays a key role in serial memory for lists of verbal items. Although a great deal of research has informed us about the nature of verbal rehearsal, much less attention has been devoted to rehearsal in serial memory for visual-spatial information. By using the dot task–a visual-spatial analogue of the classical verbal serial recall task–with delayed recall, performance and eyetracking data were recorded in order to establish whether visual-spatial rehearsal could be evidenced by eye movement. The use of eye movement as a form of rehearsal is detectable (Experiment 1), and it seems to contribute to serial memory performance over and above rehearsal based on shifts of spatial attention (Experiments 1 and 2). |
P. U. Tse; P. -J. Hsieh The infinite regress illusion reveals faulty integration of local and global motion signals Journal Article In: Vision Research, vol. 46, no. 22, pp. 3881–3885, 2006. @article{Tse2006, We report a new visual illusion, where a global shape appears to continually move away from fixation, even though it remains a fixed distance from fixation. The illusion occurs because local motion signals within the object indicate motion away from fixation, and are incorrectly attributed by the visual system to the motion trajectory of the global object. A simple weighted vector summation of global and local motion signals, while a reasonable first approximation, cannot fully account for our data. We show that the faster the local motion signal, the more it biases judgments of global motion direction. We propose that local and global motion signals are summed non-linearly for this stimulus because as local motion speed increases, moving luminance blobs are visible for less time, affording less time to inhibit inappropriate component motion signals. This effect reveals the degree to which the visual system can incorrectly combine local and global motion signals belonging to a single object. |
Geoffrey Underwood; Tom Foulsham Visual saliency and semantic incongruency influence eye movements when inspecting pictures Journal Article In: Quarterly Journal of Experimental Psychology, vol. 59, no. 11, pp. 1931–1949, 2006. @article{Underwood2006, Models of low-level saliency predict that when we first look at a photograph our first few eye movements should be made towards visually conspicuous objects. Two experiments investigated this prediction by recording eye fixations while viewers inspected pictures of room interiors that contained objects with known saliency characteristics. Highly salient objects did attract fixations earlier than less conspicuous objects, but only in a task requiring general encoding of the whole picture. When participants were required to detect the presence of a small target, then the visual saliency of nontarget objects did not influence fixations. These results support modifications of the model that take the cognitive override of saliency into account by allowing task demands to reduce the saliency weights of task-irrelevant objects. The pictures sometimes contained incongruent objects that were taken from other rooms. These objects were used to test the hypothesis that previous reports of the early fixation of congruent objects have not been consistent because the effect depends upon the visual conspicuity of the incongruent object. There was an effect of incongruency in both experiments, with earlier fixation of objects that violated the gist of the scene, but the effect was only apparent for inconspicuous objects, which argues against the hypothesis. |
Geoffrey Underwood; Tom Foulsham; Editha M. Loon; L. Humphreys; J. Bloyce Eye movements during scene inspection: A test of the saliency map hypothesis Journal Article In: European Journal of Cognitive Psychology, vol. 18, pp. 37–41, 2006. @article{Underwood2006a, What attracts attention when we inspect a scene? Two experiments recorded eye movements while viewers inspected pictures of natural office scenes in which two objects of interest were placed. One object had low contour density and uniform colouring (a piece of fruit), relative to another that was visually complex (for example, coffee mugs and commercial packages). In each picture the visually complex object had the highest visual saliency according to the Itti and Koch algorithm. Two experiments modified the task while the pictures were inspected, to determine whether visual saliency is invariably dominant in determining the pattern of fixations, or whether the purpose of inspection can provide a cognitive override that renders saliency secondary. In the first experiment viewers inspected the scene in preparation for a memory task, and the more complex objects were potent in attracting early fixations, in support of a saliency map model of scene inspection. In the second experiment viewers were set the task of detecting the presence of a low saliency target, and the effect of a high saliency distractor was negligible, supporting a model in which the saliency map can be built with cognitive influences that override low-level visual features. |
Loes C. J. Dam; Raymond Ee Retinal image shifts, but not eye movements per se, cause alternations in awareness during binocular rivalry Journal Article In: Journal of Vision, vol. 6, pp. 1172–1179, 2006. @article{Dam2006, Particularly promising studies on visual awareness exploit a generally used perceptual bistability phenomenon, "binocular rivalry"–in which the two eyes' images alternately dominate–because it can dissociate the visual input from the perceptual output. To successfully study awareness, it is crucial to know the extent to which eye movements alter the input. Although there is convincing evidence that perceptual alternations can occur without eye movements, the literature on their exact role is mixed. Moreover, recent work has demonstrated that eye movements, first, correlate positively with perceptual alternations in binocular rivalry, and second, often accompany covert attention shifts (that were previously thought to be purely mental). Here, we asked whether eye movements cause perceptual alternations, and if so, whether it is either the execution of the eye movement or the resulting retinal image change that causes the alternation. Subjects viewed repetitive line patterns, enabling a distinction of saccades that did produce foveal image changes from those that did not. Subjects reported binocular rivalry alternations. We found that, although a saccade is not essential to initiate percept changes, the foveal image change resulting from a (micro)saccade is a deciding factor for percept dominance. We conclude that the foveal image must change to have a saccade cause a change in awareness. This sheds new light on the interaction between spatial attention shifts and perceptual alternations. |
Loes C. J. Dam; Raymond Ee The role of saccades in exerting voluntary control in perceptual and binocular rivalry Journal Article In: Vision Research, vol. 46, pp. 787–799, 2006. @article{Dam2006a, We have investigated the role of saccades and fixation positions in two perceptual rivalry paradigms (slant rivalry and Necker cube) and in two binocular rivalry paradigms (grating and house–face rivalry), and we compared results obtained from two different voluntary control conditions (natural viewing and hold percept). We found that for binocular rivalry, rather than for perceptual rivalry, there is a marked positive temporal correlation between saccades and perceptual flips at about the moment of the flip. Across different voluntary control conditions the pattern of temporal correlation did not change (although the amount of correlation did frequently, but not always, change), indicating that subjects do not use different temporal eye movement schemes to exert voluntary control. Analysis of the fixation positions at about the moment of the flips indicates that the fixation position by itself does not determine the percept but that subjects prefer to fixate at different positions when asked to hold either of the different percepts. |
Stefan Van der Stigchel; Jan Theeuwes Our eyes deviate away from a location where a distractor is expected to appear Journal Article In: Experimental Brain Research, vol. 169, no. 3, pp. 338–349, 2006. @article{VanderStigchel2006, Previous research has shown that in order to make an accurate saccade to a target object, nearby distractor objects need to be inhibited. The extent to which saccade trajectories deviate away from a distractor is often considered to be an index of the strength of inhibition. The present study shows that the mere expectation that a distractor will appear at a specific location is enough to generate saccade deviations away from this location. This suggests that higher-order cognitive processes such as top-down expectancy interact with low-level structures involved in eye movement control. The results will be discussed in the light of current theories of target selection and possible neurophysiological correlates. |
Raymond Van Ee; A. J. Noest; J. W. Brascamp; Albert V. Berg In: Vision Research, vol. 46, no. 19, pp. 3129–3141, 2006. @article{VanEe2006, We studied distributions of perceptual rivalry reversals, as defined by the two fitted parameters of the Gamma distribution. We did so for a variety of bi-stable stimuli and voluntary control exertion tasks. Subjects' distributions differed from one another for a particular stimulus and control task in a systematic way that reflects a constraint on the describing parameters. We found a variety of two-parameter effects, the most important one being that distributions of subjects differ from one another in the same systematic way across different stimuli and control tasks (i.e., a fast switcher remains fast across all conditions in a parameter-specified way). The cardinal component of subject-dependent variation was not the conventionally used mean reversal rate, but a component that was oriented-for all stimuli and tasks-roughly perpendicular to the mean rate. For the Necker cube, we performed additional experiments employing specific variations in control exertion, suggesting that subjects have to a considerable extent independent control over the reversal rate of either of the two competing percepts. |
Stan Van Pelt; W. Pieter Medendorp Gaze-centered updating of remembered visual space during active whole-body translations Journal Article In: Journal of Neurophysiology, vol. 97, no. 2, pp. 1209–1220, 2006. @article{VanPelt2006, Various cortical and sub-cortical brain structures update the gaze-centered coordinates of remembered stimuli to maintain an accurate representation of visual space across eye rotations and to produce suitable motor plans. A major challenge for the computations by these structures is updating across eye translations. When the eyes translate, objects in front of and behind the eyes' fixation point shift in opposite directions on the retina due to motion parallax. It is not known if the brain uses gaze coordinates to compute parallax in the translational updating of remembered space or if it uses gaze-independent coordinates to maintain spatial constancy across translational motion. We tested this by having subjects view targets, flashed in darkness in front of or behind fixation, then translate their body sideways, and subsequently reach to the memorized target. Reach responses showed parallax-sensitive updating errors: errors increased with depth from fixation and reversed in lateral direction for targets presented at opposite depths from fixation. In a series of control experiments, we ruled out possible biasing factors such as the presence of a fixation light during the translation, the eyes accompanying the hand to the target, and the presence of visual feedback about hand position. Quantitative geometrical analysis confirmed that updating errors were better described by using gaze-centered than gaze-independent coordinates. We conclude that spatial updating for translational motion operates in gaze-centered coordinates. Neural network simulations are presented suggesting that the brain relies on ego-velocity signals and stereoscopic depth and direction information in spatial updating during self-motion. |
Wieske Zoest; Mieke Donk Saccadic target selection as a function of time Journal Article In: Spatial Vision, vol. 19, no. 1, pp. 61–76, 2006. @article{Zoest2006, Recent evidence indicates that stimulus-driven and goal-directed control of visual selection operate independently and in different time windows (van Zoest et al., 2004). The present study further investigates how eye movements are affected by stimulus-driven and goal-directed control. Observers were presented with search displays consisting of one target, multiple non-targets and one distractor element. The task of observers was to make a fast eye movement to a target immediately following the offset of a central fixation point, an event that either co-occurred with or soon followed the presentation of the search display. Distractor saliency and target-distractor similarity were independently manipulated. The results demonstrated that the effect of distractor saliency was transient and only present for the fastest eye movements, whereas the effect of target-distractor similarity was sustained and present in all but the fastest eye movements. The results support an independent timing account of visual selection. |
François Vigneau; André F. Caissie; Douglas A. Bors Eye-movement analysis demonstrates strategic influences on intelligence Journal Article In: Intelligence, vol. 34, no. 3, pp. 261–272, 2006. @article{Vigneau2006, Taking into account various models and findings pertaining to the nature of analogical reasoning, this study explored quantitative and qualitative individual differences in intelligence using latency and eye-movement data. Fifty-five university students were administered 14 selected items of the Raven's Advanced Progressive Matrices test. Results showed that individuals differed in terms of speed, but also in terms of differences in strategies. More specifically, higher and lower ability subjects differed in terms of their patterns of item and matrix inspections, and several strategic indices (proportional time on matrix, number of alternations between matrix and response choice, latency to first alternation, matrix time distribution) emerged in regression analyses as significant predictors of Raven performance. Given the high reliabilities associated with these strategic indices, it is argued that these results provide evidence against a strong basic-information-processing view and supports a multifaceted view of individual differences in intelligence that includes differences in strategies. |
Katsumi Watanabe; Kenji Yokoi Object-based anisotropies in the flash-lag effect Journal Article In: Psychological Science, vol. 17, no. 8, pp. 728–735, 2006. @article{Watanabe2006, The relative visual position of a briefly flashed stimulus is systematically modified in the presence of motion signals. We investigated the two-dimensional distortion of the positional representation of a flash relative to a moving stimulus. Analysis of the spatial pattern of mislocalization revealed that the perceived position of a flash was not uniformly displaced, but instead shifted toward a single point of convergence that followed the moving object from behind at a fixed distance. Although the absolute magnitude of mislocalization increased with motion speed, the convergence point remained unaffected. The motion modified the perceived position of a flash, but had little influence on the perceived shape of a spatially extended flash stimulus. These results demonstrate that motion anisotropically distorts positional representation after the shapes of objects are represented. Furthermore, the results imply that the flash-lag effect may be considered a special case of two-dimensional anisotropic distortion. |
Alexander H. Wertheim; Ignace T. C. Hooge; K. Krikke; A. Johnson How important is lateral masking in visual search? Journal Article In: Experimental Brain Research, vol. 170, no. 3, pp. 387–402, 2006. @article{Wertheim2006, Five experiments are presented, providing empirical support of the hypothesis that the sensory phenomenon of lateral masking may explain many well-known visual search phenomena that are commonly assumed to be governed by cognitive attentional mechanisms. Experiment I showed that when the same visual arrays are used in visual search and in lateral masking experiments, the factors (1) number of distractors, (2) distractor density, and (3) search type (conjunction vs disjunction) have the same effect on search times as they have on lateral masking scores. Experiment II showed that when the number of distractors and eccentricity is kept constant in a search task, the effect of reducing density (which reduces the lateral masking potential of distractors on the target) is to strongly reduce the disjunction-conjunction difference. In experiment III, the lateral masking potential of distractors on a target was measured with arrays that typically yield asymmetric search times in visual search studies (a Q among Os vs. an O among Qs). The lateral masking scores showed the same asymmetry. Experiment IV was a visual search study with such asymmetric search arrays in which the number of distractors and eccentricity was kept constant, while manipulating density. Reducing density (i.e., reducing lateral masking) produced a strong reduction of the asymmetry effect. Finally in experiment V, we showed that the data from experiment IV cannot be explained due to a difference between a fine and a coarse grain attentional mechanism. Taken together with eye movement data and error scores from experiment II and with similar findings from the literature, these results suggest that the sensory mechanism of lateral masking could well be a very important (if not the main) factor causing many of the well-known effects that are traditionally attributed to higher level cognitive or attentional mechanisms in visual search. |
Brian J. White; Dirk Kerzel; Karl R. Gegenfurtner Visually guided movements to color targets Journal Article In: Experimental Brain Research, vol. 175, no. 1, pp. 110–126, 2006. @article{White2006a, The pathways controlling motor behavior are believed to exhibit little selectivity for color, but there is growing evidence suggesting that color signals can be used to guide actions. We investigated this by having observers make a saccade or a rapid pointing movement to a small, peripherally flashed (100 ms) Gaussian target (SD=0.5°) defined exclusively by luminance (maximum contrast) or color (from cardinal DKL red–green or blue–yellow axes, at maximum saturation). We found no difference in saccadic or pointing accuracy for luminance or color targets. The same was true using shutter goggles during pointing (to minimize the use of external cues), and when the luminance contrast of color targets was varied by up to 10%. In terms of response times, both eye and hand latencies increased with target eccentricity for R–G targets only, in a manner consistent with the sensitivity of this channel across eccentricity. We found little difference in response latencies between luminance and color targets once matched in terms of cone contrast. While RTs were longer when coupled with a goal directed pointing movement (versus a simple reaction without pointing), the difference was the same for color or luminance targets, suggesting that the spatial coding for the movements was also the same. In a final experiment we compared the accuracy of pointing to color-naming performance in a 4AFC procedure. The psychometric functions relating pointing accuracy (% correct quadrant) to color-naming (% correct color-name) were identical. Taken together, the results show that human observers can efficiently use pure chromatic signals to guide actions. |
2005 |
Steven L. Franconeri; Daniel J. Simons The dynamic events that capture visual attention: A reply to Abrams and Christ (2005) Journal Article In: Perception and Psychophysics, vol. 67, no. 6, pp. 962–966, 2005. @article{Franconeri2005, We recently demonstrated that, contrary to previous findings, some types of irrelevant motion are capable of capturing our attention (Franconeri & Simons, 2003). Strikingly, whereas simulated looming (a dynamic increase in object size) captured attention, simulated receding (a decrease in object size) did not. Abrams and Christ (2003, 2005) have provided a different interpretation of this evidence, arguing that in each case attention was captured by the onset of motion rather than by motion per se. They argued that the only published finding inconsistent with their motion onset account is our evidence that simulated receding motion failed to capture attention. Abrams and Christ (2005) presented a receding object stereoscopically and found that it did capture attention, leading them to conclude that the motion onset account explains existing data more parsimoniously than our account does. Our reply has three parts. First, we argue that evidence of capture by receding motion is interesting but irrelevant to the debate over whether capture by motion requires a motion onset. Second, we show that the original empirical evidence in support of the motion onset claim (Abrams & Christ, 2003) put the motion-only condition at a critical disadvantage. We present a new experiment that demonstrates strong capture by motion in the absence of a motion onset, showing that motion onsets are not necessary for attention capture by dynamic events. Finally, we outline what is known about the set of dynamic events that capture attention. |
Angélica Pérez Fornos; Jörg Sommerhalder; Benjamin Rappaz; Avinoam B. Safran; Marco Pelizzone Simulation of artificial vision, III: Do the spatial or temporal characteristics of stimulus pixelization really matter? Journal Article In: Investigative Ophthalmology & Visual Science, vol. 46, no. 10, pp. 3906–3912, 2005. @article{Fornos2005, In preceding studies, simulations of artificial vision were used to determine the basic parameters for visual prostheses to restore useful reading abilities. These simulations were based on a simplified procedure to reduce stimuli information content by preprocessing images with a block-averaging algorithm (square pixelization). In the present study, how such a simplified algorithm affects reading performance was examined. |
Adam J. Galpin; Geoffrey Underwood Eye movements during search and detection in comparative visual search Journal Article In: Perception and Psychophysics, vol. 67, no. 8, pp. 1313–1331, 2005. @article{Galpin2005, Motivated by the fact that previous visual memory paradigms have imposed encoding and retrieval constraints, the present article presents two experiments that address how observers allocate eye movements in memory and comparison processes in the absence of constraints. A comparative visual search design (Pomplun, Sichelschmidt, et al., 2001) was utilized in which observers searched for a difference between two images presented side by side. Robust time course effects were obtained, whereby search was characterized by brief fixations and a high proportion of comparative saccades. Then, upon target detection, fixations were extended, more comparative saccades were elicited, and the search focus was narrowed. The saliency and presence of differences did not guide attention, and detection was contingent upon direct fixation of the targets. The results indicate that, when full control is given, observers adopt a strategy that cuts down on memory usage in favor of restricted encoding and active scanning. |
Wieske Zoest; Mieke Donk The effects of salience on saccadic target selection Journal Article In: Visual Cognition, vol. 12, no. 2, pp. 353–375, 2005. @article{Donk2005, Two experiments were conducted to investigate the effects of saliency on saccadic target selection as a function of time. Participants were required to make a speeded saccade towards a target defined by a unique orientation presented concurrently with multiple nontargets and one distractor. Target and distractor were equally salient within the orientation dimension but varied in saliency in the colour dimension. Within the colour dimension, the target presented could be more, equally, or less salient than the distractor. The results showed that saliency played a large role early during processing while no effects of saliency were found in later processing. Results are discussed in terms of models on visual selection. |
Aave Hannus; Frans W. Cornelissen; O. Lindemann; H. Bekkering Selection-for-action in visual search Journal Article In: Acta Psychologica, vol. 118, no. 1-2, pp. 171–191, 2005. @article{Hannus2005, Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we were interested to find out whether the effect is capacity demanding, therefore we manipulated the set-size of the display. The results indicated a clear cognitive processing capacity requirement, i.e. the magnitude of the effect decreased for a larger set size. Consequently, in Experiment 2, we investigated if the enhancement effect occurs only at the level of behaviorally relevant feature or at a level common to different features. Therefore we manipulated the discriminability of the behaviorally neutral feature (color). Again, results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. Particularly, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features rather than to enhance the processing of the relevant feature. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention. |
Stefan Hawelka; Heinz Wimmer Impaired visual processing of multi-element arrays is associated with increased number of eye movements in dyslexic reading Journal Article In: Vision Research, vol. 45, no. 7, pp. 855–863, 2005. @article{Hawelka2005, For assessing simultaneous visual processing in dyslexic and normal readers a multi-element processing task was used which required the report of a single digit of briefly presented multi-digit arrays. Dyslexic readers exhibited higher recognition thresholds on 4- and 6-digit, but not on 2-digit arrays. Individual recognition thresholds on the multi-digit arrays were associated with number of eye movements during reading. The dyslexic multi-element processing deficit was not accompanied by deficient coherent motion detection or deficient visual precedence detection and was independent from deficits in phonological awareness and rapid naming. However, only about half of the dyslexic readers exhibited a multi-element processing deficit. |
Iain D. Gilchrist; Benjamin W. Tatler; Roland J. Baddeley Visual correlates of fixation selection: Effects of scale and time Journal Article In: Vision Research, vol. 45, no. 5, pp. 643–659, 2005. @article{Gilchrist2005, What distinguishes the locations that we fixate from those that we do not? To answer this question we recorded eye movements while observers viewed natural scenes, and recorded image characteristics centred at the locations that observers fixated. To investigate potential differences in the visual characteristics of fixated versus non-fixated locations, these images were transformed to make intensity, contrast, colour, and edge content explicit. Signal detection and information theoretic techniques were then used to compare fixated regions to those that were not. The presence of contrast and edge information was more strongly discriminatory than luminance or chromaticity. Fixated locations tended to be more distinctive in the high spatial frequencies. Extremes of low frequency luminance information were avoided. With prolonged viewing, consistency in fixation locations between observers decreased. In contrast to [Parkhurst, D. J., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42 (1), 107-123] we found no change in the involvement of image features over time. We attribute this difference in our results to a systematic bias in their metric. We propose that saccade target selection involves an unchanging intermediate level representation of the scene but that the high-level interpretation of this representation changes over time. |
Diane C. Gooding; Heather B. Shea; Christie W. Matts Saccadic performance in questionnaire-identified schizotypes over time Journal Article In: Psychiatry Research, vol. 133, no. 2-3, pp. 173–186, 2005. @article{Gooding2005, In the present study, 121 young adults (mean age=19 years), hypothesized to be at varying levels of risk for psychosis on the basis of their psychometric profiles, were administered saccadic (antisaccade and refixation) tasks at two separate assessments. At Time 1, individuals posited to be at heightened risk for the later development of schizophrenia-spectrum disorders (i.e., those individuals with elevated Social Anhedonia Scale [SAS] scores) produced significantly more antisaccade task errors than the controls. Despite apparent improvement in antisaccade task performance from initial testing to the follow-up (mean test-retest interval=59 months) across all groups, the Social Anhedonia (SocAnh) group continued to produce significantly more errors than the control group. The antisaccade task performance of the control group showed good temporal stability (Pearson's r=0.70). |
Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde Novel visual illusions related to Vasarely's 'nested squares' show that corner salience varies with corner angle Journal Article In: Perception, vol. 34, no. 4, pp. 409–420, 2005. @article{Troncoso2005, Vasarely's 'nested-squares' illusion shows that 90 degrees corners can be more salient perceptually than straight edges. On the basis of this illusion we have developed a novel visual illusion, the 'Alternating Brightness Star', which shows that sharp corners are more salient than shallow corners (an effect we call 'corner angle salience variation') and that the same corner can be perceived as either bright or dark depending on the polarity of the angle (ie whether concave or convex: 'corner angle brightness reversal'). Here we quantify the perception of corner angle salience variation and corner angle brightness reversal effects in twelve naive human subjects, in a two-alternative forced-choice brightness discrimination task. The results show that sharp corners generate stronger percepts than shallow corners, and that corner gradients appear bright or dark depending on whether the corner is concave or convex. Basic computational models of center surround receptive fields predict the results to some degree, but not fully. |
Peter U. Tse Voluntary attention modulates the brightness of overlapping transparent surfaces Journal Article In: Vision Research, vol. 45, no. 9, pp. 1095–1098, 2005. @article{Tse2005, A new class of brightness illusions is introduced that cannot be entirely accounted for by bottom-up models of neuronal processing. In these new illusions, brightness can be modulated by the location of voluntary attention in the absence of eye movements. These effects may arise from top-down or mid-level mechanisms that determine how 3D surfaces and transparent layers are constructed, which in turn influence perceived brightness. Attention is not the only factor that influences perceived brightness in overlapping transparent surfaces. For example, grouping procedures may favor the minimal number of transparent layers necessary to account for the geometry of the stimulus, causing surfaces on a common layer to change brightness together. Attentional modulation of brightness places constraints on possible future models of filling-in, transparent surface formation, brightness perception, and attentional processing. |
Nicholas B. Turk-Browne; Jay Pratt Attending to eye movements and retinal eccentricity: Evidence for the activity distribution model of attention reconsidered Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 31, no. 5, pp. 1061–1066, 2005. @article{TurkBrowne2005, When testing between spotlight and activity distribution models of visual attention, D. LaBerge, R. L. Carlson, J. K. Williams, and B. G. Bunney (1997) used an experimental paradigm in which targets are embedded in 3 brief displays. This paradigm, however, may be confounded by retinal eccentricity effects and saccadic eye movements. When the retinal eccentricities of the targets are equated and eye position is monitored, the pattern of results reported by LaBerge et al., which supported the activity distribution model, is not found. This result underscores the importance of considering targets' eccentricity and people's inclination to make saccadic eye movements in certain types of visual cognition tasks. |
Geoffrey Underwood; David Crundall; Katherine Hodson Confirming statements about pictures of natural scenes: Evidence of the processing of gist from eye movements Journal Article In: Perception, vol. 34, no. 9, pp. 1069–1082, 2005. @article{Underwood2005, Combined displays of graphics and text, such as figure captions in newspapers and books, lead to distinctive inspection patterns, or scanpaths. Readers characteristically look very briefly at the picture, and then read the caption, and then look again at the picture. The initial inspection of the picture is the focus of interest in the present experiment, in which we attempted to modify the inspection by giving participants advance knowledge of the subject of a sentence (the cued object) that was to be verified or denied on the basis of whether it correctly described some aspect of the scene shown in the picture. Eye fixations were recorded while the viewers looked at the picture and the sentence in whatever sequence they chose. By allowing viewers to know the subject of the sentence in advance, we asked whether patterns of fixations on the sentence and on the second inspection of the picture would reflect prior knowledge of the focus of the sentence. Providing advance information did not influence eye movements while reading the sentence. It did, however, increase the number of fixations in the initial inspection of the picture, and it also reduced the number and duration of the fixations on the pictures overall. The results suggest that cueing participants to the object allowed increased coding in the initial inspection of the picture, though the benefit of such coding only becomes apparent when the picture is inspected for the second time. |
Pieter J. A. Unema; Sebastian Pannasch; Markus Joos; Boris M. Velichkovsky Time course of information processing during scene perception: The relationship between saccade amplitude and fixation duration Journal Article In: Visual Cognition, vol. 12, pp. 473–494, 2005. @article{Unema2005, The present study focuses on two aspects of the time course of visual information processing during the perception of natural scenes. The first aspect is the change of fixation duration and saccade amplitude during the first couple of seconds of the inspection period, as has been described by Buswell (1935), among others. This common effect suggests that the saccade amplitude and fixation duration are in some way controlled by the same mechanism. A simple exponential model containing two parameters can describe the phenomena quite satisfactorily. The parameters of the model show that saccade amplitude and fixation duration may be controlled by a common mechanism. The second aspect under scrutiny is the apparent lack of correlation between saccade amplitude and fixation duration (Viviani, 1990). The present study shows that a strong but nonlinear relationship between saccade amplitude and fixation duration does exist in picture viewing. A model, based on notions laid out by Findlay and Walker's (1999) model of saccade generation and on the idea of two modes of visual processing (Trevarthen, 1968), was developed to explain this relationship. The model both fits the data quite accurately and can explain a number of related phenomena. |
Björn N. S. Vlaskamp; Eelco A. B. Over; Ignace T. C. Hooge Saccadic search performance: The effect of element spacing Journal Article In: Experimental Brain Research, vol. 167, no. 2, pp. 246–259, 2005. @article{Vlaskamp2005, In a saccadic search task, we investigated whether spacing between elements affects search performance. Since it has been suggested in the literature that element spacing can affect the eye movement strategy in several ways, its effects on search time per element are hard to predict. In the first experiment, we varied the element spacing (3.4 degrees - 7.1 degrees distance between elements) and target-distracter similarity. As expected, search time per element increased with target-distracter similarity. Decreasing element spacing decreased the search time per element. However, this effect was surprisingly small in comparison to the effect of varying target-distracter similarity. In a second experiment, we elaborated on this finding and decreased element spacing even further (between 0.8 degrees and 3.2 degrees). Here, we did not find an effect on search time per element for element spacings from 3.2 degrees to spacings as small as 1.5 degrees. It was only at distances smaller than 1.5 degrees that search time per element increased with decreasing element spacing. In order to explain the remarkable finding that search time per element was not affected for such a wide range of element spacings, we propose that, irrespective of the spacing, crowding kept the number of elements processed per fixation more or less constant. |
Roman Wartburg; Nabil Ouerhani; Tobias Pflugshaupt; Thomas Nyffeler; Pascal Wurtz; Heinz Hügli; René M. Müri The influence of colour on oculomotor behaviour during image perception Journal Article In: Neuroreport, vol. 16, no. 14, pp. 1557–1560, 2005. @article{Wartburg2005, The aim of this study was to investigate how oculomotor behaviour depends on the availability of colour information in pictorial stimuli. Forty study participants viewed complex images in colour or grey-scale, while their eye movements were recorded. We found two major effects of colour. First, although colour increases the complexity of an image, fixations on colour images were shorter than on their grey-scale versions. This suggests that colour enhances discriminability and thus affects low-level perceptual processing. Second, colour decreases the similarity of spatial fixation patterns between participants. The role of colour on visual attention seems to be more important than previously assumed, in theoretical as well as methodological terms. |
Guy Wallis A spatial explanation for synchrony biases in perceptual grouping: Consequences for the temporal-binding hypothesis Journal Article In: Perception and Psychophysics, vol. 67, no. 2, pp. 345–353, 2005. @article{Wallis2005, If two images are shown in rapid sequential order, they are perceived as a single, fused image. Despite this, recent studies have revealed that fundamental perceptual processes are influenced by extremely brief temporal offsets in stimulus presentation. Some researchers have suggested that this is due to the action of a cortical temporal-binding mechanism, which would serve to keep multiple mental representations of one object distinct from those of other objects. There is now gathering evidence that these studies should be reassessed. This article describes evidence for sensitivity to fixational eye and head movements, which provides a purely spatial explanation for the earlier results. Taken in conjunction with other studies, the work serves to undermine the current body of behavioral evidence for a temporal-binding mechanism. |
Stefan Van der Stigchel; Jan Theeuwes The influence of attending to multiple locations on eye movements Journal Article In: Vision Research, vol. 45, no. 15, pp. 1921–1927, 2005. @article{VanderStigchel2005, The present paper reports results of a dual task study in which two locations were endogenously cued as possible target locations, while only one eye movement had to be executed. During the cue period, letters were briefly presented at the saccade goals and at no-saccade goals. Results show that performance was better for letters presented at any of the saccade goals than for letters presented at the no-saccade locations. Furthermore saccades deviated away from the non-saccaded target location, suggesting inhibition of the location to which the eyes should not go. The results indicate that the premotor theory also holds for conditions in which attention is allocated to multiple locations. |
E. Brenner; W. J. Meijer; Frans W. Cornelissen Judging relative positions across saccades Journal Article In: Vision Research, vol. 45, no. 12, pp. 1587–1602, 2005. @article{Brenner2005, When components of a shape are presented asynchronously during smooth pursuit, the retinal image determines the perceived shape, as if the parts belong to the moving object that the eyes are pursuing. Saccades normally shift our gaze between structures of interest, so there is no reason to expect anything to have moved with the eyes. We therefore decided to examine how people judge the separation between a target flashed before and another flashed after a saccade. Subjects tracked a jumping dot with their eyes. Targets were flashed at predetermined retinal positions, with a 67-242 ms interval between the flashes. After each trial subjects indicated where they had seen the targets. We selected the trials on which subjects made a complete saccade between the presentations of the two targets. For short inter-target intervals, subjects' judgements depended almost exclusively on the retinal separation, even when there were conspicuous visual references nearby. Even for the longest intervals, only part of the change in eye orientation was taken into consideration. These findings cannot simply be accounted for on the basis of the mislocalisation of individual targets or a compression of space near saccades. We conclude that the retinal separation determines the perceived separation between targets presented with a short interval between them, irrespective of any intervening eye movements. |
James R. Brockmole; David E. Irwin Eye movements and the integration of visual memory and visual perception Journal Article In: Perception and Psychophysics, vol. 67, no. 3, pp. 495–512, 2005. @article{Brockmole2005, Because visual perception has temporal extent, temporally discontinuous input must be linked in memory. Recent research has suggested that this may be accomplished by integrating the active contents of visual short-term memory (VSTM) with subsequently perceived information. In the present experiments, we explored the relationship between VSTM consolidation and maintenance and eye movements, in order to discover how attention selects the information that is to be integrated. Specifically, we addressed whether stimuli needed to be overtly attended in order to be included in the memory representation or whether covert attention was sufficient. Results demonstrated that in static displays in which the to-be-integrated information was presented in the same spatial location, VSTM consolidation proceeded independently of the eyes, since subjects made few eye movements. In dynamic displays, however, in which the to-be-integrated information was presented in different spatial locations, eye movements were directly related to task performance. We conclude that these differences are related to different encoding strategies. In the static display case, VSTM was maintained in the same spatial location as that in which it was generated. This could apparently be accomplished with covert deployments of attention. In the dynamic case, however, VSTM was generated in a location that did not overlap with one of the to-be-integrated percepts. In order to "move" the memory trace, overt shifts of attention were required. |
S. Butler; Iain D. Gilchrist; D. M. Burt; D. I. Perrett; E. Jones; Monika Harvey Are the perceptual biases found in chimeric face processing reflected in eye-movement patterns? Journal Article In: Neuropsychologia, vol. 43, no. 1, pp. 52–59, 2005. @article{Butler2005, Studies of patients with focal brain lesions and neuroimaging indicate that face processing is predominantly based on right hemisphere function. Additionally, experiments using chimeric faces, where the left and the right-hand side of the face are different, have shown that observers tend to bias their responses toward the information on the left. Here, we monitored eye-movements during a gender identification task using blended face images for both whole and chimeric (half female, half male) faces [Neuropsychologia 35 (1997) 685]. As expected, we found a left perceptual bias: subjects based their gender decision significantly more frequently on the left side of the chimeric faces. Analysis of the first saccade showed a significantly greater number of left fixations independent of perceptual bias presumably reflecting the tendency to first inspect the side of the face better suited to face analysis (left side of face/right hemisphere). On top of this though, there was a relationship between response and fixation pattern. On trials where participants showed a left perceptual bias they produced significantly more left saccades and fixated for longer on the left. In contrast, for trials where participants showed a right perceptual bias there was no reliable difference between the number, or total fixation duration, on the left or the right. These results demonstrate that on a trial-by-trial basis subtle differences in the extent of left or right side scanning are related to the perceptual response of the participant, although an overall initial fixation bias to the left occurs irrespective of response bias. |
Simone Oliveira; Sébastien Barthélémy Visual feedback reduces bimanual coupling of movement amplitudes, but not of directions Journal Article In: Experimental Brain Research, vol. 162, no. 1, pp. 78–88, 2005. @article{Oliveira2005, To what extent does visual feedback shape the coordination between our arms? As a first step towards answering this question, this study compares bimanual coupling in simultaneous bimanual reversal movements that control cursor movements on a vertical screen. While both cursors were visible in the control condition, visual feedback was prevented in the experimental condition by deleting one or both cursors from the screen. Absence of visual feedback for one or both arms significantly increased the reaction times of both arms and the movement amplitude of the occluded arm. Temporal coupling between the arms remained unchanged in all feedback conditions. The same was true for spatial coupling of movement directions. Amplitude coupling, however, was significantly affected by visual feedback. When no feedback for either arm was available, amplitude correlations were significantly higher than when feedback for one or both arms was present. This finding suggests that online visual feedback decreases bimanual amplitude coupling, presumably through independent movement corrections for the two arms. The difference between movement amplitudes and movement directions in their susceptibility to visual feedback supports the idea that they are subserved by different control mechanisms. Analysis of eye movements during task performance revealed no major differences between the different feedback conditions. The eye movements of all subjects followed a stereotypical pattern, with generally only one saccade after target onset, directed towards the average position of all possible targets, irrespective of feedback condition and target direction. |
Michel Desmurget; Robert S. Turner; Claude Prablanc; Gary S. Russo; Garret E. Alexander; Scott T. Grafton Updating target location at the end of an orienting saccade affects the characteristics of simple point-to-point movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 31, no. 6, pp. 1510–1536, 2005. @article{Desmurget2005, Six results are reported. (a) Reaching accuracy increases when visual capture of the target is allowed (e.g., target on vs. target off at saccade onset). (b) Whatever the visual condition, trajectories diverge only after peak acceleration, suggesting that accuracy is improved through feedback mechanisms. (c) Feedback corrections are smoothly implemented, causing the corrected and uncorrected velocity profiles to exhibit similar shapes. (d) Initial kinematics poorly predict final accuracy whatever the condition, indicating that target capture is not the only critical input for feedback control. (e) Hand and eye final variability are unrelated, suggesting that gaze direction is not a target signal for arm control. (f) Extent errors are corrected without modification of movement straightness; direction errors cause path curvature to increase. Together these data show that movements with straight paths and bell-shaped velocity profiles are not necessarily ballistic. |
Christopher A. Dickinson; Gregory J. Zelinsky Marking rejected distractors: A gaze-contingent technique for measuring memory during search Journal Article In: Psychonomic Bulletin & Review, vol. 12, no. 6, pp. 1120–1126, 2005. @article{Dickinson2005, There is a debate among search theorists as to whether search exploits a memory for rejected distractors. We addressed this question by monitoring eye movements and explicitly marking objects visited by gaze during search. If search is memoryless, markers might be used to reduce distractor reinspections and improve search efficiency, relative to a no-marking baseline. However, if search already uses distractor memory, there should be no differences between marking and no-marking conditions. In four experiments, with stimuli ranging from Os and Qs to realistic scenes, two consistent data patterns emerged: (1) Marking rejected distractors produced no systematic benefit for search efficiency, as measured by reinspections, reaction times, or errors, and (2) distractor reinspection rates were, overall, extremely low. These results suggest that search uses a memory for rejected distractors, at least in those many real-world search tasks in which gaze is free to move. |
Clare L. Blaukopf; Gregory J. DiGirolamo The automatic extraction and use of information from cues and go signals in an anti-saccade task. Journal Article In: Experimental Brain Research, vol. 167, no. 4, pp. 654–659, 2005. @article{Blaukopf2005, In a gap antisaccade task that exogenously cues the side that subjects should antisaccade to, subjects find it hard to look away from the suddenly appearing go signal. Surprisingly, subjects are unaware of the majority of the prosaccade errors they make, and these errors remain unrecognised even when corrected by a second saccade requiring twice the amplitude [Fischer B, Weber H (1992) in Exp Brain Res 89:415-424]. This paper presents an extended antisaccade task that investigates what information, if any, subjects extract from redundant cues and go signals. In Exp. 1, multiple saccade locations were introduced and the go signal specified the goal location. A redundant cue appeared, prior to the go signal, in the antisaccade goal location (valid) or in the alternative location on the same side (invalid). In Exp. 2, motivational value was assigned to the go signal. The use of multiple locations showed that subjects automatically extract irrelevant positional information from the cue, which affects the programming of subsequent correct and error saccades. When the cued location was also the goal location, antisaccade reaction times were significantly reduced. The results from Exp. 2 showed that subjects also extract information from the go signal. Errors made to a go signal associated with a higher monetary value were initiated significantly faster than those to a lower monetary value. This study has shown that the visual stimuli used in this antisaccade task do more than initiate orienting sets: Their properties can influence the programming of both accurate actions and errors. |
Walter R. Boot; Arthur F. Kramer; Matthew S. Peterson Oculomotor consequences of abrupt object onsets and offsets: Onsets dominate oculomotor capture Journal Article In: Perception and Psychophysics, vol. 67, no. 5, pp. 910–928, 2005. @article{Boot2005, Previous research has shown that the appearance of an object (onset) and the disappearance of an object (offset) have the ability to influence the allocation of covert attention. To determine whether both onsets and offsets have the ability to influence eye movements, a series of experiments was conducted in which participants had to make goal-directed eye movements to a color singleton target in the presence of an irrelevant onset/offset. In accord with previous research, onsets had the ability to capture the eyes. The offset of an object demonstrated little or no ability to interrupt goal-directed eye movements to the target. Two experiments in which the effects of onsets and offsets on covert attention were examined suggest that offsets do not capture the eyes, because they have a lesser ability to capture covert attention than do onsets. A number of other studies that have shown strong effects of offsets on attention have used offsets that were uncorrelated with target position (i.e., nonpredictive), whereas we used onsets and offsets that never served as targets (i.e., antipredictive). The present results are consistent with a new-object theory of attentional capture in which onsets receive attentional priority over other types of changes in the visual environment. |
Frans W. Cornelissen; Klaas J. Bruin; Aart C. Kooijman The influence of artificial scotomas on eye movements during visual search Journal Article In: Optometry and Vision Science, vol. 82, no. 1, pp. 27–35, 2005. @article{Cornelissen2005, PURPOSE: Fixation durations are normally adapted to the difficulty of the foveal analysis task. We examine to what extent artificial central and peripheral visual field defects interfere with this adaptation process. METHODS: Subjects performed a visual search task while their eye movements were registered. The latter were used to drive a real-time gaze-dependent display that was used to create artificial central and peripheral visual field defects. Recorded eye movements were used to determine saccadic amplitude, number of fixations, fixation durations, return saccades, and changes in saccade direction. RESULTS: For central defects, although fixation duration increased with the size of the absolute central scotoma, this increase was too small to keep recognition performance optimal, evident from an associated increase in the rate of return saccades. Providing a relatively small amount of visual information in the central scotoma did substantially reduce subjects' search times but not their fixation durations. Surprisingly, reducing the size of the tunnel also prolonged fixation duration for peripheral defects. This manipulation also decreased the rate of return saccades, suggesting that the fixations were prolonged beyond the duration required by the foveal task. CONCLUSIONS: Although we find that adaptation of fixation duration to task difficulty clearly occurs in the presence of artificial scotomas, we also find that such field defects may render the adaptation suboptimal for the task at hand. Thus, visual field defects may not only hinder vision by limiting what the subject sees of the environment but also by limiting the visual system's ability to program efficient eye movements. 
We speculate this is because of how visual field defects bias the balance between saccade generation and fixation stabilization. |
Florence Chan; Irene T. Armstrong; Giovanna Pari; Richard J. Riopelle; Douglas P. Munoz Deficits in saccadic eye-movement control in Parkinson's disease Journal Article In: Neuropsychologia, vol. 43, no. 5, pp. 784–796, 2005. @article{Chan2005, In contrast to their slowed limb movements, individuals with Parkinson's disease (PD) produce rapid automatic eye movements to sensory stimuli and show an impaired ability to generate voluntary eye movements in cognitive tasks. Eighteen PD patients and 18 matched control volunteers were instructed to look either toward (pro-saccade) or away from (anti-saccade) a peripheral stimulus as soon as it appeared (immediate, gap and overlap conditions) or after a variable delay; or, they made sequential saccades to remembered targets after a variable delay. We found that PD patients made more express saccades (correct saccades in the latency range of 90–140 ms) in the immediate prosaccade task, more direction errors (automatic pro-saccades) in the immediate anti-saccade task, and were less able to inhibit saccades during the delay period in all delay tasks. PD patients also made more directional and end-point errors in the memory-guided sequential task. Their inability to plan eye movements to remembered target locations suggests that PD patients have a deficit in spatial working memory which, along with their deficit in automatic saccade suppression, is consistent with a disorder of the prefrontal-basal ganglia circuit. Impairment of this pathway may release the automatic saccade system from top-down inhibition and produce deficits in volitional saccade control. Parallel findings across various motor, cognitive and oculomotor tasks suggest a common mechanism underlying a general deficit in automatic response suppression. |
W. Pieter Medendorp; Herbert C. Goltz; J. Douglas Crawford; Tutis Vilis Integration of target and effector information in human posterior parietal cortex for the planning of action Journal Article In: Journal of Neurophysiology, vol. 93, pp. 954–962, 2005. @article{Medendorp2005, Recently, using event-related functional MRI (fMRI), we located a bilateral region in the human posterior parietal cortex (retIPS) that topographically represents and updates targets for saccades and pointing movements in eye-centered coordinates. To generate movements, this spatial information must be integrated with the selected effector. We now tested whether the activation in retIPS is dependent on the hand selected. Using 4-T fMRI, we compared the activation produced by movements, using either eyes or the left or right hand, to targets presented either leftward or rightward of central fixation. The majority of the regions activated during saccades were also activated during pointing movements, including occipital, posterior parietal, and premotor cortex. The topographic retIPS region was activated more strongly for saccades than for pointing. The activation associated with pointing was significantly greater when pointing with the unseen hand to targets ipsilateral to the hand. For example, although there was activation in the left retIPS when pointing to targets on the right with the left hand, the activation was significantly greater when using the right hand. The mirror symmetric effect was observed in the right retIPS. Similar hand preferences were observed in a nearby anterior occipital region. This effector specificity is consistent with previous clinical and behavioral studies showing that each hand is more effective in directing movements to targets in ipsilateral visual space. We conclude that not only do these regions code target location, but they also appear to integrate target selection with effector selection. |
John Palmer; Alexander C. Huk; Michael N. Shadlen The effect of stimulus strength on the speed and accuracy of a perceptual decision Journal Article In: Journal of Vision, vol. 5, pp. 376–404, 2005. @article{Palmer2005, Both the speed and the accuracy of a perceptual judgment depend on the strength of the sensory stimulation. When stimulus strength is high, accuracy is high and response time is fast; when stimulus strength is low, accuracy is low and response time is slow. Although the psychometric function is well established as a tool for analyzing the relationship between accuracy and stimulus strength, the corresponding chronometric function for the relationship between response time and stimulus strength has not received as much consideration. In this article, we describe a theory of perceptual decision making based on a diffusion model. In it, a decision is based on the additive accumulation of sensory evidence over time to a bound. Combined with simple scaling assumptions, the proportional-rate and power-rate diffusion models predict simple analytic expressions for both the chronometric and psychometric functions. In a series of psychophysical experiments, we show that this theory accounts for response time and accuracy as a function of both stimulus strength and speed-accuracy instructions. In particular, the results demonstrate a close coupling between response time and accuracy. The theory is also shown to subsume the predictions of Piéron's Law, a power function dependence of response time on stimulus strength. The theory's analytic chronometric function allows one to extend theories of accuracy to response time. |
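The proportional-rate diffusion model summarized in this abstract yields closed-form psychometric and chronometric functions. The sketch below is a minimal illustration of that general model class (not a reimplementation of the paper's fits): with drift rate proportional to stimulus strength (drift = k·C, noise standard deviation normalized to 1) and decision bounds at ±A, the standard diffusion results give accuracy and mean response time analytically. The parameter values `k`, `A`, and the residual time `t_r` are hypothetical.

```python
import math

def pred_accuracy(C, k, A):
    # Standard diffusion result for bounds +/-A with drift k*C (sigma = 1):
    # probability of reaching the correct bound.
    return 1.0 / (1.0 + math.exp(-2.0 * k * C * A))

def pred_mean_rt(C, k, A, t_r):
    # Chronometric function: mean decision time plus residual
    # (non-decision) time t_r.
    return (A / (k * C)) * math.tanh(k * C * A) + t_r

# Stronger stimuli -> faster and more accurate; weaker -> slower, less accurate.
k, A, t_r = 10.0, 1.0, 0.3    # hypothetical parameter values
weak, strong = 0.05, 0.5      # stimulus strengths (e.g., contrast or coherence)
assert pred_accuracy(strong, k, A) > pred_accuracy(weak, k, A)
assert pred_mean_rt(strong, k, A, t_r) < pred_mean_rt(weak, k, A, t_r)
```

The coupling between speed and accuracy falls out of the shared bound and drift parameters: a single pair (k, A) constrains both functions at once.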
Timo Mäntylä; Linus Holm Remembering parts and wholes: Configural processing in face recollection Journal Article In: European Journal of Cognitive Psychology, vol. 17, no. 6, pp. 753–769, 2005. @article{Maentylae2005, In two experiments, we examined the role of configural processing on states of awareness in face recognition. Configural processing was manipulated by presenting upright and inverted faces during study and test. After studying facial photographs of humans (Experiment 1) and horses (Experiment 2), participants provided a remember/know judgement for each recognised test face. In both experiments, disrupted configural processing had selective effects on states of awareness, so that inversion reduced remember, but not know, responses. Experiment 2 revealed disproportionate inversion effects in that the difference in remember responses between upright and inverted items was significant for human faces but not horses. These findings suggest that configural processing facilitates explicit recollection by providing a distinctive combination of nonsalient event attributes. |
Jay Pratt; Leo Trottier Pro-saccades and anti-saccades to onset and offset targets Journal Article In: Vision Research, vol. 45, no. 6, pp. 765–774, 2005. @article{Pratt2005, Pro- and anti-saccades made to either onset or offset targets were examined to determine which of (1) changes in luminance or (2) the appearance of new peripheral objects, is more important in the reflexive generation of pro-saccades. In two experiments, pro-saccades had faster reaction times than did anti-saccades, but the difference was much greater for onset targets than offset targets (both with white targets on black backgrounds and black targets on white backgrounds). These findings suggest that there is a continuum of "prepotentness" in the oculomotor system with new peripheral objects being especially effective in generating reflexive pro-saccades. |
Casimir J. H. Ludwig; Iain D. Gilchrist; Eugene McSorley The remote distractor effect in saccade programming: Channel interactions and lateral inhibition Journal Article In: Vision Research, vol. 45, no. 9, pp. 1177–1190, 2005. @article{Ludwig2005, We explored the dependency of the saccadic remote distractor effect (RDE) on the spatial frequency content of target and distractor Gabor patches. A robust RDE was obtained with low-medium spatial frequency distractors, regardless of the spatial frequency of the target. High spatial frequency distractors interfered to a similar extent when the target was of the same spatial frequency. We developed a quantitative model based on lateral inhibition within an oculomotor decision unit. This lateral inhibition mechanism cannot account for the interaction observed between target and distractor spatial frequency, pointing to the existence of channel interactions at an earlier level. |
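The kind of lateral-inhibition mechanism described in this abstract can be caricatured in a few lines: two accumulators (one driven by the target, one by the distractor) mutually inhibit each other, so adding remote-distractor input delays the target unit's rise to threshold, producing an RDE-like latency cost. All values below (inputs, inhibition weight, threshold, step size) are hypothetical illustrations, not the paper's fitted model.

```python
def saccade_latency(target_input, distractor_input, beta=0.2,
                    dt=0.01, threshold=1.0):
    """Steps until the target accumulator reaches threshold under
    mutual (lateral) inhibition; None if it never does."""
    t_act = d_act = 0.0
    for step in range(1, 100001):
        # Each unit integrates its input minus inhibition from the other,
        # with activity clamped at zero.
        t_new = max(0.0, t_act + dt * (target_input - beta * d_act))
        d_new = max(0.0, d_act + dt * (distractor_input - beta * t_act))
        t_act, d_act = t_new, d_new
        if t_act >= threshold:
            return step
    return None

# A remote distractor slows the saccadic decision to the target.
assert saccade_latency(1.0, 0.8) > saccade_latency(1.0, 0.0)
```

The observed interaction with spatial frequency would then be modeled by letting the input strengths themselves depend on earlier channel interactions, which this single decision-stage sketch does not capture.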
Casimir J. H. Ludwig; Iain D. Gilchrist; Eugene McSorley; Roland J. Baddeley The temporal impulse response underlying saccadic decisions Journal Article In: Journal of Neuroscience, vol. 25, no. 43, pp. 9907–9912, 2005. @article{Ludwig2005a, Models of perceptual decision making often assume that sensory evidence is accumulated over time in favor of the various possible decisions, until the evidence in favor of one of them outweighs the evidence for the others. Saccadic eye movements are among the most frequent perceptual decisions that the human brain performs. We used stochastic visual stimuli to identify the temporal impulse response underlying saccadic eye movement decisions. Observers performed a contrast search task, with temporal variability in the visual signals. In experiment 1, we derived the temporal filter observers used to integrate the visual information. The integration window was restricted to the first approximately 100 ms after display onset. In experiment 2, we showed that observers cannot perform the task if there is no useful information to distinguish the target from the distractor within this time epoch. We conclude that (1) observers did not integrate sensory evidence up to a criterion level, (2) observers did not integrate visual information up to the start of the saccadic dead time, and (3) variability in saccade latency does not correspond to variability in the visual integration period. Instead, our results support a temporal filter model of saccadic decision making. The temporal impulse response identified by our methods corresponds well with estimates of integration times of V1 output neurons. |
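The temporal-filter account in this abstract can be sketched simply: evidence is the stimulus contrast weighted by an impulse response that vanishes after roughly 100 ms, so information arriving later contributes nothing to the decision variable. The filter shape here (a truncated exponential) and all constants are illustrative assumptions, not the estimates derived in the paper.

```python
import math

DT = 0.005        # 5 ms time step
WINDOW = 0.100    # hypothetical ~100 ms integration window from the abstract

def filter_weight(t):
    # Assumed truncated-exponential impulse response: decays over time
    # and is exactly zero at and beyond WINDOW.
    return math.exp(-t / 0.03) if 0.0 <= t < WINDOW else 0.0

def filtered_evidence(contrast):
    # contrast: contrast samples from display onset, one per DT.
    return sum(c * filter_weight(i * DT) * DT
               for i, c in enumerate(contrast))

# Contrast confined to the first 100 ms drives the decision variable;
# contrast arriving only after 100 ms does not.
early = [1.0] * 20 + [0.0] * 40   # signal present only in first 100 ms
late = [0.0] * 20 + [1.0] * 40    # signal present only after 100 ms
assert filtered_evidence(early) > filtered_evidence(late)
```

This is the sense in which a filter model differs from integration-to-bound: the window is fixed by the filter, not by when a criterion is crossed.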
Tobias Pflugshaupt; Urs P. Mosimann; Roman Von Wartburg; Wolfgang J. Schmitt; Thomas Nyffeler; René M. Müri Hypervigilance-avoidance pattern in spider phobia Journal Article In: Journal of Anxiety Disorders, vol. 19, no. 1, pp. 105–116, 2005. @article{Pflugshaupt2005, Cognitive-motivational theories of phobias propose that patients' behavior is characterized by a hypervigilance-avoidance pattern. This implies that phobics initially direct their attention towards fear-relevant stimuli, followed by avoidance that is thought to prevent objective evaluation and habituation. However, previous experiments with highly anxious individuals confirmed initial hypervigilance and yet failed to show subsequent avoidance. In the present study, we administered a visual task in spider phobics and controls, requiring participants to search for spiders. Analyzing eye movements during visual exploration allowed the examination of spatial as well as temporal aspects of phobic behavior. Confirming the hypervigilance-avoidance hypothesis as a whole, our results showed that, relative to controls, phobics detected spiders faster, fixated closer to spiders during the initial search phase and fixated further from spiders subsequently. |
Ervin Poljac; M. J. M. Lankheet; Albert V. Berg Perceptual compensation for eye torsion Journal Article In: Vision Research, vol. 45, no. 4, pp. 485–496, 2005. @article{Poljac2005, To correctly perceive visual directions relative to the head, one needs to compensate for the eye's orientation in the head. In this study we focus on compensation for the eye's torsion regarding objects that contain the line of sight and objects that do not pass through the fixation point. Subjects judged the location of flashed probe points relative to their binocular plane of regard, the mid-sagittal or the transverse plane of the head, while fixating straight ahead, right upward, or right downward at 30 cm distance, to evoke eye torsion according to Listing's law. In addition, we investigated the effects of head-tilt and monocular versus binocular viewing. Flashed probe points were correctly localized in the plane of regard irrespective of eccentric viewing, head-tilt, and monocular or binocular vision in nearly all subjects and conditions. Thus, eye torsion that varied by ±9° across these different conditions was in general compensated for. However, the position of probes relative to the mid-sagittal or the transverse plane, both true head-fixed planes, was misjudged. We conclude that judgment of the orientation of the plane of regard, a plane that contains the line of sight, is veridical, indicating accurate compensation for actual eye torsion. However, when judgment has to be made of a head-fixed plane that is offset with respect to the line of sight, eye torsion that accompanies that eye orientation appears not to be taken into account correctly. |
Ervin Poljac; Albert V. Berg Localization of the plane of regard in space Journal Article In: Experimental Brain Research, vol. 163, no. 4, pp. 457–467, 2005. @article{Poljac2005a, When we fixate an object in space, the rotation centers of the eyes, together with the object, define a plane of regard. People perceive the elevation of objects relative to this plane accurately, irrespective of eye or head orientation (Poljac et al. (2004) Vision Res, in press). Yet, to create a correct representation of objects in space, the orientation of the plane of regard in space is required. Subjects pointed along an eccentric vertical line on a touch screen to the location where their plane of regard intersected the touch screen positioned on their right. The distance of the vertical line to the subject's eyes varied from 10 to 40 cm. Subjects were sitting upright and fixating one of the nine randomly presented directions ranging from 20 degrees left and down to 20 degrees right and up relative to their straight ahead. The eccentricity of fixations relative to the pointing location varied by up to 40 degrees. Subjects underestimated the elevation of their plane of regard (on average by 3.69 cm |
Gillian A. O'Driscoll; Lana Dépatie; Anne Lise V. Holahan; Tal Savion-Lemieux; Ronald G. Barr; Claude Jolicoeur; Virginia I. Douglas Executive functions and methylphenidate response in subtypes of attention-deficit/hyperactivity disorder Journal Article In: Biological Psychiatry, vol. 57, no. 11, pp. 1452–1460, 2005. @article{ODriscoll2005, Background: Oculomotor tasks are a well-established means of studying executive functions and frontal-striatal functioning in both nonhuman primates and humans. Attention-deficit/hyperactivity disorder (ADHD) is thought to implicate frontal-striatal circuitry. We used oculomotor tests to investigate executive functions and methylphenidate response in two subtypes of ADHD. Methods: Subjects were boys, aged 11.5–14 years, with ADHD-combined (n = 10), ADHD-inattentive (n = 12), and control subjects (n = 10). Executive functions assessed were motor planning (tapped with predictive saccades), response inhibition (antisaccades), and task switching (saccades-antisaccades mixed). Results: The ADHD-combined boys were impaired relative to control subjects in motor planning (p < .003) and response inhibition (p < .007) but not in task switching (p > .92). They were also significantly impaired relative to ADHD-inattentive boys, making fewer predictive saccades (p < .03) and having more subjects with antisaccade performance in the impaired range (p < .04). Methylphenidate significantly improved motor planning and response inhibition in both subtypes. Conclusions: ADHD-combined but not ADHD-inattentive boys showed impairments on motor planning and response inhibition. These deficits might be mediated by brain structures implicated specifically in the hyperactive/impulsive symptoms. Methylphenidate improved oculomotor performance in both subtypes; thus, it was effective even when initial performance was not impaired. |
Jaap A. Beintema; Editha M. Loon; Albert V. Berg Manipulating saccadic decision-rate distributions in visual search Journal Article In: Journal of Vision, vol. 5, pp. 150–164, 2005. @article{Beintema2005, The Gaussian shape of reciprocal latency distributions typically found in single saccade tasks supports the idea of a race-to-threshold process underlying the decision when to saccade (R. H. Carpenter & M. L. Williams, 1995). However, second and later saccades in a visual search task revealed decision-rate (=reciprocal latency) distributions that were skewed Gamma-like (E. M. Van Loon, I. T. Hooge, & A. V. Van den Berg, 2002). Here we consider a related family of Beta-prime distributions that follows from strong competition with a signal to stop the sequence, and is described by two parameters: a fixate and saccade threshold. In three saccadic search experiments, we tried to manipulate the two thresholds independently, thereby expecting change in shape and mean of the reciprocal latency distribution. Interestingly, rate distributions for later saccades were significantly better fit by Beta-prime than by Gamma functions. Increases in the distribution's skew were found with higher display density, but only for second and later saccades. First saccade rate distributions were not altered by the expected target location or by visual information presented prior to the search, but making pre-search saccades did influence both thresholds. The mean rate remained a stereotyped function of ordinal position in the saccade sequence. Our results support strong competition between two decision signals underlying the timing of saccades. |
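As a numerical aside on the distribution family named in this abstract: a Beta-prime variate arises as the ratio of two independent gamma variates, which gives a simple way to simulate skewed decision-rate data and check that a Beta-prime density describes it better than a Gamma density. Everything below (shape parameters, sample size, the moment-matched Gamma comparison) is an illustrative assumption, not the paper's fitting procedure.

```python
import math
import random

random.seed(1)

# Beta-prime variates via a ratio of two independent gammas:
# X ~ Gamma(a), Y ~ Gamma(b)  =>  X / Y ~ BetaPrime(a, b).
a, b = 3.0, 5.0    # hypothetical shape parameters
rates = [random.gammavariate(a, 1.0) / random.gammavariate(b, 1.0)
         for _ in range(5000)]

def log_betaprime_pdf(x, a, b):
    # log f(x) = (a-1) log x - (a+b) log(1+x) - log B(a, b)
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1) * math.log(x) - (a + b) * math.log(1 + x) - log_beta

def log_gamma_pdf(x, k, theta):
    # log f(x) = (k-1) log x - x/theta - k log theta - log Gamma(k)
    return (k - 1) * math.log(x) - x / theta - k * math.log(theta) - math.lgamma(k)

# Moment-match the Gamma to the sample (mean k*theta, variance k*theta^2)
# so the comparison is not trivially unfair, then compare log-likelihoods.
m = sum(rates) / len(rates)
v = sum((r - m) ** 2 for r in rates) / len(rates)
k, theta = m * m / v, v / m
ll_bp = sum(log_betaprime_pdf(r, a, b) for r in rates)
ll_g = sum(log_gamma_pdf(r, k, theta) for r in rates)
assert ll_bp > ll_g   # Beta-prime data should favor the Beta-prime density
```

Note the comparison uses the true Beta-prime parameters against a moment-matched Gamma; a full fit, as in the paper, would estimate both by maximum likelihood.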
K. Moutoussis; G. Keliris; Z. Kourtzi; Nikos K. Logothetis A binocular rivalry study of motion perception in the human brain Journal Article In: Vision Research, vol. 45, no. 17, pp. 2231–2243, 2005. @article{Moutoussis2005, The relationship between brain activity and conscious visual experience is central to our understanding of the neural mechanisms underlying perception. Binocular rivalry, where monocular stimuli compete for perceptual dominance, has been previously used to dissociate the constant stimulus from the varying percept. We report here fMRI results from humans experiencing binocular rivalry under a dichoptic stimulation paradigm that consisted of two drifting random dot patterns with different motion coherence. Each pattern had also a different color, which both enhanced rivalry and was used for reporting which of the two patterns was visible at each time. As the perception of the subjects alternated between coherent motion and motion noise, we examined the effect that these alternations had on the strength of the MR signal throughout the brain. Our results demonstrate that motion perception is able to modulate the activity of several of the visual areas which are known to be involved in motion processing. More specifically, in addition to area V5 which showed the strongest modulation, a higher activity during the perception of motion than during the perception of noise was also clearly observed in areas V3A and LOC, and less so in area V3. In previous studies, these areas had been selectively activated by motion stimuli but whether their activity reflects motion perception or not remained unclear; here we show that they are involved in motion perception as well. 
The present findings therefore suggest a lack of a clear distinction between 'processing' versus 'perceptual' areas in the brain, but rather that the areas involved in the processing of a specific visual attribute are also part of the neuronal network that is collectively responsible for its perceptual representation. |
Sebastiaan F. W. Neggers; Marieke L. Schölvinck; Rob H. J. Lubbe; Albert Postma Quantifying the interactions between allo- and egocentric representations of space Journal Article In: Acta Psychologica, vol. 118, no. 1-2, pp. 25–45, 2005. @article{Neggers2005, Under many circumstances, humans do not judge the location of objects in space where they really are. For instance, when a background is added to a target object, the judged position of a target with respect to oneself (egocentric position) is shifted in the opposite direction as the placement of such a background with respect to the body midline. It is an ongoing debate whether such effects are due to a uni- or bi-directional interaction between allo- and egocentric spatial representations in the brain, or reflect a response strategy, known as the perceived midline shift. In this study, the effects of allocentric stimulus coordinates on perceived egocentric position were examined more precisely and in a quantitative manner. Furthermore, it was investigated whether the judged allocentric position (with respect to a background) is also influenced by the egocentric position in space of that object. Allo- and egocentric coordinates were varied independently. Also, the effect of background luminance on the observed interactions between spatial coordinates was determined. Since background luminance had an effect on the size of the interaction between allocentric stimulus coordinates and egocentric judgments, and no reverse interaction was found, it seems that the interaction between ego- and allocentric reference frames is most likely unidirectional, with the latter affecting the former. This interaction effect was described in a quantitative manner. |
Atsushi Noritake; Koji Kazai; Masahiko Terao; Akihiro Yagi A continuously lit stimulus is perceived to be shorter than a flickering stimulus during a saccade Journal Article In: Spatial Vision, vol. 18, no. 3, pp. 297–316, 2005. @article{Noritake2005, When subjects made a saccade across a single-flashed dot, a flickering dot or a continuous dot, they perceived a dot, an array (phantom array), or a line (phantom line), respectively. We asked subjects to localize both endpoints of the phantom array or line and calculated the perceived lengths. Based on the findings of Matsumiya and Uchikawa (2001), we predicted that the apparent length of the phantom line would be larger than that of the phantom array. In Experiment 1 of the current study, contrary to the prediction, the phantom line was found to be shorter than the phantom array. In Experiment 2, we investigated whether the function underlying the filled-unfilled space illusion (Lewis, 1912) instead of the function underlying the saccadic compression could explain the results. Subjects were asked to localize both endpoints of a line or an array while keeping their eyes fixated. Although the results of Experiment 2 showed that the perceived length of a line was shorter than that of an array, the function underlying the filled-unfilled illusion could not fully account for the results of Experiment 1. To explain the present results, we proposed a model for the localization process and discussed its validity. |
Thomas Nyffeler; Tobias Pflugshaupt; Helene Hofer; Uli Baas; Klemens Gutbrod; Roman Von Wartburg; Christian W. Hess; René M. Müri Oculomotor behaviour in simultanagnosia: A longitudinal case study Journal Article In: Neuropsychologia, vol. 43, no. 11, pp. 1591–1597, 2005. @article{Nyffeler2005, The aim of the present single case study was to investigate oculomotor recovery in a patient with simultanagnosia due to biparietal hypoxic lesions. Applying visual exploration as well as basic oculomotor tasks in three consecutive test sessions - i.e. 8 weeks, 14 weeks, and 37 weeks after brain damage had occurred - differential recovery was observed. While visual exploration remarkably improved, an impaired disengagement of attention persisted. The improvement of exploration behaviour is interpreted within an oculomotor network theory and implications for a deficit-specific recovery from simultanagnosia are discussed. |
Benjamin W. Tatler; Iain D. Gilchrist; Michael F. Land Visual memory for objects in natural scenes: From fixations to object files Journal Article In: Quarterly Journal of Experimental Psychology, vol. 58A, no. 5, pp. 931–960, 2005. @article{Tatler2005, Object descriptions are extracted and retained across saccades when observers view natural scenes. We investigated whether particular object properties are encoded and the stability of the resulting memories. We tested immediate recall of multiple types of information from real-world scenes and from computer-presented images of the same scenes. The relationship between fixations and properties of object memory was investigated. Position information was encoded and accumulated from multiple fixations. In contrast, identity and colour were encoded but did not require direct fixation and did not accumulate. In the current experiments, participants were unable to recall any information about shape or relative distances between objects. In addition, where information was encoded we found differential patterns of stability. Data from viewing real scenes and images were highly consistent, with stronger effects in the real-world conditions. Our findings imply that object files are not dependent upon the encoding of any particular object property and so are robust to dynamic visual environments. |
Jan Theeuwes; Christian N. L. Olivers; Christopher L. Chizk Remembering a location makes the eyes curve away Journal Article In: Psychological Science, vol. 16, no. 3, pp. 196–199, 2005. @article{Theeuwes2005, Working memory is a system that keeps limited information on-line for immediate access by cognitive processes. This type of active maintenance is important for everyday life activities. The present study shows that maintaining a location in spatial working memory affects the trajectories of saccadic eye movements toward visual targets, as the eyes deviate away from the remembered location. This finding provides direct evidence for a strong overlap between spatial working memory and the eye movement system. We argue that curvature is the result of the need to inhibit memory-based eye movement activity in the superior colliculus, in order to allow an accurate saccade to the visual target. Whereas previous research has shown that the eyes may deviate away from visually presented stimuli that need to be ignored, we show that the eyes also curve away from remembered stimuli. |
Scott D. Slotnick; Steven Yantis Common neural substrates for the control and effects of visual attention and perceptual bistability Journal Article In: Cognitive Brain Research, vol. 24, no. 1, pp. 97–108, 2005. @article{Slotnick2005, Behavioral studies have suggested that bistable figure perception is mediated by spatial attention. We tested this hypothesis using event-related functional MRI. During central fixation, two tilted squares containing coherently moving dots were presented in the left and right hemifields. In the attention condition, participants were occasionally cued to shift attention between the squares. In the perception condition, corresponding corners of the squares were connected by horizontal lines producing a perceptually bistable Necker cube figure. Observers reported which of the two faces appeared 'forward' in depth; cues elicited voluntary perceptual reversals. Attending to either square during the attention condition or perceiving either square as forward during the perception condition yielded increased activity in contralateral visual areas. Furthermore, voluntary shifts of attention and voluntary shifts in perceptual configuration were associated with common activity in the posterior parietal cortex, part of the frontoparietal attentional control network. These results support the hypothesis that voluntary shifts in perceptual bistability are mediated by spatial attention. |
Gerben Rotman; Eli Brenner; Jeroen B. J. Smeets Flashes are localised as if they were moving with the eyes Journal Article In: Vision Research, vol. 45, no. 3, pp. 355–364, 2005. @article{Rotman2005, Targets that are flashed during smooth pursuit are mislocalised in the direction of the pursuit. It has been suggested that a similar mislocalisation of moving targets could help to overcome processing delays when hitting moving objects. But are moving targets really mislocalised in the way that flashed ones are? To find out we asked people to indicate where targets that were visible for different periods of time had appeared. The targets appeared while the subjects' eyes were moving, and were either moving with the eyes or static. For flashed targets we found the usual systematic mislocalisation. For targets that moved with the eyes the mislocalisation was at least as large, irrespective of the presentation time. For static targets the mislocalisation decreased with increasing presentation time, so that by the time the presentations reached about 200 ms the targets were not mislocalised at all. A simple model that combines smooth retinal motion with information about the velocity of smooth pursuit could account for the measured tapping errors. These findings support the notion that the systematic mislocalisation of flashed targets is related to the way in which people intercept moving objects. |
Nicola Rycroft; Jennifer M. Rusted; Samuel B. Hutton Acute effects of nicotine on visual search tasks in young adult smokers Journal Article In: Psychopharmacology, vol. 181, no. 1, pp. 160–169, 2005. @article{Rycroft2005, Rationale: Nicotine is known to improve performance on tests involving sustained attention and recent research suggests that nicotine may also improve performance on tests involving the strategic allocation of attention and working memory. Objectives: We used measures of accuracy and response latency combined with eye-tracking techniques to examine the effects of nicotine on visual search tasks. Methods: In experiment 1 smokers and non-smokers performed pop-out and serial search tasks. In experiment 2, we used a within-subject design and a more demanding search task for multiple targets. In both studies, 2-h abstinent smokers were asked to smoke one of their own cigarettes between baseline and tests. Results: In experiment 1, pop-out search times were faster after nicotine, without a loss in accuracy. Similar effects were observed for serial searches, but these were significant only at a trend level. In experiment 2, nicotine facilitated a strategic change in eye movements resulting in a higher proportion of fixations on target letters. If the cigarette was smoked on the first trial (when the task was novel), nicotine additionally reduced the total number of fixations and refixations on all letters in the display. Conclusions: Nicotine improves visual search performance by speeding up search time and enabling a better focus of attention on task-relevant items. This appears to reflect more efficient inhibition of eye movements towards task-irrelevant stimuli, and better active maintenance of task goals. When the task is novel, and therefore more difficult, nicotine lessens the need to refixate previously seen letters, suggesting an improvement in working memory. |
Naoki Saijo; Ikuya Murakami; Shin'ya Nishida; Hiroaki Gomi Large-field visual motion directly induces an involuntary rapid manual following response Journal Article In: Journal of Neuroscience, vol. 25, no. 20, pp. 4941–4951, 2005. @article{Saijo2005, Recent neuroscience studies have been concerned with how aimed movements are generated on the basis of target localization. However, visual information from the surroundings as well as from the target can influence arm motor control, in a manner similar to known effects in postural and ocular motor control. Here, we show an ultra-fast manual motor response directly induced by a large-field visual motion. This rapid response aided reaction when the subject moved his hand in the direction of visual motion, suggesting assistive visually evoked manual control during postural movement. The latency of muscle activity generating this response was as short as that of the ocular following responses to the visual motion. Abrupt visual motion entrained arm movement without affecting perceptual target localization, and the degrees of motion coherence and speed of the visual stimulus modulated this arm response. This visuomotor behavior was still observed when the visual motion was confined to the "follow-through" phase of a hitting movement, in which no target existed. An analysis of the arm movements suggests that the hitting follow-through made by the subject is not a part of a reaching movement. Moreover, the arm response was systematically modulated by hand bias forces, suggesting that it results from a reflexive control mechanism. We therefore propose that its mechanism is radically distinct from motor control for aimed movements to a target. Rather, in an analogy with reflexive eye movement stabilizing a retinal image, we consider that this mechanism regulates arm movements in parallel with voluntary motor control. |
Bob Rehder; Aaron B. Hoffman Thirty-something categorization results explained: Selective attention, eyetracking, and models of category learning Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 31, no. 5, pp. 811–829, 2005. @article{Rehder2005, An eyetracking study testing D. L. Medin and M. M. Schaffer's (1978) 5-4 category structure was conducted. Over 30 studies have shown that the exemplar-based generalized context model (GCM) usually provides a better quantitative account of 5-4 learning data as compared with the prototype model. However, J. D. Smith and J. P. Minda (2000) argued that the GCM is a psychologically implausible account of 5-4 learning because it implies suboptimal attention weights. To test this claim, the authors recorded undergraduates' eye movements while the students learned the 5-4 category structure. Eye fixations matched the attention weights estimated by the GCM but not those of the prototype model. This result confirms that the GCM is a realistic model of the processes involved in learning the 5-4 structure and that learners do not always optimize attention, as commonly supposed. The conditions under which learners are likely to optimize attention during category learning are discussed. |
Bob Rehder; Aaron B. Hoffman Eyetracking and selective attention in category learning Journal Article In: Cognitive Psychology, vol. 51, no. 1, pp. 1–41, 2005. @article{Rehder2005a, An eyetracking version of the classic Shepard, Hovland, and Jenkins (1961) experiment was conducted. Forty years of research has assumed that category learning often involves learning to selectively attend to only those stimulus dimensions useful for classification. We confirmed that participants learned to allocate their attention optimally. We also found that learners tend to fixate all stimulus dimensions early in learning. This result obtained despite evidence that participants were also testing one-dimensional rules during this period. Finally, the restriction of eye movements to only relevant dimensions tended to occur only after errors were largely (or completely) eliminated. We interpret these findings as consistent with multiple-systems theories of learning which maximize information input in order to maximize the number of learning modules involved, and which focus solely on relevant information only after one module has solved the learning problem. |