EyeLink Cognition Publications
All EyeLink cognition and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognition or perception article, please email us!
2010 |
Elizabeth R. Schotter; Raymond W. Berry; Craig R. M. McKenzie; Keith Rayner Gaze bias: Selective encoding and liking effects Journal Article In: Visual Cognition, vol. 18, no. 8, pp. 1113–1132, 2010. @article{Schotter2010, People look longer at things that they choose than things they do not choose. How much of this tendency—the gaze bias effect—is due to a liking effect compared to the information encoding aspect of the decision-making process? Do these processes compete under certain conditions? We monitored eye movements during a visual decision-making task with four decision prompts: Like, dislike, older, and newer. The gaze bias effect was present during the first dwell in all conditions except the dislike condition, when the preference to look at the liked item and the goal to identify the disliked item compete. Colour content (whether a photograph was colour or black-and-white), not decision type, influenced the gaze bias effect in the older/newer decisions because colour is a relevant feature for such decisions. These interactions appear early in the eye movement record, indicating that gaze bias is influenced during information encoding. |
Michael L. Waterston; Christopher C. Pack Improved discrimination of visual stimuli following repetitive transcranial magnetic stimulation Journal Article In: PLoS ONE, vol. 5, no. 4, pp. e10354, 2010. @article{Waterston2010, Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. |
Marcus R. Watson; Allison A. Brennan; Alan Kingstone; James T. Enns Looking versus seeing: Strategies alter eye movements during visual search Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 4, pp. 543–549, 2010. @article{Watson2010, Visual search can be made more efficient by adopting a passive cognitive strategy (i.e., letting the target "pop" into mind) rather than by trying to actively guide attention. In the present study, we examined how this strategic benefit is linked to eye movements. Results show that participants using a passive strategy wait longer before beginning to move their eyes and make fewer saccades than do active participants. Moreover, the passive advantage stems from more efficient use of the information in a fixation, rather than from a wider attentional window. Individual difference analyses indicate that strategies also change the way eye movements are related to search success, with a rapid saccade rate predicting success among active participants, and fewer and larger amplitude saccades predicting success among passive participants. A change in mindset, therefore, alters how oculomotor behaviors are harnessed in the service of visual search. |
Matthew David Weaver; Joseph Phillips; Johan Lauwereyns Semantic influences from a brief peripheral cue depend on task set Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 7, pp. 1249–1255, 2010. @article{Weaver2010, Previous research has shown semantic influence from irrelevant peripheral cues on the spatial allocation of covert visual attention. The present study explored whether the task set determines the extent of such semantic influence. A spatial cueing paradigm with strict eye movement control was used, where cues were either first names (male or female) or emotionally charged words (positive or negative) followed by a face target. Participants discriminated either the gender (male or female) or the emotion (positive or negative) of the face. When there was high information overlap between cue and task set, responses were faster when the cue and target value were semantically congruent than when they were incongruent. It was concluded that the semantically related cues primed a task-influencing response independently of spatial attention allocation processes, showing that semantic influences from brief peripheral cues depend on the degree of information overlap between cue and task set. |
Felix A. Wichmann; Jan Drewes; Pedro Rosas; Karl R. Gegenfurtner Animal detection in natural scenes: Critical features revisited Journal Article In: Journal of Vision, vol. 10, no. 4, pp. 1–27, 2010. @article{Wichmann2010, S. J. Thorpe, D. Fize, and C. Marlot (1996) showed how rapidly observers can detect animals in images of natural scenes, but it is still unclear which image features support this rapid detection. A. B. Torralba and A. Oliva (2003) suggested that a simple image statistic based on the power spectrum allows the absence or presence of objects in natural scenes to be predicted. We tested whether human observers make use of power spectral differences between image categories when detecting animals in natural scenes. In Experiments 1 and 2 we found performance to be essentially independent of the power spectrum. Computational analysis revealed that the ease of classification correlates with the proposed spectral cue without being caused by it. This result is consistent with the hypothesis that in commercial stock photo databases a majority of animal images are pre-segmented from the background by the photographers and this pre-segmentation causes the power spectral differences between image categories and may, furthermore, help rapid animal detection. Data from a third experiment are consistent with this hypothesis. Together, our results make it exceedingly unlikely that human observers make use of power spectral differences between animal- and no-animal images during rapid animal detection. In addition, our results point to potential confounds in the commercially available “natural image” databases whose statistics may be less natural than commonly presumed. |
Carrick C. Williams Not all visual memories are created equal Journal Article In: Visual Cognition, vol. 18, no. 2, pp. 201–228, 2010. @article{Williams2010, Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour, category, or were unrelated while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by number of target object presentations, not number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory. |
Bartholomäus Wissmath; Daniel Stricker; David Weibel; Eva Siegenthaler; Fred W. Mast The illusion of being located in dynamic virtual environments Journal Article In: Journal of Eye Movement Research, vol. 3, no. 5, pp. 1–8, 2010. @article{Wissmath2010, Attention allocation towards the mediated environment is assumed to be a necessary precondition to feel localized in a virtual world. In presence research, however, the potential of eye movement research has not been fully exploited so far. In this study, participants (N=44) rode on a virtual roller coaster simulation. We compare participants scoring high versus low on presence. During the ride, the eye movements and subjective ex post presence judgments were assessed. We found high sensations of presence to be associated with fewer fixations and a tendency towards longer fixation durations. In contrast to the immersive tendency trait, eye movement parameters can predict presence. |
Noriko Yamagishi; Stephen J. Anderson; Mitsuo Kawato The observant mind: Self-awareness of attentional status Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 277, no. 1699, pp. 3421–3426, 2010. @article{Yamagishi2010, Visual perception is dependent not only on low-level sensory input but also on high-level cognitive factors such as attention. In this paper, we sought to determine whether attentional processes can be internally monitored for the purpose of enhancing behavioural performance. To do so, we developed a novel paradigm involving an orientation discrimination task in which observers had the freedom to delay target presentation, by any amount required, until they judged their attentional focus to be complete. Our results show that discrimination performance is significantly improved when individuals self-monitor their level of visual attention and respond only when they perceive it to be maximal. Although target delay times varied widely from trial to trial (range 860 ms–12.84 s), we show that their distribution is Gaussian when plotted on a reciprocal latency scale. We further show that the neural basis of the delay times for judging attentional status is well explained by a linear rise-to-threshold model. We conclude that attentional mechanisms can be self-monitored for the purpose of enhancing human decision-making processes, and that the neural basis of such processes can be understood in terms of a simple, yet broadly applicable, linear rise-to-threshold model. |
Goedele Van Belle; Peter De Graef; Karl Verfaillie; Thomas Busigny; Bruno Rossion Whole not hole: Expert face recognition requires holistic perception Journal Article In: Neuropsychologia, vol. 48, no. 9, pp. 2620–2629, 2010. @article{VanBelle2010, Face recognition is an important ability of the human brain, yet its underlying mechanisms are still poorly understood. Two opposite views have been proposed to account for human face recognition expertise: the ability to extract the most diagnostic local information, feature-by-feature (analytical view), or the ability to process all features at once over the whole face (holistic view). To help clarify this debate, we used an original gaze-contingent stimulus presentation method to compare normal observers and a brain-damaged patient specifically impaired at face recognition (prosopagnosia). When a single central facial feature was revealed at a time through a gaze-contingent window, normal observers' performance at an individual face matching task decreased to the patient's level. However, when only the central feature was masked, forcing normal observers to rely on the whole face but the fixated feature, their performance was almost unaffected. In contrast, the prosopagnosic patient's performance decreased dramatically in this latter condition. These results were independent of the absolute size of the face and window/mask. This dissociation indicates that expertise in face recognition does not rest on the ability to analyze diagnostic local detailed features sequentially but rather on the ability to see the individual features of a face all at once, a function that is critically impaired in acquired prosopagnosia. |
Goedele Van Belle; Peter De Graef; Karl Verfaillie; Bruno Rossion; Philippe Lefèvre Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–13, 2010. @article{VanBelle2010a, Human observers are experts at face recognition, yet a simple 180° rotation of a face photograph decreases recognition performance substantially. A full understanding of this phenomenon, which is believed to be important for clarifying the nature of our expertise in face recognition, is still lacking. According to a long-standing and influential hypothesis, an inverted face cannot be perceived as holistically as an upright face and has to be analyzed local feature by local feature. Here, we tested this holistic perception hypothesis of the face inversion effect by means of a gaze-contingent stimulus presentation. When observers' perception was restricted to one fixated feature at a time by a gaze-contingent window, performance in an individual face matching task was almost unaffected by inversion. However, when a mask covered the fixated feature, preventing the use of local information at high resolution, the decrement of performance with inversion was even larger than in a normal (full-view) condition. These observations provide evidence that the face inversion effect is caused by an inability to perceive the individual face as a whole rather than as a collection of specific features and thus support the view that observers' expertise at upright face recognition is due to the ability to perceive an individual face holistically. |
Goedele Van Belle; Philippe Lefèvre; Renaud Laguesse; Thomas Busigny; Peter De Graef; Karl Verfaillie; Bruno Rossion Feature-based processing of personally familiar faces in prosopagnosia: Evidence from eye-gaze contingency Journal Article In: Behavioural Neurology, vol. 23, no. 4, pp. 255–257, 2010. @article{VanBelle2010b, How familiar and unfamiliar faces are perceived remains largely unknown. Two views have dominated this field of research. On the one hand, recordings of eye fixations on faces and response classification experiments suggest that a face is processed in terms of its individual components, or facial features (mouth, eyes, nose, ...), a strategy called analytical processing. On the other hand, there is strong behavioral evidence for interdependence in the processing of different features of a face, rather supporting holistic processing of the face. According to the latter holistic view, facial features are simultaneously perceived and integrated into a single representation, so that the perceptual field is that of the whole face. To shed light on this issue, in two recent studies, we recorded eye movements in a neurological patient suffering from a selective impairment in face recognition (acquired prosopagnosia). Previously, we showed that (1) the patient PS fixates exactly on each of the main features of the face (mouth, left eye, right eye), contrary to normal observers who fixate mainly centrally on the top of the nose, around the geometric centre of the face. Moreover (2), an original gaze-contingent stimulus presentation method applied to an unfamiliar face discrimination task led us to demonstrate that, contrary to normal observers, PS' perceptual field appears to be limited to one central feature fixated at a time. These observations indicate that prosopagnosia prevents processing the multiple elements of a whole face simultaneously, and thus that this ability is a key aspect in human face recognition expertise. Here, we extend these observations by testing the same patient with eye-gaze contingency while she attempts to identify a large set of personally familiar individuals from their face. |
Goedele Van Belle; Meike Ramon; Philippe Lefèvre; Bruno Rossion Fixation patterns during recognition of personally familiar and unfamiliar faces Journal Article In: Frontiers in Psychology, vol. 1, pp. 20, 2010. @article{Belle2010, Previous studies recording eye gaze during face perception have rendered somewhat inconclusive findings with respect to fixation differences between familiar and unfamiliar faces. This can be attributed to a number of factors that differ across studies: the type and extent of familiarity with the faces presented, the definition of areas of interest subject to analyses, as well as a lack of consideration for the time course of scan patterns. Here we sought to address these issues by recording fixations in a recognition task with personally familiar and unfamiliar faces. After a first common fixation on a central superior location of the face in between features, suggesting initial holistic encoding, and a subsequent left eye bias, local features were focused and explored more for familiar than unfamiliar faces. Although the number of fixations did not differ for un-/familiar faces, the locations of fixations began to differ before familiarity decisions were provided. This suggests that in the context of familiarity decisions without time constraints, differences in processing familiar and unfamiliar faces arise relatively early - immediately upon initiation of the first fixation to identity-specific information - and that the local features of familiar faces are processed more than those of unfamiliar faces. |
Jeroen J. A. van Boxtel; Naotsugu Tsuchiya; Christof Koch Opposing effects of attention and consciousness on afterimages Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 19, pp. 8883–8888, 2010. @article{Boxtel2010, The brain's ability to handle sensory information is influenced by both selective attention and consciousness. There is no consensus on the exact relationship between these two processes and whether they are distinct. So far, no experiment has simultaneously manipulated both. We carried out a full factorial 2 x 2 study of the simultaneous influences of attention and consciousness (as assayed by visibility) on perception, correcting for possible concurrent changes in attention and consciousness. We investigated the duration of afterimages for all four combinations of high versus low attention and visible versus invisible. We show that selective attention and visual consciousness have opposite effects: paying attention to the grating decreases the duration of its afterimage, whereas consciously seeing the grating increases the afterimage duration. These findings provide clear evidence for distinctive influences of selective attention and consciousness on visual perception. |
Lise Van der Haegen; Denis Drieghe; Marc Brysbaert The split fovea theory and the Leicester critique: What do the data say? Journal Article In: Neuropsychologia, vol. 48, no. 1, pp. 96–106, 2010. @article{VanderHaegen2010, According to the Split Fovea Theory (SFT) recognition of foveally presented words involves interhemispheric transfer. This is because letters to the left of the fixation location are initially sent to the right hemisphere, whereas letters to the right of the fixation position are projected to the left hemisphere. Both sources of information must be integrated for words to be recognized. Evidence for the SFT comes from the Optimal Viewing Position (OVP) paradigm, in which foveal word recognition is examined as a function of the letter fixated. OVP curves are different for left and right language dominant participants, indicating a time cost when information is presented in the half-field ipsilateral to the dominant hemisphere (Hunter, Brysbaert, & Knecht, 2007). The methodology of the SFT research has recently been questioned, because not enough efforts were made to ensure adequate fixation. The aim of the present study is to test the validity of this argument. Experiment 1 replicated the OVP effect in a naming task by presenting words at different fixation positions, with the experimental settings applied in previous OVP research. Experiment 2 monitored and controlled eye fixations of the participants and presented the stimuli within the boundaries of the fovea. Exactly the same OVP curve was obtained. In Experiment 3, the eyes were also tracked and monocular viewing was used. Results again revealed the same OVP effect, although latencies were remarkably higher than in the previous experiments. From these results we can conclude that although noise is present in classical SFT studies without eye-tracking, this does not change the OVP effect observed with left dominant individuals. |
Melissa L. -H. Võ; Werner X. Schneider A glimpse is not a glimpse: Differential processing of flashed scene previews leads to differential target search benefits Journal Article In: Visual Cognition, vol. 18, no. 2, pp. 171–200, 2010. @article{Vo2010a, What information can we extract from an initial glimpse of a scene and how do people differ in the way they process visual information? In Experiment 1, participants searched 3-D-rendered images of naturalistic scenes for embedded target objects through a gaze-contingent window. A briefly flashed scene preview (identical, background, objects, or control) preceded each search scene. We found that search performance varied as a function of the participants' reported ability to distinguish between previews. Experiment 2 further investigated the source of individual differences using a whole-report task. Data were analysed following the “Theory of Visual Attention” approach, which allows the assessment of visual processing efficiency parameters. Results from both experiments indicate that during the first glimpse of a scene global processing of visual information predominates and that individual differences in initial scene processing and subsequent eye movement behaviour are based on individual differences in visual perceptual processing speed. |
Melissa L. -H. Võ; Jan Zwickel; Werner X. Schneider Has someone moved my plate? The immediate and persistent effects of object location changes on gaze allocation during natural scene viewing Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 5, pp. 1251–1255, 2010. @article{Vo2010b, In this study, we investigated the immediate and persisting effects of object location changes on gaze control during scene viewing. Participants repeatedly inspected a randomized set of naturalistic scenes for later questioning. On the seventh presentation, an object was shown at a new location, whereas the change was reversed for all subsequent presentations of the scene. We tested whether deviations from stored scene representations would modify eye movements to the changed regions and whether these effects would persist. We found that changed objects were looked at longer and more often, regardless of change reportability. These effects were most pronounced immediately after the change occurred and quickly leveled off once a scene remained unchanged. However, participants continued to perform short validation checks to changed scene regions, which implies a persistent modulation of eye movement control beyond the occurrence of object location changes. |
Nicholas J. Wade; Benjamin W. Tatler Recognition and eye movements with partially hidden pictures of faces and cars in different orientations Journal Article In: i-Perception, vol. 1, no. 2, pp. 103–120, 2010. @article{Wade2010, Inverted faces are more difficult to identify than upright ones. This even applies when pictures of faces are partially hidden in geometrical designs so that it takes some seconds to recognise them. Similar, though not as pronounced, orientation preferences apply to familiar objects. We compared the recognition times and patterns of eye movements for two sets of familiar symmetrical objects. Pictures of faces and of cars were embedded in patterns of concentric circles in order to render them difficult to recognise. They were presented in four orientations, at 90° intervals from upright. Two experiments were conducted with the same set of stimuli; experiment 1 required participants to respond in terms of faces or cars, and in experiment 2 responses were made to the orientation of the embedded image independently of its class. Upright faces were recognised more accurately and faster than those in other orientations; fixation durations were longer for upright faces even before recognition. These results applied to both experiments. Orientation effects for cars were not pronounced and distinctions between 90°, 180°, and 270° embedded images were not consistent; this was the case in both experiments. |
Joseph Tao-yi Wang; Michael L. Spezio; Colin F. Camerer Pinocchio's pupil: Using eyetracking and pupil dilation to understand truth telling and deception in games Journal Article In: American Economic Review, vol. 100, no. 3, pp. 984–1007, 2010. @article{Wang2010b, We report experiments on sender-receiver games with an incentive for senders to exaggerate. Subjects "overcommunicate" —messages are more informative of the true state than they should be, in equilibrium. Eyetracking shows that senders look at payoffs in a way that is consistent with a level-k model. A combination of sender messages and lookup patterns predicts the true state about twice as often as predicted by equilibrium. Using these measures to infer the state would enable receiver subjects to hypothetically earn 16–21 percent more than they actually do, an economic value of 60 percent of the maximum increment. |
Gregory J. Zelinsky; Andrei Todor The role of "rescue saccades" in tracking objects through occlusions Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 29–29, 2010. @article{Zelinsky2010, We hypothesize that our ability to track objects through occlusions is mediated by timely assistance from gaze in the form of "rescue saccades"-eye movements to tracked objects that are in danger of being lost due to impending occlusion. Observers tracked 2–4 target sharks (out of 9) for 20 s as they swam through a rendered 3D underwater scene. Targets were either allowed to enter into occlusions (occlusion trials) or not (no occlusion trials). Tracking accuracy with 2–3 targets was 92% regardless of target occlusion but dropped to 74% on occlusion trials with four targets (no occlusion trials remained accurate; 83%). This pattern was mirrored in the frequency of rescue saccades. Rescue saccades accompanied approximately 50% of the Track 2–3 target occlusions, but only 34% of the Track 4 occlusions. Their frequency also decreased with increasing distance between a target and the nearest other object, suggesting that it is the potential for target confusion that summons a rescue saccade, not occlusion itself. These findings provide evidence for a tracking system that monitors for events that might cause track loss (e.g., occlusions) and requests help from the oculomotor system to resolve these momentary crises. As the number of crises increases with the number of targets, some requests for help go unsatisfied, resulting in degraded tracking. |
Hang Zhang; Camille Morvan; Laurence T. Maloney Gambling in the visual periphery: A conjoint-measurement analysis of human ability to judge visual uncertainty Journal Article In: PLoS Computational Biology, vol. 6, no. 12, pp. e1001023, 2010. @article{Zhang2010a, Recent work in motor control demonstrates that humans take their own motor uncertainty into account, adjusting the timing and goals of movement so as to maximize expected gain. Visual sensitivity varies dramatically with retinal location and target, and models of optimal visual search typically assume that the visual system takes retinal inhomogeneity into account in planning eye movements. Such models can then use the entire retina rather than just the fovea to speed search. Using a simple decision task, we evaluated human ability to compensate for retinal inhomogeneity. We first measured observers' sensitivity for targets, varying contrast and eccentricity. Observers then repeatedly chose between targets differing in eccentricity and contrast, selecting the one they would prefer to attempt: e.g., a low contrast target at 2° versus a high contrast target at 10°. Observers knew they would later attempt some of their chosen targets and receive rewards for correct classifications. We evaluated performance in three ways. Equivalence: Do observers' judgments agree with their actual performance? Do they correctly trade off eccentricity and contrast and select the more discriminable target in each pair? Transitivity: Are observers' choices self-consistent? Dominance: Do observers understand that increased contrast improves performance? Decreased eccentricity? All observers exhibited patterned failures of equivalence, and seven out of eight observers failed transitivity. There were significant but small failures of dominance. All these failures together reduced their winnings by 10%–18%. |
Li Zhang; Wu Li Perceptual learning beyond retinotopic reference frame Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 36, pp. 15969–15974, 2010. @article{Zhang2010, Repetitive experience with the same visual stimulus and task can remarkably improve behavioral performance on the task. This well-known perceptual-learning phenomenon is usually specific to the trained retinal- or visual-field location, which is taken as an indication of plastic changes in retinotopic visual areas. In previous studies of perceptual learning, however, a change in stimulus location on the retina is accompanied by positional changes of the stimulus in nonretinotopic frames of reference, such as relative to the head and other objects. It is unclear, therefore, whether the putative location specificity is exclusively retinotopic or if it could also depend on nonretinotopic representation of the stimulus, which is particularly important for multisensory and sensorimotor integration as well as for maintenance of stable visual percepts. Here, by manipulating subjects' gaze direction to control spatial and retinal locations of stimuli independently, we found that, when the stimulated retinal regions were held constant, the improvement with training in motion-direction discrimination of two successively displayed stimuli was restricted to the relative spatial position of the stimuli but independent of their absolute locations in head- and world-centered frame. These findings indicate location specificity of perceptual learning beyond retinotopic frame of reference, suggesting a pliable spatiotopic mechanism that can be specifically shaped by experience for better spatiotemporal integration of the learned stimuli. |
Ting Zhang; Lu Qi Xiao; Stanley A. Klein; Dennis M. Levi; Cong Yu Decoupling location specificity from perceptual learning of orientation discrimination Journal Article In: Vision Research, vol. 50, no. 4, pp. 368–374, 2010. @article{Zhang2010b, Perceptual learning of orientation discrimination is reported to be precisely specific to the trained retinal location. This specificity is often taken as evidence for localizing the site of orientation learning to retinotopic cortical areas V1/V2. However, the extant physiological evidence for training-improved orientation tuning in V1/V2 neurons is controversial and weak. Here we demonstrate substantial transfer of orientation learning across retinal locations, either from the fovea to the periphery or amongst peripheral locations. Most importantly, we found that a brief pretest at a peripheral location before foveal training enabled complete transfer of learning, so that additional practice at that peripheral location resulted in no further improvement. These results indicate that location specificity in orientation learning depends on the particular training procedures, and is not necessarily a genuine property of orientation learning. We suggest that non-retinotopic high brain areas may be responsible for orientation learning, consistent with the extant neurophysiological data. |
Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor Perisaccadic stereo depth with zero retinal disparity Journal Article In: Current Biology, vol. 20, no. 13, pp. 1176–1181, 2010. @article{Zhang2010c, When an object is viewed binocularly, unequal perspective projections of the two eyes' half images (binocular disparity) provide a cue for the sensation of stereo depth. For almost 200 years, binocular disparity has remained synonymous with retinal disparity [1], which is computed by subtracting the distance of each half image from its respective fovea [2]. However, binocular disparity could also be coded in headcentric instead of retinal coordinates, by combining eye position and retinal image position in each eye and representing disparity as differences between visual directions of half images relative to the head [3]. Although these two disparity-coding schemes suggest very different neural mechanisms, both offer identical predictions for stereopsis in almost every viewing condition, making it difficult to empirically distinguish between them. We designed a novel stimulus that uses perisaccadic spatial distortion [4] to generate inconsistency between headcentric and retinal disparity. Foveal half images flashed asynchronously just before a horizontal saccade have zero retinal disparity, yet they produce a sensation of depth consistent with a nonzero headcentric disparity. Furthermore, this headcentric disparity can cancel and reverse the perceived depth stimulated with nonzero retinal disparity. This is the first demonstration that a coding scheme other than retinal disparity has a role in human stereopsis. |
Jan Zwickel; Melissa L. -H. Võ How the presence of persons biases eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 2, pp. 257–262, 2010. @article{Zwickel2010, We investigated modulation of gaze behavior of observers viewing complex scenes that included a person. To assess spontaneous orientation-following, and in contrast to earlier studies, we did not make the person salient via instruction or low-level saliency. Still, objects that were referred to by the orientation of the person were visited earlier, more often, and longer than when they were not referred to. Analysis of fixation sequences showed that the number of saccades to the cued and uncued objects differed only for saccades that started from the head region, but not for saccades starting from a control object or from a body region. We therefore argue that viewing a person leads to an increase in spontaneous following of the person's viewing direction even when the person plays no role in scene understanding and is not made prominent. |
Stefan Van der Stigchel; Mark Mills; Michael D. Dodd Shift and deviate: Saccades reveal that shifts of covert attention evoked by trained spatial stimuli are obligatory. Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 5, pp. 1244–1250, 2010. @article{VanderStigchel2010d, The premotor theory of attention predicts that motor movements, including manual movements and eye movements, are preceded by an obligatory shift of attention to the location of the planned response. We investigated whether the shifts of attention evoked by trained spatial cues (e.g., Dodd & Wilson, 2009) are obligatory by using an extreme prediction of the premotor theory: If individuals are trained to associate a color cue with a manual movement to the left or right, the shift of attention evoked by the color cue should also influence eye movements in an unrelated task. Participants were trained to associate an irrelevant color cue with left/right space via a training session in which directional responses were made. Experiment 1 showed that, posttraining, vertical saccades deviated in the direction of the trained response, despite the fact that the color cue was irrelevant. Experiment 2 showed that latencies of horizontal saccades were shorter when an eye movement had to be made in the direction of the trained response. These results demonstrate that the shifts of attention evoked by trained stimuli are obligatory, in addition to providing support for the premotor theory and for a connection between the attentional, motor, and oculomotor systems. |
Stefan Van der Stigchel; Tanja C. W. Nijboer The imbalance of oculomotor capture in unilateral visual neglect Journal Article In: Consciousness and Cognition, vol. 19, no. 1, pp. 186–197, 2010. @article{VanderStigchel2010b, Visual neglect has been associated with an imbalance in the level of activity in the saccadic system: activity in the contralesional field is suppressed, which makes target selection unlikely. We recorded eye movements of a patient with hemispatial neglect and a group of healthy participants during an oculomotor distractor paradigm. Results showed that the interfering effects of a distractor were very strong when presented in her ipsilesional visual field. However, when the distractor was presented in her contralesional field, there were no interfering effects when the target was presented in her ipsilesional field. These findings could not be explained by the presence of a visual field defect as revealed by the results of two hemianopic patients. Our results are in line with an imbalance in the level of activity in the saccadic system in visual neglect because visual elements presented in the contralesional field did not compete for saccadic selection. |
Editha M. Loon; Fadhel Khashawi; Geoffrey Underwood Visual strategies used for time-to-arrival judgments in driving Journal Article In: Perception, vol. 39, no. 9, pp. 1216–1229, 2010. @article{Loon2010, To investigate the sources of visual information that are involved in the anticipation of collisions we recorded eye movements while participants made relative timing judgments about approaching vehicles at a junction. The avoidance of collisions is a critical aspect in driving, particularly where cars enter a line of traffic from a side road, and the present study required judgments about animations in a virtual driving environment. In two experiments we investigated the effects of (i) the angle of approach of the vehicle and the type of path (straight or curved) of the observer, and (ii) the speed of both the observer and the approaching car. Relative timing judgments depend on the angle of approach of the other vehicle (judgments are more accurate for perpendicular than for obtuse angles). Eye-movement analysis shows that visual strategies in relative timing judgments are characterised by saccadic eye movements back and forth between the approaching car and the road ahead, particularly the side line which may serve as a spatial reference point. Results suggest that observers use the distance of the car from this reference point for their timing judgments. |
Signe Vangkilde; Thomas Habekost Finding Wally: Prism adaptation improves visual search in chronic neglect Journal Article In: Neuropsychologia, vol. 48, no. 7, pp. 1994–2004, 2010. @article{Vangkilde2010, Several studies have found that visuo-motor adaptation to rightward deviating prismatic goggles (prism adaptation) can alleviate symptoms of neglect after brain damage, but the long-term effect and clinical relevance of this rehabilitation approach have been questioned. In particular, the effect on visual search performance is controversial. In the present study 6 patients with chronic spatial neglect due to right-sided focal brain damage were given 20 sessions of prism adaptation over a period of two weeks. These patients, as well as a matched control group of neglect patients (n=5), were tested using a variety of effect measures with special emphasis on visual search at baseline, shortly after training, and five weeks later. A positive and very consistent long-term effect of prism adaptation was found across clinical tests of neglect, lateral bias of eye movements, and measures of everyday function, including subjective reports. The results show that prism adaptation can provide durable and clinically significant alleviation of neglect symptoms, even in the stable phase of recovery. |
Astrid Vermeiren; Baptist Liefooghe; André Vandierendonck Switch performance in peripherally and centrally triggered saccades Journal Article In: Experimental Brain Research, vol. 206, no. 3, pp. 243–248, 2010. @article{Vermeiren2010, A common hypothesis is that the switch cost measured when switching between prosaccades and antisaccades mainly reflects the inhibition of the saccadic system after the execution of an antisaccade, which requires the inhibition of a gaze response. The present study further tested this hypothesis by comparing switch performance between peripherally triggered saccades and centrally triggered saccades with the latter type of saccades not requiring inhibition of a gaze response. For peripherally triggered saccades, a switch cost was present for prosaccades but not for antisaccades. For centrally triggered saccades, a switch cost was present both for prosaccades and for antisaccades. The difference between both saccade tasks further supports the hypothesis that the switch performance observed for peripherally triggered saccades is related to the inhibition of a gaze response that is required when executing a peripherally triggered antisaccade and the persisting inhibition in the saccadic system this entails. Furthermore, the switch costs observed for centrally triggered saccades indicate that more general processes besides the persisting inhibition in the saccadic system, such as reconfiguration and interference control, also contribute to the switch performance in saccades. |
Michael Vesia; Steven L. Prime; Xiaogang Yan; Lauren E. Sergio; J. Douglas Crawford Specificity of human parietal saccade and reach regions during transcranial magnetic stimulation Journal Article In: Journal of Neuroscience, vol. 30, no. 39, pp. 13053–13065, 2010. @article{Vesia2010, Single-unit recordings in macaque monkeys have identified effector-specific regions in posterior parietal cortex (PPC), but functional neuroimaging in the human has yielded controversial results. Here we used on-line repetitive transcranial magnetic stimulation (rTMS) to determine saccade and reach specificity in human PPC. A short train of three TMS pulses (separated by an interval of 100 ms) was delivered to superior parieto-occipital cortex (SPOC), a region over the midposterior intraparietal sulcus (mIPS), and a site close to caudal IPS situated over the angular gyrus (AG) during a brief memory interval while subjects planned either a saccade or reach with the left or right hand. Behavioral measures then were compared to controls without rTMS. Stimulation of mIPS and AG produced similar patterns: increased end-point variability for reaches and decreased saccade accuracy for contralateral targets. In contrast, stimulation of SPOC deviated reach end points toward visual fixation and had no effect on saccades. Contralateral-limb specificity was highest for AG and lowest for SPOC. Visual feedback of the hand negated rTMS-induced disruptions of the reach plan for mIPS and AG, but not SPOC. These results suggest that human SPOC is specialized for encoding retinally peripheral reach goals, whereas more anterior-lateral regions (mIPS and AG) along the IPS possess overlapping maps for saccade and reach planning and are more closely involved in motor details (i.e., planning the reach vector for a specific hand). This work provides the first causal evidence for functional specificity of these parietal regions in healthy humans. |
Sébastien Miellet; Xinyue Zhou; Lingnan He; Helen Rodger; Roberto Caldara Investigating cultural diversity for extrafoveal information use in visual scenes Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–18, 2010. @article{Miellet2010, Culture shapes how people gather information from the visual world. We recently showed that Western observers focus on the eyes region during face recognition, whereas Eastern observers fixate predominantly the center of faces, suggesting a more effective use of extrafoveal information for Easterners compared to Westerners. However, the cultural variation in eye movements during scene perception is a highly debated topic. Additionally, the extent to which those perceptual differences across observers from different cultures rely on modulations of extrafoveal information use remains to be clarified. We used a gaze-contingent technique designed to dynamically mask central vision, the Blindspot, during a visual search task of animals in natural scenes. We parametrically controlled the Blindspots and target animal sizes (0°, 2°, 5°, or 8°). We processed eye-tracking data using an unbiased data-driven approach based on fixation maps and we introduced novel spatiotemporal analyses in order to finely characterize the dynamics of scene exploration. Both groups of observers, Eastern and Western, showed comparable animal identification performance, which decreased as a function of the Blindspot sizes. Importantly, dynamic analysis of the exploration pathways revealed identical oculomotor strategies for both groups of observers during animal search in scenes. Culture does not impact extrafoveal information use during the ecologically valid visual search of animals in natural scenes. |
Milica Milosavljevic; Jonathan Malmaud; Alexander Huth; Christof Koch; Antonio Rangel The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure Journal Article In: Judgment and Decision Making, vol. 5, no. 6, pp. 437–449, 2010. @article{Milosavljevic2010, An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy, in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process. |
Anish R. Mitra; Mathias Abegg; Jayalakshmi Viswanathan; Jason J. S. Barton Line bisection in simulated homonymous hemianopia Journal Article In: Neuropsychologia, vol. 48, no. 6, pp. 1742–1749, 2010. @article{Mitra2010, Hemianopic patients make a systematic error in line bisection, showing a contra-lesional bias towards their blind side, which is the opposite of that in hemineglect patients. This error has been attributed variously to the visual field defect, to long-term strategic adaptation, or to independent effects of damage to extrastriate cortex. To determine if hemianopic bisection error can occur without the latter two factors, we studied line bisection in healthy subjects with simulated homonymous hemianopia using a gaze-contingent display, with different line-lengths, and with or without markers at both ends of the lines. Simulated homonymous hemianopia did induce a contra-lesional bisection error and this was associated with increased fixations towards the blind field. This error was found with end-marked lines and was greater with very long lines. In a second experiment we showed that eccentric fixation alone produces a similar bisection error and eliminates the effect of line-end markers. We conclude that a homonymous hemianopic field defect alone is sufficient to induce both a contra-lesional line bisection error and previously described alterations in fixation distribution, and does not require long-term adaptation or extrastriate damage. |
Stéphanie M. Morand; Marie-Hélène Grosbras; Roberto Caldara; Monika Harvey Looking away from faces: Influence of high-level visual processes on saccade programming Journal Article In: Journal of Vision, vol. 10, no. 3, pp. 1–10, 2010. @article{Morand2010, Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors. |
Ava-Ann Allman; Chawki Benkelfat; France Durand; Igor Sibon; Alain Dagher; Marco Leyton; Glen B. Baker; Gillian A. O'Driscoll Effect of d-amphetamine on inhibition and motor planning as a function of baseline performance Journal Article In: Psychopharmacology, vol. 211, no. 4, pp. 423–433, 2010. @article{Allman2010, RATIONALE: Baseline performance has been reported to predict dopamine (DA) effects on working memory, following an inverted-U pattern. This pattern may hold true for other executive functions that are DA-sensitive. OBJECTIVES: The objective of this study is to investigate the effect of d-amphetamine, an indirect DA agonist, on two other putatively DA-sensitive executive functions, inhibition and motor planning, as a function of baseline performance. METHODS: Participants with no prior stimulant exposure participated in a double-blind crossover study of a single dose of 0.3 mg/kg, p.o., of d-amphetamine and placebo. Participants were divided into high and low groups, based on their performance on the antisaccade and predictive saccade tasks on the baseline day. Executive functions, mood states, heart rate and blood pressure were assessed before (T0) and after drug administration, at 1.5 (T1), 2.5 (T2) and 3.5 h (T3) post-drug. RESULTS: Antisaccade errors decreased with d-amphetamine irrespective of baseline performance (p = 0.025). For antisaccade latency, participants who generated short-latency antisaccades at baseline had longer latencies on d-amphetamine than placebo, while those with long-latency antisaccades at baseline had shorter latencies on d-amphetamine than placebo (drug x group |
Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard Overlapping functional anatomy for working memory and visual search. Journal Article In: Experimental Brain Research, vol. 200, no. 1, pp. 91–107, 2010. @article{Anderson2010, Recent behavioural findings using dual-task paradigms demonstrate the importance of both spatial and non-spatial working memory processes in inefficient visual search (Anderson et al. in Exp Psychol 55:301-312, 2008). Here, using functional magnetic resonance imaging (fMRI), we sought to determine whether brain areas recruited during visual search are also involved in working memory. Using visually matched spatial and non-spatial working memory tasks, we confirmed previous behavioural findings that show significant dual-task interference effects occur when inefficient visual search is performed concurrently with either working memory task. Furthermore, we find considerable overlap in the cortical network activated by inefficient search and both working memory tasks. Our findings suggest that the interference effects observed behaviourally may have arisen from competition for cortical processes subserved by these overlapping regions. Drawing on previous findings (Anderson et al. in Exp Brain Res 180:289-302, 2007), we propose that the most likely anatomical locus for these interference effects is the inferior and middle frontal cortex of the right hemisphere. These areas are associated with attentional selection from memory as well as manipulation of information in memory, and we propose that the visual search and working memory tasks used here compete for common processing resources underlying these mechanisms. |
A. J. Austin; Theodora Duka Mechanisms of attention for appetitive and aversive outcomes in Pavlovian conditioning Journal Article In: Behavioural Brain Research, vol. 213, no. 1, pp. 19–26, 2010. @article{Austin2010, Different mechanisms of attention controlling learning have been proposed in appetitive and aversive conditioning. The aim of the present study was to compare attention and learning in a Pavlovian conditioning paradigm using visual stimuli of varying predictive value of either monetary reward (appetitive conditioning; 10p or 50p) or a blast of white noise (aversive conditioning; 97 dB or 102 dB). Outcome values were matched across the two conditions with regard to their emotional significance. Sixty-four participants were allocated to one of the four conditions, matched for age and gender. All participants underwent a discriminative learning task using pairs of visual stimuli that signalled a 100%, 50%, or 0% probability of receiving an outcome. Learning was measured using a 9-point Likert scale of expectancy of the outcome, while attention was measured using an eye tracker. Arousal and emotional conditioning were also evaluated. Dwell time was greatest for the full predictor in the noise groups, while in the money groups attention was greatest for the partial predictor over the other two predictors. The progression of learning was the same for both groups. These findings suggest that in aversive conditioning attention is driven by the predictive salience of the stimulus, while in appetitive conditioning attention is error-driven, when the emotional value of the outcome is comparable. |
Jeremy B. Badler; Philippe Lefevre; Marcus Missal Causality attribution biases oculomotor responses Journal Article In: Journal of Neuroscience, vol. 30, no. 31, pp. 10517–10525, 2010. @article{Badler2010, When viewing one object move after being struck by another, humans perceive that the action of the first object "caused" the motion of the second, not that the two events occurred independently. Although established as a perceptual and linguistic concept, it is not yet known whether the notion of causality exists as a fundamental, preattentional "Gestalt" that can influence predictive motor processes. Therefore, eye movements of human observers were measured while viewing a display in which a launcher impacted a tool to trigger the motion of a second "reaction" target. The reaction target could move either in the direction predicted by transfer of momentum after the collision ("causal") or in a different direction ("noncausal"), with equal probability. Control trials were also performed with identical target motion, either with a 100 ms time delay between the collision and reactive motion, or without the interposed tool. Subjects made significantly more predictive movements (smooth pursuit and saccades) in the causal direction during standard trials, and smooth pursuit latencies were also shorter overall. These trends were reduced or absent in control trials. In addition, pursuit latencies in the noncausal direction were longer during standard trials than during control trials. The results show that causal context has a strong influence on predictive movements. |
Daniel H. Baker; Erich W. Graf Extrinsic factors in the perception of bistable motion stimuli Journal Article In: Vision Research, vol. 50, no. 13, pp. 1257–1265, 2010. @article{Baker2010, When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception; specifically contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because (i) more saccades were directionally congruent with the currently reported percept than expected by chance, and (ii) when observers were asked to make deliberate eye movements along one motion axis, this increased percept reports in that direction. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions, and changes to the retinal image caused by blinks and saccades. |
Sarah Bate; Catherine Haslam; Timothy L. Hodgson; Ashok Jansari; Nicola J. Gregory; Janice Kay Positive and negative emotion enhances the processing of famous faces in a semantic judgment task Journal Article In: Neuropsychology, vol. 24, no. 1, pp. 84–89, 2010. @article{Bate2010, Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D'Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences latter stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing. |
Oliver Baumann; Jason B. Mattingley Scaling of neural responses to visual and auditory motion in the human cerebellum Journal Article In: Journal of Neuroscience, vol. 30, no. 12, pp. 4489–4495, 2010. @article{Baumann2010, The human cerebellum contains approximately half of all the neurons within the cerebrum, yet most experimental work in human neuroscience over the last century has focused exclusively on the structure and functions of the forebrain. The cerebellum has an undisputed role in a range of motor functions (Thach et al., 1992), but its potential contributions to sensory and cognitive processes are widely debated (Stoodley and Schmahmann, 2009). Here we used functional magnetic resonance imaging to test the hypothesis that the human cerebellum is involved in the acquisition of auditory and visual sensory data. We monitored neural activity within the cerebellum while participants engaged in a task that required them to discriminate the direction of a visual or auditory motion signal in noise. We identified a distinct set of cerebellar regions that were differentially activated for visual stimuli (vermal lobule VI and right-hemispheric lobule X) and auditory stimuli (right-hemispheric lobules VIIIA and VIIIB and hemispheric lobule VI bilaterally). In addition, we identified a region in left crus I in which activity correlated significantly with increases in the perceptual demands of the task (i.e., with decreasing signal strength), for both auditory and visual stimuli. Our results support suggestions of a role for the cerebellum in the processing of auditory and visual motion and suggest that parts of cerebellar cortex are concerned with tracking movements of objects around the animal, rather than with controlling movements of the animal itself (Paulin, 1993). |
Paul M. Bays; V. Singh-Curry; N. Gorgoraptis; Jon Driver; Masud Husain Integration of goal- and stimulus-related visual signals revealed by damage to human parietal cortex Journal Article In: Journal of Neuroscience, vol. 30, no. 17, pp. 5968–5978, 2010. @article{Bays2010, Where we look is determined both by our current intentions and by the tendency of visually salient items to "catch our eye." After damage to parietal cortex, the normal process of directing attention is often profoundly impaired. Here, we tracked parietal patients' eye movements during visual search to separately map impairments in goal-directed orienting to targets versus stimulus-driven gaze shifts to salient but task-irrelevant probes. Deficits in these two distinct types of attentional selection are shown to be identical in both magnitude and spatial distribution, consistent with damage to a "priority map" that integrates goal- and stimulus-related signals to select visual targets. When goal-relevant and visually salient items compete for attention, the outcome depends on a biased competition in which the priority of contralesional targets is undervalued. On the basis of these findings, we further demonstrate that parietal patients' spatial bias (neglect) in goal-directed visual exploration can be corrected and even reversed by systematically manipulating the spatial distribution of stimulus salience in the visual array. |
Melissa R. Beck; Maura C. Lohrenz; J. Gregory Trafton Measuring search efficiency in complex visual search tasks: Global and local clutter Journal Article In: Journal of Experimental Psychology: Applied, vol. 16, no. 3, pp. 238–250, 2010. @article{Beck2010, Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts. |
Stefanie I. Becker Testing a postselectional account of across-dimension switch costs Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 6, pp. 853–861, 2010. @article{Becker2010, In visual search for a pop-out target, responses are faster when the target dimension from the previous trial is repeated than when it changes. Currently, it is unclear whether these across-dimension switch costs originate from processes that guide attention to the target or from later processes (e.g., target identification or response selection). The present study tested two critical predictions of a response-selection account of across-dimension switch costs: namely, (1) that switch costs should occur even when visual attention is guided by a completely different feature and (2) that changing the target dimension should affect the speed of responding, but not the speed of eye movements to the target. The results supported both predictions, indicating that changes of the target dimension do not affect early processes that guide attention to the target but, rather, affect later processes, which commence after the target has been selected. |
Stefanie I. Becker The role of target-distractor relationships in guiding attention and the eyes in visual search Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 247–265, 2010. @article{Becker2010a, Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target differs from the irrelevant distractors (e.g., larger, redder, darker). Guidance by the relational properties of the target governed intertrial priming effects and capture by irrelevant distractors. First, intertrial switch costs occurred only upon reversals of the coarse relationship between target and nontargets, but they did not occur when the target and nontarget features changed such that the relation remained the same. Second, irrelevant distractors captured attention most strongly when they differed in the correct direction from all other items, despite the fact that they were less similar to the target. This suggests that priming and contingent capture, which have previously been regarded as prime evidence for feature-based selection, are really due to a relational selection mechanism. Here I propose a new relational vector account of guidance, which holds promise to synthesize a wide range of different findings that have previously been attributed to different mechanisms of visual search. |
Stefanie I. Becker Oculomotor capture by colour singletons depends on intertrial priming Journal Article In: Vision Research, vol. 50, no. 21, pp. 2116–2126, 2010. @article{Becker2010b, In visual search, an irrelevant colour singleton captures attention when the colour of the distractor changes across trials (e.g., from red to green), but not when the colour remains constant (Becker, 2007). The present study shows that intertrial changes of the distractor colour also modulate oculomotor capture: an irrelevant colour singleton distractor was only selected more frequently than the inconspicuous nontargets (1) when its features had switched (compared to the previous trial), or (2) when the distractor had been presented at the same position as the target on the previous trial. These results throw doubt on the notion that colour distractors capture attention and the eyes because of their high feature contrast, which is available at an earlier point in time than information about specific feature values. Instead, attention and eye movements are apparently controlled by a system that operates on feature-specific information, and gauges the informativity of nominally irrelevant features. |
Stefanie I. Becker; Charles L. Folk; Roger W. Remington The role of relational information in contingent capture Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1460–1476, 2010. @article{Becker2010c, On the contingent capture account, top-down attentional control settings restrict involuntary attentional capture to items that match the features of the search target. Attention capture is involuntary, but contingent on goals and intentions. The observation that only target-similar items can capture attention has usually been taken to show that the content of the attentional control settings consists of specific feature values. In contrast, the present study demonstrates that the top-down target template can include information about the relationship between the target and nontarget features (e.g., redder, darker, larger). Several spatial cuing experiments show that a singleton cue that is less similar to the target but that shares the same relational property that distinguishes targets from nontargets can capture attention to the same extent as cues that are similar to the target. Moreover, less similar cues can even capture attention more than cues that are identical to the target when they are relationally better than identical cues. The implications for current theories of attentional capture and attentional guidance are discussed. |
Vidhya Navalpakkam; Christof Koch; Antonio Rangel; Pietro Perona Optimal reward harvesting in complex perceptual environments Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 11, pp. 5232–5237, 2010. @article{Navalpakkam2010, The ability to choose rapidly among multiple targets embedded in a complex perceptual environment is key to survival. Targets may differ in their reward value as well as in their low-level perceptual properties (e.g., visual saliency). Previous studies investigated separately the impact of either value or saliency on choice; thus, it is not known how the brain combines these two variables during decision making. We addressed this question with three experiments in which human subjects attempted to maximize their monetary earnings by rapidly choosing items from a brief display. Each display contained several worthless items (distractors) as well as two targets, whose value and saliency were varied systematically. We compared the behavioral data with the predictions of three computational models assuming that (i) subjects seek the most valuable item in the display, (ii) subjects seek the most easily detectable item, and (iii) subjects behave as an ideal Bayesian observer who combines both factors to maximize the expected reward within each trial. Regardless of the type of motor response used to express the choices, we find that decisions are influenced by both value and feature-contrast in a way that is consistent with the ideal Bayesian observer, even when the targets' feature-contrast is varied unpredictably between trials. This suggests that individuals are able to harvest rewards optimally and dynamically under time pressure while seeking multiple targets embedded in perceptual clutter. |
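The ideal-observer comparison in this abstract amounts to a simple decision rule: pick the item with the highest expected reward, i.e. its value weighted by the probability of detecting it given its saliency. A toy sketch of that rule (the function name, inputs, and numbers are illustrative assumptions, not taken from the paper):

```python
def choose_target(values, p_detect):
    """Index of the item maximizing expected reward: values[i] * p_detect[i].

    A toy stand-in for the ideal-observer rule described in the abstract:
    `values` are the monetary payoffs of the targets and `p_detect` the
    assumed probability that a rapid choice of each item succeeds, given
    its feature contrast (saliency).
    """
    expected = [v * p for v, p in zip(values, p_detect)]
    return max(range(len(expected)), key=expected.__getitem__)
```

Under this rule a low-value but highly salient item can win over a high-value but hard-to-find one, which is the trade-off the three models in the study are meant to disentangle.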
Mark B. Neider; Xin Chen; Christopher A. Dickinson; Susan E. Brennan; Gregory J. Zelinsky Coordinating spatial referencing using shared gaze Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 5, pp. 718–724, 2010. @article{Neider2010a, To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A's eye position was superimposed over Partner B's search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information. |
Dylan Nieman; Bhavin R. Sheth; Shinsuke Shimojo Perceiving a discontinuity in motion Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–23, 2010. @article{Nieman2010, Studies have shown that the position of a target stimulus is misperceived owing to ongoing motion. Although static forces (fixation, landmarks) affect perceived position, motion remains the overwhelming force driving estimates of position. Motion endpoint estimates biased in the direction of motion are perceptual signatures of motion's dominant role in localization. We sought conditions in which static forces exert the predominant influence over perceived position: stimulus displays for which target position is perceived backward relative to motion. We used a target that moved diagonally with constant speed, abruptly turned 90 degrees and continued at constant speed; observers localized the discontinuity. This yielded a previously undescribed effect, "turn-point shift," the tendency of observers to estimate the position of orthogonal direction change backward relative to subsequent motion direction. Both the display and the direction of mislocalization differ from those in past studies. Static forces (foveal attraction, repulsion by subsequently occupied spatial positions) were found to be responsible. Delayed turn-point estimates, reconstructed from probing the entire trajectory, shifted the horizontal coordinate forward in the direction of motion. This implies more than one percept of turn-point position. As various estimates of turn-point position arise at different times, under different task demands, the perceptual system does not necessarily resolve conflicts between them. |
Tanja C. W. Nijboer; Anneloes Vree; Chris Dijkerman; Stefan Van der Stigchel Prism adaptation influences perception but not attention: Evidence from antisaccades Journal Article In: NeuroReport, vol. 21, no. 5, pp. 386–389, 2010. @article{Nijboer2010, Prism adaptation has been shown to successfully alleviate symptoms of hemispatial neglect, yet the underlying mechanism is still poorly understood. In this study, the antisaccade task was used to measure the effects of prism adaptation on spatial attention in healthy participants. Results indicated that prism adaptation did not influence the saccade latencies or antisaccade errors, both strong measures of attentional deployment, despite a successful prism adaptation procedure. In contrast to visual attention, prism adaptation evoked a perceptual bias in visual space as measured by the landmark task. We conclude that prism adaptation has a differential influence on visual attention and visual perception in healthy participants as measured by the tasks used. |
Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo Semantic recognition precedes affective evaluation of visual scenes Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 222–246, 2010. @article{Nummenmaa2010, We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by a predefined target scene. The affective task involved saccading toward an unpleasant or pleasant scene, and the semantic task involved saccading toward a scene containing an animal. Both affective and semantic target scenes could be reliably categorized in less than 220 ms, but semantic categorization was always faster than affective categorization. This finding was replicated with singly, foveally presented scenes and manual responses. In comparison with foveal presentation, extrafoveal presentation slowed down the categorization of affective targets more than that of semantic targets. Exposure threshold for accurate categorization was lower for semantic information than for affective information. Superordinate-, basic-, and subordinate-level semantic categorizations were faster than affective evaluation. We conclude that affective analysis of scenes cannot bypass object recognition. Rather, semantic categorization precedes and is required for affective evaluation. |
Antje Nuthmann; John M. Henderson Object-based attentional selection in scene viewing Journal Article In: Journal of Vision, vol. 10, no. 8, pp. 1–19, 2010. @article{Nuthmann2010, Two contrasting views of visual attention in scenes are the visual salience and the cognitive relevance hypotheses. They fundamentally differ in their conceptualization of the visuospatial representation over which attention is directed. According to the saliency model, this representation is image-based, while the cognitive relevance framework advocates an object-based representation. Previous research has shown that (1) viewers prefer to look at objects over background and that (2) the saliency model predicts human fixation locations significantly better than chance. However, it could be that saliency mainly acts through objects. To test this hypothesis, we investigated where people fixate within real objects and saliency proto-objects. To this end, we recorded eye movements of human observers while they inspected photographs of natural scenes under different task instructions. We found a preferred viewing location (PVL) close to the center of objects within naturalistic scenes. Compared to the PVL for real objects, there was less evidence for a PVL for human fixations within saliency proto-objects. There was no evidence for a PVL when only saliency proto-objects that did not spatially overlap with annotated real objects were analyzed. The results suggest that saccade targeting and, by inference, attentional selection in scenes is object-based. |
Antje Nuthmann; Tim J. Smith; Ralf Engbert; John M. Henderson CRISP: A computational model of fixation durations in scene viewing Journal Article In: Psychological Review, vol. 117, no. 2, pp. 382–405, 2010. @article{Nuthmann2010a, Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations. Here, we propose a computational model (CRISP) that accounts for saccade timing and programming and thus for variations in fixation durations in scene viewing. First, timing signals are modeled as continuous-time random walks. Second, difficulties at the level of visual and cognitive processing can inhibit and thus modulate saccade timing. Inhibition generates moment-by-moment changes in the random walk's transition rate and processing-related saccade cancellation. Third, saccade programming is completed in 2 stages: an initial, labile stage that is subject to cancellation and a subsequent, nonlabile stage. Several simulation studies tested the model's adequacy and generality. An initial simulation study explored the role of cognitive factors in scene viewing by examining how fixation durations differed under different viewing task instructions. Additional simulations investigated the degree to which fixation durations were under direct moment-to-moment control of the current visual scene. The present work further supports the conclusion that fixation durations, to a certain degree, reflect perceptual and cognitive activity in scene viewing. Computational model simulations contribute to an understanding of the underlying processes of gaze control. |
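The mechanism described in this abstract, a random-walk saccade timer whose transition rate is inhibited by processing difficulty, followed by labile and nonlabile programming stages, can be sketched in a few lines. This is a rough illustration of the architecture only; all parameter values, the cancellation scheme, and the function name are illustrative assumptions, not values from the paper:

```python
import random

def simulate_fixation_duration(
    n_steps=50,        # random-walk steps needed for the timer to fire
    base_rate=0.1,     # baseline transition rate (steps per ms)
    inhibition=0.5,    # processing difficulty scales the rate down (0..1)
    labile_ms=80,      # labile programming stage (can be cancelled)
    nonlabile_ms=40,   # nonlabile stage (ballistic, cannot be cancelled)
    cancel_prob=0.05,  # chance that a labile program is cancelled, restarting the timer
    rng=None,
):
    """One simulated fixation duration (ms) under CRISP-like assumptions."""
    rng = rng or random.Random()
    t = 0.0
    while True:
        # continuous-time random walk: each step waits an exponential interval
        # whose rate is reduced by ongoing visual/cognitive difficulty
        rate = base_rate * (1.0 - inhibition)
        for _ in range(n_steps):
            t += rng.expovariate(rate)
        if rng.random() < cancel_prob:
            continue  # labile program cancelled; the timer starts over
        t += labile_ms + nonlabile_ms
        return t
```

The key qualitative prediction this reproduces is that higher processing difficulty (stronger inhibition of the walk's transition rate) yields longer fixation durations on average.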
Adam P. Morris; Charles C. Liu; Simon J. Cropper; Jason D. Forte; Bart Krekelberg; Jason B. Mattingley Summation of visual motion across eye movements reflects a nonspatial decision mechanism Journal Article In: Journal of Neuroscience, vol. 30, no. 29, pp. 9821–9830, 2010. @article{Morris2010, Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., “spatiotopic” receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior. |
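The "statistical advantage" invoked in this abstract, that two chances to detect a signal improve performance even without any sensory integration, follows from simple probability summation over independent detectors. A minimal illustration (independence and the functional form are assumptions of this sketch, not claims from the paper):

```python
def probability_summation(p_single, n_signals=2):
    """Probability of detecting at least one of n independent signals,
    each detected alone with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_signals
```

For example, a signal detected 60% of the time in isolation is detected on 84% of trials when two independent copies are available, an improvement that mimics summation without any shared spatiotopic representation.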
Albert Moukheiber; Gilles Rautureau; Fernando Perez-Diaz; Robert Soussignan; Stéphanie Dubal; Roland Jouvent; Antoine Pelissolo Gaze avoidance in social phobia: Objective measure and correlates Journal Article In: Behaviour Research and Therapy, vol. 48, pp. 147–151, 2010. @article{Moukheiber2010, Gaze aversion could be a central component of the physiopathology of social phobia. The emotions of the people interacting with a person with social phobia seem to model this gaze aversion. Our research tested gaze aversion in subjects with social phobia, compared to control subjects, while they viewed different emotional faces of men and women, using an eye tracker. Twenty-six subjects with DSM-IV social phobia were recruited. Twenty-four age- and sex-matched healthy subjects constituted the control group. We looked at the number of fixations and the dwell time in the eyes area on the pictures. The main findings of this research are: confirming a significantly lower number of fixations and dwell time in patients with social phobia as a general mean and for the 6 basic emotions independently of gender; observing a significant correlation between the severity of the phobia and the degree of gaze avoidance. However, no difference in gaze avoidance according to subject/picture gender matching was observed. These findings confirm and extend some previous results, and suggest that eye avoidance is a robust marker of persons with social phobia, which could be used as a behavioral phenotype for brain imagery studies on this disorder. |
Sven Mucke; Velitchko Manahilov; Niall C. Strang; Dirk Seidel; Lyle S. Gray; Uma Shahani Investigating the mechanisms that may underlie the reduction in contrast sensitivity during dynamic accommodation Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–14, 2010. @article{Mucke2010, Head and eye movements, together with ocular accommodation, enable us to explore our visual environment. The stability of this environment is maintained during saccadic and vergence eye movements due to reduced contrast sensitivity to low spatial frequency information. Our recent work has revealed a new type of selective reduction of contrast sensitivity to high spatial frequency patterns during the fast phase of dynamic accommodation responses compared with steady-state accommodation. Here we report data showing a strong correlation between the effects of reduced contrast sensitivity during dynamic accommodation and velocity of accommodation responses, elicited by ramp changes in accommodative demand. The results were accounted for by a contrast gain control model of a cortical mechanism for contrast detection during dynamic ocular accommodation. Sensitivity, however, was not altered during attempted accommodation responses in the absence of crystalline-lens changes due to cycloplegia. These findings suggest that contrast sensitivity reduction during dynamic accommodation may be a consequence of cortical inhibition driven by proprioceptive-like signals originating within the ciliary muscle, rather than by corollary discharge signals elicited simultaneously with the motor command to the ciliary muscle. |
Manon Mulckhuyse; Jan Theeuwes Unconscious cueing effects in saccadic eye movements - Facilitation and inhibition in temporal and nasal hemifield Journal Article In: Vision Research, vol. 50, no. 6, pp. 606–613, 2010. @article{Mulckhuyse2010, The current study investigated whether subliminal spatial cues can affect the oculomotor system. In addition, we performed the experiment under monocular viewing conditions. By limiting participants to monocular viewing conditions, we can examine behavioral temporal-nasal hemifield asymmetries. These behavioral asymmetries may arise from an anatomical asymmetry in the retinotectal pathway. The results show that even though our spatial cues were not consciously perceived they did affect the oculomotor system: relative to the neutral condition, saccade latencies to the validly cued location were shorter and saccade latencies to the invalidly cued location were longer. Although we did not observe an overall inhibition of return effect, there was a reliable effect of hemifield on IOR for those observers who showed an overall IOR effect. More specifically, consistent with the notion that processing via the retinotectal pathway is stronger in the temporal hemifield than in the nasal hemifield we found an IOR effect for cues presented in the temporal hemifield but not for cues presented in the nasal hemifield. We conclude that unconsciously processed spatial cues can affect the oculomotor system. In addition, the observed behavioral temporal-nasal hemifield asymmetry is consistent with retinotectal mediation. |
Hirokazu Ogawa; Katsumi Watanabe Time to learn: Evidence for two types of attentional guidance in contextual cueing Journal Article In: Perception, vol. 39, no. 1, pp. 72–80, 2010. @article{Ogawa2010, Repetition of the same spatial configurations of a search display implicitly facilitates performance of a visual-search task when the target location in the display is fixed. The improvement of performance is referred to as contextual cueing. We examined whether the association process between target location and surrounding configuration of distractors occurs during active search or at the instant the target is found. To dissociate these two processes, we changed the surrounding configuration of the distractors at the instant of target detection so that the layout where the participants had searched for the target and the layout presented at the instant of target detection differed. The results demonstrated that both processes are responsible for the contextual-cueing effect, but they differ in the accuracies of attentional guidance and their time courses, suggesting that two different types of attentional-guidance processes may be involved in contextual cueing. |
Anna Oleksiak; Miroslawa Mańko; Albert Postma; Ineke J. M. Ham; Albert V. Berg; Richard J. A. Wezel Distance estimation is influenced by encoding conditions Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9918, 2010. @article{Oleksiak2010, Background: It is well established that foveating a behaviorally relevant part of the visual field improves localization performance as compared to the situation where the gaze is directed elsewhere. Reduced localization performance in the peripheral encoding conditions has been attributed to an eccentricity-dependent increase in positional uncertainty. It is not known, however, whether and how the foveal and peripheral encoding conditions can influence spatial interval estimation. In this study we compare observers' estimates of a distance between two co-planar dots in the condition where they foveate the two sample dots and where they fixate a central dot while viewing the sample dots peripherally. Methodology/Principal Findings: Observers were required to reproduce, after a short delay, a distance between two sample dots based on a stationary reference dot and a movable mouse pointer. When both sample dots are foveated, we find that the distance estimation error is small but consistently increases with the dots-separation size. In comparison, distance judgment in the peripheral encoding condition is significantly overestimated for smaller separations and becomes similar to the performance in foveal trials for distances from 10 to 16 degrees. Conclusions/Significance: Although we find improved accuracy of distance estimation in the foveal condition, the fact that the difference is related to the reduction of the estimation bias present in the peripheral condition challenges the simple account of reducing the eccentricity-dependent positional uncertainty. Contrary to this, we present evidence for an explanation in terms of neuronal populations activated by the two sample dots and their inhibitory interactions under different visual encoding conditions. We support our claims with simulations that take into account receptive field size differences between the two encoding conditions. |
Jean-Jacques Orban de Xivry; Sébastien Coppe; Philippe Lefèvre; Marcus Missal Biological motion drives perception and action Journal Article In: Journal of Vision, vol. 10, no. 2, pp. 1–11, 2010. @article{OrbandeXivry2010, Presenting a few dots moving coherently on a screen can yield the perception of human motion. This perception is based on a specific network that is segregated from the traditional motion perception network and that includes the superior temporal sulcus (STS). In this study, we investigate whether this biological motion perception network could influence the smooth pursuit response evoked by a point-light walker. We found that smooth eye velocity during pursuit initiation was larger in response to the point-light walker than in response to one of its scrambled versions, to an inverted walker or to a single dot stimulus. In addition, we assessed the proximity to the point-light walker (i.e., the amount of information about the direction contained in the scrambled stimulus and extracted from local motion cues of biological motion) of each of our scrambled stimuli in a motion direction discrimination task with manual responses, and found that the smooth pursuit response evoked by those stimuli moving across the screen was modulated by their proximity to the walker. Therefore, we conclude that biological motion facilitates smooth pursuit eye movements, and hence influences both perception and action. |
Mathias Abegg; Hyung Lee; Jason J. S. Barton Systematic diagonal and vertical errors in antisaccades and memory-guided saccades Journal Article In: Journal of Eye Movement Research, vol. 3, no. 3, pp. 1–10, 2010. @article{Abegg2010, Studies of memory-guided saccades in monkeys show an upward bias, while studies of antisaccades in humans show a diagonal effect, a deviation of endpoints toward the 45° diagonal. To determine if these two different spatial biases are specific to different types of saccades, we studied prosaccades, antisaccades and memory-guided saccades in humans. The diagonal effect occurred not with prosaccades but with antisaccades and memory-guided saccades with long intervals, consistent with hypotheses that it originates in computations of goal location under conditions of uncertainty. There was a small upward bias for memory-guided saccades but not prosaccades or antisaccades. Thus this bias is not a general effect of target uncertainty but a property specific to memory-guided saccades. |
Mathias Abegg; Amadeo R. Rodriguez; Hyung Lee; Jason J. S. Barton 'Alternate-goal bias' in antisaccades and the influence of expectation Journal Article In: Experimental Brain Research, vol. 203, no. 3, pp. 553–562, 2010. @article{Abegg2010a, Saccadic performance depends on the requirements of the current trial, but also may be influenced by other trials in the same experiment. This effect of trial context has been investigated most for saccadic error rate and reaction time but seldom for the positional accuracy of saccadic landing points. We investigated whether the direction of saccades towards one goal is affected by the location of a second goal used in other trials in the same experimental block. In our first experiment, landing points ('endpoints') of antisaccades but not prosaccades were shifted towards the location of the alternate goal. This spatial bias decreased with increasing angular separation between the current and alternative goals. In a second experiment, we explored whether expectancy about the goal location was responsible for the biasing of the saccadic endpoint. For this, we used a condition where the saccadic goal randomly changed from one trial to the next between locations on, above or below the horizontal meridian. We modulated the prior probability of the alternate-goal location by showing cues prior to stimulus onset. The results showed that expectation about the possible positions of the saccadic goal is sufficient to bias saccadic endpoints and can account for at least part of this phenomenon of 'alternate-goal bias'. |
Naotoshi Abekawa; Hiroaki Gomi Spatial coincidence of intentional actions modulates an implicit visuomotor control Journal Article In: Journal of Neurophysiology, vol. 103, no. 5, pp. 2717–2727, 2010. @article{Abekawa2010, We investigated a visuomotor mechanism contributing to reach correction: the manual following response (MFR), which is a quick response to background visual motion that frequently occurs as a reafference when the body moves. Although several visual specificities of the MFR have been elucidated, the functional and computational mechanisms of its motor coordination remain unclear mainly because it involves complex relationships among gaze, reaching target, and visual stimuli. To directly explore how these factors interact in the MFR, we assessed the impact of spatial coincidences among gaze, arm reaching, and visual motion on the MFR. When gaze location was displaced from the reaching target with an identical visual motion kept on the retina, the amplitude of the MFR significantly decreased as displacement increased. A factorial manipulation of gaze, reaching-target, and visual motion locations showed that the response decrease is due to the spatial separation between gaze and reaching target but is not due to the spatial separation between visual motion and reaching target. Additionally, elimination of visual motion around the fovea attenuated the MFR. The effects of these spatial coincidences on the MFR are completely different from their effects on the perceptual mislocalization of targets caused by visual motion. Furthermore, we found clear differences between the modulation sensitivities of the MFR and the ocular following response to spatial mismatch between gaze and reaching locations. These results suggest that the MFR modulation observed in our experiment is not due to changes in visual interaction between target and visual motion or to modulation of motion sensitivity in early visual processing. Instead, the motor command of the MFR appears to be modulated by the spatial relationship between gaze and reaching. |
Alper Açik; Adjmal Sarwary; Rafael Schultze-Kraft; Selim Onat; Peter König Developmental changes in natural viewing behavior: Bottom-up and top-down differences between children, young adults and older adults Journal Article In: Frontiers in Psychology, vol. 1, pp. 207, 2010. @article{Acik2010, Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as local image feature - color, luminance contrast etc. - guided viewing, might be prominent but later overshadowed by more top-down processing. Moreover, with decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. Fixation discrimination performance of local feature values dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adult viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and elderly regarding the effects of active viewing on feature-related viewing: Explorativeness correlated with feature-related viewing negatively in young age, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing. |
Xingshan Li; Gordon D. Logan; N. Jane Zbrodoff Where do we look when we count? The role of eye movements in enumeration Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 2, pp. 409–426, 2010. @article{Li2010, Two experiments addressed the coupling between eye movements and the cognitive processes underlying enumeration. Experiment 1 compared eye movements in a counting task with those in a “look” task, in which subjects were told to look at each dot in a pattern once and only once. Experiment 2 presented the same dot patterns to every subject twice, to measure the consistency with which dots were fixated between and within subjects. In both experiments, the number of fixations increased linearly with the number of objects to be enumerated, consistent with tight coupling between eye movements and enumeration. However, analyses of fixation locations showed that subjects tended to look at dots in dense, central regions of the display and tended not to look at dots in sparse, peripheral regions of the display, suggesting a looser coupling between eye movements and enumeration. Thus, the eyes do not mirror the enumeration process very directly. |
Hanneke Liesker; Eli Brenner; Jeroen B. J. Smeets Eye-hand coupling is not the cause of manual return movements when searching Journal Article In: Experimental Brain Research, vol. 201, no. 2, pp. 221–227, 2010. @article{Liesker2010, When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so that subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we here compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between the return movements and movement speed when comparing the two conditions was the same as the relationship between these two when comparing different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control. |
Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg (Un)-coupling gaze and attention outside central vision Journal Article In: Journal of Vision, vol. 10, no. 11, pp. 1–13, 2010. @article{Lingnau2010, In normal vision, shifts of attention and gaze are tightly coupled. Here we ask if this coupling affects performance also when central vision is not available. To this aim, we trained normal-sighted participants to perform a visual search task while vision was restricted to a gaze-contingent viewing window ("forced field location") either in the left, right, upper, or lower visual field. Gaze direction was manipulated within a continuous visual search task that required leftward, rightward, upward, or downward eye movements. We found no general performance advantage for a particular part of the visual field or for a specific gaze direction. Rather, performance depended on the coordination of visual attention and eye movements, with impaired performance when sustained attention and gaze have to be moved in opposite directions. Our results suggest that during early stages of central visual field loss, the optimal location for the substitution of foveal vision does not depend on the particular retinal location alone, as has previously been thought, but also on the gaze direction required by the task the patient wishes to perform. |
Chia-Lun Liu; Hui-Yan Chiau; Philip Tseng; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan Antisaccade cost is modulated by contextual experience of location probability Journal Article In: Journal of Neurophysiology, vol. 103, no. 3, pp. 1438–1447, 2010. @article{Liu2010, It is well known that pro- and antisaccades may deploy different cognitive processes. However, the specific reason why antisaccades have longer latencies than prosaccades is still under debate. In three experiments, we studied the factors contributing to the antisaccade cost by taking attentional orienting and target location probabilities into account. In experiment 1, using a new antisaccade paradigm, we directly tested Olk and Kingstone's hypothesis, which attributes longer antisaccade latency to the time it takes to reorient from the visual target to the opposite saccadic target. By eliminating the reorienting component in our paradigm, we found no significant difference between the latencies of the two saccade types. In experiment 2, we varied the proportion of prosaccades made to certain locations and found that latencies in the high location-probability (75%) condition were faster than those in the low location-probability condition. Moreover, antisaccade latencies were significantly longer when location probability was high. This pattern can be explained by the notion of competing pathways for pro- and antisaccades in findings of others. In experiment 3, we further explored the degrees of modulation of location probability by decreasing the magnitude of high probability from 75 to 65%. We again observed a pattern similar to that seen in experiment 2 but with smaller modulation effects. Together, these experiments indicate that the reorienting process is a critical factor in producing the antisaccade cost. Furthermore, the antisaccade cost can be modulated by probabilistic contextual information such as location probabilities. |
Gerardo Cepeda Porras; Yann Gaël Guéhéneuc An empirical study on the efficiency of different design pattern representations in UML class diagrams Journal Article In: Empirical Software Engineering, vol. 15, no. 5, pp. 493–522, 2010. @article{Porras2010, Design patterns are recognized in the software engineering community as useful solutions to recurring design problems that improve the quality of programs. They are increasingly used by developers in the design and implementation of their programs. Therefore, the visualization of the design patterns used in a program could be useful to efficiently understand how it works. Currently, a common representation to visualize design patterns is the UML collaboration notation. Previous work noticed some limitations in the UML representation and proposed new representations to tackle these limitations. However, none of these pieces of work conducted empirical studies to compare their new representations with the UML representation. We designed and conducted an empirical study to collect data on the performance of developers on basic tasks related to design pattern comprehension (i.e., identifying composition, role, participation) to evaluate the impact of three visual representations and to compare them with the UML one. We used eye-trackers to measure the developers' effort during the execution of the study. Collected data and their analyses show that stereotype-enhanced UML diagrams are more efficient for identifying composition and role than the UML collaboration notation. The UML representation and the pattern-enhanced class diagrams are more efficient for locating the classes participating in a design pattern (i.e., identifying participation). |
Gillian Porter; Andrea Tales; Ute Leonards What makes cast shadows hard to see? Journal Article In: Journal of Vision, vol. 10, no. 3, pp. 1–18, 2010. @article{Porter2010a, Visual search is slowed for cast shadows lit from above, as compared to the same search items inverted and so not interpreted as shadows (R. A. Rensink & P. Cavanagh, 2004). The underlying mechanisms for such impaired shadow processing are still not understood. Here we investigated the processing levels at which this shadow-related slowing might operate, by examining its interaction with a range of different phenomena including eye movements, perceptual learning, and stimulus presentation context. The data demonstrated that the shadow mechanism affects the number of saccades during the search rather than the duration until first saccade onset and can be overridden by prolonged training, which then transfers from one type of shadow stimulus to another. Shadow-related slowing did not differ for peripheral and central search items but was reduced when participants searched unilateral displays as compared to bilateral ones. Together our findings suggest that difficulties with perceiving shadows are due to visual processes linked to object recognition, rather than to shadow-specific identification and suppression mechanisms in low-level sensory visual areas. Findings are discussed in the context of the need for the visual system to distinguish between illumination and material. |
Melanie A. Porter; Tracey A. Shaw; Pamela J. Marsh An unusual attraction to the eyes in Williams-Beuren syndrome: A manipulation of facial affect while measuring face scanpaths Journal Article In: Cognitive Neuropsychiatry, vol. 15, no. 6, pp. 505–530, 2010. @article{Porter2010b, INTRODUCTION: This study aimed to investigate face scanpaths and emotion recognition in Williams-Beuren syndrome (WBS) and whether: (1) the eyes capture the attention of WBS individuals faster than typically developing mental age-matched controls; (2) WBS patients spend abnormally prolonged periods of time viewing the eye region; and (3) emotion recognition skills or eye gaze patterns change depending on the emotional valance of the face. METHODS: Visual scanpaths were recorded while 16 WBS patients and 16 controls passively viewed happy, angry, fearful, and neutral faces. Emotion recognition was subsequently measured. RESULTS: The eyes did not capture the attention of WBS patients faster than controls, but once WBS patients attended to the eyes, they spent significantly more time looking at this region. Unexpectedly, WBS patients showed an impaired ability to recognise angry faces, but face scanpaths were similar across the different facial expressions. CONCLUSIONS: Findings suggest that face processing is atypical in WBS and that emotion recognition and eye gaze abnormalities in WBS are likely to be more complex than previously thought. Findings highlight the need to develop remediation programmes to teach WBS patients how to explore all facial features, enhancing their emotion recognition skills and "normalising" their social interactions. |
Gang Luo; Tyler W. Garaas; Marc Pomplun; Eli Peli Inconsistency between peri-saccadic mislocalization and compression: evidence for separate "what" and "where" visual systems Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–8, 2010. @article{Luo2010, The view of two separate "what" and "where" visual systems is supported by compelling neurophysiological evidence. However, very little direct psychophysical evidence has been presented to suggest that the two functions can be separated in neurologically intact persons. Using a peri-saccadic perception paradigm in which bars of different lengths were flashed around saccade onset, we directly measured the perceived object size (a "what" attribute) and location (a "where" attribute). We found that the perceived object location shifted toward the saccade target to show strongly compressed localization, whereas the perceived object size was not compressed accordingly. This dissociation indicates that the perceived size is not determined by spatial localization of the object boundary, providing direct psychophysical evidence to support that "what" and "where" attributes of objects are indeed processed separately. |
B. Machner; C. Klein; Andreas Sprenger; P. Baumbach; P. P. Pramstaller; Christoph Helmchen; Wolfgang Heide Eye movement disorders are different in Parkin-linked and idiopathic early-onset PD Journal Article In: Neurology, vol. 75, pp. 125–128, 2010. @article{Machner2010, OBJECTIVES Parkin gene mutations are the most common cause of early-onset parkinsonism. Patients with Parkin mutations may be clinically indistinguishable from patients with idiopathic early-onset Parkinson disease (EOPD) without Parkin mutations. Eye movement disorders have been shown to differentiate parkinsonian syndromes, but have never been systematically studied in Parkin mutation carriers. METHODS Eye movements were recorded in symptomatic (n = 9) and asymptomatic Parkin mutation carriers (n = 13), patients with idiopathic EOPD (n = 14), and age-matched control subjects (n = 27) during established oculomotor tasks. RESULTS Both patients with EOPD and symptomatic Parkin mutation carriers showed hypometric prosaccades toward visual stimuli, as well as deficits in suppressing reflexive saccades toward unintended targets (antisaccade task). When directing gaze toward memorized target positions, patients with EOPD exhibited hypometric saccades, whereas symptomatic Parkin mutation carriers showed normal saccades. In contrast to patients with EOPD, the symptomatic Parkin mutation carriers showed impaired tracking of a moving target (reduced smooth pursuit gain). The asymptomatic Parkin mutation carriers did not differ from healthy control subjects in any of the tasks. CONCLUSIONS Although clinically similarly affected, symptomatic Parkin mutation carriers and patients with idiopathic EOPD differed in several oculomotor tasks. This finding may point to distinct anatomic structures underlying either condition: dysfunctions of cortical areas involved in smooth pursuit (V5, frontal eye field) in Parkin-linked parkinsonism vs greater impairment of basal ganglia circuits in idiopathic Parkinson disease. |
Vincenzo Maffei; Emiliano Macaluso; Iole Indovina; Guy A. Orban; Francesco Lacquaniti Processing of targets in smooth or apparent motion along the vertical in the human brain: An fMRI study Journal Article In: Journal of Neurophysiology, vol. 103, no. 1, pp. 360–370, 2010. @article{Maffei2010, Neural substrates for processing constant speed visual motion have been extensively studied. Less is known about the brain activity patterns when the target speed changes continuously, for instance under the influence of gravity. Using functional MRI (fMRI), here we compared brain responses to accelerating/decelerating targets with the responses to constant speed targets. The target could move along the vertical under gravity (1g), under reversed gravity (-1g), or at constant speed (0g). In the first experiment, subjects observed targets moving in smooth motion and responded to a GO signal delivered at a random time after target arrival. As expected, we found that the timing of the motor responses did not depend significantly on the specific motion law. Therefore brain activity in the contrast between different motion laws was not related to motor timing responses. Average BOLD signals were significantly greater for 1g targets than either 0g or -1g targets in a distributed network including bilateral insulae, left lingual gyrus, and brain stem. Moreover, in these regions, the mean activity decreased monotonically from 1g to 0g and to -1g. In the second experiment, subjects intercepted 1g, 0g, and -1g targets either in smooth motion (RM) or in long-range apparent motion (LAM). We found that the sites in the right insula and left lingual gyrus, which were selectively engaged by 1g targets in the first experiment, were also significantly more active during 1g trials than during -1g trials both in RM and LAM. The activity in 0g trials was again intermediate between that in 1g trials and that in -1g trials. Therefore in these regions the global activity modulation with the law of vertical motion appears to hold for both RM and LAM. Instead, a region in the inferior parietal lobule showed a preference for visual gravitational motion only in LAM but not RM. |
Femke Maij; Eli Brenner; Hyung-Chul O. Li; Frans W. Cornelissen; Jeroen B. J. Smeets The use of the saccade target as a visual reference when localizing flashes during saccades Journal Article In: Journal of Vision, vol. 10, no. 4, pp. 1–9, 2010. @article{Maij2010, Flashes presented around the time of a saccade are often mislocalized. Such mislocalization is influenced by various factors. Here, we evaluate the role of the saccade target as a landmark when localizing flashes. The experiment was performed in a normally illuminated room to provide ample other visual references. Subjects were instructed to follow a randomly jumping target with their eyes. We flashed a black dot on the screen around the time of saccade onset. The subjects were asked to localize the black dot by touching the appropriate location on the screen. In a first experiment, the saccade target was displaced during the saccade. In a second experiment, it disappeared at different moments. Both manipulations affected the mislocalization. We conclude that our subjects' judgments are partly based on the flashed dot's position relative to the saccade target. |
George L. Malcolm; John M. Henderson Combining top-down processes to guide eye movements during real-world scene search Journal Article In: Journal of Vision, vol. 10, no. 2, pp. 1–11, 2010. @article{Malcolm2010, Eye movements can be guided by various types of information in real-world scenes. Here we investigated how the visual system combines multiple types of top-down information to facilitate search. We manipulated independently the specificity of the search target template and the usefulness of contextual constraint in an object search task. An eye tracker was used to segment search time into three behaviorally defined epochs so that influences on specific search processes could be identified. The results support previous studies indicating that the availability of either a specific target template or scene context facilitates search. The results also show that target template and contextual constraints combine additively in facilitating search. The results extend recent eye guidance models by suggesting the manner in which our visual system utilizes multiple types of top-down information. |
Sabira K. Mannan; Christopher Kennard; Daniela Potter; Yi Pan; David Soto Early oculomotor capture by new onsets driven by the contents of working memory Journal Article In: Vision Research, vol. 50, no. 16, pp. 1590–1597, 2010. @article{Mannan2010, Oculomotor capture can occur automatically in a bottom-up way through the sudden appearance of a new object or in a top-down fashion when a stimulus in the array matches the contents of working memory. However, it is not clear whether or not working memory processing can influence the early stages of oculomotor capture by abrupt onsets. Here we present clear evidence for an early modulation driven by stimulus matches to the contents of working memory in the colour dimension. Interestingly, verbal as well as visual information in working memory influenced the direction of the fastest saccades made in search, saccadic latencies and the curvature of the scan paths made to the search target. This pattern of results arose even though the contents of working memory were detrimental for search, demonstrating an early, automatic top-down mediation of oculomotor onset capture by the contents of working memory. |
Sebastiaan Mathôt; Jan Theeuwes Evidence for the predictive remapping of visual attention Journal Article In: Experimental Brain Research, vol. 200, no. 1, pp. 117–122, 2010. @article{Mathot2010, When attending an object in visual space, perception of the object remains stable despite frequent eye movements. It is assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Remapping is predictive when it starts before the actual eye movement. Until now, most evidence for predictive remapping has been obtained in single cell studies involving monkeys. Here, we report that predictive remapping affects visual attention prior to an eye movement. Immediately following a saccade, we show that attention has partly shifted with the saccade (Experiment 1). Importantly, we show that remapping is predictive and affects the locus of attention prior to saccade execution (Experiments 2 and 3): before the saccade was executed, there was attentional facilitation at the location which, after the saccade, would retinotopically match the attended location. |
Sebastiaan Mathôt; Jan Theeuwes Gradual remapping results in early retinotopic and late spatiotopic inhibition of return Journal Article In: Psychological Science, vol. 21, no. 12, pp. 1793–1798, 2010. @article{Mathot2010a, Here we report that immediately following the execution of an eye movement, oculomotor inhibition of return resides in retinotopic (eye-centered) coordinates. At longer postsaccadic intervals, inhibition resides in spatiotopic (world-centered) coordinates. These results are explained in terms of perisaccadic remapping. In the interval surrounding an eye movement, information is remapped within retinotopic maps to compensate for the retinal displacement. Because remapping is not an instantaneous process, a fast, but gradual, transfer of inhibition of return from retinotopic to spatiotopic coordinates can be observed in the postsaccadic interval. The observation that visual stability is preserved in inhibition of return is consistent with its function as a "foraging facilitator," which requires locations to be inhibited across multiple eye movements. The current results support the idea that the visual system is retinotopically organized and that the appearance of a spatiotopic organization is due to remapping of visual information to compensate for eye movements. |
Ellen Matthias; Peter Bublak; Hermann J. Muller; Werner X. Schneider; Joseph Krummenacher; Kathrin Finke The influence of alertness on spatial and nonspatial components of visual attention Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, pp. 38–56, 2010. @article{Matthias2010, Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus onset asynchronies in two different whole-report paradigms based on Bundesen's (1990) theory of visual attention, which permits spatial and nonspatial components of selective attention to be assessed independently. The results revealed the level of alertness to affect both the spatial distribution of attentional weighting and processing speed, but not visual short-term memory capacity, with the effect on processing speed preceding that on the spatial distribution of attentional weighting. This pattern indicates that the level of alertness influences both spatial and nonspatial component mechanisms of visual attention and that these two effects develop independently of each other; moreover, it suggests that intrinsic and phasic alertness effects involve the same processing route, on which spatial and nonspatial mechanisms are mediated by independent processing systems that are activated, due to increased alertness, in temporal succession. |
Anna Ma-Wyatt; Martin Stritzke; Julia Trommershäuser Eye-hand coordination while pointing rapidly under risk Journal Article In: Experimental Brain Research, vol. 203, no. 1, pp. 131–145, 2010. @article{MaWyatt2010, Humans make rapid, goal-directed movements to interact with their environment. Saccadic eye movements usually accompany rapid hand movements, suggesting neural coupling, although it remains unclear what determines the strength of the coupling. Here, we present evidence that humans can alter eye-hand coordination in response to risk associated with endpoint variability. We used a paradigm in which human participants were forced to point rapidly under risk and were penalized or rewarded depending on the hand movement outcome. A separate reward schedule was employed for relative saccadic endpoint position. Participants received a monetary reward proportional to points won. We present a model that defines optimality of eye-hand coordination for this task depending on where the hand lands relative to the eye. A comparison of the results and model predictions showed that participants could optimize performance to maximize gain in some conditions, but not others. Participants produced near-optimal results when no feedback was given about relative saccade location and when negative feedback was provided for large distances between the saccade and hand. Participants were sub-optimal when given negative feedback for saccades very close to the hand endpoint. Our results suggest that eye-hand coordination is flexible when pointing rapidly under risk, but final eye position remains correlated with finger location. |
Claudio M. Privitera; Laura W. Renninger; Thom Carney; Stanley A. Klein; Mario Aguilar Pupil dilation during visual target detection Journal Article In: Journal of Vision, vol. 10, no. 10, pp. 1–14, 2010. @article{Privitera2010, It has long been documented that emotional and sensory events elicit a pupillary dilation. Is the pupil response a reliable marker of a visual detection event while viewing complex imagery? In two experiments where viewers were asked to report the presence of a visual target during rapid serial visual presentation (RSVP), pupil dilation was significantly associated with target detection. The amplitude of the dilation depended on the frequency of targets and the time of target presentation relative to the start of the trial. Larger dilations were associated with trials having fewer targets and with targets viewed earlier in the run. We found that dilation was influenced by, but not dependent on, the requirement of a button press. Interestingly, we also found that dilation occurred when viewers fixated a target but did not report seeing it. We will briefly discuss the role of noradrenaline in mediating these pupil behaviors. |
Christoph Rasche; Karl R. Gegenfurtner Visual orienting in dynamic broadband (1/f) noise sequences Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 1, pp. 100–113, 2010. @article{Rasche2010, Visual orienting has typically been characterized using simple displays—for example, displays with a static target placed on a homogeneous background. In the present study, visual orienting was investigated using a dynamic broadband (1/f) noise display that should mimic a more naturalistic setting and that should allow saccadic orienting experiments to be performed with fewer constraints. In Experiment 1, it was shown that the noise movie contains gaze-attracting features that are almost as distinct as the ones measured for (static) real-world scenes. The movie can therefore serve as a strong distractor. In Experiment 2, observers carried out a luminance target search that showed that saccadic amplitude errors were substantially higher (18%) than the ones measured in simple displays. That error is certainly one of the primary factors making gaze-fixation prediction in complex scenes difficult. |
Andrea L. Patalano; Barbara J. Juhasz; Joanna Dicke The relationship between indecisiveness and eye movement patterns in a decision making informational search task Journal Article In: Journal of Behavioral Decision Making, vol. 23, pp. 353–368, 2010. @article{Patalano2010, Indecisiveness is a trait-related general tendency to experience decision difficulties across a variety of situations, leading to decision delay, worry, and regret. Indecisiveness is proposed (Rassin, 2007) to be associated with an increase in desire for information acquisition and reliance on compensatory strategies—as evidenced by alternative-based information search—during decision making. However, existing studies provide conflicting findings. We conducted an information board study of indecisiveness, using eye tracking methodology, to test the hypotheses that the relationship between indecisiveness and choice strategy depends on being in the early stage of the decision making process, and that it depends on being in the presence of an opportunity to delay choice. We found strong evidence for the first hypothesis in that indecisive individuals changed shift behavior from the first to the second half of the task, consistent with a move from greater to lesser compensatory processing, while the shift behavior of decisive individuals suggested lesser compensatory processing over the whole task. Indecisiveness was also related to time spent viewing attributes of the selected course, and to time spent looking away from decision information. These findings resolve past discrepancies, suggest an interesting account of how the decision process unfolds for indecisive versus decisive individuals, and contribute to a better understanding of this tendency. |
Elena G. Patsenko; Erik M. Altmann How planful is routine behavior? A selective-attention model of performance in the Tower of Hanoi Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 1, pp. 95–116, 2010. @article{Patsenko2010, Routine human behavior has often been attributed to plans—mental representations of sequences of goals and actions—but can also be attributed to more opportunistic interactions of mind and a structured environment. This study asks whether performance on a task traditionally analyzed in terms of plans can be better understood from a "situated" (or "embodied") perspective. A saccade-contingent display-updating paradigm is used to change the environment by adding, deleting, and moving task-relevant objects without participants' direct awareness. Response latencies, action patterns, and eye movements all indicate that performance is guided not by plans stored in memory but by a control routine bound to objects as needed by perception and selective attention. The results have implications for interpreting everyday task performance and particular neuropsychological deficits. |
Yoni Pertzov; Ehud Zohary; Galia Avidan Rapid formation of spatiotopic representations as revealed by inhibition of return Journal Article In: Journal of Neuroscience, vol. 30, no. 26, pp. 8882–8887, 2010. @article{Pertzov2010, Inhibition of return (IOR), a performance decrement for stimuli appearing at recently cued locations, occurs when the target and cue share the same screen position. This is in contrast to cue-based attention facilitation effects that were recently suggested to be mapped in a retinotopic reference frame, the prevailing representation throughout early visual processing stages. Here, we investigate the dynamics of IOR in both reference frames, using a modified cued-location saccadic reaction time task with an intervening saccade between cue and target presentation. Thus, on different trials, the target was present either at the same retinotopic location as the cue, or at the same screen position (i.e., spatiotopic location). IOR was primarily found for targets appearing at the same spatiotopic position as the initial cue, when the cue and target were presented in the same hemifield. This suggests that there is restricted information transfer of cue position across the two hemispheres. Moreover, the effect was maximal when the target was presented 10 ms after the intervening saccade ended and was attenuated at longer delays. In our case, therefore, the representation of previously attended locations (as revealed by IOR) is not remapped slowly after the execution of a saccade. Rather, either a retinotopic representation is remapped rapidly, adjacent to the end of the saccade (using a prospective motor command), or the positions of the cue and target are encoded in a spatiotopic reference frame, regardless of eye position. Spatial attention can therefore be allocated to target positions defined in extraretinal coordinates. |
Robert D. Gordon; Sarah D. Vollmer Episodic representation of diagnostic and nondiagnostic object colour Journal Article In: Visual Cognition, vol. 18, no. 5, pp. 728–750, 2010. @article{Gordon2010, In three experiments, we investigated transsaccadic object file representations. In each experiment, participants moved their eyes from a central fixation cross to a saccade target located between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials in which the target identity matched one of the preview objects, its color either matched or did not match the previewed object color. The results indicated that color changes disrupt perceptual continuity, but only for the class of objects for which color is diagnostic of object identity. When the color is not integral to identifying an object (for example, when the object is a letter or an object without a characteristic color), object continuity is preserved regardless of changes to the object's color. These results suggest that object features that are important for defining the object are incorporated into its episodic representation. Furthermore, the results are consistent with previous work showing that the quality of a feature's representation determines its importance in preserving continuity. |
Harold H. Greene; Alexander Pollatsek; Kathleen M. Masserang; Yen Ju Lee; Keith Rayner Directional processing within the perceptual span during visual target localization Journal Article In: Vision Research, vol. 50, no. 13, pp. 1274–1282, 2010. @article{Greene2010, In order to understand how processing occurs within the effective field of vision (i.e. perceptual span) during visual target localization, a gaze-contingent moving mask procedure was used to disrupt parafoveal information pickup along the vertical and the horizontal visual fields. When the mask was present within the horizontal visual field, there was a relative increase in saccade probability along the nearby vertical field, but not along the opposite horizontal field. When the mask was present either above or below fixation, saccades downwards were reduced in magnitude. This pattern of data suggests that parafoveal information selection (indexed by probability of saccade direction) and the extent of spatial parafoveal processing in a given direction (indexed by saccade amplitude) may be controlled by somewhat different mechanisms. |
Martin Groen; Jan Noyes Solving problems: How can guidance concerning task-relevancy be provided? Journal Article In: Computers in Human Behavior, vol. 26, no. 6, pp. 1318–1326, 2010. @article{Groen2010, The analysis of eye movements of people working on problem solving tasks has enabled a more thorough understanding than would have been possible with a traditional analysis of cognitive behavior. Recent studies report that influencing 'where we look' can affect task performance. However, some of the studies that reported these results have shortcomings: first, it is unclear whether the reported effects are the result of 'attention guidance' or an effect of highlighting display elements alone; second, the selection of the highlighted display elements was based on subjective methods which could have introduced bias. In the study reported here, two experiments are described that attempt to address these shortcomings. Experiment 1 investigates the relative contribution of each display element to successful task realization and does so with an objective analysis method, namely signal detection analysis. Experiment 2 examines whether any performance effects of highlighting are due to foregrounding intrinsic task-relevant aspects or whether they are a result of the act of highlighting in itself. Results show that the chosen objective method is effective and that highlighting the display element thus identified improves task performance significantly. These findings are not an effect of the highlighting per se and thus indicate that the highlighted element is conveying task-relevant information. These findings improve on previous results as the objective selection and analysis methods reduce potential bias and provide a more reliable input to the design and provision of computer-based problem solving support. |
Nathalie Guyader; Jennifer Malsert; Christian Marendaz Having to identify a target reduces latencies in prosaccades but not in antisaccades Journal Article In: Psychological Research, vol. 74, no. 1, pp. 12–20, 2010. @article{Guyader2010, In a seminal study, Trottier and Pratt (2005) showed that saccadic latencies were dramatically reduced when subjects were instructed to not simply look at a peripheral target (reflexive saccade) but to identify some of its properties. According to the authors, the shortening of saccadic reaction times may arise from a top-down disinhibition of the superior colliculus (SC), potentially mediated by the direct pathway connecting frontal/prefrontal cortex structures to the SC. Using a "cue paradigm" (a cue preceded the appearance of the target), the present study tests if the task instruction (Identify vs. Glance) also reduces the latencies of antisaccades (AS), which involve prefrontal structures. We show that instruction reduces latencies for prosaccades but not for AS. An AS requires two processes: the inhibition of a reflexive saccade and the generation of a voluntary saccade. To separate these processes and to better understand the task effect we also test the effect of the task instruction only on voluntary saccades. The effect still exists but it is much weaker than for reflexive saccades. The instruction effect closely depends on task demands in executive resources. |
Joy J. Geng; Nicholas E. DiQuattro Attentional capture by a perceptually salient non-target facilitates target processing through inhibition and rapid rejection Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–12, 2010. @article{Geng2010, Perceptually salient distractors typically interfere with target processing in visual search situations. Here we demonstrate that a perceptually salient distractor that captures attention can nevertheless facilitate task performance if the observer knows that it cannot be the target. Eye-position data indicate that facilitation is achieved by two strategies: inhibition when the first saccade was directed to the target, and rapid rejection when the first saccade was captured by the salient distractor. Both mechanisms relied on the distractor being perceptually salient and not just perceptually different. The results demonstrate how bottom-up attentional capture can play a critical role in constraining top-down attentional selection at multiple stages of processing throughout a single trial. |
Mackenzie G. Glaholt; Mei-Chun Wu; Eyal M. Reingold Evidence for top-down control of eye movements during visual decision making Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–10, 2010. @article{Glaholt2010, Participants' eye movements were monitored while they viewed displays containing 6 exemplars from one of several categories of everyday items (belts, sunglasses, shirts, shoes), with a column of 3 items presented on the left and another column of 3 items presented on the right side of the display. Participants were either required to choose which of the two sets of 3 items was the most expensive (2-AFC) or which of the 6 items was the most expensive (6-AFC). Importantly, the stimulus display, and the relevant stimulus dimension, were held constant across conditions. Consistent with the hypothesis of top-down control of eye movements during visual decision making, we documented greater selectivity in the processing of stimulus information in the 6-AFC than the 2-AFC decision. In addition, strong spatial biases in looking behavior were demonstrated, but these biases were largely insensitive to the instructional manipulation, and did not substantially influence participants' choices. |
Kevin Fleming; Carole L. Bandy; Matthew O. Kimble Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors Journal Article In: Social Neuroscience, vol. 5, no. 2, pp. 201–220, 2010. @article{Fleming2010, The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance. |
Tom Foulsham; Joey T. Cheng; Jessica L. Tracy; Joseph Henrich; Alan Kingstone Gaze allocation in a dynamic situation: Effects of social status and speaking Journal Article In: Cognition, vol. 117, no. 3, pp. 319–331, 2010. @article{Foulsham2010a, Human visual attention operates in a context that is complex, social and dynamic. To explore this, we recorded people taking part in a group decision-making task and then showed video clips of these situations to new participants while tracking their eye movements. Observers spent the majority of time looking at the people in the videos, and in particular at their eyes and faces. The social status of the people in the clips had been rated by their peers in the group task, and this status hierarchy strongly predicted where eye-tracker participants looked: high-status individuals were gazed at much more often, and for longer, than low-status individuals, even over short, 20-s videos. Fixation was temporally coupled to the person who was talking at any one time, but this did not account for the effect of social status on attention. These results are consistent with a gaze system that is attuned to the presence of other individuals, to their social status within a group, and to the information most useful for social interaction. |
Tom Foulsham; Alan Kingstone Asymmetries in the direction of saccades during perception of scenes and fractals: Effects of image type and image features Journal Article In: Vision Research, vol. 50, no. 8, pp. 779–795, 2010. @article{Foulsham2010, The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance. |
Alessio Fracasso; Alfonso Caramazza; David Melcher Continuous perception of motion and shape across saccadic eye movements Journal Article In: Journal of Vision, vol. 10, no. 13, pp. 1–17, 2010. @article{Fracasso2010, Although our naïve experience of visual perception is that it is smooth and coherent, the actual input from the retina involves brief and discrete fixations separated by saccadic eye movements. This raises the question of whether our impression of stable and continuous vision is merely an illusion. To test this, we examined whether motion perception can "bridge" a saccade in a two-frame apparent motion display in which the two frames were separated by a saccade. We found that transformational apparent motion, in which an object is seen to change shape and even move in three dimensions during the motion trajectory, continues across saccades. Moreover, participants preferred an interpretation of motion in spatial, rather than retinal, coordinates. The strength of the motion percept depended on the temporal delay between the two motion frames and was sufficient to give rise to a motion-from-shape aftereffect, even when the motion was defined by a second-order shape cue ("phantom transformational apparent motion"). These findings suggest that motion and shape information are integrated across saccades into a single, coherent percept of a moving object. |
Tom C. A. Freeman; Rebecca A. Champion; Paul A. Warren A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement Journal Article In: Current Biology, vol. 20, no. 8, pp. 757–762, 2010. @article{Freeman2010, During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion. |
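The zero-centered-prior account summarized in the Freeman et al. entry above can be illustrated with a toy Gaussian computation. This is a minimal sketch under assumed noise values (the variable names and numbers are illustrative, not taken from the paper): with a Gaussian prior centered on zero speed and a Gaussian likelihood from the sensory measurement, the posterior mean is a shrinkage estimate, and noisier measurements (as during pursuit) are pulled harder toward zero.

```python
def posterior_speed(v, sigma_likelihood, sigma_prior):
    """Posterior mean speed for a measurement v (deg/s), given a
    zero-centered Gaussian prior N(0, sigma_prior^2) and a Gaussian
    likelihood N(v, sigma_likelihood^2). The conjugate-Gaussian posterior
    mean is a weighted average of the measurement and the prior mean (0)."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * v

true_speed = 10.0   # same physical speed in both conditions (deg/s)
sigma_prior = 5.0   # width of the slow-motion prior (assumed value)

# Fixation: precise motion signal; pursuit: noisier signal (assumed sigmas).
fixation = posterior_speed(true_speed, sigma_likelihood=1.0, sigma_prior=sigma_prior)
pursuit = posterior_speed(true_speed, sigma_likelihood=3.0, sigma_prior=sigma_prior)

# The noisier pursuit estimate is shrunk more toward zero, so the pursued
# stimulus appears slower -- the qualitative Aubert-Fleischl pattern.
print(fixation > pursuit)  # True
```

The design point is simply that the same shrinkage rule, applied with two different likelihood widths, predicts a slower percept during pursuit without any change to the prior.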
Amanda L. Gamble; Ronald M. Rapee The time-course of attention to emotional faces in social phobia Journal Article In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 41, no. 1, pp. 39–44, 2010. @article{Gamble2010, This study investigated the time-course of attentional bias in socially phobic (SP) and non-phobic (NP) adults. Participants viewed angry and happy faces paired with neutral faces (i.e., face-face pairs) and angry, happy and neutral faces paired with household objects (i.e., face-object pairs) for 5000 ms. Eye movement (EM) was measured throughout to assess biases in early and sustained attention. Attentional bias occurred only for face-face pairs. SP adults were vigilant for angry faces relative to neutral faces in the first 500 ms of the 5000 ms exposure, relative to NP adults. SP adults were also vigilant for happy faces over 500 ms, although there were no group-based differences in attention to happy-neutral face pairs. There were no group differences in attention to faces throughout the remainder of the exposure. Results suggest that social phobia is characterised by early vigilance for social cues with no bias in subsequent processing. |
Norbert Hagemann; Jörg Schorer; R. Canal-Bruland; Simone Lotz; Bernd Strauss Visual perception in fencing: Do the eye movements of fencers represent their information pickup? Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 8, pp. 2204–2214, 2010. @article{Hagemann2010, The present study examined whether results of athletes' eye movements while they observe fencing attacks reflect their actual information pickup by comparing these results with others gained with temporal and spatial occlusion and cuing techniques. Fifteen top-ranking expert fencers, 15 advanced fencers, and 32 sport students predicted the target region of 405 fencing attacks on a computer monitor. Results of eye movement recordings showed a stronger foveal fixation on the opponent's trunk and weapon in the two fencer groups. Top-ranking expert fencers fixated particularly on the upper trunk. This matched their performance decrements in the spatial occlusion condition. However, when the upper trunk was occluded, participants also shifted eye movements to neighboring body regions. Adding cues to the video material had no positive effects on prediction performance. We conclude that gaze behavior does not necessarily represent information pickup, but that studies applying the spatial occlusion paradigm should also register eye movements to avoid underestimating the information contributed by occluded regions. |