Cognitive Eye-Tracking Publications
All EyeLink cognitive and perception eye-tracking research publications up until 2024 (with some early 2025s) are listed below by year. You can search the eye-tracking publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2010
Hanneke Liesker; Eli Brenner; Jeroen B. J. Smeets Eye-hand coupling is not the cause of manual return movements when searching Journal Article In: Experimental Brain Research, vol. 201, no. 2, pp. 221–227, 2010. When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so that subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we here compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between the return movements and movement speed when comparing the two conditions was the same as the relationship between these two when comparing different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control.
Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg (Un)-coupling gaze and attention outside central vision Journal Article In: Journal of Vision, vol. 10, no. 11, pp. 1–13, 2010. In normal vision, shifts of attention and gaze are tightly coupled. Here we ask if this coupling also affects performance when central vision is not available. To this aim, we trained normal-sighted participants to perform a visual search task while vision was restricted to a gaze-contingent viewing window ("forced field location") either in the left, right, upper, or lower visual field. Gaze direction was manipulated within a continuous visual search task that required leftward, rightward, upward, or downward eye movements. We found no general performance advantage for a particular part of the visual field or for a specific gaze direction. Rather, performance depended on the coordination of visual attention and eye movements, with impaired performance when sustained attention and gaze have to be moved in opposite directions. Our results suggest that during early stages of central visual field loss, the optimal location for the substitution of foveal vision does not depend on the particular retinal location alone, as has previously been thought, but also on the gaze direction required by the task the patient wishes to perform.
Chia-Lun Liu; Hui-Yan Chiau; Philip Tseng; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan Antisaccade cost is modulated by contextual experience of location probability Journal Article In: Journal of Neurophysiology, vol. 103, no. 3, pp. 1438–1447, 2010. It is well known that pro- and antisaccades may deploy different cognitive processes. However, the specific reason why antisaccades have longer latencies than prosaccades is still under debate. In three experiments, we studied the factors contributing to the antisaccade cost by taking attentional orienting and target location probabilities into account. In experiment 1, using a new antisaccade paradigm, we directly tested Olk and Kingstone's hypothesis, which attributes longer antisaccade latency to the time it takes to reorient from the visual target to the opposite saccadic target. By eliminating the reorienting component in our paradigm, we found no significant difference between the latencies of the two saccade types. In experiment 2, we varied the proportion of prosaccades made to certain locations and found that latencies in the high location-probability (75%) condition were faster than those in the low location-probability condition. Moreover, antisaccade latencies were significantly longer when location probability was high. This pattern can be explained by the notion of competing pathways for pro- and antisaccades reported in others' findings. In experiment 3, we further explored the degree of modulation by location probability by decreasing the magnitude of the high probability from 75% to 65%. We again observed a pattern similar to that seen in experiment 2 but with smaller modulation effects. Together, these experiments indicate that the reorienting process is a critical factor in producing the antisaccade cost. Furthermore, the antisaccade cost can be modulated by probabilistic contextual information such as location probabilities.
Tomas Knapen; Martin Rolfs; Mark Wexler; Patrick Cavanagh The reference frame of the tilt aftereffect Journal Article In: Journal of Vision, vol. 10, no. 1, pp. 1–13, 2010. Perceptual aftereffects provide a sensitive tool to investigate the influence of eye and head position on visual processing. There have been recent indications that the tilt aftereffect (TAE) is remapped around the time of a saccade to remain aligned to the adapting location in the world. Here, we investigate the spatial frame of reference of the TAE by independently manipulating retinal position, gaze orientation, and head orientation between adaptation and test. The results show that the critical factor in the TAE is the correspondence between the adaptation and test locations in a retinotopic frame of reference, whereas world- and head-centric frames of reference do not play a significant role. Our results confirm that adaptation to orientation takes place at retinotopic levels of visual processing. We suggest that the remapping process that plays a role in visual stability does not transfer feature gain information around the time of eye (or head) movements.
Sepp Kollmorgen; Nora Nortmann; Sylvia Schröder; Peter König Influence of low-level stimulus features, task dependent factors, and spatial biases on overt visual attention Journal Article In: PLoS Computational Biology, vol. 6, no. 5, pp. e1000791, 2010. Visual attention is thought to be driven by the interplay between low-level visual features and task dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task dependent information content derived from our subjects' classification responses, and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes, thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant in different tasks. The contribution of task dependent information is a close runner-up. Specifically, in a standardized task of judging facial expressions it scores highly. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task, without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention.
Peter J. Kohler; G. P. Caplovitz; P.-J. Hsieh; J. Sun; P. U. Tse Motion fading is driven by perceived, not actual angular velocity Journal Article In: Vision Research, vol. 50, no. 11, pp. 1086–1094, 2010. After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. Here we examine the relationship between such 'motion fading' and perceived angular velocity. Using several different dot patterns that generate emergent virtual contours, we demonstrate that whenever there is a difference in the perceived angular velocity of two patterns of dots that are in fact rotating at the same angular velocity, there is also a difference in the time to undergo motion fading for those two patterns. Conversely, whenever two patterns show no difference in perceived angular velocity, even if in fact rotating at different angular velocities, we find no difference in the time to undergo motion fading. Thus, motion fading is driven by the perceived rather than actual angular velocity of a rotating stimulus.
A. Kotowicz; Ueli Rutishauser; Christof Koch Time course of target recognition in visual search Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 31, 2010. Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation (approximately 170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally and this event will trigger a saccade towards the target location. However, this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation for the subjective judgment of confidence rather than accuracy.
Hyung Lee; Mathias Abegg; Amadeo Rodriguez; John D. Koehn; Jason J. S. Barton Why do humans make antisaccade errors? Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 65–73, 2010. Antisaccade errors are attributed to failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response the participant has before the trial begins also contributes to error rate. Participants performed antisaccades in five conditions. The traditional design had two goals on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, but interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. Antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information, and a trend for stimulus-goal confusability. A control experiment for the effects of increasing angular separation between targets without changing these types of prior response information showed no effects on latency or error rate. We conclude that other factors besides prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation.
Lucica Iordanescu; Marcia Grabowecky; Steven L. Franconeri; Jan Theeuwes; Satoru Suzuki Characteristic sounds make you look at target objects more quickly Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 7, pp. 1736–1741, 2010. When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented “meow” sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target's characteristic sound speeded the saccadic search time within 215–220 msec and also guided the initial saccade toward the target, compared with presentation of a distractor's sound or with no sound. These results suggest that object-based auditory–visual interactions rapidly increase the target object's salience in visual search.
Osman Iyilikci; Cordula Becker; Onur Güntürkün; Sonia Amado Visual processing asymmetries in change detection Journal Article In: Perception, vol. 39, no. 6, pp. 761–769, 2010. Change detection is critically dependent on attentional mechanisms. However, the relation between an asymmetrical distribution of visuo-spatial attention and the detection of changes in visual scenes is not clear. Spatial tasks are known to induce a stronger activation of the right hemisphere. The effects of such visual processing asymmetries induced by a spatial task on change detection were investigated. When required to detect changes in the left and in the right visual fields, participants were significantly faster in detecting changes on the left than on the right. Importantly, this left-side superiority in change detection is not influenced by inspection time, suggesting a critical role of visual processing benefit for the left visual field.
Michal Jacob; Shaul Hochstein Graded recognition as a function of the number of target fixations Journal Article In: Vision Research, vol. 50, no. 1, pp. 107–117, 2010. Target recognition stages were studied by exposing observers to varying controlled numbers of target fixations. The target, present in half the displays, consisted of two identical cards (Identity Search Task; Jacob & Hochstein, 2009). Following more fixations, targets are better recognized, as indicated by increased hit rate and detectability (according to Unequal Variance Signal Detection Theory), decreased response time, and growing confidence, reflecting the current stage in the recognition process. Thus, gathering information over a specific scene region results from a growing number of fixations on that particular region. We conclude that several fixations on a scene location are necessary for achieving recognition.
Richard H. A. H. Jacobs; Remco Renken; Stefan Thumfart; Frans W. Cornelissen Different judgments about visual textures invoke different eye movement patterns Journal Article In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–13, 2010. Top-down influences on the guidance of the eyes are generally modeled as modulating influences on bottom-up salience maps. Interested in task-driven influences on how, rather than where, the eyes are guided, we expected differences in eye movement parameters accompanying beauty and roughness judgments about visual textures. Participants judged textures for beauty and roughness, while their gaze-behavior was recorded. Eye movement parameters differed between the judgments, showing task effects on how people look at images. Similarity in the spatial distribution of attention suggests that differences in the guidance of attention are non-spatial, possibly feature-based. During the beauty judgment, participants fixated on patches that were richer in color information, further supporting the idea that differences in the guidance of attention are feature-based. A finding of shorter fixation durations during beauty judgments may indicate that extraction of the relevant features is easier during this judgment. This finding is consistent with a more ambient scanning mode during this judgment. The differences in eye movement parameters during different judgments about highly repetitive stimuli highlight the need for models of eye guidance to go beyond salience maps, to include the temporal dynamics of eye guidance.
Anshul Jain; Stuart Fuller; Benjamin T. Backus In: PLoS ONE, vol. 5, no. 10, pp. e13295, 2010. The visual system can learn to use information in new ways to construct appearance. Thus, signals such as the location or translation direction of an ambiguously rotating wire frame cube, which are normally uninformative, can be learned as cues to determine the rotation direction. This perceptual learning occurs when the formerly uninformative signal is statistically associated with long-trusted visual cues (such as binocular disparity) that disambiguate appearance during training. In previous demonstrations, the newly learned cue was intrinsic to the perceived object, in that the signal was conveyed by the same image elements as the object itself. Here we used extrinsic new signals and observed no learning. We correlated three new signals with long-trusted cues in the rotating cube paradigm: one crossmodal (an auditory signal) and two within modality (visual). Cue recruitment did not occur in any of these conditions, either in single sessions or in ten sessions across as many days. These results suggest that the intrinsic/extrinsic distinction is important for the perceptual system in determining whether it can learn and use new information from the environment to construct appearance. Extrinsic cues do have perceptual effects (e.g. the "bounce-pass" illusion and McGurk effect), so we speculate that extrinsic signals must be recruited for perception, but only if certain conditions are met. These conditions might specify the age of the observer, the strength of the long-trusted cues, or the amount of exposure to the correlation.
Gustav Kuhn; John M. Findlay Misdirection, attention and awareness: Inattentional blindness reveals temporal relationship between eye movements and visual awareness Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 1, pp. 136–146, 2010. We designed a magic trick that could be used to investigate how misdirection can prevent people from perceiving a visually salient event, thus offering a novel paradigm to examine inattentional blindness. We demonstrate that participants' verbal reports reflect what they saw rather than inferences about how they thought the trick was done and thus provide a reliable index of conscious perception. Eye movements revealed that for a subset of participants their conscious perception was not related to where they were looking at the time of the event and thus demonstrate how overt and covert attention can be spatially dissociated. However, detection of the event resulted in rapid shifts of eye movements towards the detected event, thus indicating a strong temporal link between overt and covert attention, and that covert attention can be allocated at least 2 or 3 saccade targets ahead of where people are fixating.
Victor Kuperman; Raymond Bertram; R. Harald Baayen Processing trade-offs in the reading of Dutch derived words Journal Article In: Journal of Memory and Language, vol. 62, no. 2, pp. 83–97, 2010. This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., plaats+ing "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter suffixes, we observe a stronger effect of full-forms (derived word frequency) on reading times than in words with longer suffixes. Also, processing times increase if the base word (plaats) and the suffix (-ing) differ in the amount of information carried by their morphological families (sets of words that share the base or the suffix). We model this imbalance of informativeness in the morphological families with the information-theoretical measure of relative entropy and demonstrate its predictivity for the processing times. The observed processing trade-offs are discussed in the context of current models of morphological processing.
Gregory J. Zelinsky; Andrei Todor The role of "rescue saccades" in tracking objects through occlusions Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 29–29, 2010. We hypothesize that our ability to track objects through occlusions is mediated by timely assistance from gaze in the form of "rescue saccades": eye movements to tracked objects that are in danger of being lost due to impending occlusion. Observers tracked 2-4 target sharks (out of 9) for 20 s as they swam through a rendered 3D underwater scene. Targets were either allowed to enter into occlusions (occlusion trials) or not (no occlusion trials). Tracking accuracy with 2-3 targets was 92% regardless of target occlusion but dropped to 74% on occlusion trials with four targets (no-occlusion trials remained accurate: 83%). This pattern was mirrored in the frequency of rescue saccades. Rescue saccades accompanied approximately 50% of the Track 2-3 target occlusions, but only 34% of the Track 4 occlusions. Their frequency also decreased with increasing distance between a target and the nearest other object, suggesting that it is the potential for target confusion that summons a rescue saccade, not occlusion itself. These findings provide evidence for a tracking system that monitors for events that might cause track loss (e.g., occlusions) and requests help from the oculomotor system to resolve these momentary crises. As the number of crises increases with the number of targets, some requests for help go unsatisfied, resulting in degraded tracking.
Felix A. Wichmann; Jan Drewes; Pedro Rosas; Karl R. Gegenfurtner Animal detection in natural scenes: Critical features revisited Journal Article In: Journal of Vision, vol. 10, no. 4, pp. 1–27, 2010. S. J. Thorpe, D. Fize, and C. Marlot (1996) showed how rapidly observers can detect animals in images of natural scenes, but it is still unclear which image features support this rapid detection. A. B. Torralba and A. Oliva (2003) suggested that a simple image statistic based on the power spectrum allows the absence or presence of objects in natural scenes to be predicted. We tested whether human observers make use of power spectral differences between image categories when detecting animals in natural scenes. In Experiments 1 and 2 we found performance to be essentially independent of the power spectrum. Computational analysis revealed that the ease of classification correlates with the proposed spectral cue without being caused by it. This result is consistent with the hypothesis that in commercial stock photo databases a majority of animal images are pre-segmented from the background by the photographers and this pre-segmentation causes the power spectral differences between image categories and may, furthermore, help rapid animal detection. Data from a third experiment are consistent with this hypothesis. Together, our results make it exceedingly unlikely that human observers make use of power spectral differences between animal- and no-animal images during rapid animal detection. In addition, our results point to potential confounds in the commercially available “natural image” databases whose statistics may be less natural than commonly presumed.
Carrick C. Williams Not all visual memories are created equal Journal Article In: Visual Cognition, vol. 18, no. 2, pp. 201–228, 2010. Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour or category, or were unrelated, while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by number of target object presentations, not number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.
Bartholomäus Wissmath; Daniel Stricker; David Weibel; Eva Siegenthaler; Fred W. Mast The illusion of being located in dynamic virtual environments Journal Article In: Journal of Eye Movement Research, vol. 3, no. 5, pp. 1–8, 2010. Attention allocation towards the mediated environment is assumed to be a necessary precondition to feel localized in a virtual world. In presence research, however, the potential of eye movement research has not been fully exploited so far. In this study, participants (N=44) rode on a virtual roller coaster simulation. We compare participants scoring high versus low on presence. During the ride, the eye movements and subjective ex post presence judgments were assessed. We found high sensations of presence to be associated with fewer fixations and a tendency towards longer fixation durations. In contrast to the immersive tendency trait, eye movement parameters can predict presence.
Jan Zwickel; Melissa L. H. Võ How the presence of persons biases eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 2, pp. 257–262, 2010. We investigated modulation of gaze behavior of observers viewing complex scenes that included a person. To assess spontaneous orientation-following, and in contrast to earlier studies, we did not make the person salient via instruction or low-level saliency. Still, objects that were referred to by the orientation of the person were visited earlier, more often, and longer than when they were not referred to. Analysis of fixation sequences showed that the number of saccades to the cued and uncued objects differed only for saccades that started from the head region, but not for saccades starting from a control object or from a body region. We therefore argue that viewing a person leads to an increase in spontaneous following of the person's viewing direction even when the person plays no role in scene understanding and is not made prominent.
Joseph Tao-yi Wang; Michael L. Spezio; Colin F. Camerer Pinocchio's pupil: Using eyetracking and pupil dilation to understand truth telling and deception in games Journal Article In: American Economic Review, vol. 100, no. 3, pp. 984–1007, 2010. We report experiments on sender-receiver games with an incentive for senders to exaggerate. Subjects "overcommunicate": messages are more informative of the true state than they should be, in equilibrium. Eyetracking shows that senders look at payoffs in a way that is consistent with a level-k model. A combination of sender messages and lookup patterns predicts the true state about twice as often as predicted by equilibrium. Using these measures to infer the state would enable receiver subjects to hypothetically earn 16–21 percent more than they actually do, an economic value of 60 percent of the maximum increment.
Michael L. Waterston; Christopher C. Pack Improved discrimination of visual stimuli following repetitive transcranial magnetic stimulation Journal Article In: PLoS ONE, vol. 5, no. 4, pp. e10354, 2010. Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception.
Marcus R. Watson; Allison A. Brennan; Alan Kingstone; James T. Enns Looking versus seeing: Strategies alter eye movements during visual search Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 4, pp. 543–549, 2010. Visual search can be made more efficient by adopting a passive cognitive strategy (i.e., letting the target "pop" into mind) rather than by trying to actively guide attention. In the present study, we examined how this strategic benefit is linked to eye movements. Results show that participants using a passive strategy wait longer before beginning to move their eyes and make fewer saccades than do active participants. Moreover, the passive advantage stems from more efficient use of the information in a fixation, rather than from a wider attentional window. Individual difference analyses indicate that strategies also change the way eye movements are related to search success, with a rapid saccade rate predicting success among active participants, and fewer and larger amplitude saccades predicting success among passive participants. A change in mindset, therefore, alters how oculomotor behaviors are harnessed in the service of visual search.
Matthew David Weaver; Joseph Phillips; Johan Lauwereyns Semantic influences from a brief peripheral cue depend on task set Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 7, pp. 1249–1255, 2010. Previous research has shown semantic influence from irrelevant peripheral cues on the spatial allocation of covert visual attention. The present study explored whether the task set determines the extent of such semantic influence. A spatial cueing paradigm with strict eye movement control was used, where cues were either first names (male or female) or emotionally charged words (positive or negative) followed by a face target. Participants discriminated either the gender (male or female) or the emotion (positive or negative) of the face. When there was high information overlap between cue and task set, responses were faster when the cue and target value were semantically congruent than when they were incongruent. It was concluded that the semantically related cues primed a task-influencing response independently of spatial attention allocation processes, showing that semantic influences from brief peripheral cues depend on the degree of information overlap between cue and task set.
Noriko Yamagishi; Stephen J. Anderson; Mitsuo Kawato The observant mind: Self-awareness of attentional status Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 277, no. 1699, pp. 3421–3426, 2010. Visual perception is dependent not only on low-level sensory input but also on high-level cognitive factors such as attention. In this paper, we sought to determine whether attentional processes can be internally monitored for the purpose of enhancing behavioural performance. To do so, we developed a novel paradigm involving an orientation discrimination task in which observers had the freedom to delay target presentation, by any amount required, until they judged their attentional focus to be complete. Our results show that discrimination performance is significantly improved when individuals self-monitor their level of visual attention and respond only when they perceive it to be maximal. Although target delay times varied widely from trial to trial (range 860 ms to 12.84 s), we show that their distribution is Gaussian when plotted on a reciprocal latency scale. We further show that the neural basis of the delay times for judging attentional status is well explained by a linear rise-to-threshold model. We conclude that attentional mechanisms can be self-monitored for the purpose of enhancing human decision-making processes, and that the neural basis of such processes can be understood in terms of a simple, yet broadly applicable, linear rise-to-threshold model.
Melissa L.-H. Võ; Werner X. Schneider A glimpse is not a glimpse: Differential processing of flashed scene previews leads to differential target search benefits Journal Article In: Visual Cognition, vol. 18, no. 2, pp. 171–200, 2010. What information can we extract from an initial glimpse of a scene and how do people differ in the way they process visual information? In Experiment 1, participants searched 3-D-rendered images of naturalistic scenes for embedded target objects through a gaze-contingent window. A briefly flashed scene preview (identical, background, objects, or control) preceded each search scene. We found that search performance varied as a function of the participants' reported ability to distinguish between previews. Experiment 2 further investigated the source of individual differences using a whole-report task. Data were analysed following the "Theory of Visual Attention" approach, which allows the assessment of visual processing efficiency parameters. Results from both experiments indicate that during the first glimpse of a scene global processing of visual information predominates and that individual differences in initial scene processing and subsequent eye movement behaviour are based on individual differences in visual perceptual processing speed.
Melissa L.-H. Võ; Jan Zwickel; Werner X. Schneider Has someone moved my plate? The immediate and persistent effects of object location changes on gaze allocation during natural scene viewing Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 5, pp. 1251–1255, 2010. In this study, we investigated the immediate and persisting effects of object location changes on gaze control during scene viewing. Participants repeatedly inspected a randomized set of naturalistic scenes for later questioning. On the seventh presentation, an object was shown at a new location, whereas the change was reversed for all subsequent presentations of the scene. We tested whether deviations from stored scene representations would modify eye movements to the changed regions and whether these effects would persist. We found that changed objects were looked at longer and more often, regardless of change reportability. These effects were most pronounced immediately after the change occurred and quickly leveled off once a scene remained unchanged. However, participants continued to perform short validation checks to changed scene regions, which implies a persistent modulation of eye movement control beyond the occurrence of object location changes.
Nicholas J. Wade; Benjamin W. Tatler Recognition and eye movements with partially hidden pictures of faces and cars in different orientations Journal Article In: i-Perception, vol. 1, no. 2, pp. 103–120, 2010. Inverted faces are more difficult to identify than upright ones. This even applies when pictures of faces are partially hidden in geometrical designs so that it takes some seconds to recognise them. Similar, though not as pronounced, orientation preferences apply to familiar objects. We compared the recognition times and patterns of eye movements for two sets of familiar symmetrical objects. Pictures of faces and of cars were embedded in patterns of concentric circles in order to render them difficult to recognise. They were presented in four orientations, at 90° intervals from upright. Two experiments were conducted with the same set of stimuli; experiment 1 required participants to respond in terms of faces or cars, and in experiment 2 responses were made to the orientation of the embedded image independently of its class. Upright faces were recognised more accurately and faster than those in other orientations; fixation durations were longer for upright faces even before recognition. These results applied to both experiments. Orientation effects for cars were not pronounced and distinctions between 90°, 180°, and 270° embedded images were not consistent; this was the case in both experiments.
Hang Zhang; Camille Morvan; Laurence T. Maloney Gambling in the visual periphery: A conjoint-measurement analysis of human ability to judge visual uncertainty Journal Article In: PLoS Computational Biology, vol. 6, no. 12, pp. e1001023, 2010. Recent work in motor control demonstrates that humans take their own motor uncertainty into account, adjusting the timing and goals of movement so as to maximize expected gain. Visual sensitivity varies dramatically with retinal location and target, and models of optimal visual search typically assume that the visual system takes retinal inhomogeneity into account in planning eye movements. Such models can then use the entire retina rather than just the fovea to speed search. Using a simple decision task, we evaluated human ability to compensate for retinal inhomogeneity. We first measured observers' sensitivity for targets, varying contrast and eccentricity. Observers then repeatedly chose between targets differing in eccentricity and contrast, selecting the one they would prefer to attempt: e.g., a low contrast target at 2° versus a high contrast target at 10°. Observers knew they would later attempt some of their chosen targets and receive rewards for correct classifications. We evaluated performance in three ways. Equivalence: Do observers' judgments agree with their actual performance? Do they correctly trade off eccentricity and contrast and select the more discriminable target in each pair? Transitivity: Are observers' choices self-consistent? Dominance: Do observers understand that increased contrast improves performance? Decreased eccentricity? All observers exhibited patterned failures of equivalence, and seven out of eight observers failed transitivity. There were significant but small failures of dominance. All these failures together reduced their winnings by 10%–18%.
Li Zhang; Wu Li Perceptual learning beyond retinotopic reference frame Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 36, pp. 15969–15974, 2010. Repetitive experience with the same visual stimulus and task can remarkably improve behavioral performance on the task. This well-known perceptual-learning phenomenon is usually specific to the trained retinal- or visual-field location, which is taken as an indication of plastic changes in retinotopic visual areas. In previous studies of perceptual learning, however, a change in stimulus location on the retina is accompanied by positional changes of the stimulus in nonretinotopic frames of reference, such as relative to the head and other objects. It is unclear, therefore, whether the putative location specificity is exclusively retinotopic or if it could also depend on nonretinotopic representation of the stimulus, which is particularly important for multisensory and sensorimotor integration as well as for maintenance of stable visual percepts. Here, by manipulating subjects' gaze direction to control spatial and retinal locations of stimuli independently, we found that, when the stimulated retinal regions were held constant, the improvement with training in motion-direction discrimination of two successively displayed stimuli was restricted to the relative spatial position of the stimuli but independent of their absolute locations in head- and world-centered frames. These findings indicate location specificity of perceptual learning beyond the retinotopic frame of reference, suggesting a pliable spatiotopic mechanism that can be specifically shaped by experience for better spatiotemporal integration of the learned stimuli.
Ting Zhang; Lu Qi Xiao; Stanley A. Klein; Dennis M. Levi; Cong Yu Decoupling location specificity from perceptual learning of orientation discrimination Journal Article In: Vision Research, vol. 50, no. 4, pp. 368–374, 2010. Perceptual learning of orientation discrimination is reported to be precisely specific to the trained retinal location. This specificity is often taken as evidence for localizing the site of orientation learning to retinotopic cortical areas V1/V2. However, the extant physiological evidence for training-improved orientation tuning in V1/V2 neurons is controversial and weak. Here we demonstrate substantial transfer of orientation learning across retinal locations, either from the fovea to the periphery or amongst peripheral locations. Most importantly, we found that a brief pretest at a peripheral location before foveal training enabled complete transfer of learning, so that additional practice at that peripheral location resulted in no further improvement. These results indicate that location specificity in orientation learning depends on the particular training procedures, and is not necessarily a genuine property of orientation learning. We suggest that non-retinotopic high brain areas may be responsible for orientation learning, consistent with the extant neurophysiological data.
Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor Perisaccadic stereo depth with zero retinal disparity Journal Article In: Current Biology, vol. 20, no. 13, pp. 1176–1181, 2010. When an object is viewed binocularly, unequal perspective projections of the two eyes' half images (binocular disparity) provide a cue for the sensation of stereo depth. For almost 200 years, binocular disparity has remained synonymous with retinal disparity [1], which is computed by subtracting the distance of each half image from its respective fovea [2]. However, binocular disparity could also be coded in headcentric instead of retinal coordinates, by combining eye position and retinal image position in each eye and representing disparity as differences between visual directions of half images relative to the head [3]. Although these two disparity-coding schemes suggest very different neural mechanisms, both offer identical predictions for stereopsis in almost every viewing condition, making it difficult to empirically distinguish between them. We designed a novel stimulus that uses perisaccadic spatial distortion [4] to generate inconsistency between headcentric and retinal disparity. Foveal half images flashed asynchronously just before a horizontal saccade have zero retinal disparity, yet they produce a sensation of depth consistent with a nonzero headcentric disparity. Furthermore, this headcentric disparity can cancel and reverse the perceived depth stimulated with nonzero retinal disparity. This is the first demonstration that a coding scheme other than retinal disparity has a role in human stereopsis.
Benjamin W. Tatler; Nicholas J. Wade; Hoi Kwan; John M. Findlay; Boris M. Velichkovsky Yarbus, eye movements, and vision Journal Article In: i-Perception, vol. 1, no. 1, pp. 7–27, 2010. The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967. In stark contrast, the published material in English concerning his life is scant. We provide a brief biography of Yarbus and assess his impact on contemporary approaches to research on eye movements. While early interest in his work focused on his study of stabilised retinal images, more recently this has been replaced with interest in his work on the cognitive influences on scanning patterns. We extended his experiment on the effect of instructions on viewing a picture using a portrait of Yarbus rather than a painting. The results obtained broadly supported those found by Yarbus.
Jessica Taubert; Pamela J. Marsh; Tracey A. Shaw When you turn the other cheek: A preference for novel viewpoints of familiar faces Journal Article In: Perception, vol. 39, no. 3, pp. 429–432, 2010. Inferences about the psychobiological processes that underlie face perception have been drawn from the spontaneous behaviour of eyes. Using a visual paired-comparison task, we recorded the eye movements of twenty adults as they viewed pairs of faces that differed in their relative familiarity. The results indicate an advantage for novel viewpoints of familiar faces over familiar viewpoints of familiar faces and novel faces. We conclude that this preference serves the face recognition system by collecting the variation necessary to build robust representations of identity.
Abtine Tavassoli; Dario L. Ringach When your eyes see more than you do Journal Article In: Current Biology, vol. 20, no. 3, pp. 93–94, 2010. Visual information is used by the brain to construct a conscious experience of the visual world and to guide motor actions [1]. Here we report a study of how eye movements and perception relate to each other. We compared the ability of human observers to perceive image motion with the reliability of their eyes to track the motion of a target [2], [3] and [4], the goal being to test whether both motor and sensory processes are based on the same set of signals and limited by a shared source of noise [2] and [4]. We found that the oculomotor system can detect fluctuations in the velocity of a moving target better than the observer. Surprisingly, in some conditions, eye movements reliably respond to the velocity fluctuations of a moving target that are otherwise perceptually invisible to the subjects. The implication is that visual motion signals exist in the brain that can be used to guide motor actions without evoking a perceptual outcome or being accessible to conscious scrutiny.
Illia Tchernikov; Mazyar Fallah A color hierarchy for automatic target selection Journal Article In: PLoS ONE, vol. 5, no. 2, pp. e9338, 2010. Visual processing of color starts at the cones in the retina and continues through ventral stream visual areas, called the parvocellular pathway. Motion processing also starts in the retina but continues through dorsal stream visual areas, called the magnocellular system. Color and motion processing are functionally and anatomically discrete. Previously, motion processing areas MT and MST have been shown to have no color selectivity to a moving stimulus; the neurons were colorblind whenever color was presented along with motion. This occurs when the stimuli are luminance-defined versus the background and is considered achromatic motion processing. Is motion processing independent of color processing? We find that motion processing is intrinsically modulated by color. Color modulated the smooth pursuit eye movements produced upon saccading to an aperture containing a surface of coherently moving dots upon a black background. Furthermore, when two surfaces that differed in color were present, one surface was automatically selected based upon a color hierarchy. The strength of that selection depended upon the distance between the two colors in color space. A quantifiable color hierarchy for automatic target selection has wide-ranging implications from sports to advertising to human-computer interfaces.
Anna L. Telling; Antje S. Meyer; Glyn W. Humphreys Distracted by relatives: Effects of frontal lobe damage on semantic distraction Journal Article In: Brain and Cognition, vol. 73, no. 3, pp. 203–214, 2010. When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke, Humphreys, Watson, Meyer, & Telling, 2008; Moores, Laiti, & Chelazzi, 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection.
Jan Theeuwes; Sebastiaan Mathôt; Alan Kingstone Object-based eye movements: The eyes prefer to stay within the same object Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 3, pp. 597–601, 2010. The present study addressed the question of whether we prefer to make eye movements within or between objects. More specifically, when fixating one end of an object, are we more likely to make the next saccade within that same object or to another object? Observers had to discriminate small letters placed on rectangles similar to those used by Egly, Driver, and Rafal (1994). Following an exogenous cue, observers made a saccade to one end of one of the rectangles. The small target letter, which could be discriminated only after it had been fixated, could appear either on the same object or on a different object. Consistent with object-based attention, we show that observers prefer to make an eye movement to the other end of the fixated same object, rather than to the equidistant end of a different object. It is concluded that there is a preference to make eye shifts within the same object, rather than between objects.
Aidan A. Thompson; Denise Y. P. Henriques Locations of serial reach targets are coded in multiple reference frames Journal Article In: Vision Research, vol. 50, no. 24, pp. 2651–2660, 2010. Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first), then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by an egocentric frame anchored to the eye. However, the amount of change in this distance was smaller than predicted by a pure eye-fixed representation, suggesting that relative positions of the targets or allocentric coding was also used in sequential reach planning. The spatial coding and updating of sequential reach target locations seems to rely on a combined weighting of multiple reference frames, with one of them centered on the eye.
Grayden J. F. Solman; Daniel Smilek Item-specific location memory in visual search Journal Article In: Vision Research, vol. 50, no. 23, pp. 2430–2438, 2010. In two samples, we demonstrate that visual search performance is influenced by memory for the locations of specific search items across trials. We monitored eye movements as observers searched for a target letter in displays containing 16 or 24 letters. From trial to trial the configuration of the search items was either Random, fully Repeated or similar but not identical (i.e., Intermediate). We found a graded pattern of response times across conditions with slowest times in the Random condition and fastest responses in the Repeated condition. We also found that search was comparably efficient in the Intermediate and Random conditions but more efficient in the Repeated condition. Importantly, the target on a given trial was fixated more accurately in the Repeated and Intermediate conditions relative to the Random condition. We suggest a tradeoff between memory and perception in search as a function of the physical scale of the search space.
Andreas Sprenger; Maren Lappe-Osthege; Silke Talamo; Steffen Gais; Hubert Kimmig; Christoph Helmchen Eye movements during REM sleep and imagination of visual scenes Journal Article In: NeuroReport, vol. 21, no. 1, pp. 45–49, 2010. @article{Sprenger2010, It has been hypothesized that rapid eye movements (REMs) during sleep reflect the process of looking around in dreams. We questioned whether REMs differ from eye movements in wakefulness while imagining previously seen visual stimuli (dots, static images, videos). After looking at these stimuli individuals were asked to remember and imagine them. Subsequently, their REMs were recorded at the sleep laboratory. Kinematic parameters of REMs were similar to saccadic eye movements to remembered stimuli with closed eyes, irrespective of the stimulus type. In contrast, peak velocity of eye movements with open eyes was similar to REMs when semantic, but not nonsemantic, contents were imagined. Thus, REMs may be related to exploratory saccadic behaviour in the awake state when remembering visual stimuli. |
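Comparing the kinematic parameters of REMs and waking saccades usually means comparing main-sequence fits, i.e., peak velocity as a saturating function of amplitude. A minimal sketch of such a fit follows; the data are fake and this is not necessarily the authors' exact procedure.

    # Main-sequence model: V = v_max * (1 - exp(-A / c)). Fitting it separately
    # to REMs and to waking saccades supports the comparison described above.
    import numpy as np
    from scipy.optimize import curve_fit

    def main_sequence(amplitude_deg, v_max, c):
        return v_max * (1.0 - np.exp(-amplitude_deg / c))

    def fit_main_sequence(amplitudes_deg, peak_velocities_dps):
        params, _ = curve_fit(main_sequence, amplitudes_deg, peak_velocities_dps,
                              p0=(500.0, 10.0))  # starting guesses for v_max, c
        return params

    amps = np.array([2.0, 5.0, 10.0, 15.0, 20.0])          # fake amplitudes (deg)
    vels = main_sequence(amps, 450.0, 8.0) + np.random.normal(0.0, 10.0, amps.size)
    print(fit_main_sequence(amps, vels))  # fitted (v_max, c) for this movement set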
Damian G. Stephen; Daniel Mirman Interactions dominate the dynamics of visual cognition Journal Article In: Cognition, vol. 115, no. 1, pp. 154–165, 2010. @article{Stephen2010, Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. |
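One common way to operationalize the additive-versus-multiplicative contrast, sketched here under simplifying assumptions rather than as the paper's exact statistics: sums of independent components predict roughly Gaussian response distributions, whereas multiplicative interactions predict lognormal ones, so the two fits can be compared on, for example, fixation durations.

    # Compare Gaussian (additive prediction) and lognormal (multiplicative
    # prediction) fits to a duration distribution by AIC; lower AIC wins.
    import numpy as np
    from scipy import stats

    def compare_additive_vs_multiplicative(durations_ms):
        x = np.asarray(durations_ms, dtype=float)
        mu, sigma = stats.norm.fit(x)
        ll_norm = stats.norm.logpdf(x, mu, sigma).sum()
        shape, loc, scale = stats.lognorm.fit(x, floc=0)  # loc fixed at zero
        ll_lognorm = stats.lognorm.logpdf(x, shape, loc, scale).sum()
        # AIC = 2k - 2*log-likelihood, with k = 2 free parameters per model
        return {"AIC_normal": 4 - 2 * ll_norm, "AIC_lognormal": 4 - 2 * ll_lognorm}

    fake_durations = np.random.lognormal(mean=5.5, sigma=0.4, size=1000)
    print(compare_additive_vs_multiplicative(fake_durations))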
Catherine Stevens; Heather Winskel; Clare Howell; Lyne-Marine Vidal; Cyril Latimer; Josephine Milne-Home Perceiving dance: Schematic expectations guide experts' scanning of a contemporary dance film Journal Article In: Journal of Dance Medicine & Science, vol. 14, no. 1, pp. 19–25, 2010. @article{Stevens2010, Eye fixations and saccades (eye movements) of expert and novice dance observers were compared to determine the effect of acquired expectations on observations of human movement, body morphology, and dance configurations. As hypothesized, measured fixation times of dance experts were significantly shorter than those of novices. In a second viewing of the same sequences, novices recorded significantly shorter fixations than those recorded during viewing session 1. Saccades recorded from experts were significantly faster than those of novices. Although both experts and novices fixated background regions, most likely making use of extrafoveal or peripheral vision to anticipate movement and configurations, novices fixated background regions significantly more than experts in viewing session 1. Their enhanced speed of visual processing suggests that dance experts are adept at anticipating movement and rapidly processing material, probably aided by acquired schemata or expectations in long-term memory and recognition of body and movement configurations. |
Goedele Van Belle; Peter De Graef; Karl Verfaillie; Thomas Busigny; Bruno Rossion Whole not hole: Expert face recognition requires holistic perception Journal Article In: Neuropsychologia, vol. 48, no. 9, pp. 2620–2629, 2010. @article{VanBelle2010, Face recognition is an important ability of the human brain, yet its underlying mechanisms are still poorly understood. Two opposite views have been proposed to account for human face recognition expertise: the ability to extract the most diagnostic local information, feature-by-feature (analytical view), or the ability to process all features at once over the whole face (holistic view). To help clarify this debate, we used an original gaze-contingent stimulus presentation method to compare normal observers and a brain-damaged patient specifically impaired at face recognition (prosopagnosia). When a single central facial feature was revealed at a time through a gaze-contingent window, normal observers' performance at an individual face matching task decreased to the patient level. However, when only the central feature was masked, forcing normal observers to rely on the whole face except the fixated feature, their performance was almost unaffected. In contrast, the prosopagnosic patient's performance decreased dramatically in this latter condition. These results were independent of the absolute size of the face and window/mask. This dissociation indicates that expertise in face recognition does not rest on the ability to analyze diagnostic local detailed features sequentially but rather on the ability to see the individual features of a face all at once, a function that is critically impaired in acquired prosopagnosia. |
Goedele Van Belle; Peter De Graef; Karl Verfaillie; Bruno Rossion; Philippe Lefèvre Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–13, 2010. @article{VanBelle2010a, Human observers are experts at face recognition, yet a simple 180° rotation of a face photograph decreases recognition performance substantially. A full understanding of this phenomenon, which is believed to be important for clarifying the nature of our expertise in face recognition, is still lacking. According to a long-standing and influential hypothesis, an inverted face cannot be perceived as holistically as an upright face and has to be analyzed local feature by local feature. Here, we tested this holistic perception hypothesis of the face inversion effect by means of a gaze-contingent stimulus presentation. When observers' perception was restricted to one fixated feature at a time by a gaze-contingent window, performance in an individual face matching task was almost unaffected by inversion. However, when a mask covered the fixated feature, preventing the use of local information at high resolution, the decrement of performance with inversion was even larger than in a normal (full view) condition. These observations provide evidence that the face inversion effect is caused by an inability to perceive the individual face as a whole rather than as a collection of specific features and thus support the view that observers' expertise at upright face recognition is due to the ability to perceive an individual face holistically. |
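The window and mask manipulations used in the two studies above reduce, conceptually, to a few lines of image arithmetic. The sketch below is a simplified offline version with hypothetical names (static grayscale image, circular region); real gaze-contingent displays update online from streamed gaze samples.

    # Gaze-contingent "window": reveal only the fixated region.
    # Gaze-contingent "mask": hide only the fixated region.
    import numpy as np

    def gaze_contingent(image, gaze_xy, radius_px, mode="window", fill=128):
        # image: 2-D grayscale array; gaze_xy: (x, y) in pixels.
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w]
        inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
        out = image.copy()
        if mode == "window":
            out[~inside] = fill  # blank everything except the fixated feature
        else:
            out[inside] = fill   # blank only the fixated feature
        return out

    face = np.random.randint(0, 256, (480, 640)).astype(np.uint8)  # stand-in image
    windowed = gaze_contingent(face, (320, 240), 60, mode="window")
    masked = gaze_contingent(face, (320, 240), 60, mode="mask")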
Goedele Van Belle; Philippe Lefèvre; Renaud Laguesse; Thomas Busigny; Peter Graef; Karl Verfaillie; Bruno Rossion Feature-based processing of personally familiar faces in prosopagnosia: Evidence from eye-gaze contingency Journal Article In: Behavioural Neurology, vol. 23, no. 4, pp. 255–257, 2010. @article{VanBelle2010b, How familiar and unfamiliar faces are perceived remains largely unknown. Two views have dominated this field of research. On the one hand, recordings of eye fixations on faces and response classification experiments suggest that a face is processed in terms of its individual components, or facial features (mouth, eyes, nose,...), a strategy called analytical processing. On the other hand, there is strong behavioral evidence for interdependence in the processing of different features of a face, rather supporting holistic processing of the face. According to the latter holistic view, facial features are simultaneously perceived and integrated into a single representation, so that the perceptual field is that of the whole face. To shed light on this issue, in two recent studies, we recorded eye movements in a neurological patient suffering from a selective impairment in face recognition (acquired prosopagnosia). Previously, we showed that (1) PS fixates exactly on each of the main features of the face (mouth, left eye, right eye), contrary to normal observers who fixate mainly centrally on the top of the nose, around the geometric centre of the face. Moreover (2), an original gaze-contingent stimulus presentation method applied to an unfamiliar face discrimination task led us to demonstrate that, contrary to normal observers, PS' perceptual field appears to be limited to one central feature fixated at a time. These observations indicate that prosopagnosia prevents processing the multiple elements of a whole face simultaneously, and thus that this ability is a key aspect in human face recognition expertise. Here, we extend these observations by testing the same patient with eye gaze contingency while she attempts to identify a large set of personally familiar individuals from their face. |
Goedele Belle; Meike Ramon; Philippe Lefèvre; Bruno Rossion Fixation patterns during recognition of personally familiar and unfamiliar faces Journal Article In: Frontiers in Psychology, vol. 1, pp. 20, 2010. @article{Belle2010, Previous studies recording eye gaze during face perception have rendered somewhat inconclusive findings with respect to fixation differences between familiar and unfamiliar faces. This can be attributed to a number of factors that differ across studies: the type and extent of familiarity with the faces presented, the definition of areas of interest subject to analyses, as well as a lack of consideration for the time course of scan patterns. Here we sought to address these issues by recording fixations in a recognition task with personally familiar and unfamiliar faces. After a first common fixation on a central superior location of the face in between features, suggesting initial holistic encoding, and a subsequent left eye bias, local features were focused and explored more for familiar than unfamiliar faces. Although the number of fixations did not differ for un-/familiar faces, the locations of fixations began to differ before familiarity decisions were provided. This suggests that in the context of familiarity decisions without time constraints, differences in processing familiar and unfamiliar faces arise relatively early - immediately upon initiation of the first fixation to identity-specific information - and that the local features of familiar faces are processed more than those of unfamiliar faces. |
Jeroen J. A. Boxtel; Naotsugu Tsuchiya; Christof Koch Opposing effects of attention and consciousness on afterimages Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 19, pp. 8883–8888, 2010. @article{Boxtel2010, The brain's ability to handle sensory information is influenced by both selective attention and consciousness. There is no consensus on the exact relationship between these two processes and whether they are distinct. So far, no experiment has simultaneously manipulated both. We carried out a full factorial 2 x 2 study of the simultaneous influences of attention and consciousness (as assayed by visibility) on perception, correcting for possible concurrent changes in attention and consciousness. We investigated the duration of afterimages for all four combinations of high versus low attention and visible versus invisible. We show that selective attention and visual consciousness have opposite effects: paying attention to the grating decreases the duration of its afterimage, whereas consciously seeing the grating increases the afterimage duration. These findings provide clear evidence for distinctive influences of selective attention and consciousness on visual perception. |
Elizabeth R. Schotter; Raymond W. Berry; Craig R. M. McKenzie; Keith Rayner Gaze bias: Selective encoding and liking effects Journal Article In: Visual Cognition, vol. 18, no. 8, pp. 1113–1132, 2010. @article{Schotter2010, People look longer at things that they choose than things they do not choose. How much of this tendency—the gaze bias effect—is due to a liking effect compared to the information encoding aspect of the decision-making process? Do these processes compete under certain conditions? We monitored eye movements during a visual decision-making task with four decision prompts: Like, dislike, older, and newer. The gaze bias effect was present during the first dwell in all conditions except the dislike condition, when the preference to look at the liked item and the goal to identify the disliked item compete. Colour content (whether a photograph was colour or black-and-white), not decision type, influenced the gaze bias effect in the older/newer decisions because colour is a relevant feature for such decisions. These interactions appear early in the eye movement record, indicating that gaze bias is influenced during information encoding. |
Christopher R. Sears; Charmaine L. Thomas; Jessica M. Lehuquet; Jeremy C. S. Johnson Attentional biases in dysphoria: An eye-tracking study of the allocation and disengagement of attention Journal Article In: Cognition and Emotion, vol. 24, no. 8, pp. 1349–1368, 2010. @article{Sears2010, This study looked for evidence of biases in the allocation and disengagement of attention in dysphoric individuals. Participants studied images for a recognition memory test while their eye fixations were tracked and recorded. Four image types were presented (depression-related, anxiety- related, positive, neutral) in each of two study conditions. For the simultaneous study condition, four images (one of each type) were presented simultaneously for 10 seconds, and the number of fixations and the total fixation time to each image was measured, similar to the procedure used by Eizenman et al. (2003) and Kellough, Beevers, Ellis, and Wells (2008). For the sequential study condition, four images (one of each type) were presented consecutively, each for 4 seconds; to measure disengagement of attention an endogenous cuing procedure was used (Posner, 1980). Dysphoric individuals spent significantly less time attending to positive images than non-dysphoric individuals, but there were no group differences in attention to depression-related images. There was also no evidence of a dysphoria-related bias in initial shifts of attention. With respect to the disengagement of attention, dysphoric individuals were slower to disengage their attention from depression-related images. The recognition memory data showed that dysphoric individuals had poorer memory for emotional images, but there was no evidence of a conventional mood-congruent memory bias. Differences in the attentional and memory biases observed in depressed and dysphoric individuals are discussed. |
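The attention measures reported above, fixation counts and total fixation time per image, are an area-of-interest (AOI) aggregation. A minimal sketch with hypothetical data structures:

    # fixations: list of (x, y, duration_ms); aois: {label: (x0, y0, x1, y1)}.
    from collections import defaultdict

    def aoi_measures(fixations, aois):
        time_in = defaultdict(float)
        count_in = defaultdict(int)
        for x, y, dur in fixations:
            for label, (x0, y0, x1, y1) in aois.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    time_in[label] += dur
                    count_in[label] += 1
                    break  # AOIs assumed non-overlapping
        return dict(time_in), dict(count_in)

    aois = {"depression": (0, 0, 400, 300), "positive": (400, 0, 800, 300),
            "anxiety": (0, 300, 400, 600), "neutral": (400, 300, 800, 600)}
    fixations = [(120, 150, 310), (520, 90, 255), (610, 430, 480)]
    print(aoi_measures(fixations, aois))  # total time and count per image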
Lise Van der Haegen; Denis Drieghe; Marc Brysbaert The split fovea theory and the Leicester critique: What do the data say? Journal Article In: Neuropsychologia, vol. 48, no. 1, pp. 96–106, 2010. @article{VanderHaegen2010, According to the Split Fovea Theory (SFT) recognition of foveally presented words involves interhemispheric transfer. This is because letters to the left of the fixation location are initially sent to the right hemisphere, whereas letters to the right of the fixation position are projected to the left hemisphere. Both sources of information must be integrated for words to be recognized. Evidence for the SFT comes from the Optimal Viewing Position (OVP) paradigm, in which foveal word recognition is examined as a function of the letter fixated. OVP curves are different for left and right language dominant participants, indicating a time cost when information is presented in the half-field ipsilateral to the dominant hemisphere (Hunter, Brysbaert, & Knecht, 2007). The methodology of the SFT research has recently been questioned, because not enough efforts were made to ensure adequate fixation. The aim of the present study is to test the validity of this argument. Experiment 1 replicated the OVP effect in a naming task by presenting words at different fixation positions, with the experimental settings applied in previous OVP research. Experiment 2 monitored and controlled eye fixations of the participants and presented the stimuli within the boundaries of the fovea. Exactly the same OVP curve was obtained. In Experiment 3, the eyes were also tracked and monocular viewing was used. Results again revealed the same OVP effect, although latencies were remarkably higher than in the previous experiments. From these results we can conclude that although noise is present in classical SFT studies without eye-tracking, this does not change the OVP effect observed with left dominant individuals. |
Stefan Van der Stigchel; Mark Mills; Michael D. Dodd Shift and deviate: Saccades reveal that shifts of covert attention evoked by trained spatial stimuli are obligatory. Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 5, pp. 1244–1250, 2010. @article{VanderStigchel2010d, The premotor theory of attention predicts that motor movements, including manual movements and eye movements, are preceded by an obligatory shift of attention to the location of the planned response. We investigated whether the shifts of attention evoked by trained spatial cues (e.g., Dodd & Wilson, 2009) are obligatory by using an extreme prediction of the premotor theory: If individuals are trained to associate a color cue with a manual movement to the left or right, the shift of attention evoked by the color cue should also influence eye movements in an unrelated task. Participants were trained to associate an irrelevant color cue with left/right space via a training session in which directional responses were made. Experiment 1 showed that, posttraining, vertical saccades deviated in the direction of the trained response, despite the fact that the color cue was irrelevant. Experiment 2 showed that latencies of horizontal saccades were shorter when an eye movement had to be made in the direction of the trained response. These results demonstrate that the shifts of attention evoked by trained stimuli are obligatory, in addition to providing support for the premotor theory and for a connection between the attentional, motor, and oculomotor systems. |
Stefan Van der Stigchel; Tanja C. W. Nijboer The imbalance of oculomotor capture in unilateral visual neglect Journal Article In: Consciousness and Cognition, vol. 19, no. 1, pp. 186–197, 2010. @article{VanderStigchel2010b, Visual neglect has been associated with an imbalance in the level of activity in the saccadic system: activity in the contralesional field is suppressed, which makes target selection unlikely. We recorded eye movements of a patient with hemispatial neglect and a group of healthy participants during an oculomotor distractor paradigm. Results showed that the interfering effects of a distractor were very strong when presented in her ipsilesional visual field. However, when the distractor was presented in her contralesional field, there were no interfering effects when the target was presented in her ipsilesional field. These findings could not be explained by the presence of a visual field defect as revealed by the results of two hemianopic patients. Our results are in line with an imbalance in the level of activity in the saccadic system in visual neglect because visual elements presented in the contralesional field did not compete for saccadic selection. |
Editha M. Loon; Fadhel Khashawi; Geoffrey Underwood Visual strategies used for time-to-arrival judgments in driving Journal Article In: Perception, vol. 39, no. 9, pp. 1216–1229, 2010. @article{Loon2010, To investigate the sources of visual information that are involved in the anticipation of collisions we recorded eye movements while participants made relative timing judgments about approaching vehicles at a junction. The avoidance of collisions is a critical aspect in driving, particularly where cars enter a line of traffic from a side road, and the present study required judgments about animations in a virtual driving environment. In two experiments we investigated the effects of (i) the angle of approach of the vehicle and the type of path (straight or curved) of the observer, and (ii) the speed of both the observer and the approaching car. Relative timing judgments depend on the angle of approach of the other vehicle (judgments are more accurate for perpendicular than for obtuse angles). Eye-movement analysis shows that visual strategies in relative timing judgments are characterised by saccadic eye movements back and forth between the approaching car and the road ahead, particularly the side line which may serve as a spatial reference point. Results suggest that observers use the distance of the car from this reference point for their timing judgments. |
Signe Vangkilde; Thomas Habekost Finding Wally: Prism adaptation improves visual search in chronic neglect Journal Article In: Neuropsychologia, vol. 48, no. 7, pp. 1994–2004, 2010. @article{Vangkilde2010, Several studies have found that visuo-motor adaptation to rightward deviating prismatic goggles (prism adaptation) can alleviate symptoms of neglect after brain damage, but the long-term effect and clinical relevance of this rehabilitation approach have been questioned. In particular, the effect on visual search performance is controversial. In the present study 6 patients with chronic spatial neglect due to right-sided focal brain damage were given 20 sessions of prism adaptation over a period of two weeks. These patients, as well as a matched control group of neglect patients (n=5), were tested using a variety of effect measures with special emphasis on visual search at baseline, shortly after training, and five weeks later. A positive and very consistent long-term effect of prism adaptation was found across clinical tests of neglect, lateral bias of eye movements, and measures of everyday function, including subjective reports. The results show that prism adaptation can provide durable and clinically significant alleviation of neglect symptoms, even in the stable phase of recovery. |
Astrid Vermeiren; Baptist Liefooghe; André Vandierendonck Switch performance in peripherally and centrally triggered saccades Journal Article In: Experimental Brain Research, vol. 206, no. 3, pp. 243–248, 2010. @article{Vermeiren2010, A common hypothesis is that the switch cost measured when switching between prosaccades and antisaccades mainly reflects the inhibition of the saccadic system after the execution of an antisaccade, which requires the inhibition of a gaze response. The present study further tested this hypothesis by comparing switch performance between peripherally triggered saccades and centrally triggered saccades with the latter type of saccades not requiring inhibition of a gaze response. For peripherally triggered saccades, a switch cost was present for prosaccades but not for antisaccades. For centrally triggered saccades, a switch cost was present both for prosaccades and for antisaccades. The difference between both saccade tasks further supports the hypothesis that the switch performance observed for peripherally triggered saccades is related to the inhibition of a gaze response that is required when executing a peripherally triggered antisaccade and the persisting inhibition in the saccadic system this entails. Furthermore, the switch costs observed for centrally triggered saccades indicate that more general processes besides the persisting inhibition in the saccadic system, such as reconfiguration and interference control, also contribute to the switch performance in saccades. |
Michael Vesia; Steven L. Prime; Xiaogang Yan; Lauren E. Sergio; J. Douglas Crawford Specificity of human parietal saccade and reach regions during transcranial magnetic stimulation Journal Article In: Journal of Neuroscience, vol. 30, no. 39, pp. 13053–13065, 2010. @article{Vesia2010, Single-unit recordings in macaque monkeys have identified effector-specific regions in posterior parietal cortex (PPC), but functional neuroimaging in the human has yielded controversial results. Here we used on-line repetitive transcranial magnetic stimulation (rTMS) to determine saccade and reach specificity in human PPC. A short train of three TMS pulses (separated by an interval of 100 ms) was delivered to superior parieto-occipital cortex (SPOC), a region over the midposterior intraparietal sulcus (mIPS), and a site close to caudal IPS situated over the angular gyrus (AG) during a brief memory interval while subjects planned either a saccade or reach with the left or right hand. Behavioral measures then were compared to controls without rTMS. Stimulation of mIPS and AG produced similar patterns: increased end-point variability for reaches and decreased saccade accuracy for contralateral targets. In contrast, stimulation of SPOC deviated reach end points toward visual fixation and had no effect on saccades. Contralateral-limb specificity was highest for AG and lowest for SPOC. Visual feedback of the hand negated rTMS-induced disruptions of the reach plan for mIPS and AG, but not SPOC. These results suggest that human SPOC is specialized for encoding retinally peripheral reach goals, whereas more anterior-lateral regions (mIPS and AG) along the IPS possess overlapping maps for saccade and reach planning and are more closely involved in motor details (i.e., planning the reach vector for a specific hand). This work provides the first causal evidence for functional specificity of these parietal regions in healthy humans. |
Daniel Smilek; Jonathan S. A. Carriere; J. Allan Cheyne Out of mind, out of sight: Eye blinking as indicator and embodiment of mind wandering. Journal Article In: Psychological Science, vol. 21, no. 6, pp. 786–789, 2010. @article{Smilek2010, Mind wandering, in which cognitive processing of the external environment decreases in favor of internal processing, has been consistently associated with errors on tasks requiring sustained attention and continuous stimulus monitoring. The present investigation is based on the idea that blink rate might serve to modulate trade-offs between attention to mind-wandering thoughts and to external task-related stimuli. To assess the relation between eye blinks and mind wandering, we compared blink rates during probe-caught episodes of mind wandering and on-task periods of reading. We also analyzed fixation frequency and fixation duration as a function of mind wandering. Analysis of the rate of eye fixations revealed that the eyes fixated less often during mind wandering than when subjects were on task. Analyses of average fixation durations failed to detect any significant differences between episodes of mind wandering and on-task periods. |
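The probe-caught comparison described above comes down to computing blink rates within windows preceding each probe. A small sketch with hypothetical timestamps and window bounds:

    def blink_rate(blink_times_s, window):
        # window: (start_s, end_s); returns blinks per second in that interval.
        start, end = window
        n = sum(start <= t < end for t in blink_times_s)
        return n / (end - start)

    blinks = [1.2, 3.8, 7.5, 12.1, 14.9, 15.6, 17.2]   # fake blink onsets (s)
    probes = [("on_task", (0.0, 10.0)), ("mind_wandering", (10.0, 20.0))]
    for label, win in probes:
        print(label, round(blink_rate(blinks, win), 2))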
Sonja Stork; Anna Schubö Human cognition in manual assembly: Theories and applications Journal Article In: Advanced Engineering Informatics, vol. 24, no. 3, pp. 320–328, 2010. @article{Stork2010, Human cognition in production environments is analyzed with respect to various findings and theories in cognitive psychology. This theoretical overview describes effects of task complexity and attentional demands on both mental workload and task performance as well as presents experimental data on these topics. A review of two studies investigating the benefit of augmented reality and spatial cueing in an assembly task is given. Results demonstrate an improvement in task performance with attentional guidance while using contact analog highlighting. Improvements were obvious in reduced performance times and eye fixations as well as in increased velocity and acceleration of reaching and grasping movements. These results have various implications for the development of an assistive system. Future directions in this line of applied research are suggested. The introduced methodology illustrates how the analysis of human information processes and psychological experiments can contribute to the evaluation of engineering applications. |
Mathias Abegg; Hyung Lee; Jason J. S. Barton Systematic diagonal and vertical errors in antisaccades and memory-guided saccades Journal Article In: Journal of Eye Movement Research, vol. 3, no. 3, pp. 1–10, 2010. @article{Abegg2010, Studies of memory-guided saccades in monkeys show an upward bias, while studies of antisaccades in humans show a diagonal effect, a deviation of endpoints toward the 45° diagonal. To determine if these two different spatial biases are specific to different types of saccades, we studied prosaccades, antisaccades and memory-guided saccades in humans. The diagonal effect occurred not with prosaccades but with antisaccades and memory-guided saccades with long intervals, consistent with hypotheses that it originates in computations of goal location under conditions of uncertainty. There was a small upward bias for memory-guided saccades but not prosaccades or antisaccades. Thus this bias is not a general effect of target uncertainty but a property specific to memory-guided saccades. |
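The diagonal effect can be quantified as the angular deviation of each saccade endpoint from the target direction toward the 45° diagonal. The sketch below assumes first-quadrant targets with angles measured counterclockwise from horizontal; the sign convention is one reasonable choice, not necessarily the authors'.

    import numpy as np

    def diagonal_bias_deg(target_xy, endpoint_xy):
        # Positive: endpoint direction pulled toward the 45 deg diagonal.
        t = np.degrees(np.arctan2(target_xy[1], target_xy[0]))
        e = np.degrees(np.arctan2(endpoint_xy[1], endpoint_xy[0]))
        return e - t if t < 45 else t - e

    print(diagonal_bias_deg((10, 0), (9.5, 1.5)))  # horizontal target, upward pull
    print(diagonal_bias_deg((0, 10), (1.5, 9.5)))  # vertical target, rightward pull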
Mathias Abegg; Amadeo R. Rodriguez; Hyung Lee; Jason J. S. Barton 'Alternate-goal bias' in antisaccades and the influence of expectation Journal Article In: Experimental Brain Research, vol. 203, no. 3, pp. 553–562, 2010. @article{Abegg2010a, Saccadic performance depends on the requirements of the current trial, but also may be influenced by other trials in the same experiment. This effect of trial context has been investigated most for saccadic error rate and reaction time but seldom for the positional accuracy of saccadic landing points. We investigated whether the direction of saccades towards one goal is affected by the location of a second goal used in other trials in the same experimental block. In our first experiment, landing points ('endpoints') of antisaccades but not prosaccades were shifted towards the location of the alternate goal. This spatial bias decreased with increasing angular separation between the current and alternative goals. In a second experiment, we explored whether expectancy about the goal location was responsible for the biasing of the saccadic endpoint. For this, we used a condition where the saccadic goal randomly changed from one trial to the next between locations on, above or below the horizontal meridian. We modulated the prior probability of the alternate-goal location by showing cues prior to stimulus onset. The results showed that expectation about the possible positions of the saccadic goal is sufficient to bias saccadic endpoints and can account for at least part of this phenomenon of 'alternate-goal bias'. |
Naotoshi Abekawa; Hiroaki Gomi Spatial coincidence of intentional actions modulates an implicit visuomotor control Journal Article In: Journal of Neurophysiology, vol. 103, no. 5, pp. 2717–2727, 2010. @article{Abekawa2010, We investigated a visuomotor mechanism contributing to reach correction: the manual following response (MFR), which is a quick response to background visual motion that frequently occurs as a reafference when the body moves. Although several visual specificities of the MFR have been elucidated, the functional and computational mechanisms of its motor coordination remain unclear mainly because it involves complex relationships among gaze, reaching target, and visual stimuli. To directly explore how these factors interact in the MFR, we assessed the impact of spatial coincidences among gaze, arm reaching, and visual motion on the MFR. When gaze location was displaced from the reaching target with an identical visual motion kept on the retina, the amplitude of the MFR significantly decreased as displacement increased. A factorial manipulation of gaze, reaching-target, and visual motion locations showed that the response decrease is due to the spatial separation between gaze and reaching target but is not due to the spatial separation between visual motion and reaching target. Additionally, elimination of visual motion around the fovea attenuated the MFR. The effects of these spatial coincidences on the MFR are completely different from their effects on the perceptual mislocalization of targets caused by visual motion. Furthermore, we found clear differences between the modulation sensitivities of the MFR and the ocular following response to spatial mismatch between gaze and reaching locations. These results suggest that the MFR modulation observed in our experiment is not due to changes in visual interaction between target and visual motion or to modulation of motion sensitivity in early visual processing. Instead the motor command of the MFR appears to be modulated by the spatial relationship between gaze and reaching. |
Alper Açik; Adjmal Sarwary; Rafael Schultze-Kraft; Selim Onat; Peter König Developmental changes in natural viewing behavior: Bottom-up and top-down differences between children, young adults and older adults Journal Article In: Frontiers in Psychology, vol. 1, pp. 207, 2010. @article{Acik2010, Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as viewing guided by local image features (color, luminance contrast, etc.), might be prominent but later overshadowed by more top-down processing. Moreover, with decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. The power of local feature values to discriminate fixated from nonfixated locations dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adult viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and elderly regarding the effects of active viewing on feature-related viewing: Explorativeness correlated with feature-related viewing negatively in young age, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing. |
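The discrimination analysis referred to above is typically an ROC comparison: how well a local feature value separates fixated locations from control locations, summarized as an area under the curve (AUC). A hedged sketch with a hypothetical feature map and fixation list:

    import numpy as np

    def fixation_feature_auc(feature_map, fixations, n_controls=1000, seed=0):
        rng = np.random.default_rng(seed)
        h, w = feature_map.shape
        fix_vals = np.array([feature_map[y, x] for x, y in fixations])
        ctrl_vals = feature_map[rng.integers(0, h, n_controls),
                                rng.integers(0, w, n_controls)]
        # AUC via the Mann-Whitney statistic: P(fixated value > control value)
        greater = (fix_vals[:, None] > ctrl_vals[None, :]).sum()
        ties = (fix_vals[:, None] == ctrl_vals[None, :]).sum()
        return (greater + 0.5 * ties) / (fix_vals.size * ctrl_vals.size)

    luminance_contrast = np.random.rand(600, 800)     # stand-in feature map
    fixations = [(100, 200), (400, 350), (650, 120)]  # (x, y) in pixels
    print(fixation_feature_auc(luminance_contrast, fixations))  # 0.5 = chance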
Ava-Ann Allman; Chawki Benkelfat; France Durand; Igor Sibon; Alain Dagher; Marco Leyton; Glen B. Baker; Gillian A. O'Driscoll Effect of D-amphetamine on inhibition and motor planning as a function of baseline performance. Journal Article In: Psychopharmacology, vol. 211, no. 4, pp. 423–433, 2010. @article{Allman2010, RATIONALE: Baseline performance has been reported to predict dopamine (DA) effects on working memory, following an inverted-U pattern. This pattern may hold true for other executive functions that are DA-sensitive. OBJECTIVES: The objective of this study is to investigate the effect of D-amphetamine, an indirect DA agonist, on two other putatively DA-sensitive executive functions, inhibition and motor planning, as a function of baseline performance. METHODS: Participants with no prior stimulant exposure participated in a double-blind crossover study of a single dose of 0.3 mg/kg, p.o., of D-amphetamine and placebo. Participants were divided into high and low groups, based on their performance on the antisaccade and predictive saccade tasks on the baseline day. Executive functions, mood states, heart rate and blood pressure were assessed before (T0) and after drug administration, at 1.5 (T1), 2.5 (T2) and 3.5 h (T3) post-drug. RESULTS: Antisaccade errors decreased with D-amphetamine irrespective of baseline performance (p = 0.025). For antisaccade latency, participants who generated short-latency antisaccades at baseline had longer latencies on D-amphetamine than placebo, while those with long-latency antisaccades at baseline had shorter latencies on D-amphetamine than placebo (drug x group |
Patrick A. Byrne; David C. Cappadocia; J. Douglas Crawford Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating Journal Article In: Vision Research, vol. 50, no. 24, pp. 2661–2670, 2010. @article{Byrne2010, Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually-guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or if the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduces RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME only depended on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action. |
Eamon Caddigan; Alejandro Lleras Saccadic repulsion in pop-out search: How a target's dodgy history can push the eyes away from it Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–9, 2010. @article{Caddigan2010, Previous studies have shown that even in the context of fairly easy selection tasks, as is the case in a pop-out task, selection of the pop-out stimulus can be sped up (in terms of eye movements) when the target-defining feature repeats across trials. Here, we show that selection of a pop-out target can actually be delayed (in terms of saccadic latencies) and made less accurate (in terms of saccade accuracy) when the target-defining feature has recently been associated with distractor status. This effect was observed even though participants' task was to fixate color oddballs (when present) and simply press a button when their eyes reached the target to advance to the next trial. Importantly, the inter-trial effect was also observed in response time (time to advance to the next trial). In contrast, this response time effect was completely eliminated in a second experiment when eye movements were eliminated from the task. That is, when participants still had to press a button to advance to the next trial when an oddball target was present in the display (an oddball detection task experiment). This pattern of results closely links the "need for selection" in a task to the presence of an inter-trial bias of attention (and eye movements) in pop-out search. |
Roberto Caldara; Xinyue Zhou; Sébastien Miellet Putting culture under the 'Spotlight' reveals universal information use for face recognition Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9708, 2010. @article{Caldara2010, Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate the nose region more, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction. So the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique in face recognition that parametrically restricts information outside central vision. We used Spotlights with Gaussian apertures of 2°, 5° or 8° dynamically centered on observers' fixations. Strikingly, in constrained Spotlight conditions (2° and 5°) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both eyes and mouth was simultaneously available when fixating the nose (8°), as expected EA observers shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture. |
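Offline, the Spotlight manipulation can be approximated as a Gaussian blend between the stimulus and a blank background centered on the current fixation. In the sketch below, sigma_px stands in for the 2°, 5°, and 8° apertures; the degrees-to-pixels conversion depends on viewing geometry and is omitted.

    import numpy as np

    def spotlight(image, gaze_xy, sigma_px, background=128):
        # Gaussian weight: 1 at fixation, falling toward 0 with distance.
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w]
        g = np.exp(-((xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2)
                   / (2.0 * sigma_px ** 2))
        return (g * image + (1.0 - g) * background).astype(np.uint8)

    face = np.random.randint(0, 256, (480, 640)).astype(np.uint8)  # stand-in image
    small_aperture = spotlight(face, (320, 240), sigma_px=40)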
Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero Recognition advantage of happy faces in extrafoveal vision: Featural and affective processing Journal Article In: Visual Cognition, vol. 18, no. 9, pp. 1274–1297, 2010. @article{Calvo2010, Happy, surprised, disgusted, angry, sad, fearful, and neutral facial expressions were presented extrafoveally (2.5° away from fixation) for 150 ms, followed by a probe word for recognition (Experiment 1) or a probe scene for affective valence evaluation (Experiment 2). Eye movements were recorded and gaze-contingent masking prevented foveal viewing of the faces. Results showed that (a) happy expressions were recognized faster than others in the absence of fixations on the faces, (b) the same pattern emerged when the faces were presented upright or upside-down, (c) happy prime faces facilitated the affective evaluation of emotionally congruent probe scenes, and (d) such priming effects occurred at 750 but not at 250 ms prime-probe stimulus-onset asynchrony. This reveals an advantage in the recognition of happy faces outside of overt visual attention, and suggests that this recognition advantage relies initially on featural processing and involves processing of positive affect at a later stage. |
Linda E. Campbell; Kathryn L. McCabe; Kate Leadbeater; Ulrich Schall; Carmel M. Loughland; Dominique Rich Visual scanning of faces in 22q11.2 deletion syndrome: Attention to the mouth or the eyes? Journal Article In: Psychiatry Research, vol. 177, no. 1-2, pp. 211–215, 2010. @article{Campbell2010, Previous research demonstrates that people with 22q11.2 deletion syndrome (22q11DS) have social and interpersonal skill deficits. However, the basis of this deficit is unknown. This study examined, for the first time, how people with 22q11DS process emotional face stimuli using visual scanpath technology. The visual scanpaths of 17 adolescents and age/gender matched healthy controls were recorded while they viewed face images depicting one of seven basic emotions (happy, sad, surprised, angry, fear, disgust and neutral). Recognition accuracy was measured concurrently. People with 22q11DS differed significantly from controls, displaying visual scanpath patterns that were characterised by fewer fixations and a shorter scanpath length. The 22q11DS group also spent significantly more time gazing at the mouth region and significantly less time looking at eye regions of the faces. Recognition accuracy was correspondingly impaired, with 22q11DS subjects displaying particular deficits for fear and disgust. These findings suggest that 22q11DS is associated with a maladaptive visual information processing strategy that may underlie affect recognition accuracy and social functioning deficits in this group. |
Elena Carbone; Werner X. Schneider The control of stimulus-driven saccades is subject not to central, but to visual attention limitations Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 8, pp. 2168–2175, 2010. @article{Carbone2010, In three experiments, we investigated whether the control of reflexive saccades is subject to central attention limitations. In a dual-task procedure, Task 1 required either unspeeded reporting or ignoring of briefly presented masked stimuli, whereas Task 2 required a speeded saccade toward a visual target. The stimulus onset asynchrony (SOA) between the two tasks was varied. In Experiments 1 and 2, the Task 1 stimulus was one or three letters, and we asked how saccade target selection is influenced by the number of items. We found (1) longer saccade latencies at short than at long SOAs in the report condition, (2) a substantially larger latency increase for three letters than for one letter, and (3) a latency difference between SOAs in the ignore condition. Broadly, these results match the central interference theory. However, in Experiment 3, an auditory stimulus was used as the Task 1 stimulus, to test whether the interference effects in Experiments 1 and 2 were due to visual instead of central interference. Although there was a small saccade latency increase from short to long SOAs, this difference did not increase from the ignore to the report condition. To explain visual interference effects between letter encoding and stimulus-driven saccade control, we propose an extended theory of visual attention. |
Paul M. Bays; V. Singh-Curry; N. Gorgoraptis; Jon Driver; Masud Husain Integration of goal- and stimulus-related visual signals revealed by damage to human parietal cortex Journal Article In: Journal of Neuroscience, vol. 30, no. 17, pp. 5968–5978, 2010. @article{Bays2010, Where we look is determined both by our current intentions and by the tendency of visually salient items to "catch our eye." After damage to parietal cortex, the normal process of directing attention is often profoundly impaired. Here, we tracked parietal patients' eye movements during visual search to separately map impairments in goal-directed orienting to targets versus stimulus-driven gaze shifts to salient but task-irrelevant probes. Deficits in these two distinct types of attentional selection are shown to be identical in both magnitude and spatial distribution, consistent with damage to a "priority map" that integrates goal- and stimulus-related signals to select visual targets. When goal-relevant and visually salient items compete for attention, the outcome depends on a biased competition in which the priority of contralesional targets is undervalued. On the basis of these findings, we further demonstrate that parietal patients' spatial bias (neglect) in goal-directed visual exploration can be corrected and even reversed by systematically manipulating the spatial distribution of stimulus salience in the visual array. |
Melissa R. Beck; Maura C. Lohrenz; J. Gregory Trafton Measuring search efficiency in complex visual search tasks: Global and local clutter Journal Article In: Journal of Experimental Psychology: Applied, vol. 16, no. 3, pp. 238–250, 2010. @article{Beck2010, Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts. |
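Published clutter metrics such as feature congestion are too involved for a short sketch, but a simple proxy in the same spirit separates the paper's two quantities: global clutter over the whole chart versus local clutter around the target. Assumptions: the chart is a 2-D grayscale array and gradient energy stands in for the clutter feature.

    import numpy as np

    def gradient_energy(img):
        gy, gx = np.gradient(img.astype(float))
        return np.sqrt(gx ** 2 + gy ** 2)

    def clutter_scores(img, target_xy, local_radius_px=50):
        e = gradient_energy(img)
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        near = ((xx - target_xy[0]) ** 2 + (yy - target_xy[1]) ** 2
                <= local_radius_px ** 2)
        return {"global_clutter": e.mean(), "local_clutter": e[near].mean()}

    chart = np.random.randint(0, 256, (600, 800))  # stand-in chart image
    print(clutter_scores(chart, (400, 300)))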
Stefanie I. Becker Testing a postselectional account of across-dimension switch costs Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 6, pp. 853–861, 2010. @article{Becker2010, In visual search for a pop-out target, responses are faster when the target dimension from the previous trial is repeated than when it changes. Currently, it is unclear whether these across-dimension switch costs originate from processes that guide attention to the target or from later processes (e.g., target identification or response selection). The present study tested two critical predictions of a response-selection account of across-dimension switch costs: namely, (1) that switch costs should occur even when visual attention is guided by a completely different feature and (2) that changing the target dimension should affect the speed of responding, but not the speed of eye movements to the target. The results supported both predictions, indicating that changes of the target dimension do not affect early processes that guide attention to the target but, rather, affect later processes, which commence after the target has been selected. |
Stefanie I. Becker Oculomotor capture by colour singletons depends on intertrial priming Journal Article In: Vision Research, vol. 50, no. 21, pp. 2116–2126, 2010. @article{Becker2010a, In visual search, an irrelevant colour singleton captures attention when the colour of the distractor changes across trials (e.g., from red to green), but not when the colour remains constant (Becker, 2007). The present study shows that intertrial changes of the distractor colour also modulate oculomotor capture: an irrelevant colour singleton distractor was only selected more frequently than the inconspicuous nontargets (1) when its features had switched (compared to the previous trial), or (2) when the distractor had been presented at the same position as the target on the previous trial. These results throw doubt on the notion that colour distractors capture attention and the eyes because of their high feature contrast, which is available at an earlier point in time than information about specific feature values. Instead, attention and eye movements are apparently controlled by a system that operates on feature-specific information, and gauges the informativity of nominally irrelevant features. |
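The conditional analysis at the heart of this study, oculomotor capture split by whether the distractor feature repeats or switches across consecutive trials, is a simple aggregation over trial pairs. A sketch with hypothetical trial records:

    def capture_rates(trials):
        # trials: dicts with 'color' and boolean 'first_saccade_to_distractor'.
        rates = {"repeat": [0, 0], "switch": [0, 0]}  # [captures, total]
        for prev, cur in zip(trials, trials[1:]):
            key = "repeat" if cur["color"] == prev["color"] else "switch"
            rates[key][0] += cur["first_saccade_to_distractor"]
            rates[key][1] += 1
        return {k: c / n if n else float("nan") for k, (c, n) in rates.items()}

    trials = [{"color": "red", "first_saccade_to_distractor": False},
              {"color": "green", "first_saccade_to_distractor": True},
              {"color": "green", "first_saccade_to_distractor": False}]
    print(capture_rates(trials))  # {'repeat': 0.0, 'switch': 1.0}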
Stefanie I. Becker The role of target-distractor relationships in guiding attention and the eyes in visual search Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 247–265, 2010. @article{Becker2010b, Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target differs from the irrelevant distractors (e.g., larger, redder, darker). Guidance by the relational properties of the target governed intertrial priming effects and capture by irrelevant distractors. First, intertrial switch costs occurred only upon reversals of the coarse relationship between target and nontargets, but they did not occur when the target and nontarget features changed such that the relation remained the same. Second, irrelevant distractors captured most strongly when they differed in the correct direction from all other items, despite the fact that they were less similar to the target. This suggests that priming and contingent capture, which have previously been regarded as prime evidence for feature-based selection, are really due to a relational selection mechanism. Here I propose a new relational vector account of guidance, which holds promise to synthesize a wide range of different findings that have previously been attributed to different mechanisms of visual search. |
Stefanie I. Becker; Charles L. Folk; Roger W. Remington The role of relational information in contingent capture Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1460–1476, 2010. @article{Becker2010c, On the contingent capture account, top-down attentional control settings restrict involuntary attentional capture to items that match the features of the search target. Attention capture is involuntary, but contingent on goals and intentions. The observation that only target-similar items can capture attention has usually been taken to show that the content of the attentional control settings consists of specific feature values. In contrast, the present study demonstrates that the top-down target template can include information about the relationship between the target and nontarget features (e.g., redder, darker, larger). Several spatial cuing experiments show that a singleton cue that is less similar to the target but that shares the same relational property that distinguishes targets from nontargets can capture attention to the same extent as cues that are similar to the target. Moreover, less similar cues can even capture attention more than cues that are identical to the target when they are relationally better than identical cues. The implications for current theories of attentional capture and attentional guidance are discussed. |
Torsten Betz Investigating task-dependent top-down effects on overt visual attention Journal Article In: Journal of Vision, vol. 10, no. 3, pp. 1–14, 2010. @article{Betz2010, Different tasks can induce different viewing behavior, yet it is still an open question how or whether at all high-level task information interacts with the bottom-up processing of stimulus-related information. Two possible causal routes are considered in this paper. Firstly, the weak top-down hypothesis, according to which top-down effects are mediated by changes of feature weights in the bottom-up system. Secondly, the strong top-down hypothesis, which proposes that top-down information acts independently of the bottom-up process. To clarify the influences of these different routes, viewing behavior was recorded on web pages for three different tasks: free viewing, content awareness, and information search. The data reveal significant task-dependent differences in viewing behavior that are accompanied by minor changes in feature-fixation correlations. Extensive computational modeling shows that these small but significant changes are insufficient to explain the observed differences in viewing behavior. Collectively, the results show that task-dependent differences in the current setting are not mediated by a reweighting of features in the bottom-up hierarchy, ruling out the weak top-down hypothesis. Consequently, the strong top-down hypothesis is the most viable explanation for the observed data. |
Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard Overlapping functional anatomy for working memory and visual search. Journal Article In: Experimental Brain Research, vol. 200, no. 1, pp. 91–107, 2010. @article{Anderson2010, Recent behavioural findings using dual-task paradigms demonstrate the importance of both spatial and non-spatial working memory processes in inefficient visual search (Anderson et al. in Exp Psychol 55:301-312, 2008). Here, using functional magnetic resonance imaging (fMRI), we sought to determine whether brain areas recruited during visual search are also involved in working memory. Using visually matched spatial and non-spatial working memory tasks, we confirmed previous behavioural findings that show significant dual-task interference effects occur when inefficient visual search is performed concurrently with either working memory task. Furthermore, we find considerable overlap in the cortical network activated by inefficient search and both working memory tasks. Our findings suggest that the interference effects observed behaviourally may have arisen from competition for cortical processes subserved by these overlapping regions. Drawing on previous findings (Anderson et al. in Exp Brain Res 180:289-302, 2007), we propose that the most likely anatomical locus for these interference effects is the inferior and middle frontal cortex of the right hemisphere. These areas are associated with attentional selection from memory as well as manipulation of information in memory, and we propose that the visual search and working memory tasks used here compete for common processing resources underlying these mechanisms. |
A. J. Austin; Theodora Duka Mechanisms of attention for appetitive and aversive outcomes in Pavlovian conditioning Journal Article In: Behavioural Brain Research, vol. 213, no. 1, pp. 19–26, 2010. @article{Austin2010, Different mechanisms of attention controlling learning have been proposed in appetitive and aversive conditioning. The aim of the present study was to compare attention and learning in a Pavlovian conditioning paradigm using visual stimuli of varying predictive value for either a monetary reward (appetitive conditioning; 10p or 50p) or a blast of white noise (aversive conditioning; 97 dB or 102 dB). Outcome values were matched across the two conditions with regard to their emotional significance. Sixty-four participants were allocated to one of four conditions, matched for age and gender. All participants underwent a discriminative learning task using pairs of visual stimuli that signalled a 100%, 50%, or 0% probability of receiving an outcome. Learning was measured using a 9-point Likert scale of outcome expectancy, while attention was measured using an eye tracker. Arousal and emotional conditioning were also evaluated. Dwell time was greatest for the full predictor in the noise groups, while in the money groups attention was greater for the partial predictor than for the other two predictors. The progression of learning was the same for both groups. These findings suggest that, when the emotional value of the outcome is comparable, attention in aversive conditioning is driven by the predictive salience of the stimulus, whereas attention in appetitive conditioning is error-driven. |
Holly Bridge; Stephen L. Hicks; Jingyi Xie; Thomas W. Okell; Sabira K. Mannan; Iona Alexander; Alan Cowey; Christopher Kennard Visual activation of extra-striate cortex in the absence of V1 activation Journal Article In: Neuropsychologia, vol. 48, no. 14, pp. 4148–4154, 2010. @article{Bridge2010, When the primary visual cortex (V1) is damaged, there are a number of alternative pathways that can carry visual information from the eyes to extrastriate visual areas. Damage to the visual cortex from trauma or infarct is often unilateral, extensive and includes gray matter and white matter tracts, which can disrupt other routes to residual visual function. We report an unusual young patient, SBR, who has bilateral damage to the gray matter of V1, sparing the adjacent white matter and surrounding visual areas. Using functional magnetic resonance imaging (fMRI), we show that area MT+/V5 is activated bilaterally to visual stimulation, while no significant activity could be measured in V1. Additionally, the white matter tracts between the lateral geniculate nucleus (LGN) and V1 appear to show some degeneration, while the tracts between LGN and MT+/V5 do not differ from controls. Furthermore, the bilateral nature of the damage suggests that residual visual capacity does not result from strengthened interhemispheric connections. The very specific lesion in SBR suggests that the ipsilateral connection between LGN and MT+/V5 may be important for residual visual function in the presence of damage to V1. |
James R. Brockmole; Melissa L.-H. Võ Semantic memory for contextual regularities within and across scene categories: Evidence from eye movements Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 7, pp. 1803–1813, 2010. @article{Brockmole2010, When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes. |
Simona Buetti; Dirk Kerzel Effects of saccades and response type on the Simon effect: If you look at the stimulus, the Simon effect may be gone Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 11, pp. 2172–2189, 2010. @article{Buetti2010, The Simon effect has most often been investigated with key-press responses and eye fixation. In the present study, we asked how the type of eye movement and the type of manual response affect response selection in a Simon task. We investigated three eye movement instructions (spontaneous, saccade, and fixation) while participants performed goal-directed (i.e., reaching) or symbolic (i.e., finger-lift) responses. Initially, no oculomotor constraints were imposed, and a Simon effect was present for both response types. Next, eye movements were constrained. Participants had to either make a saccade toward the stimulus or maintain gaze fixed in the screen centre. While a congruency effect was always observed in reaching responses, it disappeared in finger-lift responses. We suggest that the redirection of saccades from the stimulus to the correct response location in noncorresponding trials contributes to the Simon effect. Because of eye-hand coupling, this occurred in a mandatory manner with reaching responses but not with finger-lift responses. Thus, the Simon effect with key-presses disappears when participants do what they typically do: look at the stimulus. |
Jeremy B. Badler; Philippe Lefevre; Marcus Missal Causality attribution biases oculomotor responses Journal Article In: Journal of Neuroscience, vol. 30, no. 31, pp. 10517–10525, 2010. @article{Badler2010, When viewing one object move after being struck by another, humans perceive that the action of the first object "caused" the motion of the second, not that the two events occurred independently. Although established as a perceptual and linguistic concept, it is not yet known whether the notion of causality exists as a fundamental, preattentional "Gestalt" that can influence predictive motor processes. Therefore, eye movements of human observers were measured while viewing a display in which a launcher impacted a tool to trigger the motion of a second "reaction" target. The reaction target could move either in the direction predicted by transfer of momentum after the collision ("causal") or in a different direction ("noncausal"), with equal probability. Control trials were also performed with identical target motion, either with a 100 ms time delay between the collision and reactive motion, or without the interposed tool. Subjects made significantly more predictive movements (smooth pursuit and saccades) in the causal direction during standard trials, and smooth pursuit latencies were also shorter overall. These trends were reduced or absent in control trials. In addition, pursuit latencies in the noncausal direction were longer during standard trials than during control trials. The results show that causal context has a strong influence on predictive movements. |
Daniel H. Baker; Erich W. Graf Extrinsic factors in the perception of bistable motion stimuli Journal Article In: Vision Research, vol. 50, no. 13, pp. 1257–1265, 2010. @article{Baker2010, When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception, specifically contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because (i) more saccades were directionally congruent with the currently reported percept than expected by chance, and (ii) when observers were asked to make deliberate eye movements along one motion axis, this increased percept reports in that direction. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions and by changes to the retinal image caused by blinks and saccades. |
Sarah Bate; Catherine Haslam; Timothy L. Hodgson; Ashok Jansari; Nicola J. Gregory; Janice Kay Positive and negative emotion enhances the processing of famous faces in a semantic judgment task Journal Article In: Neuropsychology, vol. 24, no. 1, pp. 84–89, 2010. @article{Bate2010, Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D'Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences later stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing. |
Oliver Baumann; Jason B. Mattingley Scaling of neural responses to visual and auditory motion in the human cerebellum Journal Article In: Journal of Neuroscience, vol. 30, no. 12, pp. 4489–4495, 2010. @article{Baumann2010, The human cerebellum contains approximately half of all the neurons within the brain, yet most experimental work in human neuroscience over the last century has focused exclusively on the structure and functions of the forebrain. The cerebellum has an undisputed role in a range of motor functions (Thach et al., 1992), but its potential contributions to sensory and cognitive processes are widely debated (Stoodley and Schmahmann, 2009). Here we used functional magnetic resonance imaging to test the hypothesis that the human cerebellum is involved in the acquisition of auditory and visual sensory data. We monitored neural activity within the cerebellum while participants engaged in a task that required them to discriminate the direction of a visual or auditory motion signal in noise. We identified a distinct set of cerebellar regions that were differentially activated for visual stimuli (vermal lobule VI and right-hemispheric lobule X) and auditory stimuli (right-hemispheric lobules VIIIA and VIIIB and hemispheric lobule VI bilaterally). In addition, we identified a region in left crus I in which activity correlated significantly with increases in the perceptual demands of the task (i.e., with decreasing signal strength), for both auditory and visual stimuli. Our results support suggestions of a role for the cerebellum in the processing of auditory and visual motion and suggest that parts of cerebellar cortex are concerned with tracking movements of objects around the animal, rather than with controlling movements of the animal itself (Paulin, 1993). |
Markus Bindemann Scene and screen center bias early eye movements in scene viewing Journal Article In: Vision Research, vol. 50, no. 23, pp. 2577–2587, 2010. @article{Bindemann2010, In laboratory studies of visual perception, images of natural scenes are routinely presented on a computer screen. Under these conditions, observers look at the center of scenes first, which might reflect an advantageous viewing position for extracting visual information. This study examined an alternative possibility, namely that initial eye movements are drawn towards the center of the screen. Observers searched visual scenes in a person detection task, while the scenes were aligned with the screen center or offset horizontally (Experiment 1). Two central viewing effects were observed, reflecting early visual biases to the scene and the screen center. The scene effect was modified by person content but is not specific to person detection tasks, while the screen bias cannot be explained by the low-level salience of a computer display (Experiment 2). These findings support the notion of a central viewing tendency in scene analysis, but also demonstrate a bias to the screen center that forms a potential artifact in visual perception experiments. |
Markus Bindemann; Christoph Scheepers; Heather J. Ferguson; A. Mike Burton Face, body, and center of gravity mediate person detection in natural scenes Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1477–1485, 2010. @article{Bindemann2010a, Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene, and only then to fixate on a person. When a person's face was rendered invisible in scenes, bodies were detected as quickly as faces without bodies, indicating that both are equally useful for person detection. Detection was optimized when face and body could be seen, but observers preferentially fixated faces, reinforcing the notion of a prominent role for the face in social perception. These findings have implications for claims of attention capture by faces in that they demonstrate a mediating influence of body cues and general scanning principles in natural scenes. |
Walter R. Boot; James R. Brockmole Irrelevant features at fixation modulate saccadic latency and direction in visual search Journal Article In: Visual Cognition, vol. 18, no. 4, pp. 481–491, 2010. @article{Boot2010, Do irrelevant visual features at fixation influence saccadic latency and direction? In a novel search paradigm, we found that when the feature of an irrelevant item at fixation matched the feature defining the target, oculomotor disengagement was delayed, and when it matched a salient distractor more eye movements were directed to that distractor. Latency effects were short-lived; direction effects persisted for up to 200 ms. We replicated latency results and demonstrated facilitated eye movements to the target when the fixated item matched the target colour. Irrelevant features of fixated items influence saccadic latency and direction and may be important considerations in predicting search behaviour. |
Kim Joris Boström; Anne Kathrin Warzecha Open-loop speed discrimination performance of ocular following response and perception Journal Article In: Vision Research, vol. 50, no. 9, pp. 870–882, 2010. @article{Bostroem2010, So far, it remains largely unresolved to what extent neuronal noise affects behavioral responses. Here, we investigate where in the human visual motion pathway the noise that limits the performance of the entire system originates. In particular, we ask whether perception and eye movements are limited by a common noise source, or whether processing stages after the separation into different streams limit their performance. We use the ocular following response of human subjects and a simultaneously performed psychophysical paradigm to directly compare the perceptual and oculomotor systems with respect to their speed discrimination ability. Our results show that in the open-loop condition the perceptual system is superior to the oculomotor system and that the responses of both systems are not correlated. Two alternative conclusions can be drawn from these findings. Either the perceptual and oculomotor pathways are effectively separate, or the amount of post-sensory (motor) noise is not negligible in comparison to the amount of sensory noise. In view of well-established experimental findings and due to plausibility considerations, we favor the latter conclusion. |
Robert D. Gordon; Sarah D. Vollmer Episodic representation of diagnostic and nondiagnostic object colour Journal Article In: Visual Cognition, vol. 18, no. 5, pp. 728–750, 2010. @article{Gordon2010, In three experiments, we investigated transsaccadic object file representations. In each experiment, participants moved their eyes from a central fixation cross to a saccade target located between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials in which the target identity matched one of the preview objects, its color either matched or did not match the previewed object color. The results indicated that color changes disrupt perceptual continuity, but only for the class of objects for which color is diagnostic of object identity. When the color is not integral to identifying an object (for example, when the object is a letter or an object without a characteristic color), object continuity is preserved regardless of changes to the object's color. These results suggest that object features that are important for defining the object are incorporated into its episodic representation. Furthermore, the results are consistent with previous work showing that the quality of a feature's representation determines its importance in preserving continuity. |
Harold H. Greene; Alexander Pollatsek; Kathleen M. Masserang; Yen Ju Lee; Keith Rayner Directional processing within the perceptual span during visual target localization Journal Article In: Vision Research, vol. 50, no. 13, pp. 1274–1282, 2010. @article{Greene2010, In order to understand how processing occurs within the effective field of vision (i.e. perceptual span) during visual target localization, a gaze-contingent moving mask procedure was used to disrupt parafoveal information pickup along the vertical and the horizontal visual fields. When the mask was present within the horizontal visual field, there was a relative increase in saccade probability along the nearby vertical field, but not along the opposite horizontal field. When the mask was present either above or below fixation, saccades downwards were reduced in magnitude. This pattern of data suggests that parafoveal information selection (indexed by probability of saccade direction) and the extent of spatial parafoveal processing in a given direction (indexed by saccade amplitude) may be controlled by somewhat different mechanisms. |
Martin Groen; Jan Noyes Solving problems: How can guidance concerning task-relevancy be provided? Journal Article In: Computers in Human Behavior, vol. 26, no. 6, pp. 1318–1326, 2010. @article{Groen2010, The analysis of eye movements of people working on problem solving tasks has enabled a more thorough understanding than would have been possible with a traditional analysis of cognitive behavior. Recent studies report that influencing 'where we look' can affect task performance. However, some of the studies that reported these results have shortcomings: first, it is unclear whether the reported effects are the result of 'attention guidance' or of highlighting display elements alone; second, the selection of the highlighted display elements was based on subjective methods, which could have introduced bias. In the study reported here, two experiments are described that attempt to address these shortcomings. Experiment 1 investigates the relative contribution of each display element to successful task realization and does so with an objective analysis method, namely signal detection analysis. Experiment 2 examines whether any performance effects of highlighting are due to foregrounding intrinsic task-relevant aspects or whether they are a result of the act of highlighting in itself. Results show that the chosen objective method is effective and that highlighting the display element thus identified improves task performance significantly. These findings are not an effect of the highlighting per se and thus indicate that the highlighted element is conveying task-relevant information. These findings improve on previous results as the objective selection and analysis methods reduce potential bias and provide a more reliable input to the design and provision of computer-based problem solving support. |
Nathalie Guyader; Jennifer Malsert; Christian Marendaz Having to identify a target reduces latencies in prosaccades but not in antisaccades Journal Article In: Psychological Research, vol. 74, no. 1, pp. 12–20, 2010. @article{Guyader2010, In a seminal study, Trottier and Pratt (2005) showed that saccadic latencies were dramatically reduced when subjects were instructed not simply to look at a peripheral target (reflexive saccade) but to identify some of its properties. According to the authors, the shortening of saccadic reaction times may arise from a top-down disinhibition of the superior colliculus (SC), potentially mediated by the direct pathway connecting frontal/prefrontal cortex structures to the SC. Using a "cue paradigm" (a cue preceded the appearance of the target), the present study tests whether the task instruction (Identify vs. Glance) also reduces the latencies of antisaccades (AS), which involve prefrontal structures. We show that the instruction reduces latencies for prosaccades but not for AS. An AS requires two processes: the inhibition of a reflexive saccade and the generation of a voluntary saccade. To separate these processes and to better understand the task effect, we also test the effect of the task instruction on voluntary saccades alone. The effect still exists but it is much weaker than for reflexive saccades. The instruction effect depends closely on the task's demands on executive resources. |
Norbert Hagemann; Jörg Schorer; R. Canal-Bruland; Simone Lotz; Bernd Strauss Visual perception in fencing: Do the eye movements of fencers represent their information pickup? Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 8, pp. 2204–2214, 2010. @article{Hagemann2010, The present study examined whether athletes' eye movements while observing fencing attacks reflect their actual information pickup, by comparing eye-movement results with those obtained using temporal occlusion, spatial occlusion, and cuing techniques. Fifteen top-ranking expert fencers, 15 advanced fencers, and 32 sport students predicted the target region of 405 fencing attacks on a computer monitor. Eye movement recordings showed stronger foveal fixation on the opponent's trunk and weapon in the two fencer groups. Top-ranking expert fencers fixated particularly on the upper trunk, which matched their performance decrements in the spatial occlusion condition. However, when the upper trunk was occluded, participants also shifted eye movements to neighboring body regions. Adding cues to the video material had no positive effect on prediction performance. We conclude that gaze behavior does not necessarily represent information pickup, but that studies applying the spatial occlusion paradigm should also register eye movements to avoid underestimating the information contributed by occluded regions. |
Jessica K. Hall; Samuel B. Hutton; Michael J. Morgan Sex differences in scanning faces: Does attention to the eyes explain female superiority in facial expression recognition? Journal Article In: Cognition and Emotion, vol. 24, no. 4, pp. 629–637, 2010. @article{Hall2010, Previous meta-analyses support a female advantage in decoding non-verbal emotion (Hall, 1978, 1984), yet the mechanisms underlying this advantage are not understood. The present study examined whether the female advantage is related to greater female attention to the eyes. Eye-tracking techniques were used to measure attention to the eyes in 19 males and 20 females during a facial expression recognition task. Women were faster and more accurate in their expression recognition compared with men, and women looked more at the eyes than men. Positive relationships were observed between dwell time and number of fixations to the eyes and both accuracy of facial expression recognition and speed of facial expression recognition. These results support the hypothesis that the female advantage in facial expression recognition is related to greater female attention to the eyes. |
S. N. Hamid; B. Stankiewicz; Mary Hayhoe Gaze patterns in navigation: Encoding information in large-scale environments Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–11, 2010. @article{Hamid2010, We investigated the role of gaze in the encoding of object landmarks during navigation. Gaze behavior was measured while participants learnt to navigate in a virtual large-scale environment, in order to understand the sampling strategies subjects use to select visual information during navigation. The results showed a consistent sampling pattern. Participants preferentially directed gaze at a subset of the available object landmarks, with a preference for object landmarks at the ends of hallways and at T-junctions. In a subsequent test of knowledge of the environment, we removed landmarks depending on how frequently they had been viewed. Removal of infrequently viewed landmarks had little effect on performance, whereas removal of the most viewed landmarks impaired performance substantially. Thus, gaze location during learning reveals the information that is selectively encoded, and landmarks at choice points are selected in preference to less informative landmarks. |
Ben M. Harvey; O. J. Braddick; A. Cowey In: Journal of Vision, vol. 10, no. 5, pp. 1–15, 2010. @article{Harvey2010, Our recent psychophysical experiments have identified differences in the spatial summation characteristics of pattern detection and position discrimination tasks performed with rotating, expanding, and contracting stimuli. Areas MT and MST are well established to be involved in processing these stimuli. fMRI results have shown retinotopic activation of area V3A depending on the location of the center of radial motion in the visual field. This suggests the possibility that V3A may be involved in position discrimination tasks with these motion patterns. Here we use repetitive transcranial magnetic stimulation (rTMS) over MT+ and a dorsomedial extrastriate region including V3A to try to distinguish between TMS effects on pattern detection and position discrimination tasks. If V3A were involved in position discrimination, we would expect to see effects on position discrimination tasks, but not pattern detection tasks, with rTMS over this dorsomedial extrastriate region. In fact, we could not dissociate TMS effects on the two tasks, suggesting that they are performed by the same extrastriate area, in MT+. |
J. Stephen Higgins; Ranxiao Frances Wang A landmark effect in the perceived displacement of objects Journal Article In: Vision Research, vol. 50, no. 2, pp. 242–248, 2010. @article{Higgins2010, Perceiving the displacement of an object after a visual distraction is an essential ability for interacting with the world. Previous research has shown a bias to perceive the first object seen after a saccade as stable and the second one as moving (the landmark effect). The present study examines the generality and nature of this phenomenon. The landmark effect was observed in the absence of eye movements, when the two objects were obscured by a blank screen or a moving-pattern mask, or simply disappeared briefly before reappearing one after the other. The bias was induced even when the first reappearing object did not remain visible while the second object reappeared. The perceived direction of the displacement was mainly determined by the relative displacement of the two objects, suggesting that the landmark effect is primarily due to a landmark calibration mechanism. |
Yoriko Hirose Perception and memory across viewpoint changes in moving images Journal Article In: Journal of Vision, vol. 10, no. 4, pp. 1–19, 2010. @article{Hirose2010, Current understanding of scene perception derives largely from experiments using static scenes, and psychological understanding of how moving images are processed is underdeveloped. We examined eye movement patterns and recognition memory performance as observers looked at short movies involving a change in viewpoint (a cut). At the time of the cut, four types of object property (color, position, identity and shape) were manipulated. Results show differential sensitivity to object property changes, reflected both in eye movement behavior after the cut and in memory performance when object properties are remembered after viewing. When object properties change across a cut, memory is generally biased towards information present after the cut, except for position information, which showed no bias. Our findings suggest that spatial information is represented differently from other forms of object information when viewing movies that include changes in viewpoint. |