Cognitive Eye-Tracking Publications
All EyeLink cognitive and perception eye-tracking research publications up to 2025 (with some early 2026s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2010
Tien Ho-Phuoc; Nathalie Guyader; Anne Guérin-Dugué A functional and statistical bottom-up saliency model to reveal the relative contributions of low-level visual guiding factors Journal Article In: Cognitive Computation, vol. 2, no. 4, pp. 344–359, 2010. When looking at a scene, we frequently move our eyes to place consecutive interesting regions on the fovea, the retina centre. At each fixation, only this specific foveal region is analysed in detail by the visual system. The visual attention mechanisms control eye movements and depend on two types of factor: bottom-up and top-down factors. Bottom-up factors include different visual features such as colour, luminance, edges, and orientations. In this paper, we evaluate quantitatively the relative contribution of basic low-level features as candidate guiding factors to visual attention and hence to eye movements. We also study how these visual features can be combined in a bottom-up saliency model. Our work consists of three interactive parts: a functional saliency model, a statistical model and eye movement data recorded during free viewing of natural scenes. The functional saliency model, inspired by the primate visual system, decomposes a visual scene into different feature maps. The statistical model indicates which features best explain the recorded eye movements. We show an essential role of high frequency luminance and an important contribution of central fixation bias. The relative contribution of features, calculated by the statistical model, is then used to combine the different feature maps into a saliency map. Finally, the comparison between the saliency model and experimental data confirmed the influence of these contributions.
Gesche M. Huebner; Karl R. Gegenfurtner Effects of viewing time, fixations, and viewing strategies on visual memory for briefly presented natural objects Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 7, pp. 1398–1413, 2010. We investigated the impact of viewing time and fixations on visual memory for briefly presented natural objects. Participants saw a display of eight natural objects arranged in a circle and used a partial report procedure to assign one object to the position it previously occupied during stimulus presentation. At the longest viewing time of 7,000 ms or 10 fixations, memory performance was significantly higher than at the shorter times. This increase was accompanied by a primacy effect, suggesting a contribution of another memory component, for example, visual long-term memory (VLTM). We found a very limited beneficial effect of fixations on objects; fixated objects were only remembered better at the shortest viewing times. Our results revealed an intriguing difference between the use of a blocked versus an interleaved experimental design. When trial length was predictable, in the blocked design, target fixation durations increased with longer viewing times. When trial length was unpredictable, fixation durations stayed the same for all viewing lengths. Memory performance was not affected by this design manipulation, thus also supporting the idea that the number and duration of fixations are not closely coupled to memory performance.
Lynn Huestegge Effects of vowel length on gaze durations in silent and oral reading Journal Article In: Journal of Eye Movement Research, vol. 3, no. 5, pp. 1–18, 2010. Vowel length is known to affect reaction times in single word reading. Eye movement studies involving silent sentence reading showed that phonological information of a word can be acquired even before it is fixated. However, it remained an open question whether vowel length directly influences oculomotor control in sentence reading. In the present eye tracking study, subjects read sentences that included target words of varying vowel length and frequency. In Experiment 1, subjects read silently for comprehension, whereas Experiment 2 involved oral reading. Experiments 3 and 4 additionally included an articulatory suppression task and a foot tapping task. Results indicated that in conditions that did not require additional articulation (Experiments 1 and 4) gaze durations were increased for words with long vowels compared to words with short vowels. Conditions that required simultaneous articulation (Experiments 2 and 3) did not yield a vowel length effect. The results point to an influence of phonetic properties on oculomotor control during silent reading around the time of the completion of lexical access.
Lynn Huestegge; Iring Koch Crossmodal action selection: Evidence from dual-task compatibility Journal Article In: Memory & Cognition, vol. 38, no. 4, pp. 493–501, 2010. Response-related mechanisms of multitasking were studied by analyzing simultaneous processing of responses in different modalities (i.e., crossmodal action). Participants responded to a single auditory stimulus with a saccade, a manual response (single-task conditions), or both (dual-task condition). We used a spatially incompatible stimulus-response mapping for one task, but not for the other. Critically, inverting these mappings varied temporal task overlap in dual-task conditions while keeping spatial incompatibility across responses constant. Unlike previous paradigms, temporal task overlap was manipulated without utilizing sequential stimulus presentation, which might induce strategic serial processing. The results revealed dual-task costs, but these were not affected by an increase of temporal task overlap. This finding is evidence for parallel response selection in multitasking. We propose that crossmodal action is processed by a central mapping-selection mechanism in working memory and that the dual-task costs are mainly caused by mapping-related crosstalk.
Lynn Huestegge; Iring Koch Fixation disengagement enhances peripheral perceptual processing: Evidence for a perceptual gap effect Journal Article In: Experimental Brain Research, vol. 201, no. 4, pp. 631–640, 2010. Temporal gaps between the offset of a central fixation stimulus and the onset of an eccentric target typically reduce saccade latencies (saccadic gap effect). Here, we test whether temporal gaps also affect perceptual performance in peripheral vision. In Experiment 1, subjects executed saccades to briefly presented peripheral target letters and reported letter identity afterwards. A central fixation stimulus either remained visible throughout the trial (overlap) or disappeared 200 ms before letter onset (gap). Experiment 2 tested perceptual performance without saccade execution, whereas Experiment 3 tested saccade execution without perceptual demands. Peripheral letter perception performance was enhanced in gap as compared to overlap conditions (perceptual gap effect) irrespective of concurrent oculomotor demands. Furthermore, the saccadic gap effect was modulated by concurrent perceptual demands. Experiment 4 ruled out a general warning explanation of the perceptual gap effect. These findings extend recent theories assuming a strong coupling between the preparation of goal-directed saccades and shifts of visual attention from the spatial to the temporal domain.
Lucica Iordanescu; Marcia Grabowecky; Steven L. Franconeri; Jan Theeuwes; Satoru Suzuki Characteristic sounds make you look at target objects more quickly Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 7, pp. 1736–1741, 2010. When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented “meow” sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target's characteristic sound speeded the saccadic search time within 215–220 msec and also guided the initial saccade toward the target, compared with presentation of a distractor's sound or with no sound. These results suggest that object-based auditory–visual interactions rapidly increase the target object's salience in visual search.
Osman Iyilikci; Cordula Becker; Onur Güntürkün; Sonia Amado Visual processing asymmetries in change detection Journal Article In: Perception, vol. 39, no. 6, pp. 761–769, 2010. Change detection is critically dependent on attentional mechanisms. However, the relation between an asymmetrical distribution of visuo-spatial attention and the detection of changes in visual scenes is not clear. Spatial tasks are known to induce a stronger activation of the right hemisphere. The effects of such visual processing asymmetries induced by a spatial task on change detection were investigated. When required to detect changes in the left and in the right visual fields, participants were significantly faster in detecting changes on the left than on the right. Importantly, this left-side superiority in change detection is not influenced by inspection time, suggesting a critical role of visual processing benefit for the left visual field.
Michal Jacob; Shaul Hochstein Graded recognition as a function of the number of target fixations Journal Article In: Vision Research, vol. 50, no. 1, pp. 107–117, 2010. Target recognition stages were studied by exposing observers to varying controlled numbers of target fixations. The target, present in half the displays, consisted of two identical cards (Identity Search Task; Jacob & Hochstein, 2009). Following more fixations, targets are better recognized, indicated by increased Hit-rate and detectability (according to Unequal Variance Signal Detection Theory), decreased Response Time and growing confidence, reflecting current stage in recognition process. Thus, gathering information over a specific scene region results from a growing number of fixations on that particular region. We conclude that several fixations on a scene location are necessary for achieving recognition.
Richard H. A. H. Jacobs; Remco Renken; Stefan Thumfart; Frans W. Cornelissen Different judgments about visual textures invoke different eye movement patterns Journal Article In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–13, 2010. Top-down influences on the guidance of the eyes are generally modeled as modulating influences on bottom-up salience maps. Interested in task-driven influences on how, rather than where, the eyes are guided, we expected differences in eye movement parameters accompanying beauty and roughness judgments about visual textures. Participants judged textures for beauty and roughness, while their gaze-behavior was recorded. Eye movement parameters differed between the judgments, showing task effects on how people look at images. Similarity in the spatial distribution of attention suggests that differences in the guidance of attention are non-spatial, possibly feature-based. During the beauty judgment, participants fixated on patches that were richer in color information, further supporting the idea that differences in the guidance of attention are feature-based. A finding of shorter fixation durations during beauty judgments may indicate that extraction of the relevant features is easier during this judgment. This finding is consistent with a more ambient scanning mode during this judgment. The differences in eye movement parameters during different judgments about highly repetitive stimuli highlight the need for models of eye guidance to go beyond salience maps, to include the temporal dynamics of eye guidance.
Anshul Jain; Stuart Fuller; Benjamin T. Backus In: PLoS ONE, vol. 5, no. 10, pp. e13295, 2010. The visual system can learn to use information in new ways to construct appearance. Thus, signals such as the location or translation direction of an ambiguously rotating wire frame cube, which are normally uninformative, can be learned as cues to determine the rotation direction. This perceptual learning occurs when the formerly uninformative signal is statistically associated with long-trusted visual cues (such as binocular disparity) that disambiguate appearance during training. In previous demonstrations, the newly learned cue was intrinsic to the perceived object, in that the signal was conveyed by the same image elements as the object itself. Here we used extrinsic new signals and observed no learning. We correlated three new signals with long-trusted cues in the rotating cube paradigm: one crossmodal (an auditory signal) and two within modality (visual). Cue recruitment did not occur in any of these conditions, either in single sessions or in ten sessions across as many days. These results suggest that the intrinsic/extrinsic distinction is important for the perceptual system in determining whether it can learn and use new information from the environment to construct appearance. Extrinsic cues do have perceptual effects (e.g. the "bounce-pass" illusion and McGurk effect), so we speculate that extrinsic signals must be recruited for perception, but only if certain conditions are met. These conditions might specify the age of the observer, the strength of the long-trusted cues, or the amount of exposure to the correlation.
Aaron P. Johnson; Rick Gurnsey Size scaling compensates for sensitivity loss produced by a simulated central scotoma in a shape-from-texture task Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–16, 2010. Studies of eccentricity-dependent sensitivity loss typically require participants to maintain fixation while making judgments about stimuli presented at a range of sizes and eccentricities. However, training participants to fixate can prove difficult, and as stimulus size increases, they become poorly localized and may even encroach on the fovea. In the present experiment, we controlled eccentricity of stimulus presentation using a simulated central scotoma of variable size. Participants were asked to perform a 27-alternative forced-choice shape-from-texture task in the presence of a simulated scotoma, with stimulus size and scotoma radius as the independent variables. The resulting psychometric functions for each simulated scotoma were shifted versions of each other on a log size axis. Therefore, stimulus magnification was sufficient to equate sensitivity to shape from texture for all scotoma radii. Increasing scotoma radius also disrupts eye movements, producing increases in fixation frequency and duration, as well as saccade length.
Stephanie A. H. Jones; Denise Y. P. Henriques Memory for proprioceptive and multisensory targets is partially coded relative to gaze Journal Article In: Neuropsychologia, vol. 48, no. 13, pp. 3782–3792, 2010. We examined the effect of gaze direction relative to target location on reach endpoint errors made to proprioceptive and multisensory targets. We also explored if and how visual and proprioceptive information about target location are integrated to guide reaches. Participants reached to their unseen left hand in one of three target locations (left of body midline, body midline, or right of body midline), while it remained at a target site (online), or after it was removed from this location (remembered), and also after the target hand had been briefly lit before reaching (multisensory target). The target hand was guided to a target location using a robot-generated path. Reaches were made with the right hand in complete darkness, while gaze was varied in one of four eccentric directions. Horizontal reach errors systematically varied relative to gaze for all target modalities; not only for visually remembered and online proprioceptive targets as has been found in previous studies, but for the first time, also for remembered proprioceptive targets and proprioceptive targets that were briefly visible. These results suggest that the brain represents the locations of online and remembered proprioceptive reach targets, as well as visual-proprioceptive reach targets relative to gaze, along with other motor-related representations. Our results, however, do not suggest that visual and proprioceptive information are optimally integrated when coding the location of multisensory reach targets in this paradigm.
Donatas Jonikaitis; Torsten Schubert; Heiner Deubel Preparing coordinated eye and hand movements: Dual-task costs are not attentional Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–17, 2010. Dual-task costs are observed when people perform two tasks at the same time. It has been suggested that these costs arise from limitations of movement goal selection when multiple goal-directed movements are made simultaneously. To investigate this, we asked participants to reach and look at different locations while we varied the time between the cues to start the eye and the hand movement between 150 ms and 900 ms. In Experiment 1, participants executed the reach first, and the saccade second; in Experiment 2 the order of the movements was reversed. We observed dual-task costs: participants were slower to start the eye or hand movement if they were planning another movement at that time. In Experiment 3, we investigated whether these dual-task costs were due to limited attentional resources needed to select saccade and reach goal locations. We found that the discrimination of a probe improved at both saccade and reach locations, indicating that attention shifted to both movement goals. Importantly, while we again observed the expected dual-task costs as reflected in movement latencies, there was no apparent delay of the associated attention shifts. Our results rule out attentional goal selection as the causal factor leading to the dual-task costs occurring in eye-hand movements.
Karen Mortier; Wieske van Zoest; Martijn Meeter; Jan Theeuwes Word cues affect detection but not localization responses Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 1, pp. 65–75, 2010. Many theories assume that pre-knowledge of an upcoming target helps visual selection. In those theories, a top-down set can alter the salience of the target, such that attention can be deployed to the target more efficiently and responses are faster. Evidence for this account stems from visual search studies in which the identity of the upcoming target is cued in advance. In five experiments, we show that top-down knowledge affects the speed with which a singleton target can be detected but not the speed with which it can be localized. Furthermore, we show that these results are independent of the mode of responding (manual or saccadic) and are not due to a ceiling effect. Our results suggest that in singleton search, top-down information does not affect visual selection but most likely does affect response selection. We argue that such an effect is found only when information from different dimensions needs to be integrated to generate a response and that this is the case in singleton detection tasks but not in other singleton search tasks.
David J. Kelly; Sébastien Miellet; Roberto Caldara Culture shapes eye movements for visually homogeneous objects Journal Article In: Frontiers in Psychology, vol. 1, pp. 6, 2010. Culture affects the way people move their eyes to extract information in their visual world. Adults from Eastern societies (e.g., China) display a disposition to process information holistically, whereas individuals from Western societies (e.g., Britain) process information analytically. In terms of face processing, adults from Western cultures typically fixate the eyes and mouth, while adults from Eastern cultures fixate centrally on the nose region, yet face recognition accuracy is comparable across populations. A potential explanation for the observed differences relates to social norms concerning eye gaze avoidance/engagement when interacting with conspecifics. Furthermore, it has been argued that faces represent a 'special' stimulus category and are processed holistically, with the whole face processed as a single unit. The extent to which the holistic eye movement strategy deployed by East Asian observers is related to holistic processing for faces is undetermined. To investigate these hypotheses, we recorded eye movements of adults from Western and Eastern cultural backgrounds while learning and recognizing visually homogeneous objects: human faces, sheep faces and greebles. Both groups of observers recognized faces better than any other visual category, as predicted by the specificity of faces. However, East Asian participants deployed central fixations across all the visual categories. This cultural perceptual strategy was not specific to faces, discarding any parallel between the eye movements of Easterners with the holistic processing specific to faces. Cultural diversity in the eye movements used to extract information from visually homogeneous objects is rooted in more general and fundamental mechanisms.
Aarlenne Zein Khan; Stephen J. Heinen; Robert M. McPeek Attentional cueing at the saccade goal, not at the target location, facilitates saccades Journal Article In: Journal of Neuroscience, vol. 30, no. 16, pp. 5481–5488, 2010. Presenting a behaviorally irrelevant cue shortly before a target at the same location decreases the latencies of saccades to the target, a phenomenon known as exogenous attention facilitation. It remains unclear whether exogenous attention interacts with early, sensory stages or later, motor planning stages of saccade production. To distinguish between these alternatives, we used a saccadic adaptation paradigm to dissociate the location of the visual target from the saccade goal. Three male and four female human subjects performed both control trials, in which saccades were made to one of two target eccentricities, and adaptation trials, in which the target was shifted from one location to the other during the saccade. This manipulation adapted saccades so that they eventually were directed to the shifted location. In both conditions, a behaviorally irrelevant cue was flashed 66.7 ms before target appearance at a randomly selected one of seven positions that included the two target locations. In control trials, saccade latencies were shortest when the cue was presented at the target location and increased with cue-target distance. In contrast, adapted saccade latencies were shortest when the cue was presented at the adapted saccade goal, and not at the visual target location. The dynamics of adapted saccades were also altered, consistent with prior adaptation studies, except when the cue was flashed at the saccade goal. Overall, the results suggest that attentional cueing facilitates saccade planning rather than visual processing of the target.
Matthew O. Kimble; Kevin Fleming; Carole Bandy; Julia Kim; Andrea Zambetti Eye tracking and visual attention to threatening stimuli in veterans of the Iraq war Journal Article In: Journal of Anxiety Disorders, vol. 24, no. 3, pp. 293–299, 2010. Theoretical and clinical characterizations of attention in PTSD acknowledge the possibility for both hypervigilance and avoidance of trauma-relevant stimuli. This study used eye tracking technology to investigate visual orientation and attention to traumatic and neutral stimuli in nineteen veterans of the Iraq war. Veterans saw slides in which half the screen had a negatively valenced image and half had a neutral image. Negatively valenced stimuli were further divided into stimuli that varied in trauma relevance (either Iraq war or civilian motor vehicle accidents). Veterans reporting relatively higher levels of PTSD symptoms had larger pupils to all negatively valenced pictures and spent more time looking at them than did veterans lower in PTSD symptoms. Veterans higher in PTSD symptoms also showed a trend towards looking first at Iraq images. The findings suggest that post-traumatic pathology is associated with vigilance rather than avoidance when visually processing negatively valenced and trauma-relevant stimuli.
Yosuke Kita; Atsuko Gunji; Kotoe Sakihara; Masumi Inagaki; Makiko Kaga; Eiji Nakagawa; Toru Hosokawa Scanning strategies do not modulate face identification: Eye-tracking and near-infrared spectroscopy study Journal Article In: PLoS ONE, vol. 5, no. 6, pp. e11050, 2010. BACKGROUND: During face identification in humans, facial information is sampled (seeing) and handled (processing) in ways that are influenced by the kind of facial image type, such as a self-image or an image of another face. However, the relationship between seeing and information processing is seldom considered. In this study, we aimed to reveal this relationship using simultaneous eye-tracking measurements and near-infrared spectroscopy (NIRS) in face identification tasks. METHODOLOGY/PRINCIPAL FINDINGS: 22 healthy adult subjects (8 males and 14 females) were shown facial morphing movies in which an initial facial image gradually changed into another facial image (that is, the subject's own face was changed to a familiar face). The fixation patterns on facial features were recorded, along with changes in oxyhemoglobin (oxyHb) levels in the frontal lobe, while the subjects identified several faces. In the self-face condition (self-face as the initial image), hemodynamic activity around the right inferior frontal gyrus (IFG) was significantly greater than in the familiar-face condition. On the other hand, the scanning strategy was similar in almost all conditions with more fixations on the eyes and nose than on other areas. Fixation time on the eye area did not correlate with changes in oxyHb levels, and none of the scanning strategy indices could estimate the hemodynamic changes. CONCLUSIONS/SIGNIFICANCE: We conclude that hemodynamic activity, i.e., the means of processing facial information, is not always modulated by the face-scanning strategy, i.e., the way of seeing, and that the right IFG plays important roles in both self-other facial discrimination and self-evaluation.
Tomas Knapen; Martin Rolfs; Mark Wexler; Patrick Cavanagh The reference frame of the tilt aftereffect Journal Article In: Journal of Vision, vol. 10, no. 1, pp. 1–13, 2010. Perceptual aftereffects provide a sensitive tool to investigate the influence of eye and head position on visual processing. There have been recent indications that the tilt aftereffect (TAE) is remapped around the time of a saccade to remain aligned to the adapting location in the world. Here, we investigate the spatial frame of reference of the TAE by independently manipulating retinal position, gaze orientation, and head orientation between adaptation and test. The results show that the critical factor in the TAE is the correspondence between the adaptation and test locations in a retinotopic frame of reference, whereas world- and head-centric frames of reference do not play a significant role. Our results confirm that adaptation to orientation takes place at retinotopic levels of visual processing. We suggest that the remapping process that plays a role in visual stability does not transfer feature gain information around the time of eye (or head) movements.
Peter Ko; Sepp Kollmorgen; Nora Nortmann; Sylvia Schröder; Peter König Influence of low-level stimulus features, task dependent factors, and spatial biases on overt visual attention Journal Article In: PLoS Computational Biology, vol. 6, no. 5, pp. e1000791, 2010. Visual attention is thought to be driven by the interplay between low-level visual features and task dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task dependent information content derived from our subjects' classification responses and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant in different tasks. The contribution of task dependent information is a close runner-up. Specifically, in a standardized task of judging facial expressions it scores highly. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task, without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention.
Peter J. Kohler; G. P. Caplovitz; P.-J. Hsieh; J. Sun; P. U. Tse Motion fading is driven by perceived, not actual angular velocity Journal Article In: Vision Research, vol. 50, no. 11, pp. 1086–1094, 2010. After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. Here we examine the relationship between such 'motion fading' and perceived angular velocity. Using several different dot patterns that generate emergent virtual contours, we demonstrate that whenever there is a difference in the perceived angular velocity of two patterns of dots that are in fact rotating at the same angular velocity, there is also a difference in the time to undergo motion fading for those two patterns. Conversely, whenever two patterns show no difference in perceived angular velocity, even if in fact rotating at different angular velocities, we find no difference in the time to undergo motion fading. Thus, motion fading is driven by the perceived rather than actual angular velocity of a rotating stimulus.
A. Kotowicz; Ueli Rutishauser; Christof Koch Time course of target recognition in visual search Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 31, 2010. @article{Kotowicz2010,Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation ( approximately 170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally and this event will trigger a saccade towards the target location. However this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. 
The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation for the subjective judgment of confidence rather than accuracy. |
Gustav Kuhn; John M. Findlay Misdirection, attention and awareness: Inattentional blindness reveals temporal relationship between eye movements and visual awareness Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 1, pp. 136–146, 2010. @article{Kuhn2010,We designed a magic trick that could be used to investigate how misdirection can prevent people from perceiving a visually salient event, thus offering a novel paradigm to examine inattentional blindness. We demonstrate that participants' verbal reports reflect what they saw rather than inferences about how they thought the trick was done and thus provide a reliable index of conscious perception. Eye movements revealed that for a subset of participants their conscious perception was not related to where they were looking at the time of the event and thus demonstrate how overt and covert attention can be spatially dissociated. However, detection of the event resulted in rapid shifts of eye movements towards the detected event, thus indicating a strong temporal link between overt and covert attention, and that covert attention can be allocated at least 2 or 3 saccade targets ahead of where people are fixating. |
Victor Kuperman; Raymond Bertram; R. Harald Baayen Processing trade-offs in the reading of Dutch derived words Journal Article In: Journal of Memory and Language, vol. 62, no. 2, pp. 83–97, 2010. @article{Kuperman2010,This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., plaats+ing "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter suffixes, we observe a stronger effect of full-forms (derived word frequency) on reading times than in words with longer suffixes. Also, processing times increase if the base word (plaats) and the suffix (-ing) differ in the amount of information carried by their morphological families (sets of words that share the base or the suffix). We model this imbalance of informativeness in the morphological families with the information-theoretical measure of relative entropy and demonstrate its predictivity for the processing times. The observed processing trade-offs are discussed in the context of current models of morphological processing. |
Hyung Lee; Mathias Abegg; Amadeo Rodriguez; John D. Koehn; Jason J. S. Barton Why do humans make antisaccade errors? Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 65–73, 2010. @article{Lee2010,Antisaccade errors are attributed to failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response the participant has before the trial begins also contributes to error rate. Participants performed antisaccades in five conditions. The traditional design had two goals on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, but interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. Antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information, and a trend for stimulus-goal confusability. A control experiment for the effects of increasing angular separation between targets without changing these types of prior response information showed no effects on latency or error rate. We conclude that other factors besides prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation. |
Xingshan Li; Gordon D. Logan; N. Jane Zbrodoff Where do we look when we count? The role of eye movements in enumeration Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 2, pp. 409–426, 2010. @article{Li2010,Two experiments addressed the coupling between eye movements and the cognitive processes underlying enumeration. Experiment 1 compared eye movements in a counting task with those in a “look” task, in which subjects were told to look at each dot in a pattern once and only once. Experiment 2 presented the same dot patterns to every subject twice, to measure the consistency with which dots were fixated between and within subjects. In both experiments, the number of fixations increased linearly with the number of objects to be enumerated, consistent with tight coupling between eye movements and enumeration. However, analyses of fixation locations showed that subjects tended to look at dots in dense, central regions of the display and tended not to look at dots in sparse, peripheral regions of the display, suggesting a looser coupling between eye movements and enumeration. Thus, the eyes do not mirror the enumeration process very directly. |
Hanneke Liesker; Eli Brenner; Jeroen B. J. Smeets Eye-hand coupling is not the cause of manual return movements when searching Journal Article In: Experimental Brain Research, vol. 201, no. 2, pp. 221–227, 2010. @article{Liesker2010,When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so that subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we here compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between the return movements and movement speed when comparing the two conditions was the same as the relationship between these two when comparing different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control. |
Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg (Un)-coupling gaze and attention outside central vision Journal Article In: Journal of Vision, vol. 10, no. 11, pp. 1–13, 2010. @article{Lingnau2010,In normal vision, shifts of attention and gaze are tightly coupled. Here we ask if this coupling affects performance also when central vision is not available. To this aim, we trained normal-sighted participants to perform a visual search task while vision was restricted to a gaze-contingent viewing window ("forced field location") either in the left, right, upper, or lower visual field. Gaze direction was manipulated within a continuous visual search task that required leftward, rightward, upward, or downward eye movements. We found no general performance advantage for a particular part of the visual field or for a specific gaze direction. Rather, performance depended on the coordination of visual attention and eye movements, with impaired performance when sustained attention and gaze have to be moved in opposite directions. Our results suggest that during early stages of central visual field loss, the optimal location for the substitution of foveal vision does not depend on the particular retinal location alone, as has previously been thought, but also on the gaze direction required by the task the patient wishes to perform. |
Chia-Lun Liu; Hui-Yan Chiau; Philip Tseng; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan Antisaccade cost is modulated by contextual experience of location probability Journal Article In: Journal of Neurophysiology, vol. 103, no. 3, pp. 1438–1447, 2010. @article{Liu2010,It is well known that pro- and antisaccades may deploy different cognitive processes. However, the specific reason why antisaccades have longer latencies than prosaccades is still under debate. In three experiments, we studied the factors contributing to the antisaccade cost by taking attentional orienting and target location probabilities into account. In experiment 1, using a new antisaccade paradigm, we directly tested Olk and Kingstone's hypothesis, which attributes longer antisaccade latency to the time it takes to reorient from the visual target to the opposite saccadic target. By eliminating the reorienting component in our paradigm, we found no significant difference between the latencies of the two saccade types. In experiment 2, we varied the proportion of prosaccades made to certain locations and found that latencies in the high location-probability (75%) condition were faster than those in the low location-probability condition. Moreover, antisaccade latencies were significantly longer when location probability was high. This pattern can be explained by the notion of competing pathways for pro- and antisaccades proposed in the findings of others. In experiment 3, we further explored the degrees of modulation of location probability by decreasing the magnitude of high probability from 75 to 65%. We again observed a pattern similar to that seen in experiment 2 but with smaller modulation effects. Together, these experiments indicate that the reorienting process is a critical factor in producing the antisaccade cost. Furthermore, the antisaccade cost can be modulated by probabilistic contextual information such as location probabilities. |
Gang Luo; Tyler W. Garaas; Marc Pomplun; Eli Peli Inconsistency between peri-saccadic mislocalization and compression: evidence for separate "what" and "where" visual systems Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–8, 2010. @article{Luo2010,The view of two separate "what" and "where" visual systems is supported by compelling neurophysiological evidence. However, very little direct psychophysical evidence has been presented to suggest that the two functions can be separated in neurologically intact persons. Using a peri-saccadic perception paradigm in which bars of different lengths were flashed around saccade onset, we directly measured the perceived object size (a "what" attribute) and location (a "where" attribute). We found that the perceived object location shifted toward the saccade target to show strongly compressed localization, whereas the perceived object size was not compressed accordingly. This dissociation indicates that the perceived size is not determined by spatial localization of the object boundary, providing direct psychophysical evidence to support that "what" and "where" attributes of objects are indeed processed separately. |
B. Machner; C. Klein; Andreas Sprenger; P. Baumbach; P. P. Pramstaller; Christoph Helmchen; Wolfgang Heide Eye movement disorders are different in Parkin-linked and idiopathic early-onset PD Journal Article In: Neurology, vol. 75, pp. 125–128, 2010. @article{Machner2010,OBJECTIVES Parkin gene mutations are the most common cause of early-onset parkinsonism. Patients with Parkin mutations may be clinically indistinguishable from patients with idiopathic early-onset Parkinson disease (EOPD) without Parkin mutations. Eye movement disorders have been shown to differentiate parkinsonian syndromes, but have never been systematically studied in Parkin mutation carriers. METHODS Eye movements were recorded in symptomatic (n = 9) and asymptomatic Parkin mutation carriers (n = 13), patients with idiopathic EOPD (n = 14), and age-matched control subjects (n = 27) during established oculomotor tasks. RESULTS Both patients with EOPD and symptomatic Parkin mutation carriers showed hypometric prosaccades toward visual stimuli, as well as deficits in suppressing reflexive saccades toward unintended targets (antisaccade task). When directing gaze toward memorized target positions, patients with EOPD exhibited hypometric saccades, whereas symptomatic Parkin mutation carriers showed normal saccades. In contrast to patients with EOPD, the symptomatic Parkin mutation carriers showed impaired tracking of a moving target (reduced smooth pursuit gain). The asymptomatic Parkin mutation carriers did not differ from healthy control subjects in any of the tasks. CONCLUSIONS Although clinically similarly affected, symptomatic Parkin mutation carriers and patients with idiopathic EOPD differed in several oculomotor tasks. This finding may point to distinct anatomic structures underlying either condition: dysfunctions of cortical areas involved in smooth pursuit (V5, frontal eye field) in Parkin-linked parkinsonism vs greater impairment of basal ganglia circuits in idiopathic Parkinson disease. |
Vincenzo Maffei; Emiliano Macaluso; Iole Indovina; Guy A. Orban; Francesco Lacquaniti Processing of targets in smooth or apparent motion along the vertical in the human brain: An fMRI study Journal Article In: Journal of Neurophysiology, vol. 103, no. 1, pp. 360–370, 2010. @article{Maffei2010,Neural substrates for processing constant speed visual motion have been extensively studied. Less is known about the brain activity patterns when the target speed changes continuously, for instance under the influence of gravity. Using functional MRI (fMRI), here we compared brain responses to accelerating/decelerating targets with the responses to constant speed targets. The target could move along the vertical under gravity (1g), under reversed gravity (-1g), or at constant speed (0g). In the first experiment, subjects observed targets moving in smooth motion and responded to a GO signal delivered at a random time after target arrival. As expected, we found that the timing of the motor responses did not depend significantly on the specific motion law. Therefore brain activity in the contrast between different motion laws was not related to motor timing responses. Average BOLD signals were significantly greater for 1g targets than either 0g or -1g targets in a distributed network including bilateral insulae, left lingual gyrus, and brain stem. Moreover, in these regions, the mean activity decreased monotonically from 1g to 0g and to -1g. In the second experiment, subjects intercepted 1g, 0g, and -1g targets either in smooth motion (RM) or in long-range apparent motion (LAM). We found that the sites in the right insula and left lingual gyrus, which were selectively engaged by 1g targets in the first experiment, were also significantly more active during 1g trials than during -1g trials both in RM and LAM. The activity in 0g trials was again intermediate between that in 1g trials and that in -1g trials. 
Therefore, in these regions, the global activity modulation with the law of vertical motion appears to hold for both RM and LAM. By contrast, a region in the inferior parietal lobule showed a preference for visual gravitational motion in LAM but not in RM. |
Femke Maij; Eli Brenner; Hyung-Chul O. Li; Frans W. Cornelissen; Jeroen B. J. Smeets The use of the saccade target as a visual reference when localizing flashes during saccades Journal Article In: Journal of Vision, vol. 10, no. 4, pp. 1–9, 2010. @article{Maij2010,Flashes presented around the time of a saccade are often mislocalized. Such mislocalization is influenced by various factors. Here, we evaluate the role of the saccade target as a landmark when localizing flashes. The experiment was performed in a normally illuminated room to provide ample other visual references. Subjects were instructed to follow a randomly jumping target with their eyes. We flashed a black dot on the screen around the time of saccade onset. The subjects were asked to localize the black dot by touching the appropriate location on the screen. In a first experiment, the saccade target was displaced during the saccade. In a second experiment, it disappeared at different moments. Both manipulations affected the mislocalization. We conclude that our subjects' judgments are partly based on the flashed dot's position relative to the saccade target. |
George L. Malcolm; John M. Henderson Combining top-down processes to guide eye movements during real-world scene search Journal Article In: Journal of Vision, vol. 10, no. 2, pp. 1–11, 2010. @article{Malcolm2010,Eye movements can be guided by various types of information in real-world scenes. Here we investigated how the visual system combines multiple types of top-down information to facilitate search. We manipulated independently the specificity of the search target template and the usefulness of contextual constraint in an object search task. An eye tracker was used to segment search time into three behaviorally defined epochs so that influences on specific search processes could be identified. The results support previous studies indicating that the availability of either a specific target template or scene context facilitates search. The results also show that target template and contextual constraints combine additively in facilitating search. The results extend recent eye guidance models by suggesting the manner in which our visual system utilizes multiple types of top-down information. |
Sabira K. Mannan; Christopher Kennard; Daniela Potter; Yi Pan; David Soto Early oculomotor capture by new onsets driven by the contents of working memory Journal Article In: Vision Research, vol. 50, no. 16, pp. 1590–1597, 2010. @article{Mannan2010,Oculomotor capture can occur automatically in a bottom-up way through the sudden appearance of a new object or in a top-down fashion when a stimulus in the array matches the contents of working memory. However, it is not clear whether or not working memory processing can influence the early stages of oculomotor capture by abrupt onsets. Here we present clear evidence for an early modulation driven by stimulus matches to the contents of working memory in the colour dimension. Interestingly, verbal as well as visual information in working memory influenced the direction of the fastest saccades made in search, saccadic latencies and the curvature of the scan paths made to the search target. This pattern of results arose even though the contents of working memory were detrimental for search, demonstrating an early, automatic top-down mediation of oculomotor onset capture by the contents of working memory. |
Sebastiaan Mathôt; Jan Theeuwes Gradual remapping results in early retinotopic and late spatiotopic inhibition of return Journal Article In: Psychological Science, vol. 21, no. 12, pp. 1793–1798, 2010. @article{Mathot2010,Here we report that immediately following the execution of an eye movement, oculomotor inhibition of return resides in retinotopic (eye-centered) coordinates. At longer postsaccadic intervals, inhibition resides in spatiotopic (world-centered) coordinates. These results are explained in terms of perisaccadic remapping. In the interval surrounding an eye movement, information is remapped within retinotopic maps to compensate for the retinal displacement. Because remapping is not an instantaneous process, a fast, but gradual, transfer of inhibition of return from retinotopic to spatiotopic coordinates can be observed in the postsaccadic interval. The observation that visual stability is preserved in inhibition of return is consistent with its function as a "foraging facilitator," which requires locations to be inhibited across multiple eye movements. The current results support the idea that the visual system is retinotopically organized and that the appearance of a spatiotopic organization is due to remapping of visual information to compensate for eye movements. |
Sebastiaan Mathôt; Jan Theeuwes Evidence for the predictive remapping of visual attention Journal Article In: Experimental Brain Research, vol. 200, no. 1, pp. 117–122, 2010. @article{Mathot2010a,When attending an object in visual space, perception of the object remains stable despite frequent eye movements. It is assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Remapping is predictive when it starts before the actual eye movement. Until now, most evidence for predictive remapping has been obtained in single cell studies involving monkeys. Here, we report that predictive remapping affects visual attention prior to an eye movement. Immediately following a saccade, we show that attention has partly shifted with the saccade (Experiment 1). Importantly, we show that remapping is predictive and affects the locus of attention prior to saccade execution (Experiments 2 and 3): before the saccade was executed, there was attentional facilitation at the location which, after the saccade, would retinotopically match the attended location. |
Ellen Matthias; Peter Bublak; Hermann J. Muller; Werner X. Schneider; Joseph Krummenacher; Kathrin Finke The influence of alertness on spatial and nonspatial components of visual attention Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, pp. 38–56, 2010. @article{Matthias2010,Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus onset asynchronies in two different whole-report paradigms based on Bundesen's (1990) theory of visual attention, which permits spatial and nonspatial components of selective attention to be assessed independently. The results revealed the level of alertness to affect both the spatial distribution of attentional weighting and processing speed, but not visual short-term memory capacity, with the effect on processing speed preceding that on the spatial distribution of attentional weighting. This pattern indicates that the level of alertness influences both spatial and nonspatial component mechanisms of visual attention and that these two effects develop independently of each other; moreover, it suggests that intrinsic and phasic alertness effects involve the same processing route, on which spatial and nonspatial mechanisms are mediated by independent processing systems that are activated, due to increased alertness, in temporal succession. |
Anna Ma-Wyatt; Martin Stritzke; Julia Trommershäuser Eye-hand coordination while pointing rapidly under risk Journal Article In: Experimental Brain Research, vol. 203, no. 1, pp. 131–145, 2010. @article{MaWyatt2010,Humans make rapid, goal-directed movements to interact with their environment. Saccadic eye movements usually accompany rapid hand movements, suggesting neural coupling, although it remains unclear what determines the strength of the coupling. Here, we present evidence that humans can alter eye-hand coordination in response to risk associated with endpoint variability. We used a paradigm in which human participants were forced to point rapidly under risk and were penalized or rewarded depending on the hand movement outcome. A separate reward schedule was employed for relative saccadic endpoint position. Participants received a monetary reward proportional to points won. We present a model that defines optimality of eye-hand coordination for this task depending on where the hand lands relative to the eye. A comparison of the results and model predictions showed that participants could optimize performance to maximize gain in some conditions, but not others. Participants produced near-optimal results when no feedback was given about relative saccade location and when negative feedback was provided for large distances between the saccade and hand. Participants were sub-optimal when given negative feedback for saccades very close to the hand endpoint. Our results suggest that eye-hand coordination is flexible when pointing rapidly under risk, but final eye position remains correlated with finger location. |
Sébastien Miellet; Xinyue Zhou; Lingnan He; Helen Rodger; Roberto Caldara Investigating cultural diversity for extrafoveal information use in visual scenes Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–18, 2010. @article{Miellet2010,Culture shapes how people gather information from the visual world. We recently showed that Western observers focus on the eyes region during face recognition, whereas Eastern observers fixate predominantly the center of faces, suggesting a more effective use of extrafoveal information for Easterners compared to Westerners. However, the cultural variation in eye movements during scene perception is a highly debated topic. Additionally, the extent to which those perceptual differences across observers from different cultures rely on modulations of extrafoveal information use remains to be clarified. We used a gaze-contingent technique designed to dynamically mask central vision, the Blindspot, during a visual search task of animals in natural scenes. We parametrically controlled the Blindspots and target animal sizes (0°, 2°, 5°, or 8°). We processed eye-tracking data using an unbiased data-driven approach based on fixation maps and we introduced novel spatiotemporal analyses in order to finely characterize the dynamics of scene exploration. Both groups of observers, Eastern and Western, showed comparable animal identification performance, which decreased as a function of the Blindspot sizes. Importantly, dynamic analysis of the exploration pathways revealed identical oculomotor strategies for both groups of observers during animal search in scenes. Culture does not impact extrafoveal information use during the ecologically valid visual search of animals in natural scenes. |
Milica Milosavljevic; Jonathan Malmaud; Alexander Huth; Christof Koch; Antonio Rangel The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure Journal Article In: Judgment and Decision Making, vol. 5, no. 6, pp. 437–449, 2010. @article{Milosavljevic2010,An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy, in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process. |
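The accumulation-to-bound mechanism underlying the DDM can be illustrated with a minimal simulation. This is a sketch with only drift, barrier, and noise parameters, not the seven-parameter variant fitted in the paper; all function and parameter names are illustrative. It shows how lowering the barrier (as under time pressure) trades accuracy for speed:

```python
import random

def simulate_ddm(drift, barrier, noise=1.0, dt=0.001, max_t=10.0, seed=None):
    """Simulate one drift-diffusion trial.

    Evidence starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it crosses +barrier (choice 1, the option
    favored by the drift) or -barrier (choice 0).
    Returns (choice, reaction_time_in_seconds).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # noise scales with the square root of the time step
    while t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
        if x >= barrier:
            return 1, t
        if x <= -barrier:
            return 0, t
    return (1 if x > 0 else 0), t  # timed out: report the current sign

# A low barrier (time pressure) yields faster but less accurate choices
# than a high barrier, for the same drift rate.
fast = [simulate_ddm(drift=0.5, barrier=0.5, seed=i) for i in range(200)]
slow = [simulate_ddm(drift=0.5, barrier=1.5, seed=i) for i in range(200)]

def mean_rt(trials):
    return sum(rt for _, rt in trials) / len(trials)
```

In this sketch only the barrier height changes between conditions; the paper additionally found that time pressure increased the noise in the slope of the drift process.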
Anish R. Mitra; Mathias Abegg; Jayalakshmi Viswanathan; Jason J. S. Barton Line bisection in simulated homonymous hemianopia Journal Article In: Neuropsychologia, vol. 48, no. 6, pp. 1742–1749, 2010. @article{Mitra2010,Hemianopic patients make a systematic error in line bisection, showing a contra-lesional bias towards their blind side, which is the opposite of that in hemineglect patients. This error has been attributed variously to the visual field defect, to long-term strategic adaptation, or to independent effects of damage to extrastriate cortex. To determine if hemianopic bisection error can occur without the latter two factors, we studied line bisection in healthy subjects with simulated homonymous hemianopia using a gaze-contingent display, with different line-lengths, and with or without markers at both ends of the lines. Simulated homonymous hemianopia did induce a contra-lesional bisection error and this was associated with increased fixations towards the blind field. This error was found with end-marked lines and was greater with very long lines. In a second experiment we showed that eccentric fixation alone produces a similar bisection error and eliminates the effect of line-end markers. We conclude that a homonymous hemianopic field defect alone is sufficient to induce both a contra-lesional line bisection error and previously described alterations in fixation distribution, and does not require long-term adaptation or extrastriate damage. |
Stéphanie M. Morand; Marie-Hélène Grosbras; Roberto Caldara; Monika Harvey Looking away from faces: Influence of high-level visual processes on saccade programming Journal Article In: Journal of Vision, vol. 10, no. 3, pp. 1–10, 2010. @article{Morand2010,Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors. |
Adam P. Morris; Charles C. Liu; Simon J. Cropper; Jason D. Forte; Bart Krekelberg; Jason B. Mattingley Summation of visual motion across eye movements reflects a nonspatial decision mechanism Journal Article In: Journal of Neuroscience, vol. 30, no. 29, pp. 9821–9830, 2010. @article{Morris2010,Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., “spatiotopic” receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior. |
Albert Moukheiber; Gilles Rautureau; Fernando Perez-Diaz; Robert Soussignan; Stéphanie Dubal; Roland Jouvent; Antoine Pelissolo Gaze avoidance in social phobia: Objective measure and correlates Journal Article In: Behaviour Research and Therapy, vol. 48, pp. 147–151, 2010. @article{Moukheiber2010,Gaze aversion could be a central component of the physiopathology of social phobia. The emotions of the people interacting with a person with social phobia seem to model this gaze aversion. Our research consists of testing gaze aversion in subjects with social phobia compared to control subjects in different emotional faces of men and women using an eye tracker. Twenty-six subjects with DSM-IV social phobia were recruited. Twenty-four healthy, age- and sex-matched subjects constituted the control group. We looked at the number of fixations and the dwell time in the eyes area on the pictures. The main findings of this research are: confirming a significantly lower number of fixations and dwell time in patients with social phobia as a general mean and for the 6 basic emotions independently from gender; observing a significant correlation between the severity of the phobia and the degree of gaze avoidance. However, no difference in gaze avoidance according to subject/picture gender matching was observed. These findings confirm and extend some previous results, and suggest that eye avoidance is a robust marker of persons with social phobia, which could be used as a behavioral phenotype for brain imagery studies on this disorder. |
Sven Mucke; Velitchko Manahilov; Niall C. Strang; Dirk Seidel; Lyle S. Gray; Uma Shahani Investigating the mechanisms that may underlie the reduction in contrast sensitivity during dynamic accommodation Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–14, 2010. @article{Mucke2010,Head and eye movements, together with ocular accommodation enable us to explore our visual environment. The stability of this environment is maintained during saccadic and vergence eye movements due to reduced contrast sensitivity to low spatial frequency information. Our recent work has revealed a new type of selective reduction of contrast sensitivity to high spatial frequency patterns during the fast phase of dynamic accommodation responses compared with steady-state accommodation. Here we report data which show a strong correlation between the effects of reduced contrast sensitivity during dynamic accommodation and velocity of accommodation responses, elicited by ramp changes in accommodative demand. The results were accounted for by a contrast gain control model of a cortical mechanism for contrast detection during dynamic ocular accommodation. Sensitivity, however, was not altered during attempted accommodation responses in the absence of crystalline-lens changes due to cycloplegia. These findings suggest that contrast sensitivity reduction during dynamic accommodation may be a consequence of cortical inhibition driven by proprioceptive-like signals originating within the ciliary muscle, rather than by corollary discharge signals elicited simultaneously with the motor command to the ciliary muscle. |
Manon Mulckhuyse; Jan Theeuwes Unconscious cueing effects in saccadic eye movements - Facilitation and inhibition in temporal and nasal hemifield Journal Article In: Vision Research, vol. 50, no. 6, pp. 606–613, 2010. @article{Mulckhuyse2010,The current study investigated whether subliminal spatial cues can affect the oculomotor system. In addition, we performed the experiment under monocular viewing conditions. By limiting participants to monocular viewing conditions, we can examine behavioral temporal-nasal hemifield asymmetries. These behavioral asymmetries may arise from an anatomical asymmetry in the retinotectal pathway. The results show that even though our spatial cues were not consciously perceived they did affect the oculomotor system: relative to the neutral condition, saccade latencies to the validly cued location were shorter and saccade latencies to the invalidly cued location were longer. Although we did not observe an overall inhibition of return effect, there was a reliable effect of hemifield on IOR for those observers who showed an overall IOR effect. More specifically, consistent with the notion that processing via the retinotectal pathway is stronger in the temporal hemifield than in the nasal hemifield we found an IOR effect for cues presented in the temporal hemifield but not for cues presented in the nasal hemifield. We conclude that unconsciously processed spatial cues can affect the oculomotor system. In addition, the observed behavioral temporal-nasal hemifield asymmetry is consistent with retinotectal mediation. |
Vidhya Navalpakkam; Christof Koch; Antonio Rangel; Pietro Perona Optimal reward harvesting in complex perceptual environments Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 11, pp. 5232–5237, 2010. @article{Navalpakkam2010,The ability to choose rapidly among multiple targets embedded in a complex perceptual environment is key to survival. Targets may differ in their reward value as well as in their low-level perceptual properties (e.g., visual saliency). Previous studies investigated separately the impact of either value or saliency on choice; thus, it is not known how the brain combines these two variables during decision making. We addressed this question with three experiments in which human subjects attempted to maximize their monetary earnings by rapidly choosing items from a brief display. Each display contained several worthless items (distractors) as well as two targets, whose value and saliency were varied systematically. We compared the behavioral data with the predictions of three computational models assuming that (i) subjects seek the most valuable item in the display, (ii) subjects seek the most easily detectable item, and (iii) subjects behave as an ideal Bayesian observer who combines both factors to maximize the expected reward within each trial. Regardless of the type of motor response used to express the choices, we find that decisions are influenced by both value and feature-contrast in a way that is consistent with the ideal Bayesian observer, even when the targets' feature-contrast is varied unpredictably between trials. This suggests that individuals are able to harvest rewards optimally and dynamically under time pressure while seeking multiple targets embedded in perceptual clutter. |
Mark B. Neider; Xin Chen; Christopher A. Dickinson; Susan E. Brennan; Gregory J. Zelinsky Coordinating spatial referencing using shared gaze Journal Article In: Psychonomic Bulletin & Review, vol. 17, no. 5, pp. 718–724, 2010. @article{Neider2010a,To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A's eye position was superimposed over Partner B's search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information. |
Dylan Nieman; Bhavin R. Sheth; Shinsuke Shimojo Perceiving a discontinuity in motion Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–23, 2010. @article{Nieman2010,Studies have shown that the position of a target stimulus is misperceived owing to ongoing motion. Although static forces (fixation, landmarks) affect perceived position, motion remains the overwhelming force driving estimates of position. Motion endpoint estimates biased in the direction of motion are perceptual signatures of motion's dominant role in localization. We sought conditions in which static forces exert the predominant influence over perceived position: stimulus displays for which target position is perceived backward relative to motion. We used a target that moved diagonally with constant speed, abruptly turned 90 degrees and continued at constant speed; observers localized the discontinuity. This yielded a previously undescribed effect, "turn-point shift," the tendency of observers to estimate the position of orthogonal direction change backward relative to subsequent motion direction. Display and mislocalization direction differ from past studies. Static forces (foveal attraction, repulsion by subsequently occupied spatial positions) were found to be responsible. Delayed turn-point estimates, reconstructed from probing the entire trajectory, shifted the horizontal coordinate forward in the direction of motion. This implies more than one percept of turn-point position. As various estimates of turn-point position arise at different times, under different task demands, the perceptual system does not necessarily resolve conflicts between them. |
Tanja C. W. Nijboer; Anneloes Vree; Chris Dijkerman; Stefan Van der Stigchel Prism adaptation influences perception but not attention: Evidence from antisaccades Journal Article In: NeuroReport, vol. 21, no. 5, pp. 386–389, 2010. @article{Nijboer2010,Prism adaptation has been shown to successfully alleviate symptoms of hemispatial neglect, yet the underlying mechanism is still poorly understood. In this study, the antisaccade task was used to measure the effects of prism adaptation on spatial attention in healthy participants. Results indicated that prism adaptation did not influence the saccade latencies or antisaccade errors, both strong measures of attentional deployment, despite a successful prism adaptation procedure. In contrast to visual attention, prism adaptation evoked a perceptual bias in visual space as measured by the landmark task. We conclude that prism adaptation has a differential influence on visual attention and visual perception in healthy participants as measured by the tasks used. |
Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo Semantic recognition precedes affective evaluation of visual scenes Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 222–246, 2010. @article{Nummenmaa2010,We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by a predefined target scene. The affective task involved saccading toward an unpleasant or pleasant scene, and the semantic task involved saccading toward a scene containing an animal. Both affective and semantic target scenes could be reliably categorized in less than 220 ms, but semantic categorization was always faster than affective categorization. This finding was replicated with singly, foveally presented scenes and manual responses. In comparison with foveal presentation, extrafoveal presentation slowed down the categorization of affective targets more than that of semantic targets. Exposure threshold for accurate categorization was lower for semantic information than for affective information. Superordinate-, basic-, and subordinate-level semantic categorizations were faster than affective evaluation. We conclude that affective analysis of scenes cannot bypass object recognition. Rather, semantic categorization precedes and is required for affective evaluation. |
Antje Nuthmann; John M. Henderson Object-based attentional selection in scene viewing Journal Article In: Journal of Vision, vol. 10, no. 8, pp. 1–19, 2010. @article{Nuthmann2010,Two contrasting views of visual attention in scenes are the visual salience and the cognitive relevance hypotheses. They fundamentally differ in their conceptualization of the visuospatial representation over which attention is directed. According to the saliency model, this representation is image-based, while the cognitive relevance framework advocates an object-based representation. Previous research has shown that (1) viewers prefer to look at objects over background and that (2) the saliency model predicts human fixation locations significantly better than chance. However, it could be that saliency mainly acts through objects. To test this hypothesis, we investigated where people fixate within real objects and saliency proto-objects. To this end, we recorded eye movements of human observers while they inspected photographs of natural scenes under different task instructions. We found a preferred viewing location (PVL) close to the center of objects within naturalistic scenes. Compared to the PVL for real objects, there was less evidence for a PVL for human fixations within saliency proto-objects. There was no evidence for a PVL when only saliency proto-objects that did not spatially overlap with annotated real objects were analyzed. The results suggest that saccade targeting and, by inference, attentional selection in scenes is object-based. |
Antje Nuthmann; Tim J. Smith; Ralf Engbert; John M. Henderson CRISP: A computational model of fixation durations in scene viewing Journal Article In: Psychological Review, vol. 117, no. 2, pp. 382–405, 2010. @article{Nuthmann2010a,Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations. Here, we propose a computational model (CRISP) that accounts for saccade timing and programming and thus for variations in fixation durations in scene viewing. First, timing signals are modeled as continuous-time random walks. Second, difficulties at the level of visual and cognitive processing can inhibit and thus modulate saccade timing. Inhibition generates moment-by-moment changes in the random walk's transition rate and processing-related saccade cancellation. Third, saccade programming is completed in 2 stages: an initial, labile stage that is subject to cancellation and a subsequent, nonlabile stage. Several simulation studies tested the model's adequacy and generality. An initial simulation study explored the role of cognitive factors in scene viewing by examining how fixation durations differed under different viewing task instructions. Additional simulations investigated the degree to which fixation durations were under direct moment-to-moment control of the current visual scene. The present work further supports the conclusion that fixation durations, to a certain degree, reflect perceptual and cognitive activity in scene viewing. Computational model simulations contribute to an understanding of the underlying processes of gaze control. |
Hirokazu Ogawa; Katsumi Watanabe Time to learn: Evidence for two types of attentional guidance in contextual cueing Journal Article In: Perception, vol. 39, no. 1, pp. 72–80, 2010. @article{Ogawa2010,Repetition of the same spatial configurations of a search display implicitly facilitates performance of a visual-search task when the target location in the display is fixed. The improvement of performance is referred to as contextual cueing. We examined whether the association process between target location and surrounding configuration of distractors occurs during active search or at the instant the target is found. To dissociate these two processes, we changed the surrounding configuration of the distractors at the instant of target detection so that the layout where the participants had searched for the target and the layout presented at the instant of target detection differed. The results demonstrated that both processes are responsible for the contextual-cueing effect, but they differ in the accuracies of attentional guidance and their time courses, suggesting that two different types of attentional-guidance processes may be involved in contextual cueing. |
Anna Oleksiak; Miroslawa Mańko; Albert Postma; Ineke J. M. Ham; Albert V. Berg; Richard J. A. Wezel Distance estimation is influenced by encoding conditions Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9918, 2010. @article{Oleksiak2010,Background: It is well established that foveating a behaviorally relevant part of the visual field improves localization performance as compared to the situation where the gaze is directed elsewhere. Reduced localization performance in the peripheral encoding conditions has been attributed to an eccentricity-dependent increase in positional uncertainty. It is not known, however, whether and how the foveal and peripheral encoding conditions can influence spatial interval estimation. In this study we compare observers' estimates of a distance between two co-planar dots in the condition where they foveate the two sample dots and where they fixate a central dot while viewing the sample dots peripherally. Methodology/Principal Findings: Observers were required to reproduce, after a short delay, a distance between two sample dots based on a stationary reference dot and a movable mouse pointer. When both sample dots are foveated, we find that the distance estimation error is small but consistently increases with the dots-separation size. In comparison, distance judgment in peripheral encoding condition is significantly overestimated for smaller separations and becomes similar to the performance in foveal trials for distances from 10 to 16 degrees. Conclusions/Significance: Although we find improved accuracy of distance estimation in the foveal condition, the fact that the difference is related to the reduction of the estimation bias present in the peripheral condition challenges the simple account of reducing the eccentricity-dependent positional uncertainty. Contrary to this, we present evidence for an explanation in terms of neuronal populations activated by the two sample dots and their inhibitory interactions under different visual encoding conditions. We support our claims with simulations that take into account receptive fields size differences between the two encoding conditions. |
Jean-Jacques Orban de Xivry; Sébastien Coppe; Philippe Lefèvre; Marcus Missal Biological motion drives perception and action. Journal Article In: Journal of Vision, vol. 10, no. 2, pp. 1–11, 2010. @article{OrbandeXivry2010,Presenting a few dots moving coherently on a screen can give rise to the perception of human motion. This perception is based on a specific network that is segregated from the traditional motion perception network and that includes the superior temporal sulcus (STS). In this study, we investigate whether this biological motion perception network could influence the smooth pursuit response evoked by a point-light walker. We found that smooth eye velocity during pursuit initiation was larger in response to the point-light walker than in response to one of its scrambled versions, to an inverted walker or to a single dot stimulus. In addition, we assessed the proximity to the point-light walker (i.e. the amount of information about the direction contained in the scrambled stimulus and extracted from local motion cues of biological motion) of each of our scrambled stimuli in a motion direction discrimination task with manual responses and found that the smooth pursuit response evoked by those stimuli moving across the screen was modulated by their proximity to the walker. Therefore, we conclude that biological motion facilitates smooth pursuit eye movements, hence influences both perception and action. |
Andrea L. Patalano; Barbara J. Juhasz; Joanna Dicke The relationship between indecisiveness and eye movement patterns in a decision making informational search task Journal Article In: Journal of Behavioral Decision Making, vol. 23, pp. 353–368, 2010. @article{Patalano2010,Indecisiveness is a trait-related general tendency to experience decision difficulties across a variety of situations, leading to decision delay, worry, and regret. Indecisiveness is proposed (Rassin, 2007) to be associated with an increase in desire for information acquisition and reliance on compensatory strategies—as evidenced by alternative-based information search—during decision making. However, existing studies provide conflicting findings. We conducted an information board study of indecisiveness, using eye tracking methodology, to test the hypotheses that the relationship between indecisiveness and choice strategy depends on being in the early stage of the decision making process, and that it depends on being in the presence of an opportunity to delay choice. We found strong evidence for the first hypothesis in that indecisive individuals changed shift behavior from the first to the second half of the task, consistent with a move from greater to lesser compensatory processing, while the shift behavior of decisive individuals suggested lesser compensatory processing over the whole task. Indecisiveness was also related to time spent viewing attributes of the selected course, and to time spent looking away from decision information. These findings resolve past discrepancies, suggest an interesting account of how the decision process unfolds for indecisive versus decisive individuals, and contribute to a better understanding of this tendency. |
Elena G. Patsenko; Erik M. Altmann How planful is routine behavior? A selective-attention model of performance in the Tower of Hanoi Journal Article In: Journal of Experimental Psychology: General, vol. 139, no. 1, pp. 95–116, 2010. @article{Patsenko2010,Routine human behavior has often been attributed to plans-mental representations of sequences of goals and actions-but can also be attributed to more opportunistic interactions of mind and a structured environment. This study asks whether performance on a task traditionally analyzed in terms of plans can be better understood from a "situated" (or "embodied") perspective. A saccade-contingent display-updating paradigm is used to change the environment by adding, deleting, and moving task-relevant objects without participants' direct awareness. Response latencies, action patterns, and eye movements all indicate that performance is guided not by plans stored in memory but by a control routine bound to objects as needed by perception and selective attention. The results have implications for interpreting everyday task performance and particular neuropsychological deficits. |
Yoni Pertzov; Ehud Zohary; Galia Avidan Rapid formation of spatiotopic representations as revealed by inhibition of return Journal Article In: Journal of Neuroscience, vol. 30, no. 26, pp. 8882–8887, 2010. @article{Pertzov2010,Inhibition of return (IOR), a performance decrement for stimuli appearing at recently cued locations, occurs when the target and cue share the same screen position. This is in contrast to cue-based attention facilitation effects that were recently suggested to be mapped in a retinotopic reference frame, the prevailing representation throughout early visual processing stages. Here, we investigate the dynamics of IOR in both reference frames, using a modified cued-location saccadic reaction time task with an intervening saccade between cue and target presentation. Thus, on different trials, the target was present either at the same retinotopic location as the cue, or at the same screen position (e.g., spatiotopic location). IOR was primarily found for targets appearing at the same spatiotopic position as the initial cue, when the cue and target were presented at the same hemifield. This suggests that there is restricted information transfer of cue position across the two hemispheres. Moreover, the effect was maximal when the target was presented 10 ms after the intervening saccade ended and was attenuated in longer delays. In our case, therefore, the representation of previously attended locations (as revealed by IOR) is not remapped slowly after the execution of a saccade. Rather, either a retinotopic representation is remapped rapidly, adjacent to the end of the saccade (using a prospective motor command), or the positions of the cue and target are encoded in a spatiotopic reference frame, regardless of eye position. Spatial attention can therefore be allocated to target positions defined in extraretinal coordinates. |
Gerardo Cepeda Porras; Yann Gaël Guéhéneuc An empirical study on the efficiency of different design pattern representations in UML class diagrams Journal Article In: Empirical Software Engineering, vol. 15, no. 5, pp. 493–522, 2010. @article{Porras2010,Design patterns are recognized in the software engineering community as useful solutions to recurring design problems that improve the quality of programs. They are more and more used by developers in the design and implementation of their programs. Therefore, the visualization of the design patterns used in a program could be useful to efficiently understand how it works. Currently, a common representation to visualize design patterns is the UML collaboration notation. Previous work noticed some limitations in the UML representation and proposed new representations to tackle these limitations. However, none of these pieces of work conducted empirical studies to compare their new representations with the UML representation. We designed and conducted an empirical study to collect data on the performance of developers on basic tasks related to design pattern comprehension (i.e., identifying composition, role, participation) to evaluate the impact of three visual representations and to compare them with the UML one. We used eye-trackers to measure the developers' effort during the execution of the study. Collected data and their analyses show that stereotype-enhanced UML diagrams are more efficient for identifying composition and role than the UML collaboration notation. The UML representation and the pattern-enhanced class diagrams are more efficient for locating the classes participating in a design pattern (i.e., identifying participation). |
Gillian Porter; Andrea Tales; Ute Leonards What makes cast shadows hard to see? Journal Article In: Journal of Vision, vol. 10, no. 3, pp. 1–18, 2010. @article{Porter2010a,Visual search is slowed for cast shadows lit from above, as compared to the same search items inverted and so not interpreted as shadows (R. A. Rensink & P. Cavanagh, 2004). The underlying mechanisms for such impaired shadow processing are still not understood. Here we investigated the processing levels at which this shadow-related slowing might operate, by examining its interaction with a range of different phenomena including eye movements, perceptual learning, and stimulus presentation context. The data demonstrated that the shadow mechanism affects the number of saccades during the search rather than the duration until first saccade onset and can be overridden by prolonged training, which then transfers from one type of shadow stimulus to another. Shadow-related slowing did not differ for peripheral and central search items but was reduced when participants searched unilateral displays as compared to bilateral ones. Together our findings suggest that difficulties with perceiving shadows are due to visual processes linked to object recognition, rather than to shadow-specific identification and suppression mechanisms in low-level sensory visual areas. Findings are discussed in the context of the need for the visual system to distinguish between illumination and material. |
Melanie A. Porter; Tracey A. Shaw; Pamela J. Marsh An unusual attraction to the eyes in Williams-Beuren syndrome: A manipulation of facial affect while measuring face scanpaths Journal Article In: Cognitive Neuropsychiatry, vol. 15, no. 6, pp. 505–530, 2010. @article{Porter2010b,INTRODUCTION: This study aimed to investigate face scanpaths and emotion recognition in Williams-Beuren syndrome (WBS) and whether: (1) the eyes capture the attention of WBS individuals faster than typically developing mental age-matched controls; (2) WBS patients spend abnormally prolonged periods of time viewing the eye region; and (3) emotion recognition skills or eye gaze patterns change depending on the emotional valance of the face. METHODS: Visual scanpaths were recorded while 16 WBS patients and 16 controls passively viewed happy, angry, fearful, and neutral faces. Emotion recognition was subsequently measured. RESULTS: The eyes did not capture the attention of WBS patients faster than controls, but once WBS patients attended to the eyes, they spent significantly more time looking at this region. Unexpectedly, WBS patients showed an impaired ability to recognise angry faces, but face scanpaths were similar across the different facial expressions. CONCLUSIONS: Findings suggest that face processing is atypical in WBS and that emotion recognition and eye gaze abnormalities in WBS are likely to be more complex than previously thought. Findings highlight the need to develop remediation programmes to teach WBS patients how to explore all facial features, enhancing their emotion recognition skills and "normalising" their social interactions. |
Claudio M. Privitera; Laura W. Renninger; Thom Carney; Stanley A. Klein; Mario Aguilar Pupil dilation during visual target detection Journal Article In: Journal of Vision, vol. 10, no. 10, pp. 1–14, 2010. @article{Privitera2010,It has long been documented that emotional and sensory events elicit a pupillary dilation. Is the pupil response a reliable marker of a visual detection event while viewing complex imagery? In two experiments where viewers were asked to report the presence of a visual target during rapid serial visual presentation (RSVP), pupil dilation was significantly associated with target detection. The amplitude of the dilation depended on the frequency of targets and the time of target presentation relative to the start of the trial. Larger dilations were associated with trials having fewer targets and with targets viewed earlier in the run. We found that dilation was influenced by, but not dependent on, the requirement of a button press. Interestingly, we also found that dilation occurred when viewers fixated a target but did not report seeing it. We will briefly discuss the role of noradrenaline in mediating these pupil behaviors. |
Christoph Rasche; Karl R. Gegenfurtner Visual orienting in dynamic broadband (1/f) noise sequences Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 1, pp. 100–113, 2010. @article{rg10,Visual orienting has typically been characterized using simple displays—for example, displays with a static target placed on a homogeneous background. In the present study, visual orienting was investigated using a dynamic broadband (1/f) noise display that should mimic a more naturalistic setting and that should allow saccadic orienting experiments to be performed with fewer constraints. In Experiment 1, it was shown that the noise movie contains gaze-attracting features that are almost as distinct as the ones measured for (static) real-world scenes. The movie can therefore serve as a strong distractor. In Experiment 2, observers carried out a luminance target search that showed that saccadic amplitude errors were substantially higher (18%) than the ones measured in simple displays. That error is certainly one of the primary factors making gaze-fixation prediction in complex scenes difficult. |
Helen Rodger; David J. Kelly; Caroline Blais; Roberto Caldara Inverting faces does not abolish cultural diversity in eye movements Journal Article In: Perception, vol. 39, no. 11, pp. 1491–1503, 2010. @article{rkbc10,Face processing is widely understood to be a basic, universal visual function effortlessly achieved by people from all cultures and races. The remarkable recognition performance for faces is markedly and specifically affected by picture-plane inversion: the so-called face-inversion effect (FIE), a finding often used as evidence for face-specific mechanisms. However, it has recently been shown that culture shapes the way people deploy eye movements to extract information from faces. Interestingly, the comparable lack of experience with inverted faces across cultures offers a unique opportunity to establish the extent to which such cultural perceptual biases in eye movements are robust, but also to assess whether face-specific mechanisms are universally tuned. Here we monitored the eye movements of Western Caucasian (WC) and East Asian (EA) observers while they learned and recognised WC and EA inverted faces. Both groups of observers showed a comparable impairment in recognising inverted faces of both races. WC observers deployed a scattered inverted triangular scanpath with a bias towards the mouth, whereas EA observers uniformly extended the focus of their fixations from the centre towards the eyes. Overall, our data show that cultural perceptual differences in eye movements persist during the FIE, questioning the universality of face-processing mechanisms. |
Pieter R. Roelfsema; Roos Houtkamp; Ilia Korjoukov Further evidence for the spread of attention during contour grouping: A reply to Crundall, Dewhurst, and Underwood (2008) Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 3, pp. 849–862, 2010. @article{rhk10,In a contour-grouping task, subjects decide whether contour elements belong to the same or different curves. Houtkamp, Spekreijse, and Roelfsema (2003) demonstrated that object-based attention spreads gradually over contour elements that have to be grouped in perception. Crundall, Dewhurst, and Underwood (2008) challenged this spreading-attention model and suggested that attention in the contour-grouping task is not object based but rather has the shape of a zoom lens that moves along the relevant curve. To distinguish between object-based and spatial attention, they changed the stimulus and measured the impact on performance. Subjects were not able to correct for changes at the start of the relevant curve toward the end of the trial. They suggested that attention did not stay at the beginning of the curve, in accordance with a moving zoom lens model. Here, we examine the task of Crundall et al. and find that subjects perceive the changes but fail to correct their response. By measuring change detection directly, we find that performance is much better for the start of the relevant curve than for an irrelevant curve, at all times. Our findings do not support the zoom lens model but provide further support for the spreading-attention model. |
Jennifer D. Ryan; Lily Riggs; Douglas A. McQuiggan Eye movement monitoring of memory Journal Article In: Journal of Visualized Experiments, vol. 42, pp. 1–5, 2010. @article{rrm10,Explicit (often verbal) reports are typically used to investigate memory (e.g. "Tell me what you remember about the person you saw at the bank yesterday."), however such reports can often be unreliable or sensitive to response bias, and may be unobtainable in some participant populations. Furthermore, explicit reports only reveal when information has reached consciousness and cannot comment on when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position to allow for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically-intact and neurologically-impaired adults. Eye movement monitoring procedures begin with the placement of the eye tracker on the participant, and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e. fixations) versus in motion (i.e., saccades). 
Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g. button press). Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline. |
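The conversion the protocol describes, from raw X,Y gaze samples to periods where the eye is static (fixations) versus in motion (saccades), is commonly done with a velocity threshold. A minimal sketch in Python, assuming gaze samples arrive as (x, y) pixel coordinates at a fixed sampling rate; the 30 deg/s threshold and the pixels-per-degree scale are illustrative assumptions, not values from the article.

```python
import math

def classify_ivt(samples, rate_hz, px_per_deg, vel_thresh_deg=30.0):
    """Label each inter-sample interval as 'fixation' or 'saccade'.

    samples: list of (x, y) gaze positions in pixels
    rate_hz: sampling rate of the tracker
    px_per_deg: pixels per degree of visual angle (display geometry)
    vel_thresh_deg: velocity threshold in deg/s (illustrative default)
    """
    labels = []
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        dist_deg = math.hypot(x1 - x0, y1 - y0) / px_per_deg
        velocity = dist_deg * rate_hz  # instantaneous velocity in deg/s
        labels.append("saccade" if velocity > vel_thresh_deg else "fixation")
    return labels

# A slow drift (fixation) followed by one large jump (saccade)
gaze = [(100, 100), (101, 100), (102, 101), (400, 300), (401, 300)]
print(classify_ivt(gaze, rate_hz=500, px_per_deg=35.0))
# ['fixation', 'fixation', 'saccade', 'fixation']
```

In practice, consecutive "fixation" intervals would then be merged into fixation events and filtered by a minimum duration before being time-locked to display onsets or button presses.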
Victor Sander; Brian Soper; Stefan Everling Nonhuman primate event-related potentials associated with pro- and anti-saccades Journal Article In: NeuroImage, vol. 49, no. 2, pp. 1650–1658, 2010. @article{sse10,Non-invasive event-related potential (ERP) recordings have become a popular technique to study neural activity associated with saccades in humans. To date, it is not known whether nonhuman primates exhibit similar saccade-related ERPs. Here, we recorded ERPs associated with the performance of randomly interleaved pro- and anti-saccades in macaque monkeys. Stimulus-aligned ERPs showed a short-latency visual component with more negative P2 and N2 peak amplitudes on anti- than on pro-saccade trials. Saccade-aligned ERPs showed a larger presaccadic negativity on anti- than pro-saccade trials, and a presaccadic positivity on pro-saccade trials, which was attenuated or absent on anti-saccade trials. This was followed by a sharp negative spike potential immediately prior to the movement. Overall, these findings demonstrate that macaque monkeys, like humans, exhibit task-related differences of visual ERPs associated with pro- and anti-saccades and furthermore share a presaccadic positivity as well as a spike potential prior to these tasks. We suggest that the presaccadic positivity on pro-saccade trials is generated by a source in the contralateral frontal eye fields and that the more negative voltage on anti-saccade trials is the result of additional sources of opposite polarity in neighboring frontal areas. |
Daniel R. Saunders; David K. Williamson; Nikolaus F. Troje Gaze patterns during perception of direction and gender from biological motion Journal Article In: Journal of Vision, vol. 10, no. 11, pp. 1–10, 2010. @article{swt10,Humans can perceive many properties of a creature in motion from the movement of the major joints alone. However it is likely that some regions of the body are more informative than others, dependent on the task. We recorded eye movements while participants performed two tasks with point-light walkers: determining the direction of walking, or determining the walker's gender. To vary task difficulty, walkers were displayed from different view angles and with different degrees of expressed gender. The effects on eye movement were evaluated by generating fixation maps, and by analyzing the number of fixations in regions of interest representing the shoulders, pelvis, and feet. In both tasks participants frequently fixated the pelvis region, but there were relatively more fixations at the shoulders in the gender task, and more fixations at the feet in the direction task. Increasing direction task difficulty increased the focus on the foot region. An individual's task performance could not be predicted by their distribution of fixations. However by showing where observers seek information, the study supports previous findings that the feet play an important part in the perception of walking direction, and that the shoulders and hips are particularly important for the perception of gender. |
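The region-of-interest analysis described in the abstract above (counting fixations that land on the shoulders, pelvis, or feet) reduces to a point-in-rectangle test per fixation. A minimal sketch, with ROI coordinates and fixation data invented purely for illustration:

```python
def count_fixations(fixations, rois):
    """Count fixations inside each rectangular region of interest.

    fixations: list of (x, y) fixation centroids in pixels
    rois: dict mapping ROI name -> (x0, y0, x1, y1) bounding box
    """
    counts = {name: 0 for name in rois}
    for fx, fy in fixations:
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= fx <= x1 and y0 <= fy <= y1:
                counts[name] += 1
    return counts

# Hypothetical ROIs for a point-light walker centered on screen
rois = {"shoulders": (200, 100, 440, 180),
        "pelvis": (240, 300, 400, 380),
        "feet": (200, 560, 440, 640)}
fixes = [(320, 140), (310, 340), (330, 350), (300, 600)]
print(count_fixations(fixes, rois))
# {'shoulders': 1, 'pelvis': 2, 'feet': 1}
```

Per-ROI counts like these are typically normalized by trial count or total fixations before comparing conditions (e.g., direction task vs. gender task).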
Paige E. Scalf; Diane M. Beck Competition in visual cortex impedes attention to multiple items Journal Article In: Journal of Neuroscience, vol. 30, no. 1, pp. 161–169, 2010. @article{sb10,Traditional explanations of our limited attentional capacity focus on our ability to direct attention to multiple items. We ask whether this difficulty in simultaneously attending to multiple items stems from an inability to effectively represent multiple attended items. Although attending to one of a set of neighboring stimuli can isolate it from competitive interactions in visual cortex, no such isolation should occur if multiple competing items are attended. Indeed, we find that attention is ineffective at enhancing blood oxygen level-dependent signal in visual cortical area V4 when it is directed to three stimuli simultaneously, but only when those three stimuli compete in visual cortex. This suggests that competition may prevent attention from acting as effectively on representations of multiple items as it does on representations of a single item. In contrast to traditional explanations that posit limits in the sources of attentional control, we show that mechanisms at the sites of stimulus representation may also impose limits on our ability to attend to multiple items simultaneously. |
Elizabeth R. Schotter; Raymond W. Berry; Craig R. M. McKenzie; Keith Rayner Gaze bias: Selective encoding and liking effects Journal Article In: Visual Cognition, vol. 18, no. 8, pp. 1113–1132, 2010. @article{sbmr10,People look longer at things that they choose than things they do not choose. How much of this tendency—the gaze bias effect—is due to a liking effect compared to the information encoding aspect of the decision-making process? Do these processes compete under certain conditions? We monitored eye movements during a visual decision-making task with four decision prompts: Like, dislike, older, and newer. The gaze bias effect was present during the first dwell in all conditions except the dislike condition, when the preference to look at the liked item and the goal to identify the disliked item compete. Colour content (whether a photograph was colour or black-and-white), not decision type, influenced the gaze bias effect in the older/newer decisions because colour is a relevant feature for such decisions. These interactions appear early in the eye movement record, indicating that gaze bias is influenced during information encoding. |
Christopher R. Sears; Charmaine L. Thomas; Jessica M. Lehuquet; Jeremy C. S. Johnson Attentional biases in dysphoria: An eye-tracking study of the allocation and disengagement of attention Journal Article In: Cognition and Emotion, vol. 24, no. 8, pp. 1349–1368, 2010. @article{Sears2010,This study looked for evidence of biases in the allocation and disengagement of attention in dysphoric individuals. Participants studied images for a recognition memory test while their eye fixations were tracked and recorded. Four image types were presented (depression-related, anxiety- related, positive, neutral) in each of two study conditions. For the simultaneous study condition, four images (one of each type) were presented simultaneously for 10 seconds, and the number of fixations and the total fixation time to each image was measured, similar to the procedure used by Eizenman et al. (2003) and Kellough, Beevers, Ellis, and Wells (2008). For the sequential study condition, four images (one of each type) were presented consecutively, each for 4 seconds; to measure disengagement of attention an endogenous cuing procedure was used (Posner, 1980). Dysphoric individuals spent significantly less time attending to positive images than non-dysphoric individuals, but there were no group differences in attention to depression-related images. There was also no evidence of a dysphoria-related bias in initial shifts of attention. With respect to the disengagement of attention, dysphoric individuals were slower to disengage their attention from depression-related images. The recognition memory data showed that dysphoric individuals had poorer memory for emotional images, but there was no evidence of a conventional mood-congruent memory bias. Differences in the attentional and memory biases observed in depressed and dysphoric individuals are discussed. |
Daniel Smilek; Jonathan S. A. Carriere; J. Allan Cheyne Out of mind, out of sight: Eye blinking as indicator and embodiment of mind wandering Journal Article In: Psychological Science, vol. 21, no. 6, pp. 786–789, 2010. @article{scc10,Mind wandering, in which cognitive processing of the external environment decreases in favor of internal processing, has been consistently associated with errors on tasks requiring sustained attention and continuous stimulus monitoring. The present investigation is based on the idea that blink rate might serve to modulate trade-offs between attention to mind-wandering thoughts and to external task-related stimuli. To assess the relation between eye blinks and mind wandering, we compared blink rates during probe-caught episodes of mind wandering and on-task periods of reading. We also analyzed fixation frequency and fixation duration as a function of mind wandering. Analysis of the rate of eye fixations revealed that the eyes fixated less often during mind wandering than when subjects were on task. Analyses of average fixation durations failed to detect any significant differences between episodes of mind wandering and on-task periods. |
Grayden J. F. Solman; Daniel Smilek Item-specific location memory in visual search Journal Article In: Vision Research, vol. 50, no. 23, pp. 2430–2438, 2010. @article{ss10,In two samples, we demonstrate that visual search performance is influenced by memory for the locations of specific search items across trials. We monitored eye movements as observers searched for a target letter in displays containing 16 or 24 letters. From trial to trial the configuration of the search items was either Random, fully Repeated or similar but not identical (i.e., Intermediate). We found a graded pattern of response times across conditions with slowest times in the Random condition and fastest responses in the Repeated condition. We also found that search was comparably efficient in the Intermediate and Random conditions but more efficient in the Repeated condition. Importantly, the target on a given trial was fixated more accurately in the Repeated and Intermediate conditions relative to the Random condition. We suggest a tradeoff between memory and perception in search as a function of the physical scale of the search space. |
Andreas Sprenger; Maren Lappe-Osthege; Silke Talamo; Steffen Gais; Hubert Kimmig; Christoph Helmchen Eye movements during REM sleep and imagination of visual scenes Journal Article In: NeuroReport, vol. 21, no. 1, pp. 45–49, 2010. @article{Sprenger2010,It has been hypothesized that rapid eye movements (REMs) during sleep reflect the process of looking around in dreams. We questioned whether REMs differ from eye movements in wakefulness while imagining previously seen visual stimuli (dots, static images, videos). After looking at these stimuli individuals were asked to remember and imagine them. Subsequently, their REMs were recorded at the sleep laboratory. Kinematic parameters of REMs were similar to saccadic eye movements to remembered stimuli with closed eyes, irrespective of the stimulus type. In contrast, peak velocity of eye movements with open eyes was similar to REMs when semantic, but not nonsemantic, contents were imagined. Thus, REMs may be related to exploratory saccadic behaviour in the awake state when remembering visual stimuli. |
Damian G. Stephen; Daniel Mirman Interactions dominate the dynamics of visual cognition Journal Article In: Cognition, vol. 115, no. 1, pp. 154–165, 2010. @article{Stephen2010,Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. |
Catherine Stevens; Heather Winskel; Clare Howell; Lyne-Marine Vidal; Cyril Latimer; Josephine Milne-Home Perceiving dance: Schematic expectations guide experts' scanning of a contemporary dance film Journal Article In: Journal of Dance Medicine & Science, vol. 14, no. 1, pp. 19–25, 2010. @article{Stevens2010,Eye fixations and saccades (eye movements) of expert and novice dance observers were compared to determine the effect of acquired expectations on observations of human movement, body morphology, and dance configurations. As hypothesized, measured fixation times of dance experts were significantly shorter than those of novices. In a second viewing of the same sequences, novices recorded significantly shorter fixations than those recorded during viewing session 1. Saccades recorded from experts were significantly faster than those of novices. Although both experts and novices fixated background regions, most likely making use of extrafoveal or peripheral vision to anticipate movement and configurations, novices fixated background regions significantly more than experts in viewing session 1. Their enhanced speed of visual processing suggests that dance experts are adept at anticipating movement and rapidly processing material, probably aided by acquired schemata or expectations in long-term memory and recognition of body and movement configurations. |
Sonja Stork; Anna Schubö Human cognition in manual assembly: Theories and applications Journal Article In: Advanced Engineering Informatics, vol. 24, no. 3, pp. 320–328, 2010. @article{Stork2010,Human cognition in production environments is analyzed with respect to various findings and theories in cognitive psychology. This theoretical overview describes effects of task complexity and attentional demands on both mental workload and task performance as well as presents experimental data on these topics. A review of two studies investigating the benefit of augmented reality and spatial cueing in an assembly task is given. Results demonstrate an improvement in task performance with attentional guidance while using contact analog highlighting. Improvements were obvious in reduced performance times and eye fixations as well as in increased velocity and acceleration of reaching and grasping movements. These results have various implications for the development of an assistive system. Future directions in this line of applied research are suggested. The introduced methodology illustrates how the analysis of human information processes and psychological experiments can contribute to the evaluation of engineering applications. |
Benjamin W. Tatler; Nicholas J. Wade; Hoi Kwan; John M. Findlay; Boris M. Velichkovsky Yarbus, eye movements, and vision Journal Article In: i-Perception, vol. 1, no. 1, pp. 7–27, 2010. @article{Tatler2010,The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967. In stark contrast, the published material in English concerning his life is scant. We provide a brief biography of Yarbus and assess his impact on contemporary approaches to research on eye movements. While early interest in his work focused on his study of stabilised retinal images, more recently this has been replaced with interest in his work on the cognitive influences on scanning patterns. We extended his experiment on the effect of instructions on viewing a picture using a portrait of Yarbus rather than a painting. The results obtained broadly supported those found by Yarbus. |
Jessica Taubert; Pamela J. Marsh; Tracey A. Shaw When you turn the other cheek: A preference for novel viewpoints of familiar faces Journal Article In: Perception, vol. 39, no. 3, pp. 429–432, 2010. @article{Taubert2010,Inferences about the psychobiological processes that underlie face perception have been drawn from the spontaneous behaviour of eyes. Using a visual paired-comparison task, we recorded the eye movements of twenty adults as they viewed pairs of faces that differed in their relative familiarity. The results indicate an advantage for novel viewpoints of familiar faces over familiar viewpoints of familiar faces and novel faces. We conclude that this preference serves the face recognition system by collecting the variation necessary to build robust representations of identity. |
Abtine Tavassoli; Dario L. Ringach When your eyes see more than you do Journal Article In: Current Biology, vol. 20, no. 3, pp. 93–94, 2010. @article{Tavassoli2010,Visual information is used by the brain to construct a conscious experience of the visual world and to guide motor actions [1]. Here we report a study of how eye movements and perception relate to each other. We compared the ability of human observers to perceive image motion with the reliability of their eyes to track the motion of a target [2], [3] and [4], the goal being to test whether both motor and sensory processes are based on the same set of signals and limited by a shared source of noise [2] and [4]. We found that the oculomotor system can detect fluctuations in the velocity of a moving target better than the observer. Surprisingly, in some conditions, eye movements reliably respond to the velocity fluctuations of a moving target that are otherwise perceptually invisible to the subjects. The implication is that visual motion signals exist in the brain that can be used to guide motor actions without evoking a perceptual outcome nor being accessible to conscious scrutiny. |
Illia Tchernikov; Mazyar Fallah A color hierarchy for automatic target selection Journal Article In: PLoS ONE, vol. 5, no. 2, pp. e9338, 2010. @article{Tchernikov2010,Visual processing of color starts at the cones in the retina and continues through ventral stream visual areas, called the parvocellular pathway. Motion processing also starts in the retina but continues through dorsal stream visual areas, called the magnocellular system. Color and motion processing are functionally and anatomically discrete. Previously, motion processing areas MT and MST have been shown to have no color selectivity to a moving stimulus; the neurons were colorblind whenever color was presented along with motion. This occurs when the stimuli are luminance-defined versus the background and is considered achromatic motion processing. Is motion processing independent of color processing? We find that motion processing is intrinsically modulated by color. Color modulated smooth pursuit eye movements produced upon saccading to an aperture containing a surface of coherently moving dots upon a black background. Furthermore, when two surfaces that differed in color were present, one surface was automatically selected based upon a color hierarchy. The strength of that selection depended upon the distance between the two colors in color space. A quantifiable color hierarchy for automatic target selection has wide-ranging implications from sports to advertising to human-computer interfaces. |
Anna L. Telling; Antje S. Meyer; Glyn W. Humphreys Distracted by relatives: Effects of frontal lobe damage on semantic distraction Journal Article In: Brain and Cognition, vol. 73, no. 3, pp. 203–214, 2010. @article{Telling2010,When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke, Humphreys, Watson, Meyer, & Telling, 2008; Moores, Laiti, & Chelazzi, 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection. |
Jan Theeuwes; Sebastiaan Mathôt; Alan Kingstone Object-based eye movements: The eyes prefer to stay within the same object Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 3, pp. 597–601, 2010. @article{Theeuwes2010,The present study addressed the question of whether we prefer to make eye movements within or between objects. More specifically, when fixating one end of an object, are we more likely to make the next saccade within that same object or to another object? Observers had to discriminate small letters placed on rectangles similar to those used by Egly, Driver, and Rafal (1994). Following an exogenous cue, observers made a saccade to one end of one of the rectangles. The small target letter, which could be discriminated only after it had been fixated, could appear either within the same or at a different object. Consistent with object-based attention, we show that observers prefer to make an eye movement to the other end of the fixated same object, rather than to the equidistant end of a different object. It is concluded that there is a preference to make eye shifts within the same object, rather than between objects. |
Aidan A. Thompson; Denise Y. P. Henriques Locations of serial reach targets are coded in multiple reference frames Journal Article In: Vision Research, vol. 50, no. 24, pp. 2651–2660, 2010. @article{Thompson2010,Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first) then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. 
We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by an egocentric frame anchored to the eye. However, the amount of change in this distance was smaller than predicted by a pure eye-fixed representation, suggesting that relative positions of the targets or allocentric coding was also used in sequential reach planning. The spatial coding and updating of sequential reach target locations seems to rely on a combined weighting of multiple reference frames, with one of them centered on the eye. |
Goedele Van Belle; Peter De Graef; Karl Verfaillie; Thomas Busigny; Bruno Rossion Whole not hole: Expert face recognition requires holistic perception Journal Article In: Neuropsychologia, vol. 48, no. 9, pp. 2620–2629, 2010. @article{VanBelle2010,Face recognition is an important ability of the human brain, yet its underlying mechanisms are still poorly understood. Two opposite views have been proposed to account for human face recognition expertise: the ability to extract the most diagnostic local information, feature-by-feature (analytical view), or the ability to process all features at once over the whole face (holistic view). To help clarifying this debate, we used an original gaze-contingent stimulus presentation method to compare normal observers and a brain-damaged patient specifically impaired at face recognition (prosopagnosia). When a single central facial feature was revealed at a time through a gaze-contingent window, normal observers' performance at an individual face matching task decreased to the patient level. However, when only the central feature was masked, forcing normal observers to rely on the whole face but the fixated feature, their performance was almost not affected. In contrast, the prosopagnosic patient's performance decreased dramatically in this latter condition. These results were independent of the absolute size of the face and window/mask. This dissociation indicates that expertise in face recognition does not rest on the ability to analyze diagnostic local detailed features sequentially but rather on the ability to see the individual features of a face all at once, a function that is critically impaired in acquired prosopagnosia. |
Goedele Van Belle; Peter De Graef; Karl Verfaillie; Bruno Rossion; Philippe Lefèvre Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–13, 2010. @article{VanBelle2010a,Human observers are experts at face recognition, yet a simple 180° rotation of a face photograph decreases recognition performance substantially. A full understanding of this phenomenon, which is believed to be important for clarifying the nature of our expertise in face recognition, is still waiting. According to a long-standing and influential hypothesis, an inverted face cannot be perceived as holistically as an upright face and has to be analyzed local feature by local feature. Here, we tested this holistic perception hypothesis of the face inversion effect by means of a gaze-contingent stimulus presentation. When observers' perception was restricted to one fixated feature at a time by a gaze-contingent window, performance in an individual face matching task was almost unaffected by inversion. However, when a mask covered the fixated feature, preventing the use of local information at high resolution, the decrement of performance with inversion was even larger than in a normal (full view) condition. These observations provide evidence that the face inversion effect is caused by an inability to perceive the individual face as a whole rather than as a collection of specific features and thus support the view that observers' expertise at upright face recognition is due to the ability to perceive an individual face holistically. |
Goedele Van Belle; Philippe Lefèvre; Renaud Laguesse; Thomas Busigny; Peter De Graef; Karl Verfaillie; Bruno Rossion Feature-based processing of personally familiar faces in prosopagnosia: Evidence from eye-gaze contingency Journal Article In: Behavioural Neurology, vol. 23, no. 4, pp. 255–257, 2010. @article{VanBelle2010b,How familiar and unfamiliar faces are perceived remains largely unknown. Two views have dominated this field of research. On the one hand, recordings of eye fixations on faces and response classification experiments suggest that a face is processed in terms of its individual components, or facial features (mouth, eyes, nose,...), a strategy called analytical processing. On the other hand, there is strong behavioral evidence for interdependence in the processing of different features of a face, supporting instead holistic processing of the face. According to the latter holistic view, facial features are simultaneously perceived and integrated into a single representation, so that the perceptual field is that of the whole face. To shed light on this issue, in two recent studies, we recorded eye movements in a neurological patient (PS) suffering from a selective impairment in face recognition (acquired prosopagnosia). Previously, we showed that (1) PS fixates exactly on each of the main features of the face (mouth, left eye, right eye), contrary to normal observers, who fixate mainly centrally on the top of the nose, around the geometric centre of the face. Moreover, (2) an original gaze-contingent stimulus presentation method applied to an unfamiliar face discrimination task led us to demonstrate that, contrary to normal observers, PS' perceptual field appears to be limited to one central feature fixated at a time. These observations indicate that prosopagnosia prevents processing the multiple elements of a whole face simultaneously, and thus that this ability is a key aspect of human face recognition expertise. Here, we extend these observations by testing the same patient with eye-gaze contingency while she attempts to identify a large set of personally familiar individuals from their face. |
Goedele Van Belle; Meike Ramon; Philippe Lefèvre; Bruno Rossion Fixation patterns during recognition of personally familiar and unfamiliar faces Journal Article In: Frontiers in Psychology, vol. 1, pp. 20, 2010. @article{Belle2010,Previous studies recording eye gaze during face perception have yielded somewhat inconclusive findings with respect to fixation differences between familiar and unfamiliar faces. This can be attributed to a number of factors that differ across studies: the type and extent of familiarity with the faces presented, the definition of the areas of interest subject to analyses, as well as a lack of consideration of the time course of scan patterns. Here we sought to address these issues by recording fixations in a recognition task with personally familiar and unfamiliar faces. After a first common fixation on a central superior location of the face in between features, suggesting initial holistic encoding, and a subsequent left-eye bias, local features were fixated and explored more for familiar than unfamiliar faces. Although the number of fixations did not differ for familiar and unfamiliar faces, the locations of fixations began to differ before familiarity decisions were provided. This suggests that in the context of familiarity decisions without time constraints, differences in processing familiar and unfamiliar faces arise relatively early - immediately upon initiation of the first fixation to identity-specific information - and that the local features of familiar faces are processed more than those of unfamiliar faces. |
Jeroen J. A. van Boxtel; Naotsugu Tsuchiya; Christof Koch Opposing effects of attention and consciousness on afterimages Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 19, pp. 8883–8888, 2010. @article{Boxtel2010,The brain's ability to handle sensory information is influenced by both selective attention and consciousness. There is no consensus on the exact relationship between these two processes and whether they are distinct. So far, no experiment has manipulated both simultaneously. We carried out a full factorial 2 × 2 study of the simultaneous influences of attention and consciousness (as assayed by visibility) on perception, correcting for possible concurrent changes in attention and consciousness. We investigated the duration of afterimages for all four combinations of high versus low attention and visible versus invisible. We show that selective attention and visual consciousness have opposite effects: paying attention to the grating decreases the duration of its afterimage, whereas consciously seeing the grating increases the afterimage duration. These findings provide clear evidence for distinct influences of selective attention and consciousness on visual perception. |
Lise Van der Haegen; Denis Drieghe; Marc Brysbaert The split fovea theory and the Leicester critique: What do the data say? Journal Article In: Neuropsychologia, vol. 48, no. 1, pp. 96–106, 2010. @article{VanderHaegen2010,According to the Split Fovea Theory (SFT) recognition of foveally presented words involves interhemispheric transfer. This is because letters to the left of the fixation location are initially sent to the right hemisphere, whereas letters to the right of the fixation position are projected to the left hemisphere. Both sources of information must be integrated for words to be recognized. Evidence for the SFT comes from the Optimal Viewing Position (OVP) paradigm, in which foveal word recognition is examined as a function of the letter fixated. OVP curves are different for left and right language dominant participants, indicating a time cost when information is presented in the half-field ipsilateral to the dominant hemisphere (Hunter, Brysbaert, & Knecht, 2007). The methodology of the SFT research has recently been questioned, because not enough efforts were made to ensure adequate fixation. The aim of the present study is to test the validity of this argument. Experiment 1 replicated the OVP effect in a naming task by presenting words at different fixation positions, with the experimental settings applied in previous OVP research. Experiment 2 monitored and controlled eye fixations of the participants and presented the stimuli within the boundaries of the fovea. Exactly the same OVP curve was obtained. In Experiment 3, the eyes were also tracked and monocular viewing was used. Results again revealed the same OVP effect, although latencies were remarkably higher than in the previous experiments. From these results we can conclude that although noise is present in classical SFT studies without eye-tracking, this does not change the OVP effect observed with left dominant individuals. |
Stefan Van der Stigchel; Mark Mills; Michael D. Dodd Shift and deviate: Saccades reveal that shifts of covert attention evoked by trained spatial stimuli are obligatory. Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 5, pp. 1244–1250, 2010. @article{VanderStigchel2010d,The premotor theory of attention predicts that motor movements, including manual movements and eye movements, are preceded by an obligatory shift of attention to the location of the planned response. We investigated whether the shifts of attention evoked by trained spatial cues (e.g., Dodd & Wilson, 2009) are obligatory by using an extreme prediction of the premotor theory: If individuals are trained to associate a color cue with a manual movement to the left or right, the shift of attention evoked by the color cue should also influence eye movements in an unrelated task. Participants were trained to associate an irrelevant color cue with left/right space via a training session in which directional responses were made. Experiment 1 showed that, posttraining, vertical saccades deviated in the direction of the trained response, despite the fact that the color cue was irrelevant. Experiment 2 showed that latencies of horizontal saccades were shorter when an eye movement had to be made in the direction of the trained response. These results demonstrate that the shifts of attention evoked by trained stimuli are obligatory, in addition to providing support for the premotor theory and for a connection between the attentional, motor, and oculomotor systems. |
Stefan Van der Stigchel; Tanja C. W. Nijboer The imbalance of oculomotor capture in unilateral visual neglect Journal Article In: Consciousness and Cognition, vol. 19, no. 1, pp. 186–197, 2010. @article{VanderStigchel2010b,Visual neglect has been associated with an imbalance in the level of activity in the saccadic system: activity in the contralesional field is suppressed, which makes target selection unlikely. We recorded eye movements of a patient with hemispatial neglect and a group of healthy participants during an oculomotor distractor paradigm. Results showed that the interfering effects of a distractor were very strong when presented in her ipsilesional visual field. However, when the distractor was presented in her contralesional field, there were no interfering effects when the target was presented in her ipsilesional field. These findings could not be explained by the presence of a visual field defect as revealed by the results of two hemianopic patients. Our results are in line with an imbalance in the level of activity in the saccadic system in visual neglect because visual elements presented in the contralesional field did not compete for saccadic selection. |
Editha M. van Loon; Fadhel Khashawi; Geoffrey Underwood Visual strategies used for time-to-arrival judgments in driving Journal Article In: Perception, vol. 39, no. 9, pp. 1216–1229, 2010. @article{Loon2010,To investigate the sources of visual information that are involved in the anticipation of collisions, we recorded eye movements while participants made relative timing judgments about approaching vehicles at a junction. The avoidance of collisions is a critical aspect of driving, particularly where cars enter a line of traffic from a side road, and the present study required judgments about animations in a virtual driving environment. In two experiments we investigated the effects of (i) the angle of approach of the vehicle and the type of path (straight or curved) of the observer, and (ii) the speed of both the observer and the approaching car. Relative timing judgments depend on the angle of approach of the other vehicle (judgments are more accurate for perpendicular than for obtuse angles). Eye-movement analysis shows that visual strategies in relative timing judgments are characterised by saccadic eye movements back and forth between the approaching car and the road ahead, particularly the side line, which may serve as a spatial reference point. Results suggest that observers use the distance of the car from this reference point for their timing judgments. |
Signe Vangkilde; Thomas Habekost Finding Wally: Prism adaptation improves visual search in chronic neglect Journal Article In: Neuropsychologia, vol. 48, no. 7, pp. 1994–2004, 2010. @article{Vangkilde2010,Several studies have found that visuo-motor adaptation to rightward-deviating prismatic goggles (prism adaptation) can alleviate symptoms of neglect after brain damage, but the long-term effect and clinical relevance of this rehabilitation approach have been questioned. In particular, the effect on visual search performance is controversial. In the present study, 6 patients with chronic spatial neglect due to right-sided focal brain damage were given 20 sessions of prism adaptation over a period of two weeks. These patients, as well as a matched control group of neglect patients (n=5), were tested using a variety of effect measures with special emphasis on visual search at baseline, shortly after training, and five weeks later. A positive and very consistent long-term effect of prism adaptation was found across clinical tests of neglect, lateral bias of eye movements, and measures of everyday function, including subjective reports. The results show that prism adaptation can provide durable and clinically significant alleviation of neglect symptoms, even in the stable phase of recovery. |
Astrid Vermeiren; Baptist Liefooghe; André Vandierendonck Switch performance in peripherally and centrally triggered saccades Journal Article In: Experimental Brain Research, vol. 206, no. 3, pp. 243–248, 2010. @article{Vermeiren2010,A common hypothesis is that the switch cost measured when switching between prosaccades and antisaccades mainly reflects the inhibition of the saccadic system after the execution of an antisaccade, which requires the inhibition of a gaze response. The present study further tested this hypothesis by comparing switch performance between peripherally triggered saccades and centrally triggered saccades with the latter type of saccades not requiring inhibition of a gaze response. For peripherally triggered saccades, a switch cost was present for prosaccades but not for antisaccades. For centrally triggered saccades, a switch cost was present both for prosaccades and for antisaccades. The difference between both saccade tasks further supports the hypothesis that the switch performance observed for peripherally triggered saccades is related to the inhibition of a gaze response that is required when executing a peripherally triggered antisaccade and the persisting inhibition in the saccadic system this entails. Furthermore, the switch costs observed for centrally triggered saccades indicate that more general processes besides the persisting inhibition in the saccadic system, such as reconfiguration and interference control, also contribute to the switch performance in saccades. |
Michael Vesia; Steven L. Prime; Xiaogang Yan; Lauren E. Sergio; J. Douglas Crawford Specificity of human parietal saccade and reach regions during transcranial magnetic stimulation Journal Article In: Journal of Neuroscience, vol. 30, no. 39, pp. 13053–13065, 2010. @article{Vesia2010,Single-unit recordings in macaque monkeys have identified effector-specific regions in posterior parietal cortex (PPC), but functional neuroimaging in the human has yielded controversial results. Here we used on-line repetitive transcranial magnetic stimulation (rTMS) to determine saccade and reach specificity in human PPC. A short train of three TMS pulses (separated by an interval of 100 ms) was delivered to superior parieto-occipital cortex (SPOC), a region over the midposterior intraparietal sulcus (mIPS), and a site close to caudal IPS situated over the angular gyrus (AG) during a brief memory interval while subjects planned either a saccade or reach with the left or right hand. Behavioral measures then were compared to controls without rTMS. Stimulation of mIPS and AG produced similar patterns: increased end-point variability for reaches and decreased saccade accuracy for contralateral targets. In contrast, stimulation of SPOC deviated reach end points toward visual fixation and had no effect on saccades. Contralateral-limb specificity was highest for AG and lowest for SPOC. Visual feedback of the hand negated rTMS-induced disruptions of the reach plan for mIPS and AG, but not SPOC. These results suggest that human SPOC is specialized for encoding retinally peripheral reach goals, whereas more anterior-lateral regions (mIPS and AG) along the IPS possess overlapping maps for saccade and reach planning and are more closely involved in motor details (i.e., planning the reach vector for a specific hand). This work provides the first causal evidence for functional specificity of these parietal regions in healthy humans. |
Melissa L. -H. Võ; Werner X. Schneider A glimpse is not a glimpse: Differential processing of flashed scene previews leads to differential target search benefits Journal Article In: Visual Cognition, vol. 18, no. 2, pp. 171–200, 2010. @article{Vo2010a,What information can we extract from an initial glimpse of a scene and how do people differ in the way they process visual information? In Experiment 1, participants searched 3-D-rendered images of naturalistic scenes for embedded target objects through a gaze-contingent window. A briefly flashed scene preview (identical, background, objects, or control) preceded each search scene. We found that search performance varied as a function of the participants' reported ability to distinguish between previews. Experiment 2 further investigated the source of individual differences using a whole-report task. Data were analysed following the "Theory of Visual Attention" approach, which allows the assessment of visual processing efficiency parameters. Results from both experiments indicate that during the first glimpse of a scene global processing of visual information predominates and that individual differences in initial scene processing and subsequent eye movement behaviour are based on individual differences in visual perceptual processing speed. |
Melissa L. -H. Võ; Jan Zwickel; Werner X. Schneider Has someone moved my plate? The immediate and persistent effects of object location changes on gaze allocation during natural scene viewing Journal Article In: Attention, Perception, & Psychophysics, vol. 72, no. 5, pp. 1251–1255, 2010. @article{Vo2010b,In this study, we investigated the immediate and persisting effects of object location changes on gaze control during scene viewing. Participants repeatedly inspected a randomized set of naturalistic scenes for later questioning. On the seventh presentation, an object was shown at a new location, whereas the change was reversed for all subsequent presentations of the scene. We tested whether deviations from stored scene representations would modify eye movements to the changed regions and whether these effects would persist. We found that changed objects were looked at longer and more often, regardless of change reportability. These effects were most pronounced immediately after the change occurred and quickly leveled off once a scene remained unchanged. However, participants continued to perform short validation checks to changed scene regions, which implies a persistent modulation of eye movement control beyond the occurrence of object location changes. |
