All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up to 2024 (and early 2025) are listed below by year. You can search the publication library using keywords such as visual search, smooth pursuit, or Parkinson's. You can also search for individual author names. Eye-tracking research grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2009
Gary Feng Mixed responses: Why readers spend less time at unfavorable landing positions Journal Article In: Journal of Eye Movement Research, vol. 3, no. 2, pp. 1–26, 2009. This paper investigates why the average fixation duration tends to decrease from the center to the two ends of a word. Specifically, it examines (a) whether unfavorable landing positions trigger a corrective mechanism, (b) whether the triggering is based on the internal efference copy mechanism, and (c) whether the corrective mechanism is specific to fixations that missed their targeted words. To estimate the mean and proportion of the corrective fixations, a 3-parameter mixture model was fitted to distributions of first fixation duration from two large eye movement databases in studies 1 and 2. Study 3 experimentally created mislocated fixations using a gaze-contingent screen shift paradigm. There is little evidence for the efference copy mechanism and limited support for the mislocated fixations hypothesis. Overall, the data suggest a process that terminates fixations sooner than they otherwise would during normal reading; it is triggered by the visual input during a fixation, and is flexibly engaged at eccentric landing positions and in reading short words. Implications for theories of reading eye movements are discussed.
Gary Feng; Kevin Miller; Hua Shu; Houcan Zhang Orthography and the development of reading processes: An eye-movement study of Chinese and English Journal Article In: Child Development, vol. 80, no. 3, pp. 720–735, 2009. As children become proficient readers, there are substantial changes in the eye movements that subserve reading. Some of these changes reflect universal developmental factors while others may be specific to a particular writing system. This study attempts to disentangle effects of universal and script-dependent factors by comparing the development of eye movements of English and Chinese speakers. Third-grade (English: mean age = 9.1 years
Mackenzie G. Glaholt; Eyal M. Reingold The time course of gaze bias in visual decision tasks Journal Article In: Visual Cognition, vol. 17, no. 8, pp. 1228–1243, 2009. In three experiments, we used eyetracking to investigate the time course of biases in looking behaviour during visual decision making. Our study replicated and extended prior research by Shimojo, Simion, Shimojo, and Scheier (2003), and Simion and Shimojo (2006). Three groups of participants performed forced-choice decisions in a two-alternative free-viewing condition (Experiment 1a), a two-alternative gaze-contingent window condition (Experiment 1b), and an eight-alternative free-viewing condition (Experiment 1c). Participants viewed photographic art images and were instructed to select the one that they preferred (preference task), or the one that they judged to be photographed most recently (recency task). Across experiments and tasks, we demonstrated robust bias towards the chosen item in either gaze duration, gaze frequency or both. The present gaze bias effect was less task specific than those reported previously. Importantly, in the eight-alternative condition we demonstrated a very early gaze bias effect, which rules out a postdecision response-related explanation.
Mackenzie G. Glaholt; Mei-Chun Wu; Eyal M. Reingold Predicting preference from fixations Journal Article In: PsychNology Journal, vol. 7, no. 2, pp. 141–158, 2009. We measured the strength of the association between looking behaviour and preference. Participants selected the most preferred face out of a grid of 8 faces. Fixation times were correlated with selection on a trial-by-trial basis, as well as with explicit preference ratings. Furthermore, by ranking features based on fixation times, we were able to successfully predict participants' preferences for novel feature combinations in a two-alternative forced choice task. In addition, we obtained a similar pattern of findings in a very different stimulus domain: mock company logos. Our results indicated that fixation times can be used to predict selection in large arrays and they might also be employed to estimate preferences for whole stimuli as well as their constituent features.
Diana J. Gorbet; Lauren E. Sergio The behavioural consequences of dissociating the spatial directions of eye and arm movements Journal Article In: Brain Research, vol. 1284, pp. 77–88, 2009. Many of our daily movements use visual information to guide our arms toward objects of interest. Typically, these visually guided movements involve first focusing our gaze on the intended target and then reaching toward the direction of our gaze. The literature on eye-hand coordination provides a great deal of evidence that circuitry in the brain exists which can couple eye and arm movements. Moving both of these effectors towards a common spatial direction may be a default setting used by the brain to simplify the planning of movements. We tested this idea in 20 subjects using two experimental tasks. In a "Standard" condition, the eyes and a cursor were guided to the same spatial location by moving the arm (on a touchpad) and the eyes in the same direction. In a "Dissociated" condition, the eye and cursor were again guided to the same spatial location but the arm was required to move in a direction opposite to the eyes to successfully achieve this goal. In this study, we observed that dissociating the directions of eye and arm movement significantly changed the kinematic properties of both effectors including the latency and peak velocity of eye movements and the curvature of hand-path trajectories. Thus, forcing the brain to plan simultaneous eye and arm movements in different directions alters some of the basic (and often stereotyped) characteristics of motor responses. We suggest that interference with the function of a neural network that couples gaze and reach to congruent spatial locations underlies these kinematic alterations.
H. S. Greenwald; David C. Knill Cue integration outside central fixation: A study of grasping in depth Journal Article In: Journal of Vision, vol. 9, no. 2, pp. 1–16, 2009. We assessed the usefulness of stereopsis across the visual field by quantifying how retinal eccentricity and distance from the horopter affect humans' relative dependence on monocular and binocular cues about 3D orientation. The reliabilities of monocular and binocular cues both decline with eccentricity, but the reliability of binocular information decreases more rapidly. Binocular cue reliability also declines with increasing distance from the horopter, whereas the reliability of monocular cues is virtually unaffected. We measured how subjects integrated these cues to orient their hands when grasping oriented discs at different eccentricities and distances from the horopter. Subjects relied increasingly less on binocular disparity as targets' retinal eccentricity and distance from the horopter increased. The measured cue influences were consistent with what would be predicted from the relative cue reliabilities at the various target locations. Our results showed that relative reliability affects how cues influence motor control and that stereopsis is of limited use in the periphery and away from the horopter because monocular cues are more reliable in these regions.
Stefan Grondelaers; Dirk Speelman; Denis Drieghe; Marc Brysbaert; Dirk Geeraerts In: Acta Psychologica, vol. 130, no. 2, pp. 1–33, 2009. This paper reports on the ways in which new entities are introduced into discourse. First, we present the evidence in support of a model of indefinite reference processing based on three principles: the listener's ability to make predictive inferences in order to decrease the unexpectedness of upcoming words, the availability to the speaker of grammatical constructions that customize predictive inferences, and the use of "expectancy monitors" to signal and facilitate the introduction of highly unpredictable entities. We provide evidence that one of these expectancy monitors in Dutch is the post-verbal variant of existential er (the equivalent of the unstressed existential "there" in English). In an eye-tracking experiment we demonstrate that the presence of er decreases the processing difficulties caused by low subject expectancy. A corpus-based regression analysis subsequently confirms that the production of er is determined almost exclusively by seven parameters of low subject expectancy. Together, the comprehension and production data suggest that while existential er functions as an expectancy monitor in much the same way as speech disfluencies (hesitations, pauses and filled pauses), er is a higher-level expectancy monitor because it is available in spoken and written discourse and because it is produced more systematically than any disfluency.
Emmanuel Guzman-Martinez; Parkson Leung; Steven L. Franconeri; Marcia Grabowecky; Satoru Suzuki Rapid eye-fixation training without eyetracking Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 3, pp. 491–496, 2009. Maintenance of stable central eye fixation is crucial for a variety of behavioral, electrophysiological, and neuroimaging experiments. Naive observers in these experiments are not typically accustomed to fixating, either requiring the use of cumbersome and costly eyetracking or producing confounds in results. We devised a flicker display that produced an easily detectable visual phenomenon whenever the eyes moved. A few minutes of training using this display dramatically improved the accuracy of eye fixation while observers performed a demanding spatial attention cuing task. The same amount of training using control displays did not produce significant fixation improvements, and some observers consistently made eye movements to the peripheral attention cue, contaminating the cuing effect. Our results indicate that (1) eye fixation can be rapidly improved in naive observers by providing real-time feedback about eye movements, and (2) our simple flicker technique provides an easy and effective method for providing this feedback.
Amanda L. Gamble; Ronald M. Rapee The time-course of attentional bias in anxious children and adolescents Journal Article In: Journal of Anxiety Disorders, vol. 23, no. 7, pp. 841–847, 2009. This study examined the time-course of attentional bias in anxious and non-anxious children and adolescents aged 7-17 years using eye movement as an index of selective attention. Participants completed two eye-tracking tasks in which they viewed happy-neutral and negative-neutral face pairs for 3000 and 500 ms, respectively. When face pairs were presented for 3000 ms eye movement data showed no evidence of an attentional bias at any stage of attentional processing. When face pairs were presented for 500 ms a bias in initial orienting occurred; anxious adolescents directed their first fixation away from negative faces and anxious children directed their first fixation away from happy faces. Results suggest that childhood anxiety is characterized by a bias in initial orienting, with no bias in sustained attention, although only for briefly presented faces.
Karsten Georg; Markus Lappe Effects of saccadic adaptation on visual localization before and during saccades Journal Article In: Experimental Brain Research, vol. 192, no. 1, pp. 9–23, 2009. Short-term saccadic adaptation is a mechanism that adjusts saccade amplitude to accurately reach an intended saccade target. Short-term saccadic adaptation induces a shift of perceived localization of objects flashed before the saccade. This shift, being detectable only before an adapted saccade, disappears at some time around saccade onset. Up to now, the exact time course of this effect has remained unknown. In previous experiments, the mislocalization caused by this adaptation-induced shift was overlapping with the mislocalization caused by a different, saccade-related localization error, the peri-saccadic compression. Due to peri-saccadic compression, objects flashed immediately at saccade onset appear compressed towards the saccade target. First, we tested whether the adaptation-induced shift and the peri-saccadic compression were either independent or related processes. We performed experiments with two different luminance-contrast conditions to separate the adaptation-induced shift and the peri-saccadic compression. Human participants had to indicate the perceived location of briefly presented stimuli before, during or after an adapted saccade. Adaptation-induced shift occurred similarly in either contrast condition, with or without peri-saccadic compression. Second, after validating the premise of both processes being independent and superimposing, we aimed at characterizing the time course of the adaptation-induced shift in more detail. Being present up to 1 s before an adapted saccade, the adaptation-induced shift begins to gradually decline from about 150 ms before saccade onset, and ceases during the saccade. A final experiment revealed that visual references make a major contribution to adaptation-induced mislocalization.
Sarah Brown-Schmidt The role of executive function in perspective taking during online language comprehension Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 893–900, 2009. During conversation, interlocutors build on the set of shared beliefs known as common ground. Although there is general agreement that interlocutors maintain representations of common ground, there is no consensus regarding whether common-ground representations constrain initial language interpretation processes. Here, I propose that executive functioning, specifically failures in inhibition control, can account for some occasional insensitivities to common-ground information. The present article presents the results of an experiment that demonstrates that individual differences in inhibition control determine the degree to which addressees successfully inhibit perspective-inappropriate interpretations of temporary referential ambiguities in their partner's speech. Whether mentioned information was grounded or not also played a role, suggesting that addressees may show sensitivity to common ground only when it is established collaboratively. The results suggest that, in conversation, perspective information routinely guides online language processing and that occasional insensitivities to perspective can be attributed partly to difficulties in inhibiting perspective-inappropriate interpretations.
Sarah Brown-Schmidt Partner-specific interpretation of maintained referential precedents during interactive dialog Journal Article In: Journal of Memory and Language, vol. 61, no. 2, pp. 171–190, 2009. In dialog settings, conversational partners converge on similar names for referents. These lexically entrained terms [Garrod, S., & Anderson, A. (1987). Saying what you mean in dialog: A study in conceptual and semantic co-ordination. Cognition, 27, 181-218] are part of the common ground between the particular individuals who established the entrained term [Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1482-1493], and are thought to be encoded in memory with a partner-specific cue. Thus far, analyses of the time-course of interpretation suggest that partner-specific information may not constrain the initial interpretation of referring expressions [Barr, D. J., & Keysar, B. (2002). Anchoring comprehension in linguistic precedents. Journal of Memory and Language, 46, 391-418; Kronmüller, E., & Barr, D. J. (2007). Perspective-free pragmatics: Broken precedents and the recovery-from-preemption hypothesis. Journal of Memory and Language, 56, 436-455]. However, these studies used non-interactive paradigms, which may limit the use of partner-specific representations. This article presents the results of three eye-tracking experiments. Experiment 1a used an interactive conversation methodology in which the experimenter and participant jointly established entrained terms for various images. On critical trials, the same experimenter, or a new experimenter described a critical image using an entrained term, or a new term. The results demonstrated an early, on-line partner-specific effect for interpretation of entrained terms, as well as preliminary evidence for an early, partner-specific effect for new terms. Experiment 1b used a non-interactive paradigm in which participants completed the same task by listening to image descriptions recorded during Experiment 1a; the results showed that partner-specific effects were eliminated. Experiment 2 replicated the partner-specific findings of Experiment 1a with an interactive paradigm and scenes that contained previously unmentioned images. The results suggest that partner-specific interpretation is most likely to occur in interactive dialog settings; the number of critical trials and stimulus characteristics may also play a role. The results are consistent with a large body of work demonstrating that the language processing system uses a rich source of contextual and pragmatic representations to guide on-line processing decisions.
Claudio Brozzoli; Francesco Pavani; Christian Urquizar; Lucilla Cardinali; Alessandro Farnè Grasping actions remap peripersonal space Journal Article In: NeuroReport, vol. 20, no. 10, pp. 913–917, 2009. The portion of space that closely surrounds our body parts is termed peripersonal space, and it has been shown to be represented in the brain through multisensory processing systems. Here, we tested whether voluntary actions, such as grasping an object, may remap such multisensory spatial representation. Participants discriminated touches on the hand they used to grasp an object containing task-irrelevant visual distractors. Compared with a static condition, reach-to-grasp movements increased the interference exerted by visual distractors over tactile targets. This remapping of multisensory space was triggered by action onset and further enhanced in real time during the early action execution phase. Additional experiments showed that this phenomenon is hand-centred. These results provide the first evidence of a functional link between voluntary object-oriented actions and multisensory coding of the space around us.
Paul M. Brunet; Jennifer J. Heisz; Catherine J. Mondloch; David I. Shore; Louis A. Schmidt Shyness and face scanning in children Journal Article In: Journal of Anxiety Disorders, vol. 23, no. 7, pp. 909–914, 2009. Contrary to popular beliefs, a recent empirical study using eye tracking has shown that a non-clinical sample of socially anxious adults did not avoid the eyes during face scanning. Using eye-tracking measures, we sought to extend these findings by examining the relation between stable shyness and face scanning patterns in a non-clinical sample of 11-year-old children. We found that shyness was associated with longer dwell time to the eye region than the mouth, suggesting that some shy children were not avoiding the eyes. Shyness was also correlated with fewer first fixations to the nose, which is thought to reflect the typical global strategy of face processing. Present results replicate and extend recent work on social anxiety and face scanning in adults to shyness in children. These preliminary findings also provide support for the notion that some shy children may be hypersensitive to detecting social cues and intentions in others conveyed by the eyes. Theoretical and practical implications for understanding the social cognitive correlates and treatment of shyness are discussed.
Stephen H. Butler; Stéphanie Rossit; Iain D. Gilchrist; Casimir J. H. Ludwig; Bettina Olk; Keith Muir; Ian Reeves; Monika Harvey Non-lateralised deficits in anti-saccade performance in patients with hemispatial neglect Journal Article In: Neuropsychologia, vol. 47, no. 12, pp. 2488–2495, 2009. We tested patients suffering from hemispatial neglect on the anti-saccade paradigm to assess voluntary control of saccades. In this task participants are required to saccade away from an abrupt onset target. As has been previously reported, in the pro-saccade condition neglect patients showed increased latencies towards targets presented on the left and their accuracy was reduced as a result of greater undershoot. To our surprise though, in the anti-saccade condition, we found strong bilateral effects: the neglect patients produced large numbers of erroneous pro-saccades to both left and right stimuli. This deficit in voluntary control was present even in patients whose lesions spared the frontal lobes. These results suggest that the voluntary control of action is supported by an integrated network of cortical regions, including more posterior areas. Damage to one or more components within this network may result in impaired voluntary control.
Sauman Chu; Nora Paul; Laura Ruel Using eye tracking technology to examine the effectiveness of design elements on news websites Journal Article In: Information Design Journal, vol. 17, no. 1, pp. 31–43, 2009. Online environments allow for a richer expression for certain design elements. The goal of this collaborative research project is to identify, design, and examine various online news features in order to determine the impact of different digital design combinations on news audiences. Eye tracking was the primary method we used to examine three main areas: navigation for slide shows, effectiveness of breaking news formats, and design options for supplemental links. The project used an applied research approach by taking academically rigorous research and using that to inform and guide industry practice.
Francis Colas; Fabien Flacher; Thomas Tanner; Pierre Bessière; Benoît Girard Bayesian models of eye movement selection with retinotopic maps Journal Article In: Biological Cybernetics, vol. 100, no. 3, pp. 203–214, 2009. Among the various possible criteria guiding eye movement selection, we investigate the role of position uncertainty in the peripheral visual field. In particular, we suggest that, in everyday life situations of object tracking, eye movement selection probably includes a principle of reduction of uncertainty. To evaluate this hypothesis, we confront the movement predictions of computational models with human results from a psychophysical task. This task is a freely moving eye version of the multiple object tracking task, where the eye movements may be used to compensate for low peripheral resolution. We design several Bayesian models of eye movement selection with increasing complexity, whose layered structures are inspired by the neurobiology of the brain areas implied in this process. Finally, we compare the relative performances of these models with regard to the prediction of the recorded human movements, and show the advantage of taking explicitly into account uncertainty for the prediction of eye movements.
Geoff G. Cole; Gustav Kuhn Appearance matters: Attentional orienting by new objects in the precueing paradigm Journal Article In: Visual Cognition, vol. 17, no. 5, pp. 755–776, 2009. Five experiments examined whether the appearance of a new object is able to orient attention in the absence of an accompanying sensory transient. A variant of the precueing paradigm (Posner & Cohen, 1984) was employed in which the cue was the onset of a new object. Crucially, the new object's appearance was not associated with any unique sensory transient. This was achieved by using a variant of the "annulus" procedure recently developed by Franconeri, Hollingworth, and Simons (2005). Results showed that unless observers had an attentional set explicitly biased against onset, a validity effect was observed such that response times were shorter for targets occurring at the location of the new object relative to when targets occurred at the location of the "old" object. We conclude that new onsets do not need to be associated with a unique sensory transient in order to orient attention.
Charles A. Collin; Patricia A. McMullen; Julie Anne Séguin A significant bilateral field advantage for shapes defined by static and motion cues Journal Article In: Perception, vol. 38, no. 8, pp. 1132–1143, 2009. Matching performance is better when pairs of visual stimuli are presented in bilateral conditions—in which one stimulus is presented to each side of the visual field—than in unilateral presentations—when both stimuli are presented to one side of the field. This is called the bilateral field advantage (BFA). The processing of visual motion has also been found to be more strongly integrated across the cerebral hemispheres than is processing of static cues. However, in these studies higher-order motion tasks, such as processing motion-defined form, have not been examined. To determine if the BFA generalises to such tasks, we measured the magnitude of the effect using a shape-matching task in which the stimuli were random polygons that were either in motion, motion-defined, or static. The polygon pairs were presented either: (i) bilaterally, one to either side of the vertical meridian; (ii) unilaterally, both to one side of the vertical meridian (left or right visual fields); or (iii) centrally, vertically separated across the horizontal meridian (a control condition). An equal advantage of bilateral conditions over unilateral ones was found for all three types of polygon shape cues, showing that the BFA generalises to conditions where shapes are in motion and where shape is defined by motion. These findings are compatible with the notion that motion processing is strongly integrated across the cerebral hemispheres, and with the idea that this integration manifests itself with simple motion information, rather than with higher-order motion processing such as matching shapes defined by motion.
Jens Bölte; Andrea Böhl; Christian Dobel; Pienie Zwitserlood Effects of referential ambiguity, time constraints and addressee orientation on the production of morphologically complex words Journal Article In: European Journal of Cognitive Psychology, vol. 21, no. 8, pp. 1166–1199, 2009. In five experiments, participants were asked to describe unambiguously a target picture in a picture-picture paradigm. In the same-category condition, target (e.g., water bucket) and distractor picture (e.g., ice bucket) had identical names when their preferred, morphologically simple, name was used (e.g., bucket). The ensuing lexical ambiguity could be resolved by compound use (e.g., water bucket). Simple names sufficed as means of specification in other conditions, with distractors identical to the target, completely unrelated, or geometric figures. With standard timing parameters, participants produced mainly ambiguous answers in Experiment 1. An increase in available processing time hardly improved unambiguous responding (Experiment 2). A referential communication instruction (Experiment 3) increased the number of compound responses considerably, but morphologically simple answers still prevailed. Unambiguous responses outweighed ambiguous ones in Experiment 4, when timing parameters were further relaxed. Finally, the requirement to name both objects resulted in a nearly perfect ambiguity resolution (Experiment 5). Together, the results showed that speakers overcome lexical ambiguity only when time permits, when an addressee perspective is given and, most importantly, when their own speech overtly signals the ambiguity.
Walter R. Boot; Ensar Becic; Arthur F. Kramer Stable individual differences in search strategy?: The effect of task demands and motivational factors on scanning strategy in visual search Journal Article In: Journal of Vision, vol. 9, no. 3, pp. 1–16, 2009. Previous studies have demonstrated large individual differences in scanning strategy during a dynamic visual search task (E. Becic, A. F. Kramer, & W. R. Boot, 2007; W. R. Boot, A. F. Kramer, E. Becic, D. A. Wiegmann, & T. Kubose, 2006). These differences accounted for substantial variance in performance. Participants who chose to search covertly (without eye movements) excelled, whereas participants who searched overtly (with eye movements) performed poorly. The aim of the current study was to investigate the stability of scanning strategies across different visual search tasks in an attempt to explain why a large percentage of observers might engage in maladaptive strategies. Scanning strategy was assessed for a group of observers across a variety of search tasks without feedback (efficient search, inefficient search, change detection, dynamic search). While scanning strategy was partly determined by task demands, stable individual differences emerged. Participants who searched either overtly or covertly tended to adopt the same strategy regardless of the demands of the search task, even in tasks in which such a strategy was maladaptive. However, when participants were given explicit feedback about their performance during search and performance incentives, strategies across tasks diverged. Thus it appears that observers by default will favor a particular search strategy but can modify this strategy when it is clearly maladaptive to the task.
Kim Joris Boström; Anne Kathrin Warzecha Ocular following response to sampled motion Journal Article In: Vision Research, vol. 49, no. 13, pp. 1693–1701, 2009. We investigate the impact of monitor frame rate on the human ocular following response (OFR) and find that the response latency considerably depends on the frame rate in the range of 80-160 Hz, which is far above the flicker fusion limit. From the lowest to the highest frame rate the latency declines by roughly 10 ms. Moreover, the relationship between response latency and stimulus speed is affected by the frame rate, compensating and even inverting the effect at lower frame rates. In contrast to that, the initial response acceleration is not affected by the frame rate and its expected dependence on stimulus speed remains stable. The nature of these phenomena reveals insights into the neural mechanism of low-level motion detection underlying the ocular following response.
Christian Boucheny; Georges Pierre Bonneau; Jacques Droulez; Guillaume Thibault; Stephane Ploix A perceptive evaluation of volume rendering techniques Journal Article In: ACM Transactions on Applied Perception, vol. 5, no. 4, pp. 1–24, 2009. The display of space filling data is still a challenge for the community of visualization. Direct volume rendering (DVR) is one of the most important techniques developed to achieve direct perception of such volumetric data. It is based on semitransparent representations, where the data are accumulated in a depth-dependent order. However, it produces images that may be difficult to understand, and thus several techniques have been proposed so as to improve its effectiveness, using for instance lighting models or simpler representations (e.g., maximum intensity projection). In this article, we present three perceptual studies that examine how DVR meets its goals, in either static or dynamic context. We show that a static representation is highly ambiguous, even in simple cases, but this can be counterbalanced by use of dynamic cues (i.e., motion parallax) provided that the rendering parameters are correctly tuned. In addition, perspective projections are demonstrated to provide relevant information to disambiguate depth perception in dynamic displays.
Julie A. Brefczynski-Lewis; Ritobrato Datta; James W. Lewis; Edgar A. DeYoe The topography of visuospatial attention as revealed by a novel visual field mapping technique Journal Article In: Journal of Cognitive Neuroscience, vol. 21, no. 7, pp. 1447–1460, 2009. @article{BrefczynskiLewis2009, Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style." |
Eli Brenner; Jeroen B. J. Smeets Sources of variability in interceptive movements Journal Article In: Experimental Brain Research, vol. 195, no. 1, pp. 117–133, 2009. @article{Brenner2009, In order to successfully intercept a moving target one must be at the right place at the right time. But simply being there is seldom enough. One usually needs to make contact in a certain manner, for instance to hit the target in a certain direction. How this is best achieved depends on the exact task, but to get an idea of what factors may limit performance we asked people to hit a moving virtual disk through a virtual goal, and analysed the spatial and temporal variability in the way in which they did so. We estimated that for our task the standard deviations in timing and spatial accuracy are about 20 ms and 5 mm. Additional variability arises from individual movements being planned slightly differently and being adjusted during execution. We argue that the way that our subjects moved was precisely tailored to the task demands, and that the movement accuracy is not only limited by the muscles and their activation, but also (and probably even mainly) by the resolution of visual perception. |
Leonard A. Breslow; J. Gregory Trafton; Raj M. Ratwani A perceptual process approach to selecting color scales for complex visualizations Journal Article In: Journal of Experimental Psychology: Applied, vol. 15, no. 1, pp. 25–34, 2009. @article{Breslow2009, Previous research has shown that multicolored scales are superior to ordered brightness scales for supporting identification tasks on complex visualizations (categorization, absolute numeric value judgments, etc.), whereas ordered brightness scales are superior for relative comparison tasks (greater/less). We examined the processes by which such tasks are performed. By studying eye movements and by comparing performance on scales of different sizes, we argued that (a) people perform identification tasks by conducting a serial visual search of the legend, whose speed is sensitive to the number of scale colors and the discriminability of the colors; and (b) people perform relative comparison tasks using different processes for multicolored versus brightness scales. With multicolored scales, they perform a parallel search of the legend, whose speed is relatively insensitive to the size of the scale, whereas with brightness scales, people usually directly compare the target colors in the visualization, while making little reference to the legend. Performance of comparisons was relatively robust against increases in scale size, whereas performance of identifications deteriorated markedly, especially with brightness scales, once scale sizes reached 10 colors or more. |
James R. Brockmole; Walter R. Boot Should I stay or should I go? Attentional disengagement from visually unique and unexpected items at fixation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 808–815, 2009. @article{Brockmole2009, Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience. |
Anne-Marie M. Brouwer; Volker H. Franz; Karl R. Gegenfurtner Differences in fixations between grasping and viewing objects Journal Article In: Journal of Vision, vol. 9, no. 1, pp. 1–24, 2009. @article{Brouwer2009, Where exactly do people look when they grasp an object? An object is usually contacted at two locations, whereas the gaze can only be at one location at a time. We investigated participants' fixation locations when they grasp objects with the contact positions of both index finger and thumb being visible and compared these to fixation locations when they only viewed the objects. Participants grasped with the index finger at the top and the thumb at the bottom of a flat shape. The main difference between grasping and viewing was that after a saccade roughly directed to the object's center of gravity, participants saccaded more upward and more in the direction of a region that was difficult to contact during grasping. A control experiment indicated that it was not the upper part of the shape that attracted fixation, while the results were consistent with an attraction by the index finger. Participants did not try to fixate both contact locations. Fixations were closer to the object's center of gravity in the viewing than in the grasping task. In conclusion, participants adapt their eye movements to the need of the task, such as acquiring information about regions with high required contact precision in grasping, even with small (graspable) objects. We suggest that in grasping, the main function of fixations is to acquire visual feedback of the approaching digits. |
Moran Cerf; E. Paxon Frady; Christof Koch Faces and text attract gaze independent of the task: Experimental data and computer model Journal Article In: Journal of Vision, vol. 9, no. 12, pp. 1–15, 2009. @article{Cerf2009, Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? We here investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attention by tracking the eye movements of subjects in two types of tasks: free viewing and search. We observed that subjects in free-viewing conditions look at faces and text 16.6 and 11.1 times more than similar regions normalized for size and position of the face and text. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer in making their initial saccade when they were told to avoid faces/text and their saccades landed on a non-face/non-text object. We refine a well-known bottom-up computer model of saliency-driven attention that includes conspicuity maps for color, orientation, and intensity by adding high-level semantic information (i.e., the location of faces or text) and demonstrate that this significantly improves the ability to predict eye fixations in natural images. Our enhanced model's predictions yield an area under the ROC curve over 84% for images that contain faces or text when compared against the actual fixation pattern of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map. |
George Chahine; Bart Krekelberg Cortical contributions to saccadic suppression Journal Article In: PLoS ONE, vol. 4, no. 9, pp. e6900, 2009. @article{Chahine2009, The stability of visual perception is partly maintained by saccadic suppression: the selective reduction of visual sensitivity that accompanies rapid eye movements. The neural mechanisms responsible for this reduced perisaccadic visibility remain unknown, but the Lateral Geniculate Nucleus (LGN) has been proposed as a likely site. Our data show, however, that the saccadic suppression of a target flashed in the right visual hemifield increased with an increase in background luminance in the left visual hemifield. Because each LGN only receives retinal input from a single hemifield, this hemifield interaction cannot be explained solely on the basis of neural mechanisms operating in the LGN. Instead, this suggests that saccadic suppression must involve processing in higher level cortical areas that have access to a considerable part of the ipsilateral hemifield. |
Craig G. Chambers; Hilary Cooke Lexical competition during second-language listening: Sentence context, but not proficiency, constrains interference from the native lexicon Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 35, no. 4, pp. 1029–1040, 2009. @article{Chambers2009, A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., Marie va décrire la poule [Marie will describe the chicken]). Displays depicted several objects including the final noun target (chicken) and an interlingual near-homophone (e.g., pool) whose name in English is phonologically similar to the French target (poule). Listeners' eye movements reflected temporary consideration of the interlingual competitor when hearing the target noun, demonstrating cross-language lexical competition. However, competitor fixations were dramatically reduced when prior sentence information was incompatible with the competitor (e.g., Marie va nourrir... [Marie will feed...]). In contrast, interlingual competition from English did not vary according to participants' rated proficiency in French, even though proficiency reliably predicted other aspects of processing behavior, suggesting higher proficiency in the active language does not provide a significant independent source of control over interlingual competition. The results provide new insights into the nature of parallel language activation in naturalistic sentential contexts. |
Manuel G. Calvo; M. Dolores Castillo Semantic word priming in the absence of eye fixations: Relative contributions of overt and covert attention Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 1, pp. 51–56, 2009. @article{Calvo2009, In the present study, we investigated the role of covert and overt attention in word identification. In repetition and semantic priming paradigms, prime words were followed by a probe for lexical decision. To make the primes available only to covert attention, we presented them for 150 msec, parafoveally (2.2 degrees away from fixation), and under gaze-contingent foveal masking. To make the primes available to overt attention, we presented them for 150 msec, at fixation, with no masking. Results showed both repetition and semantic priming in the absence of eye fixations on the primes: There was facilitation for identical and semantically related probe words, relative to an unrelated prime-probe condition. This revealed that both word form and meaning can be processed by covert attention alone. The pattern of relative contributions of covert (approximately 25%) and overt (approximately 75%) attention was similar for repetition and semantic priming. |
Manuel G. Calvo; Lauri Nummenmaa Lateralised covert attention in word identification Journal Article In: Laterality, vol. 14, no. 2, pp. 178–195, 2009. @article{Calvo2009a, The right visual field superiority in word recognition has been attributed to an attentional advantage by the left brain hemisphere. We investigated whether such advantage involves lateralised covert attention, in the absence of overt fixations on prime words. In a lexical decision task target words were preceded by an identical or an unrelated prime word. Eye movements were monitored. In Experiment 1 lateralised (to the left or right of fixation) prime words were parafoveally visible but foveally masked, thus allowing for covert attention but preventing overt attention. In Experiment 2 prime words were presented at fixation, thus allowing for both overt and covert attention. Results revealed positive priming in the absence of fixations on the primes when these were presented in the right visual field. The effects of covertly attended primes were nevertheless significantly reduced in comparison with those of overtly attended primes. It is concluded that word identification can be accomplished to a significant extent by lateralised covert attention alone, with right visual field advantage. |
Manuel G. Calvo; Lauri Nummenmaa Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 9, no. 4, pp. 398–411, 2009. @article{Calvo2009b, Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition. |
Karen L. Campbell; Naseem Al-Aidroos; Jay Pratt; Lynn Hasher Repelling the young and attracting the old: Examining age-related differences in saccade trajectory deviations Journal Article In: Psychology and Aging, vol. 24, no. 1, pp. 163–168, 2009. @article{Campbell2009a, In the present study, the authors examined age-related differences in saccade curvature as older and younger adults looked to an X target that appeared concurrently with an O distractor. They used a fixation gap procedure to introduce variance into the saccadic latencies of both groups. Consistent with earlier findings, younger adults' early onset saccades curved toward the distractor (as the distractor competed with the target for response selection), while late-onset saccades curved away from the distractor (as the distractor location became inhibited over time). In contrast, older adults' saccades gradually decreased in curvature toward the distractor, but at no point along the latency continuum did they show deviations away. These results suggest that while the local inhibitory mechanisms responsible for decreases in curvature toward distractors may be preserved with age, aging may lead to a selective decline in the frontal inhibitory mechanisms responsible for deviations away from distractors. |
Karen L. Campbell; Jennifer D. Ryan The effects of practice and external support on older adults' control of reflexive eye movements Journal Article In: Aging, Neuropsychology, and Cognition, vol. 16, no. 6, pp. 745–763, 2009. @article{Campbell2009, The present study examined whether external support and practice could reduce age differences in oculomotor control. Participants were to avoid fixating an abrupt onset and on some trials, were provided with a predictive cue regarding the onset location or identity. Older adults demonstrated more capture than younger adults, but both groups improved with practice. Whereas the older group benefited from a location preview (Experiment 1), neither group showed less capture when given a preview of the onset object itself (Experiment 2), suggesting that location-based inhibition, but not object-based inhibition, was sufficient to support oculomotor control within this paradigm. To test the generalizability of these skills, displays in a final block were manipulated such that the onset could appear in a different location or be a different object altogether. Viewing patterns were similar for changed vs. unchanged displays, suggesting that participants' practice-related gains could withstand a change in the task materials. |
P. Cardoso-Leite; Andrei Gorea Comparison of perceptual and motor decisions via confidence judgments and saccade curvature Journal Article In: Journal of Neurophysiology, vol. 101, no. 6, pp. 2822–2836, 2009. @article{CardosoLeite2009, This study investigated the effects on perceptual and motor decisions of low-contrast distractors, presented 5 degrees on the left and/or the right of the fixation point. Perceptual decisions were assessed with a yes/no (distractor) detection task. Motor decisions were assessed via these distractors' effects on the trajectory of an impending saccade to a distinct imperative stimulus, presented 10 degrees above fixation 50 ms after the distractor(s). Saccade curvature models postulate that distractors activate loci on a motor map that evoke reflexive saccades and that the distractor evoked activity is inhibited to prevent reflexive orienting to the cost of causing a saccade curvature away from the distractor. Depending on whether or not each of these processes depends on perceptual detection, one can predict the relationships between saccades' curvature and perceptual responses (classified as correct rejections, misses, false alarms, and hits). The results show that saccades curve away from distractors only when observers report them to be present. Furthermore, saccade deviation is correlated (on a trial-by-trial basis) with the inferred internal response associated with the perceptual report: the stronger the distractor-evoked perceptual response, the more saccades deviate away from the distractor. Also in contrast with a supersensitive motor system, perceptual sensitivity is systematically higher than the motor sensitivity derived from the distributions of the saccades' curvatures. Finally, when both distractors are present (and straight saccades are expected), the sign of saccades' curvature is correlated with observers' perceptual bias/criterion. Overall the results point to a strong perceptual-motor association. |
Gustav Kuhn; Alan Kingstone Look away! Eyes and arrows engage oculomotor responses automatically Journal Article In: Attention, Perception, & Psychophysics, vol. 71, no. 2, pp. 314–327, 2009. @article{Kuhn2009, The present study investigates how people's voluntary saccades are influenced by where another person is looking, even when this is counterpredictive of the intended saccade direction. The color of a fixation point instructed participants to make saccades either to the left or right. These saccade directions were either congruent or incongruent with the eye gaze of a centrally presented schematic face. Participants were asked to ignore the eyes, which were congruent only 20% of the time. At short gaze–fixation-cue stimulus onset asynchronies (SOAs; 0 and 100 msec), participants made more directional errors on incongruent than on congruent trials. At a longer SOA (900 msec), the pattern tended to reverse. We demonstrate that a perceived eye gaze results in an automatic saccade following the gaze and that the gaze cue cannot be ignored, even when attending to it is detrimental to the task. Similar results were found for centrally presented arrow cues, suggesting that this interference is not unique to gazes. |
Gustav Kuhn; Benjamin W. Tatler; Geoff G. Cole You look where I look! Effect of gaze cues on overt and covert attention in misdirection Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 925–944, 2009. @article{Kuhn2009a, We designed a magic trick in which misdirection was used to orchestrate observers' attention in order to prevent them from detecting the to-be-concealed event. By experimentally manipulating the magician's gaze direction we investigated the role that gaze cues have in attentional orienting, independently of any low level features. Participants were significantly less likely to detect the to-be-concealed event if the misdirection was supported by the magician's gaze, thus demonstrating that the gaze plays an important role in orienting people's attention. Moreover, participants spent less time looking at the critical hand when the magician's gaze was used to misdirect their attention away from the hand. Overall, the magician's face, and in particular the eyes, accounted for a large proportion of the fixations. The eyes were popular when the magician was looking towards the observer; once he looked towards the actions and objects being manipulated, participants typically fixated the gazed-at areas. Using a highly naturalistic paradigm with a dynamic display, we demonstrate gaze following that is independent of the low level features of the scene. |
Anil Kumar; Nagini Sarvananthan; Frank A. Proudlock; Mervyn G. Thomas; Eryl O. Roberts; Irene Gottlob Asperger syndrome associated with idiopathic infantile nystagmus-A report of 2 cases Journal Article In: Strabismus, vol. 17, no. 2, pp. 63–65, 2009. @article{Kumar2009, Asperger syndrome is a severe and chronic developmental disorder. It is closely associated with autism and is grouped under autism spectrum disorder (ASD). Various eye movement abnormalities in AS have been reported in the literature, such as increased errors and latencies on the antisaccadic task, implicating dysfunction of the prefrontal cortex, and impairment of pursuit, especially for targets presented in the right visual hemisphere, suggesting disturbance in the left extrastriate cortex. There are no reports in the literature of an association between idiopathic infantile nystagmus (IIN) and AS. We report 2 cases of Asperger syndrome associated with idiopathic infantile nystagmus. |
Anil Kumar; Shery Thomas; Rebecca J. McLean; Frank A. Proudlock; Eryl O. Roberts; Mike Boggild; Irene Gottlob Treatment of acquired periodic alternating nystagmus with memantine: A case report Journal Article In: Clinical Neuropharmacology, vol. 32, no. 2, pp. 109–110, 2009. @article{Kumar2009a, We report a case of acquired periodic alternating nystagmus associated with common variable immunodeficiency and cutaneous sarcoid. The patient was initially treated with baclofen with minimal subjective improvement. We found a significant improvement in the patient's symptoms and nystagmus intensity after treatment with memantine. |
Feng Yang Kuo; Chiung-Wen Hsu; Rong-Fuh Day An exploratory study of cognitive effort involved in decision under Framing-an application of the eye-tracking technology Journal Article In: Decision Support Systems, vol. 48, no. 1, pp. 81–91, 2009. @article{Kuo2009, The framing effect, proposed by Tversky and Kahneman [A. Tversky, D. Kahneman, The framing of decisions and the psychology of choice, Science 211 (4481) (1981) 453-458.], refers to the phenomenon that varying the presentations of the same problem can systematically affect the choice one makes. In this research we have reviewed the literature related to the framing effect and neurobiological studies of emotion. This review leads us to conceptualize that framing may induce emotion, which in turn impinges on the level of cognitive effort that subsequently shapes the framing effect. We then employ the eye-tracking technology to explore the differences in cognitive effort under both positive and negative framing conditions. Among the four experimental problems, disease and gambling problems are found to exhibit the framing effect, while the kittens' therapy and the plant problem do not. In analyzing the level of eye movement for the four problems, we find that cognitive effort asymmetry plays a critical role in the production of the framing effect. That is, for the two problems that display the framing effect, subjects expend more effort in the negative framing condition than they do in the positive, yet the framing effect persists, indicating that they cannot change their cognitive inertia despite this increase in cognitive effort. The finding has potential implications for the design of information presentation to facilitate decision making. |
Victor Kuperman; Robert Schreuder; Raymond Bertram; R. Harald Baayen Reading polymorphemic Dutch compounds: toward a multiple route model of lexical processing. Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 876–895, 2009. @article{Kuperman2009, This article reports an eye-tracking experiment with 2,500 polymorphemic Dutch compounds presented in isolation for visual lexical decision while readers' eye movements were registered. The authors found evidence that both full forms of compounds (dishwasher) and their constituent morphemes (e.g., dish, washer) and morphological families of constituents (sets of compounds with a shared constituent) played a role in compound processing. They observed simultaneous effects of compound frequency, left constituent frequency, and family size early (i.e., before the whole compound has been scanned) and also observed effects of right constituent frequency and family size that emerged after the compound frequency effect. The temporal order of these and other observed effects goes against assumptions of many models of lexical processing. The authors propose specifications for a new multiple-route model of polymorphemic compound processing that is based on time-locked, parallel, and interactive use of all morphological cues as soon as they become even partly available to the visual uptake system. |
Antonella C. Kis; Vaughan W. A. Singh; Matthias Niemeier Short- and long-term plasticity of eye position information: Examining perceptual, attentional, and motor influences on perisaccadic perception Journal Article In: Journal of Vision, vol. 9, no. 6, pp. 1–21, 2009. @article{Kis2009, Spatial vision requires information about eye position to account for eye movements. But integrating eye position information and information about objects in the world is imperfect and can lead to transient misperceptions around the time of saccadic eye movements most likely because the signals are prone to temporal errors making it difficult to tell when the retinas move relative to when retinal images move. To clarify where this uncertainty comes from, in four experiments we examined influences of eye posture, attentional cueing, and trial history on perisaccadic misperceptions. We found evidence for one longer-term modulation of perisaccadic shift that evolved over the time of the test session due to biased eye posture. Another, short-term influence on perisaccadic shift was related to eye posture during preceding trials or the direction of the preceding saccade. Both perceptual effects could not be explained with visual delays, influences of attention or changes in saccade metrics. Our data are consistent with the idea that perisaccadic shift is caused by neural representations of eye position or space that are plastic and that arise from non-motor, extraretinal mechanisms. This suggests a perceptual system that continuously calibrates itself in response to changes in oculomotor and muscle systems to reconstruct a stable percept of the world. |
Reinhold Kliegl; Martin Rolfs; Jochen Laubrock; Ralf Engbert Microsaccadic modulation of response times in spatial attention tasks Journal Article In: Psychological Research, vol. 73, no. 2, pp. 136–146, 2009. @article{Kliegl2009, Covert shifts of attention are usually reflected in RT differences between responses to valid and invalid cues in the Posner spatial attention task. Such inferences about covert shifts of attention do not control for microsaccades in the cue-target interval. We analyzed the effects of microsaccade orientation on RTs in four conditions, crossing peripheral visual and auditory cues with peripheral visual and auditory discrimination targets. Reaction time was generally faster on trials without microsaccades in the cue-target interval. If microsaccades occurred, the target-location congruency of the last microsaccade in the cue-target interval interacted in a complex way with cue validity. For valid visual cues, irrespective of whether the discrimination target was visual or auditory, target-congruent microsaccades delayed RT. For invalid cues, target-incongruent microsaccades facilitated RTs for visual target discrimination but delayed RT for auditory target discrimination. No reliable effects on RT were associated with auditory cues or with the first microsaccade in the cue-target interval. We discuss theoretical implications regarding the relation between spatial attention and oculomotor processes. |
Steffen Klingenhoefer; Frank Bremmer Perisaccadic localization of auditory stimuli Journal Article In: Experimental Brain Research, vol. 198, no. 2-3, pp. 411–423, 2009. @article{Klingenhoefer2009, Interaction with the outside world requires knowledge about where objects are with respect to one's own body. Such spatial information is represented in various topographic maps in different sensory systems. From a computational point of view, however, a single, modality-invariant map of the incoming sensory signals appears to be a more efficient strategy for spatial representations. If such a single supra-modal map existed and were used for perceptual purposes, localization characteristics should be similar across modalities. Previous studies had shown mislocalization of brief visual stimuli presented in the temporal vicinity of saccadic eye-movements. Here, we tested whether such mislocalizations could also be found for auditory stimuli. We presented brief noise bursts before, during, and after visually guided saccades. Indeed, we found localization errors for these auditory stimuli. The spatio-temporal pattern of this mislocalization, however, clearly differed from the one found for visual stimuli. The spatial error also depended on the exact type of eye-movement (visually guided vs. memory guided saccades). Finally, results obtained in fixational control paradigms under different conditions suggest that auditory localization can be strongly influenced by both static and dynamic visual stimuli. Visual localization on the other hand is not influenced by distracting visual stimuli but can be inaccurate in the temporal vicinity of eye-movements. Taken together, our results argue against a single, modality-independent spatial representation of sensory signals. |
Wilhelm Bernhard Kloke; Wolfgang Jaschinski; Stephanie Jainta Microsaccades under monocular viewing conditions Journal Article In: Journal of Eye Movement Research, vol. 3, no. 1, pp. 1–7, 2009. @article{Kloke2009, Among the eye movements during fixation, the function of the small saccades occurring quite commonly at fixation is still unclear. It has been reported that a substantial number of these microsaccades seem to occur in only one of the eyes. The aim of the present study is to investigate microsaccades in monocular stimulation conditions. Although this is an artificial test condition which does not occur in natural vision, this monocular presentation paradigm allows for a critical test of a presumptive monocular mechanism of saccade generation. Results in these conditions can be compared with the normal binocular stimulation mode. We checked the statistical properties of microsaccades under monocular stimulation conditions and found no indication of specific interactions for monocularly detected small saccades, which might be present if they were based on a monocular physiological activation mechanism. |
Tomas Knapen; Jan Brascamp; Wendy J. Adams; Erich W. Graf The spatial scale of perceptual memory in ambiguous figure perception Journal Article In: Journal of Vision, vol. 9, no. 13, pp. 1–12, 2009. @article{Knapen2009, Ambiguous visual stimuli highlight the constructive nature of vision: perception alternates between two plausible interpretations of unchanging input. However, when a previously viewed ambiguous stimulus reappears, its earlier perception almost entirely determines the new interpretation; memory disambiguates the input. Here, we investigate the spatial properties of this perceptual memory, taking into account strong anisotropies in percept preference across the visual field. Countering previous findings, we show that perceptual memory is not confined to the location in which it was instilled. Rather, it spreads to noncontiguous regions of the visual field, falling off at larger distances. Furthermore, this spread of perceptual memory takes place in a frame of reference that is tied to the surface of the retina. These results place the neural locus of perceptual memory in retinotopically organized sensory cortical areas, with implications for the wider function of perceptual memory in facilitating stable vision in natural, dynamic environments. |
Tomas Knapen; Martin Rolfs; Patrick Cavanagh The reference frame of the motion aftereffect is retinotopic Journal Article In: Journal of Vision, vol. 9, no. 5, pp. 1–6, 2009. @article{Knapen2009a, Although eye-, head- and body-movements can produce large-scale translations of the visual input on the retina, perception is notable for its spatiotemporal continuity. The visual system might achieve this by the creation of a detailed map in world coordinates–a spatiotopic representation. We tested the coordinate system of the motion aftereffect by adapting observers to translational motion and then tested (1) at the same retinal and spatial location (full aftereffect condition), (2) at the same retinal location, but at a different spatial location (retinotopic condition), (3) at the same spatial, but at a different retinal location (spatiotopic condition), or (4) at a different spatial and retinal location (general transfer condition). We used large stimuli moving at high speed to maximize the likelihood of motion integration across space. In a second experiment, we added a contrast-decrement detection task to the motion stimulus to ensure attention was directed at the adapting location. Strong motion aftereffects were found when observers were tested in the full and retinotopic aftereffect conditions. We also found a smaller aftereffect at the spatiotopic location but it did not differ from that at the location that was neither spatiotopic nor retinotopic. This pattern of results did not change when attention was explicitly directed at the adapting stimulus. We conclude that motion adaptation took place at retinotopic levels of visual cortex and that no spatiotopic interaction of motion adaptation and test occurred across saccades. |
Christopher M. Knapp; Irene Gottlob; Rebecca J. McLean; Suzanne Rafelt; Frank A. Proudlock Effect of distance upon horizontal and vertical look and stare OKN Journal Article In: Journal of Vision, vol. 9, no. 12, pp. 1–9, 2009. @article{Knapp2009, Previous reports suggest that distance influences horizontal stare OKN gains; however, the effect of distance on vertical OKN and look OKN is unknown. Horizontal and vertical look and stare OKN gains were recorded in 16 healthy volunteers (velocity 38.4 degrees/s) at three distances (0.3 m, 1 m, and 2.5 m) and two different stimulus sizes. Asymmetry of responses and correlation of gains in different directions were compared. Measurements at near were compared with and without glasses. Distance did not significantly affect horizontal look and stare OKN or vertical look OKN; however, downward stare OKN gains were reduced at greater distances (p = 0.002). Mean downward stare OKN gains recorded in each individual were strongly correlated to leftward and rightward gains but not upward gains. In contrast, upward OKN gains were not correlated to gains in leftward, rightward, or downward directions. Downward stare OKN responses are significantly sensitive to the effects of distance, whereas stare OKN in other directions and look OKN responses in all directions are not. Individual mean downward stare OKN gains are more closely related to horizontal responses than to upward responses. This suggests that the downward OKN system is more functionally related to the horizontal system than to the upward OKN system. |
Pia Knoeferle; Matthew W. Crocker Constituent order and semantic parallelism in online comprehension: Eye-tracking evidence from German Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 12, pp. 2338–2371, 2009. @article{Knoeferle2009, Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board – both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together our findings provide evidence for an across-the-board account of parallelism for processing and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing. |
Peter J. Kohler; Gideon P. Caplovitz; Peter Ulric Tse The whole moves less than the spin of its parts Journal Article In: Attention, Perception, & Psychophysics, vol. 71, no. 4, pp. 675–679, 2009. @article{Kohler2009, When individually moving elements in the visual scene are perceptually grouped together into a coherently moving object, they can appear to slow down. In the present article, we show that the perceived speed of a particular global-motion percept is not dictated completely by the speed of the local moving elements. We investigated a stimulus that leads to bistable percepts, in which local and global motion may be perceived in an alternating fashion. Four rotating dot pairs, when arranged into a square-like configuration, may be perceived either locally, as independently rotating dot pairs, or globally, as two large squares translating along overlapping circular trajectories. Using a modified version of this stimulus, we found that the perceptually grouped squares appeared to move more slowly than the locally perceived rotating dot pairs, suggesting that perceived motion magnitude is computed following a global analysis of form. Supplemental demos related to this article can be downloaded from app.psychonomic-journals.org/content/supplemental. |
Dirk Kerzel; Sabine Born; David Souto Smooth pursuit eye movements and perception share target selection, but only some central resources Journal Article In: Behavioural Brain Research, vol. 201, no. 1, pp. 66–73, 2009. @article{Kerzel2009, Smooth pursuit eye movements have been linked to perception by a common attentional mechanism. We investigated whether perceptual performance was traded for smooth pursuit performance. While tracking a red target cross, observers had to discriminate the orientation of a flashed peripheral grating. We manipulated the priority given to the two tasks. Pursuit gain changed according to observers' effort to pursue the target, but perceptual discrimination of the peripheral flash was not affected by these changes, suggesting that smooth pursuit does not use the same resources as perception. Complete resource sharing may be confined to situations involving multiple moving objects. Next, we added a second perceptual task on the foveal pursuit target. Foveal discrimination performance was traded for peripheral discrimination performance and pursuit gain followed the perceptual priorities. Thus, smooth pursuit gain is affected by which target has been selected for enhanced perceptual processing, but that does not imply shared perceptual resources. |
F. A. Khawaja; James M. G. Tsui; Christopher C. Pack Pattern motion selectivity of spiking outputs and local field potentials in macaque visual cortex Journal Article In: Journal of Neuroscience, vol. 29, no. 43, pp. 13702–13709, 2009. @article{Khawaja2009, The dorsal pathway of the primate visual cortex is involved in the processing of motion signals that are useful for perception and behavior. Along this pathway, motion information is first measured by the primary visual cortex (V1), which sends specialized projections to extrastriate regions such as the middle temporal area (MT). Previous work with plaid stimuli has shown that most V1 neurons respond to the individual components of moving stimuli, whereas some MT neurons are capable of estimating the global motion of the pattern. In this work, we show that the majority of neurons in the medial superior temporal area (MST), which receives input from MT, have this pattern-selective property. Interestingly, the local field potentials (LFPs) measured simultaneously with the spikes often exhibit properties similar to that of the presumptive feedforward input to each area: in the high-gamma frequency band, the LFPs in MST are as component selective as the spiking outputs of MT, and MT LFPs have plaid responses that are similar to the spiking outputs of V1. In the lower LFP frequency bands (beta and low gamma), component selectivity is very common, and pattern selectivity is almost entirely absent in both MT and MST. Together, these results suggest a surprisingly strong link between the sensory tuning of cortical LFPs and afferent inputs, with important implications for the interpretation of imaging studies and for models of cortical function. |
Wolfe Kienzle; Matthias O. Franz; Bernhard Scholkopf; Felix A. Wichmann Center-surround patterns emerge as optimal predictors for human saccade targets Journal Article In: Journal of Vision, vol. 9, no. 5, pp. 1–15, 2009. @article{Kienzle2009, The human visual system is foveated, that is, outside the central visual field resolution and acuity drop rapidly. Nonetheless much of a visual scene is perceived after only a few saccadic eye movements, suggesting an effective strategy for selecting saccade targets. It has been known for some time that local image structure at saccade targets influences the selection process. However, the question of what the most relevant visual features are is still under debate. Here we show that center-surround patterns emerge as the optimal solution for predicting saccade targets from their local image structure. The resulting model, a one-layer feed-forward network, is surprisingly simple compared to previously suggested models which assume much more complex computations such as multi-scale processing and multiple feature channels. Nevertheless, our model is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously. |
Franziska Kretzschmar; Ina Bornkessel-Schlesewsky; Matthias Schlesewsky Parafoveal versus foveal N400s dissociate spreading activation from contextual fit Journal Article In: NeuroReport, vol. 20, no. 18, pp. 1613–1618, 2009. @article{Kretzschmar2009, Using concurrent electroencephalogram and eye movement measures to track natural reading, this study shows that N400 effects reflecting predictability are dissociable from those owing to spreading activation. In comparing predicted sentence endings with related and unrelated unpredicted endings in antonym constructions ('the opposite of black is white/yellow/nice'), fixation-related potentials at the critical word revealed a predictability-based N400 effect (unpredicted vs. predicted words). By contrast, event-related potentials time locked to the last fixation before the critical word showed an N400 only for the nonrelated unpredicted condition (nice). This effect is attributed to a parafoveal mismatch between the critical word and preactivated lexical features (i.e. features of the predicted word and its associates). In addition to providing the first demonstration of a parafoveally induced N400 effect, our results support the view that the N400 is best viewed as a component family. |
Gregory D. Keating Sensitivity to violation of gender agreement in native and nonnative Spanish Journal Article In: Language Learning, vol. 59, no. 3, pp. 503–535, 2009. @article{Keating2009, This article reports the results of an eye-tracking experiment that investigated the effects of structural distance on readers' sensitivity to violations of Spanish gender agreement during online sentence comprehension. The study tracked the eye movements of native Spanish speakers and English-speaking learners of Spanish as they read sentences that contained nouns modified by postnominal adjectives located in three syntactic domains: (a) in the DP, (b) in the VP, or (c) in a subordinate clause. In half of the sentences in each condition, adjectives agreed with the noun in gender, and in half, they did not. The results indicate that gender agreement is acquirable in adulthood, contra the failed functional features hypothesis, and that the distance that separates nouns and adjectives affects the detection of gender anomalies in the second language. The findings support Clahsen and Felser's (2006a) shallow structure hypothesis, as it pertains to morphological processing. |
Brandon Keehn; Laurie A. Brenner; Aurora I. Ramos; Alan J. Lincoln; Sandra P. Marshall; Ralph-Axel Muller Brief report: Eye-movement patterns during an embedded figures test in children with ASD Journal Article In: Journal of Autism and Developmental Disorders, vol. 39, no. 2, pp. 383–387, 2009. @article{Keehn2009, The present study examined fixation frequency and duration during an Embedded Figures Test (EFT) in an effort to better understand the attentional and perceptual processes by which individuals with autism spectrum disorder (ASD) achieve accelerated EFT performance. In particular, we aimed to elucidate differences in the patterns of eye-movement in ASD and typically developing (TD) children, thus providing evidence relevant to the competing theories of weak central coherence (WCC) and enhanced perceptual functioning. Consistent with prior EFT studies, we found accelerated response time (RT) in children with ASD. No group differences were seen for fixation frequency, but the ASD group made significantly shorter fixations compared to the TD group. Eye-movement results indicate that RT advantage in ASD is related to both WCC and enhanced perceptual functioning. |
Georgiana Juravle; Heiner Deubel Action preparation enhances the processing of tactile targets Journal Article In: Experimental Brain Research, vol. 198, no. 2-3, pp. 301–311, 2009. @article{Juravle2009, We present two experiments in which we investigated whether tactile attention is modulated by action preparation. In Experiment 1, participants prepared a saccade toward either the left or right index finger, depending on the pitch of a non-predictive auditory cue. In Experiment 2, participants prepared to lift the left or right index finger in response to the auditory cue. In half of the trials in both experiments, a suprathreshold vibratory stimulus was presented with equal probability to either finger, to which the participants made a speeded foot response. The results showed facilitation in the processing of targets delivered at the goal location of the prepared movement (Experiment 1), as well as at the effector of the prepared movement (Experiment 2). These results are discussed within the framework of theories on motor preparation and spatial attention. |
Roger Kalla; Neil G. Muggleton; Alan Cowey; Vincent Walsh Human dorsolateral prefrontal cortex is involved in visual search for conjunctions but not features: A theta TMS study Journal Article In: Cortex, vol. 45, no. 9, pp. 1085–1090, 2009. @article{Kalla2009, Functional neuroimaging studies have shown that the detection of a target defined by more than one feature (for example, a conjunction of colour and orientation) amongst distractors is associated with the activation of a network of brain areas. Dorsolateral prefrontal cortex (DLPFC), along with areas such as the frontal eye fields (FEF) and posterior parietal cortex (PPC), is a component of this network. While transcranial magnetic stimulation (TMS) had shown that both FEF and PPC are necessary for, and not just correlated with, successful conjunction search, this is not the case for DLPFC. To test the hypothesis that this area is also necessary for efficient conjunction search, TMS was applied over DLPFC and the effects on conjunction and feature (in this case colour) search performance compared with those when TMS was delivered over area MT/V5 and a vertex control stimulation condition. DLPFC TMS impaired performance on the conjunction search task but was without effect on feature search, similar to findings when TMS is delivered over PPC or FEF. Vertex TMS had no effects whereas MT/V5 TMS significantly improved performance with a time course that may indicate that this was due to modulation of V4 activity. These findings illustrate that, like FEF and PPC, DLPFC is necessary for fully effective conjunction visual search performance. |
Andre Kaminiarz; K. Konigs; Frank Bremmer The main sequence of human optokinetic afternystagmus (OKAN) Journal Article In: Journal of Neurophysiology, vol. 101, no. 6, pp. 2889–2897, 2009. @article{Kaminiarz2009, Different types of fast eye movements, including saccades and fast phases of optokinetic nystagmus (OKN) and optokinetic afternystagmus (OKAN), are coded by only partially overlapping neural networks. This is a likely cause for the differences that have been reported for the dynamic parameters of fast eye movements. The dependence of two of these parameters, peak velocity and duration, on saccadic amplitude has been termed the "main sequence." The main sequence of OKAN fast phases has not yet been analyzed. These eye movements are unique in that they are generated by purely subcortical control mechanisms and that they occur in complete darkness. In this study, we recorded fast phases of OKAN and OKN as well as visually guided and spontaneous saccades under identical background conditions because background characteristics have been reported to influence the main sequence of saccades. Our data clearly show that fast phases of OKAN and OKN differ with respect to their main sequence. OKAN fast phases were characterized by lower peak velocities and longer durations compared with those of OKN fast phases. Furthermore, we found that the main sequence of spontaneous saccades depends heavily on background characteristics, with saccades in darkness being slower and lasting longer. In contrast, the main sequence of visually guided saccades depended on background characteristics only very slightly. This implies that the existence of a visual saccade target largely cancels out the effect of background luminance. Our data underline the critical role of environmental conditions (light vs. darkness), behavioral tasks (e.g., spontaneous vs. visually guided), and the underlying neural networks for the exact spatiotemporal characteristics of fast eye movements. |
Andre Kaminiarz; K. Konigs; Frank Bremmer Task influences on the dynamic properties of fast eye movements Journal Article In: Journal of Vision, vol. 9, no. 13, pp. 1–11, 2009. @article{Kaminiarz2009a, It is widely debated whether fast phases of the reflexive optokinetic nystagmus (OKN) share properties with another class of fast eye movements, visually guided saccades. Conclusions drawn from previous studies were complicated by the fact that a subject's task influences the exact type of OKN: stare vs. look nystagmus. With our current study we set out to determine in the same subjects the exact dynamic properties (main sequence) of various forms of fast eye movements. We recorded fast phases of look and stare nystagmus as well as visually guided saccades. Our data clearly show that fast phases of look and stare nystagmus differ with respect to their main sequence. Fast phases of stare nystagmus were characterized by their lower peak velocities and longer durations as compared to fast phases of look nystagmus. Furthermore we found no differences between fast phases of stare nystagmus evoked with limited and unlimited dot lifetimes. Visually guided saccades were on the same main sequence as fast phases of look nystagmus, while they had higher peak velocities and shorter durations than fast phases of stare nystagmus. Our data underline the critical role of behavioral tasks (e.g., reflexive vs. intentional) for the exact spatiotemporal characteristics of fast eye movements. |
Min Jeong Kang; Ming Hsu; Ian M. Krajbich; George Loewenstein; Samuel M. McClure; Joseph Tao-yi Wang; Colin F. Camerer The wick in the candle of learning: Epistemic curiosity activates reward circuitry and enhances memory Journal Article In: Psychological Science, vol. 20, no. 8, pp. 963–974, 2009. @article{Kang2009, Curiosity has been described as a desire for learning and knowledge, but its underlying mechanisms are not well understood. We scanned subjects with functional magnetic resonance imaging while they read trivia questions. The level of curiosity when reading questions was correlated with activity in caudate regions previously suggested to be involved in anticipated reward. This finding led to a behavioral study, which showed that subjects spent more scarce resources (either limited tokens or waiting time) to find out answers when they were more curious. The functional imaging also showed that curiosity increased activity in memory areas when subjects guessed incorrectly, which suggests that curiosity may enhance memory for surprising new information. This prediction about memory enhancement was confirmed in a behavioral study: Higher curiosity in an initial session was correlated with better recall of surprising answers 1 to 2 weeks later. |
Timo Järvilehto; Veli-Matti Nurkkala; Kyösti Koskela The role of anticipation in reading Journal Article In: Pragmatics & Cognition, vol. 17, no. 3, pp. 509–526, 2009. @article{Jaervilehto2009, Learning in educational settings most often emphasizes declarative and procedural knowledge. Studies of expertise, however, point to other, equally important components of learning, especially improvements produced by experience in the extraction of information: Perceptual learning. Here we describe research that combines principles of perceptual learning with computer technology to address persistent difficulties in mathematics learning. We report three experiments in which we developed and tested perceptual learning modules (PLMs) to address issues of structure extraction and fluency in relation to algebra and fractions. PLMs focus students' learning on recognizing and discriminating, or mapping key structures across different representations or transformations. Results showed significant and persisting learning gains for students using PLMs. PLM technology offers promise for addressing neglected components of learning: Pattern recognition, structural intuition, and fluency. Using PLMs as a complement to other modes of instruction may allow students to overcome chronic problems in learning. |
Rebecca L. Johnson The quiet clam is quite calm: Transposed-letter neighborhood effects on eye movements during reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 35, no. 4, pp. 943–969, 2009. @article{Johnson2009, In response time tasks, inhibitory neighborhood effects have been found for word pairs that differ in a transposition of two adjacent letters (e.g., clam/calm). Here, the author describes two eye-tracking experiments conducted to explore transposed-letter (TL) neighborhood effects within the context of normal silent reading. In Experiment 1, sentences contained a target word that either has a TL neighbor (e.g., angel, which has the TL neighbor angle) or does not (e.g., alien). In Experiment 2, the context was manipulated to examine whether semantic constraints attenuate neighborhood effects. Readers took longer to process words that have a TL neighbor than control words but only when either member of the TL pair was likely. Furthermore, this interference effect occurred very late in processing and was not affected by relative word frequency. These interference effects can be explained either by the spreading of activation from the target word to its TL neighbor or by the misidentification of target words for their TL neighbors. Implications for models of orthographic input coding and models of eye-movement control are discussed. |
Ming Yan; Eike M. Richter; Hua Shu; Reinhold Kliegl Readers of Chinese extract semantic information from parafoveal words Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 3, pp. 561–566, 2009. @article{Yan2009a, Evidence for semantic preview benefit (PB) from parafoveal words has been elusive for reading alphabetic scripts such as English. Here we report semantic PB for noncompound characters in Chinese reading with the boundary paradigm. In addition, PBs for orthographic relatedness and, as a numeric trend, for phonological relatedness were obtained. Results are in agreement with other research suggesting that the Chinese writing system is based on a closer association between graphic form and meaning than is alphabetic script. We discuss implications for notions of serial attention shifts and parallel distributed processing of words during reading. |
Hyejin Yang; Xin Chen; Gregory J. Zelinsky A new look at novelty effects: Guiding search away from old distractors Journal Article In: Attention, Perception, & Psychophysics, vol. 71, no. 3, pp. 554–564, 2009. @article{Yang2009c, We examined whether search is guided to novel distractors. In Experiment 1, subjects searched for a target among one new and a variable number of old distractors. Search displays in Experiment 2 consisted of an equal number of new, old, and familiar distractors (the latter repeated occasionally). We found that eye movements were preferentially directed to a new distractor on target-absent trials and that subjects tended to immediately fixate a new distractor after leaving the target on target-present trials. In both cases, first fixations on old distractors were consistently less frequent than could be explained by chance. We interpret these patterns as evidence for negative guidance: Subjects learn the visual features associated with the set of old distractors and then guide their search away from these features, ultimately resulting in the preferential fixation of novel distractors. |
Hyejin Yang; Gregory J. Zelinsky Visual search is guided to categorically defined targets Journal Article In: Vision Research, vol. 49, no. 16, pp. 2095–2103, 2009. @article{Yang2009b, To determine whether categorical search is guided we had subjects search for teddy bear targets either with a target preview (specific condition) or without (categorical condition). Distractors were random realistic objects. Although subjects searched longer and made more eye movements in the categorical condition, targets were fixated far sooner than was expected by chance. By varying target repetition we also determined that this categorical guidance was not due to guidance from specific previously viewed targets. We conclude that search is guided to categorically-defined targets, and that this guidance uses a categorical model composed of features common to the target class. |
Jinmian Yang; Suiping Wang; Hsuan-Chih Chen; Keith Rayner The time course of semantic and syntactic processing in Chinese sentence comprehension: Evidence from eye movements Journal Article In: Memory & Cognition, vol. 37, no. 8, pp. 1164–1176, 2009. @article{Yang2009, In the present study, we examined the time course of semantic and syntactic processing when Chinese is read. Readers' eye movements were monitored, and the relation between a single-character critical word and the sentence context was manipulated such that three kinds of sentences were developed: (1) congruent, (2) those with a semantic violation, and (3) those with both a semantic and a syntactic violation. The eye movement data showed that the first-pass reading times were significantly longer for the target region in the two violation conditions than in the congruent condition. Moreover, the semantic+syntactic violation caused more severe disruption than did the pure semantic violation, as reflected by longer first-pass reading times for the target region and by longer go-past times for the target region and posttarget region in the former than in the latter condition. These results suggest that the effects of, at least, a semantic violation can be detected immediately by Chinese readers and that the processing of syntactic and semantic information is distinct in both first-pass and second-pass reading. |
Jinmian Yang; Suiping Wang; Yimin Xu; Keith Rayner Do Chinese readers obtain preview benefit from word n + 2? Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 4, pp. 1192–1204, 2009. @article{Yang2009d, The boundary paradigm (K. Rayner, 1975) was used to determine the extent to which Chinese readers obtain information from the right of fixation during reading. As characters are the basic visual unit in written Chinese, they were used as targets in Experiment 1 to examine whether readers obtain preview information from character n + 1 and character n + 2. The results from Experiment 1 suggest they do. In Experiment 2, 2-character target words were used to determine whether readers obtain preview information from word n + 2 as well as word n + 1. Robust preview effects were obtained for word n + 1. There was also evidence from gaze duration (but not first fixation duration), suggesting preview effects for word n + 2. Moreover, there was evidence for parafoveal-on-foveal effects in Chinese reading in both experiments. Implications of these results for models of eye movement control are discussed. |
Shun-Nan Yang Effects of gaze-contingent text changes on fixation duration in reading Journal Article In: Vision Research, vol. 49, no. 23, pp. 2843–2855, 2009. @article{Yang2009a, In reading, a text change during an eye fixation can increase the duration of that fixation. This increased fixation duration could result from disrupted text processing or from perceiving the brief visual change (a visual transient). The present study was designed to test these two hypotheses. Subjects read multiple-line text while their eye movements were monitored. During randomly selected saccades, the text was masked with an alternate page, which was then replaced with a second alternate page, 75 or 150 ms after the onset of the subsequent (critical) fixation. The effects of the initial masking page, the text change during fixation, and the content of the second page on the likelihood of saccade initiation during the critical fixation were measured. Results showed that a text change during fixation resulted in similar bilateral (forward and regressive) saccade suppression regardless of the nature of the first and second pages, or the timing of the text change. This result likely reflects the effect of a low-level visual transient caused by the text change. In addition, there was a delay effect reflecting the content of the initial masking page: how the suppression dissipated after the text change depended on the nature of the first and second pages. These effects are attributed to high-level text processing. The present results suggest that in reading, both visual and cognitive processes can disrupt saccade initiation. The combination of processing difficulty and visually induced saccade suppression is responsible for the change in fixation duration when a gaze-contingent display change is utilized. Therefore, it is prudent to consider both factors when interpreting the effect of text change on eye movement patterns. |
Shun-nan Yang; Yu-chi Tai; Hannu Laukkanen; James Sheedy Effects of ocular transverse chromatic aberration on near foveal letter recognition Journal Article In: Vision Research, vol. 49, no. 23, pp. 2881–2890, 2009. @article{Yang2009e, Transverse chromatic aberration (TCA) smears the retinal images of peripheral stimuli. In reading, text information is extracted from both the fovea and the near fovea, where TCA magnitude is relatively small and variable. The present study investigated whether TCA significantly affects near foveal letter identification. Subjects were briefly presented with a string of five letters centered one degree of visual angle to the left or right of fixation. They indicated whether the middle letter was the same as a comparison letter subsequently presented. Letter strings were rendered with a reddish fringe on the left edge of each letter and a bluish fringe on the right edge, consistent with the TCA expected in the left periphery, or with the opposite fringe, consistent with the TCA expected in the right periphery. The effect of the color fringing on letter recognition was measured by comparing response accuracy for fringed and non-fringed stimuli. Effects of lateral interference were examined by manipulating inter-letter spacing and the similarity of neighboring letters. Results demonstrated significantly improved response accuracy when the color fringe was opposite to the expected TCA, but decreased accuracy when it was consistent with it. Narrower letter spacing exacerbated the effect of the color fringe, whereas letter similarity did not. Our results suggest that TCA significantly reduces the ability to recognize letters in the near fovea by impeding recognition of individual letters and by enhancing lateral interference between letters. |
Kiyomi Yatabe; Martin J. Pickering; Scott A. McDonald Lexical processing during saccades in text comprehension Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 1, pp. 62–66, 2009. @article{Yatabe2009, We asked whether people process words during saccades when reading sentences. Irwin (1998) demonstrated that such processing occurs when words are presented in isolation. In our experiment, participants read part of a sentence ending in a high- or low-frequency target word and then made a long (40 degrees) or short (10 degrees) saccade to the rest of the sentence. We found a frequency effect on the target word and the first word after the saccade, but the effect was greater for short than for long saccades. Readers therefore performed more lexical processing during long saccades than during short ones. Hence, lexical processing takes place during saccades in text comprehension. |
Eiling Yee; Eve Overton; Sharon L. Thompson-Schill Looking for meaning: Eye movements are sensitive to overlapping semantic features, not association Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 869–874, 2009. @article{Yee2009, Theories of semantic memory differ in the extent to which relationships among concepts are captured via associative or via semantic relatedness. We examined the contributions of these two factors, using a visual world paradigm in which participants selected the named object from a four-picture display. We controlled for semantic relatedness while manipulating associative strength by using the visual world paradigm's analogue to presenting asymmetrically associated pairs in either their forward or backward associative direction (e.g., ham-eggs vs. eggs-ham). Semantically related objects were preferentially fixated regardless of the direction of presentation (and the effect size was unchanged by presentation direction). However, when pairs were associated but not semantically related (e.g., iceberg-lettuce), associated objects were not preferentially fixated in either direction. These findings lend support to theories in which semantic memory is organized according to semantic relatedness (e.g., distributed models) and suggest that association by itself has little effect on this organization. |
Miao-Hsuan Yen; Ralph Radach; Ovid J. L. Tzeng; Daisy L. Hung; Jie-Li Tsai Early parafoveal processing in reading Chinese sentences Journal Article In: Acta Psychologica, vol. 131, no. 1, pp. 24–33, 2009. @article{Yen2009, The possibility that during Chinese reading information is extracted at the beginning of the current fixation was examined in this study. Twenty-four participants read for comprehension while their eye movements were being recorded. A pretarget-target two-character word pair was embedded in each sentence and target word visibility was manipulated in two time intervals (initial 140 ms or after 140 ms) during pretarget viewing. Substantial beginning- and end-of-fixation preview effects were observed together with beginning-of-fixation effects on the pretarget. Apparently parafoveal information at least at the character level can be extracted relatively early during ongoing fixations. Results are highly relevant for ongoing debates on spatially distributed linguistic processing and address fundamental questions about how the human mind solves the task of reading within the constraints of different writing systems. |
Peng Zhou; Liqun Gao Scope processing in Chinese Journal Article In: Journal of Psycholinguistic Research, vol. 38, no. 1, pp. 11–24, 2009. @article{Zhou2009, The standard view maintains that quantifier scope interpretation results from an interaction between different modules: the syntax, the semantics, and the pragmatics. Thus, by examining the mechanism of quantifier scope interpretation, we can gain insight into how these different modules interact with one another. To investigate this, two experiments, an offline judgment task and an eye-tracking experiment, were conducted on the interpretation of doubly quantified sentences in Chinese, like Mei-ge qiangdao dou qiang-le yi-ge yinhang (Every robber robbed a bank). According to the current literature, doubly quantified sentences in Chinese like the above are unambiguous: they can only be interpreted as "for every robber x, there is a bank y, such that x robbed y" (surface scope reading), contrary to their ambiguous English counterparts, which also allow the interpretation that "there is a bank y, such that for every robber x, x robbed y" (inverse scope reading). Specifically, three questions were examined: (i) What is the initial reading of doubly quantified sentences in Chinese? (ii) Can the inverse scope interpretation become available if appropriate contexts are provided? (iii) What processing time courses are engaged in quantifier scope interpretation? The results showed that (i) initially, the language processor computes the surface scope representation and the inverse scope representation in parallel; thus, doubly quantified sentences in Chinese are ambiguous; (ii) discourse information is not employed in the initial processing of relative scope but serves to evaluate the two representations during reanalysis; and (iii) the lexical information of verbs affects their scope-taking patterns. We suggest that these findings provide evidence for the Modular Model, one of the major contenders in the literature on sentence processing. |
Weilei Yi; Dana Ballard Recognizing behavior in hand-eye coordination patterns Journal Article In: International Journal of Humanoid Robotics, vol. 6, no. 3, pp. 337–359, 2009. @article{Yi2009, Modeling human behavior is important for the design of robots as well as human-computer interfaces that use humanoid avatars. Constructive models have been built, but they have not captured all of the detailed structure of human behavior such as the moment-to-moment deployment and coordination of hand, head and eye gaze used in complex tasks. We show how this data from human subjects performing a task can be used to program a dynamic Bayes network (DBN) which in turn can be used to recognize new performance instances. As a specific demonstration we show that the steps in a complex activity such as sandwich making can be recognized by a DBN in real time. |
Gregory J. Zelinsky; Joseph Schmidt An effect of referential scene constraint on search implies scene segmentation Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1004–1028, 2009. @article{Zelinsky2009, Subjects searched aerial images for a UFO target, which appeared hovering over one of five scene regions: Water, fields, foliage, roads, or buildings. Prior to search scene onset, subjects were either told the scene region where the target could be found (specified condition) or not (unspecified condition). Search times were faster and fewer eye movements were needed to acquire targets when the target region was specified. Subjects also distributed their fixations disproportionately in this region and tended to fixate the cued region sooner. We interpret these patterns as evidence for the use of referential scene constraints to partially confine search to a specified scene region. Importantly, this constraint cannot be due to learned associations between the scene and its regions, as these spatial relationships were unpredictable. These findings require the modification of existing theories of scene constraint to include segmentation processes that can rapidly bias search to cued regions. |
Eckart Zimmermann; Markus Lappe Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades Journal Article In: Journal of Neuroscience, vol. 29, no. 35, pp. 11055–11064, 2009. @article{Zimmermann2009, When we look around and register the location of visual objects, our oculomotor system continuously prepares targets for saccadic eye movements. The preparation of saccade targets may be directly involved in the perception of object location because modification of saccade amplitude by saccade adaptation leads to a distortion of the visual localization of briefly flashed spatial probes. Here, we investigated effects of adaptation on the localization of continuously visible objects. We compared adaptation-induced mislocalization of probes that were present for 20 ms during the saccade preparation period and of probes that were present for >1 s before saccade initiation. We studied the mislocalization of these probes for two different saccade types, reactive saccades to a suddenly appearing target and scanning saccades in the self-paced viewing of a stationary scene. Adaptation of reactive saccades induced mislocalization of flashed probes. Adaptation of scanning saccades induced in addition also mislocalization of stationary objects. The mislocalization occurred in the absence of visual landmarks and must therefore originate from the change in saccade motor parameters. After adaptation of one type of saccade, the saccade amplitude change and the mislocalization transferred only weakly to the other saccade type. Mislocalization of flashed and stationary probes thus followed the selectivity of saccade adaptation. Since the generation and adaptation of reactive and scanning saccades are known to involve partially different brain mechanisms, our results suggest that visual localization of objects in space is linked to saccade targeting at multiple sites in the brain. |
Jan Zwickel; Hermann J. Müller Eye movements as a means to evaluate and improve robots Journal Article In: International Journal of Social Robotics, vol. 1, no. 4, pp. 357–366, 2009. @article{Zwickel2009, With an increase in their capabilities, robots start to play a role in everyday settings. This necessitates a step from a robot-centered approach (i.e., teaching humans to adapt to robots) to a more human-centered approach (where robots integrate naturally into human activities). Achieving this will increase the effectiveness of robot usage (e.g., shortening the time required for learning), reduce errors, and increase user acceptance. Robotic camera control will play an important role in achieving a more natural and easier-to-interpret behavior, owing to the central importance of gaze in human communication. This study is intended to provide a first step towards improving camera control through a better understanding of human gaze behavior in social situations. To this end, we registered the eye movements of humans watching different types of movies. In all movies, the same two triangles moved around in a self-propelled fashion. Crucially, however, some of the movies elicited the attribution of mental states to the triangles, while others did not. This permitted us to directly distinguish eye movement patterns relating to the attribution of mental states in (perceived) social situations from the patterns in non-social situations. We argue that a better understanding of what characterizes human gaze patterns in social situations will help shape robotic behavior, make it more natural for humans to communicate with robots, and establish joint attention (to certain objects) between humans and robots. In addition, a better understanding of human gaze in social situations will provide a measure for evaluating whether robots are perceived as social agents rather than non-intentional machines. This could help decide which behaviors a robot should display in order to be perceived as a social interaction partner. |
Michael Rohs; Robert Schleicher; Johannes Schöning; Georg Essl; Anja Naumann; Antonio Krüger Impact of item density on the utility of visual context in magic lens interactions Journal Article In: Personal and Ubiquitous Computing, vol. 13, no. 8, pp. 633–646, 2009. @article{Rohs2009, This article reports on two user studies investigating the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore items on a map and look for a specific attribute. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. Hand motion patterns and eye movements were recorded. We found that visual context is most effective for sparsely distributed items and gets less helpful with increasing item density. User performance in the magic lens case is generally better than in the dynamic peephole case, but approaches the performance of the latter the more densely the items are spaced. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces, involving spatially tracked personal displays or combined personal and public displays, by suggesting when to use visual context. |
M. Carmen Romano; Marco Thiel; Jürgen Kurths; Konstantin Mergenthaler; Ralf Engbert Hypothesis test for synchronization: Twin surrogates revisited Journal Article In: Chaos, vol. 19, no. 1, pp. 1–14, 2009. @article{Romano2009, The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively, that there is only one center controlling both eyes. |
Jessica Rosenberg; Kathrin Pusch; Rainer Dietrich; Christian Cajochen The tick-tock of language: Is language processing sensitive to circadian rhythmicity and elevated sleep pressure? Journal Article In: Chronobiology International, vol. 26, no. 5, pp. 974–991, 2009. @article{Rosenberg2009, The master circadian pacemaker emits signals that trigger organ-specific oscillators and, therefore, constitutes a basic biological process that enables organisms to anticipate daily environmental changes by adjusting behavior, physiology, and gene regulation. Although circadian rhythms are well characterized on a physiological level, little is known about circadian modulations of higher cognitive functions. Thus, we investigated circadian repercussions on language performance at the level of minimal syntactic processing by means of German noun phrases in ten young healthy men under the unmasking conditions of a 40 h constant-routine protocol. Language performance for both congruent and incongruent noun phrases displayed a clear diurnal rhythm with a peak performance decrement during the biological night. The nadirs, however, differed such that the worst syntactic processing of incongruent noun phrases occurred 3 h earlier (07:00 h) than that of congruent noun phrases (10:00 h). Our results indicate that language performance displays an internally generated circadian rhythmicity with an optimal time for parsing language between 3 and 6 h after the habitual wake time, which usually corresponds to 10:00-13:00 h. These results may have important ramifications for establishing optimal times for shiftwork changes or testing linguistically impaired people. |
Keith Rayner; Tim J. Smith; George L. Malcolm; John M. Henderson Eye movements and visual encoding during scene perception Journal Article In: Psychological Science, vol. 20, no. 1, pp. 6–10, 2009. @article{Rayner2009, The amount of time viewers could process a scene during eye fixations was varied by a mask that appeared at a certain point in each eye fixation. The scene did not reappear until the viewer made an eye movement. The main finding in the studies was that in order to process a scene normally, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need only to view the words in the text for 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways. |
Bob Rehder; Robert M. Colner; Aaron B. Hoffman Feature inference learning and eyetracking Journal Article In: Journal of Memory and Language, vol. 60, no. 3, pp. 393–419, 2009. @article{Rehder2009, Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of diagnostic information. We tracked learners' eye movements and found in Experiment 1 that inference learners indeed fixated features that were unnecessary for inferring the missing feature, behavior consistent with acquiring the categories' internal structure. However, Experiments 3 and 4 showed that fixations were generally limited to features that needed to be predicted on future trials. We conclude that inference learning induces both supervised and unsupervised learning of category-to-feature associations rather than a general motivation to learn the internal structure of categories. |
Michael G. Reynolds; John D. Eastwood; Marita Partanen; Alexandra Frischen; Daniel Smilek Monitoring eye movements while searching for affective faces Journal Article In: Visual Cognition, vol. 17, no. 3, pp. 318–333, 2009. @article{Reynolds2009, A single experiment is reported in which we provide a novel analysis of eye movements during visual search to disentangle the contributions of unattended guidance and focal target processing to visual search performance. This technique is used to examine the controversial claim that unattended affective faces can guide attention during search. Results indicated that facial expression influences how efficiently the target was fixated for the first time as a function of set size. However, affective faces did not influence how efficiently the target was identified as a function of set size after it was first fixated. These findings suggest that, in the present context, facial expression can influence search before the target is attended and that the present measures are able to distinguish between the guidance of attention by targets and the processing of targets within the focus of attention. |
Paola Ricciardelli; Elena Betta; Sonia Pruner; Massimo Turatto Is there a direct link between gaze perception and joint attention behaviours? Effects of gaze contrast polarity on oculomotor behaviour Journal Article In: Experimental Brain Research, vol. 194, no. 3, pp. 347–357, 2009. @article{Ricciardelli2009, Previous studies have found that attention is oriented in the direction of other people's gaze suggesting that gaze perception is related to the mechanisms of joint attention. However, the role of the perception of gaze direction on joint attention has been challenged. We investigated the effects of disrupting gaze perception on the orienting of observers' attention, in particular, whether orienting to gaze direction is affected by the disruptive effect of negative contrast polarity on gaze perception. A dynamic distracting gaze was presented to observers performing an endogenous saccadic task. Gaze perception was manipulated by reversing the contrast polarity between the sclera and the iris. With positive display polarity, eye movement recordings showed shorter saccadic latencies when the direction of the instructed saccade matched the direction of the distracting gaze, and a substantial number of erroneous saccades towards the direction of the perceived gaze when the latter did not match the instruction. Crucially, such effects were not found when gaze contrast polarity was reversed and gaze perception was impaired. These results extend previous studies by demonstrating the existence of a direct link between joint attention and the perception of gaze direction, and show how orienting of attention to other people's gaze can be suppressed. |
Stephen V. Shepherd; Jeffrey T. Klein; Robert O. Deaner; Michael L. Platt Mirroring of attention by neurons in macaque parietal cortex Journal Article In: Proceedings of the National Academy of Sciences, vol. 106, no. 23, pp. 9489–9494, 2009. @article{Shepherd2009, Macaques, like humans, rapidly orient their attention in the direction other individuals are looking. Both cortical and subcortical pathways have been proposed as neural mediators of social gaze following, but neither pathway has been characterized electrophysiologically in behaving animals. To address this gap, we recorded the activity of single neurons in the lateral intraparietal area (LIP) of rhesus macaques to determine whether and how this area might contribute to gaze following. A subset of LIP neurons mirrored observed attention by firing both when the subject looked in the preferred direction of the neuron, and when observed monkeys looked in the preferred direction of the neuron, despite the irrelevance of the monkey images to the task. Importantly, the timing of these modulations matched the time course of gaze-following behavior. A second population of neurons was suppressed by social gaze cues, possibly subserving task demands by maintaining fixation on the observed face. These observations suggest that LIP contributes to sharing of observed attention and link mirror representations in parietal cortex to a well studied imitative behavior. |
Heather Sheridan; Eyal M. Reingold; Meredyth Daneman Using puns to study contextual influences on lexical ambiguity resolution: Evidence from eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 875–881, 2009. @article{Sheridan2009, Participants' eye movements were monitored while they read sentences containing biased homographs in either a single-meaning context condition that instantiated the subordinate meaning of the homograph without ruling out the dominant meaning (e.g., "The man with a toothache had a crown made by the best dentist in town") or a dual-meaning pun context condition that supported both the subordinate and dominant meanings (e.g., "The king with a toothache had a crown made by the best dentist in town"). In both of these conditions, the homographs were followed by disambiguating material that supported the subordinate meaning and ruled out the dominant meaning. Fixation times on the homograph were longer in the single-meaning condition than in the dual-meaning condition, whereas the reverse pattern was demonstrated for fixation times on the disambiguating region; these effects were observed as early as first-fixation duration. The findings strongly support the reordered access model of lexical ambiguity resolution. |
Joseph Schmidt; Gregory J. Zelinsky Search guidance is proportional to the categorical specificity of a target cue Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 10, pp. 1904–1914, 2009. @article{Schmidt2009, Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search. |
Franziska Schrammel; Sebastian Pannasch; Sven-Thomas Graupner; Andreas Mojzisch; Boris M. Velichkovsky Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience Journal Article In: Psychophysiology, vol. 46, no. 5, pp. 922–931, 2009. @article{Schrammel2009, The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Subjects' rapid facial reactions (RFRs) differentiated more clearly between the character's happy and angry expression in the condition of mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, thereby suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction. |
Clive R. Rosenthal; Emma E. Roche-Kelly; Masud Husain; Christopher Kennard Response-dependent contributions of human primary motor cortex and angular gyrus to manual and perceptual sequence learning Journal Article In: Journal of Neuroscience, vol. 29, no. 48, pp. 15115–15125, 2009. @article{Rosenthal2009, Motor sequence learning on the serial reaction time task involves the integration of response-, stimulus-, and effector-based information. Human primary motor cortex (M1) and the inferior parietal lobule (IPL) have been identified with supporting the learning of effector-dependent and -independent information, respectively. Current neurocognitive data are, however, exclusively based on learning complex sequence information via perceptual-motor responses. Here, we investigated the effects of continuous theta-burst transcranial magnetic stimulation (cTBS)-induced disruption of M1 and the angular gyrus (AG) of the IPL on learning a probabilistic sequence via sequential perceptual-motor responses (experiment 1) or covert orienting of visuospatial attention (experiment 2). Functional effects on manual sequence learning were evident during 75% of training trials in the cTBS M1 condition, whereas cTBS over the AG resulted in interference confined to a midpoint during the training phase. Posttraining direct (declarative) tests of sequence knowledge revealed that cTBS over M1 modulated the availability of newly acquired sequence knowledge, whereby sequence knowledge was implicit in the cTBS M1 condition but was available to conscious awareness in the cTBS AG and control conditions. In contrast, perceptual sequence learning was abolished in the perceptual cTBS AG condition, whereas learning was intact and available to conscious awareness in the cTBS M1 and control conditions. These results show that the right AG had a critical role in perceptual sequence learning, whereas M1 had a causal role in developing experience-dependent functional attributes relevant to conscious knowledge on manual but not perceptual sequence learning. |
T. Roth; Alexander N. Sokolov; A. Messias; P. Roth; M. Weller; Susanne Trauzettel-Klosinski Comparing explorative saccade and flicker training in hemianopia: A randomized controlled study Journal Article In: Neurology, vol. 72, pp. 324–331, 2009. @article{Roth2009, Objective: Patients with homonymous hemianopia are disabled on everyday exploratory activities. We examined whether explorative saccade training (EST), compared with flicker-stimulation training (FT), would selectively improve saccadic behavior on the patients' blind side and benefit performance on natural exploratory tasks. Methods: Twenty-eight hemianopic patients were randomly assigned to distinct groups performing for 6 weeks either EST (a digit-search task) or FT (blind-hemifield stimulation by flickering letters). Outcome variables (response times [RTs] during natural search, number of fixations during natural scene exploration, fixation stability, visual fields, and quality-of-life scores) were collected before, directly after, and 6 weeks after training. Results: EST yielded a reduced (post/pre, 47%) digit-search RT for the blind side. Natural search RT decreased (post/pre, 23%) on the blind side but not on the seeing side. After FT, both sides' RT remained unchanged. Only with EST did the number of fixations during natural scene exploration increase toward the blind side and decrease on the seeing side (follow-up/pre difference, 238%). Even with the target located on the seeing side, after EST more fixations occurred toward the blind side. The EST group showed decreased (post/pre, 43%) fixation stability and increased (post/pre, 482%) asymmetry of fixations toward the blind side. Visual field size remained constant after both treatments. EST patients reported improvements in the social domain. Conclusions: Explorative saccade training selectively improves saccadic behavior, natural search, and scene exploration on the blind side. Flicker-stimulation training does not improve saccadic behavior or visual fields. The findings show substantial benefits of compensatory exploration training, including subjective improvements in mastering daily-life activities, in a randomized controlled trial. |
Annie Roy-Charland; Jean Saint-Aubin; Michael A. Lawrence; Raymond M. Klein Solving the chicken-and-egg problem of letter detection and fixation duration in reading Journal Article In: Attention, Perception, & Psychophysics, vol. 71, no. 7, pp. 1553–1562, 2009. @article{RoyCharland2009, When asked to detect target letters while reading a text, participants miss more letters in frequent function words than in less frequent content words. According to the truncation assumption that characterizes most models of this effect, misses occur when word-processing time is shorter than letter-processing time. Fixation durations for detections and omissions were compared with fixation durations from a baseline condition when participants were searching for a target letter embedded in different words. Although, as predicted by truncation, fixation durations were longer for detections than for omissions, fixation durations for detections were also longer than those for the same words in the baseline condition, demonstrating that longer fixation durations when targets are detected are more likely to be due to demands associated with producing a detection response than to truncation. Also, contrary to predictions from the truncation assumption, the standard deviation of fixation durations for detections was larger than that from the baseline condition. |
Gary S. Rubin; Mary P. Feely The role of eye movements during reading in patients with age-related macular degeneration (AMD) Journal Article In: Neuro-Ophthalmology, vol. 33, no. 3, pp. 120–126, 2009. @article{Rubin2009, AMD patients often have particular difficulty reading, even when the text is magnified to compensate for reduced visual acuity. This study explores whether reading performance can be explained by eye movement factors. Forty patients with advanced AMD were tested with a high-speed video eye tracker to evaluate fixation stability and saccadic eye movements. Reading speed was measured for standardized texts viewed at the critical print size. Visual acuity and contrast sensitivity were unrelated to reading speed, but fixation stability, proportion of regressive saccades and size of forward saccades were all significantly associated with reading performance, accounting for 74% of the variance. The implications of these findings for low-vision training programmes are discussed. |
Jennifer D. Ryan; Christina Villate Building visual representations: The binding of relative spatial relations across time Journal Article In: Visual Cognition, vol. 17, no. 1-2, pp. 254–272, 2009. @article{Ryan2009, In this study, the construction of, and subsequent access to, representations regarding the relative spatial and temporal relations among sequentially presented objects was examined using eye movement monitoring. Participants were presented with a series of single objects. Subsequently, a test display revealed all three objects simultaneously and participants judged whether the relative relations were maintained. Eye movements revealed the binding of relations across study images; eye movements transitioned between the location of the presented object and the locations that were previously occupied by objects in prior study images. For the test displays, changes in the relative relations were accurately detected. Eye movements distinguished intact displays from those in which the relations had been altered. Order of fixations to objects in test images mimicked the temporal order in which objects had been studied, but disruption of temporal order was observed for manipulated images. The present findings suggest that memory representations regarding the visual world include information about the relative spatial and temporal relations among objects. Eye movements may be the conduit by which information is integrated into a lasting representation, and by which current information is compared to stored representations. |
Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner Object recognition during foveating eye movements Journal Article In: Vision Research, vol. 49, no. 18, pp. 2241–2253, 2009. @article{Schuetz2009, We studied how saccadic and smooth pursuit eye movements affect the recognition of briefly presented letters appearing within the eye movement target. First we compared the recognition performance during steady-state pursuit and during fixation. Single letters were presented for seven different durations ranging from 10 to 400 ms and four contrast levels ranging from 5% to 40%. For both types of eye movements the recognition rates increased with duration and contrast, but they were on average 11% lower during pursuit. In daily life humans use a combination of saccadic and smooth pursuit eye movements to foveate a peripheral moving object. To investigate this more natural situation, we presented a peripheral target that was either stationary or moving horizontally, above or below the fixation spot. Participants were asked to saccade to the target and to keep it foveated. The letters were presented at different times relative to the first target-directed saccade. As would be expected from retinal masking and motion blur during saccades, the discrimination performance increased with increasing post-saccadic delay. If the target moved and the saccade was followed by pursuit, letter recognition performance was on average 16% lower than if the target was stationary and the saccade was followed by fixation. |
Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner Chromatic contrast sensitivity during optokinetic nystagmus, visually enhanced vestibulo-ocular reflex, and smooth pursuit eye movements Journal Article In: Journal of Neurophysiology, vol. 101, no. 5, pp. 2317–2327, 2009. @article{Schuetz2009a, Recently we showed that sensitivity for chromatic- and high-spatial frequency luminance stimuli is enhanced during smooth-pursuit eye movements (SPEMs). Here we investigated whether this enhancement is a general property of slow eye movements. Besides SPEM there are two other classes of eye movements that operate in a similar range of eye velocities: the optokinetic nystagmus (OKN) is a reflexive pattern of alternating fast and slow eye movements elicited by wide-field visual motion, and the vestibulo-ocular reflex (VOR) stabilizes the gaze during head movements. In a natural environment all three classes of eye movements act synergistically to allow clear central vision during self- and object motion. To test whether the same improvement of chromatic sensitivity occurs during all of these eye movements, we measured human detection performance of chromatic and luminance line stimuli during OKN and contrast sensitivity during VOR and SPEM at comparable velocities. For comparison, performance in the same tasks was tested during fixation. During the slow phase of OKN we found an enhancement of chromatic detection rate similar to that during SPEM, whereas no enhancement was observable during VOR. This result indicates similarities between slow-phase OKN and SPEM, which are distinct from VOR. |
Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner Improved visual sensitivity during smooth pursuit eye movements: Temporal and spatial characteristics Journal Article In: Visual Neuroscience, vol. 26, no. 3, pp. 329–340, 2009. @article{Schuetz2009b, Recently, we showed that contrast sensitivity for color and high-spatial frequency luminance stimuli is enhanced during smooth pursuit eye movements (Schütz et al., 2008). In this study, we investigated the enhancement over a wide range of temporal and spatial frequencies. In Experiment 1, we measured the temporal impulse response function (TIRF) for colored stimuli. The TIRF for pursuit and fixation differed mostly with respect to the gain but not with respect to the natural temporal frequency. Hence, the sensitivity enhancement seems to be rather independent of the temporal frequency of the stimuli. In Experiment 2, we measured the spatial contrast sensitivity function for luminance-defined Gabor patches with spatial frequencies ranging from 0.2 to 7 cpd. We found a sensitivity improvement during pursuit for spatial frequencies above 2–3 cpd. Between 0.5 and 3 cpd, sensitivity was impaired by smooth pursuit eye movements, but no consistent difference was observed below 0.5 cpd. The results of both experiments are consistent with an increased contrast gain of the parvocellular retinogeniculate pathway. |