All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up until 2024 (with some early 2025s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2009 |
Ladislao Salmerón; Thierry Baccino; José J. Cañas; Rafael I. Madrid; Inmaculada Fajardo Do graphical overviews facilitate or hinder comprehension in hypertext? Journal Article In: Computers & Education, vol. 53, no. 4, pp. 1308–1319, 2009. @article{Salmeron2009, Educational hypertexts usually include graphical overviews, conveying the structure of the text schematically with the aim of fostering comprehension. Despite the claims about their relevance, there is currently no consensus on the impact that hypertext overviews have on the reader's comprehension. In the present paper we have explored how hypertext overviews might affect comprehension with regard to (a) the time at which students read the overview and (b) the hypertext difficulty. The results from two eye-tracking studies revealed that reading a graphical overview at the beginning of the hypertext is related to an improvement in the participant's comprehension of quite difficult hypertexts, whereas reading an overview at the end of the hypertext is linked to a decrease in the student's comprehension of easier hypertexts. These findings are interpreted in light of the Assimilation Theory and the Active Processing model. Finally, the key educational and hypertext design implications of the results are discussed. |
Naoyuki Sato; Yoko Yamaguchi A computational predictor of human episodic memory based on a theta phase precession network Journal Article In: PLoS ONE, vol. 4, no. 10, pp. e7536, 2009. @article{Sato2009, In the rodent hippocampus, a phase precession phenomenon of place cell firing with the local field potential (LFP) theta is called "theta phase precession" and is considered to contribute to memory formation with spike-timing-dependent plasticity (STDP). On the other hand, in the primate hippocampus, the existence of theta phase precession is unclear. Our computational studies have demonstrated that theta phase precession dynamics could contribute to primate-hippocampal dependent memory formation, such as object-place association memory. In this paper, we evaluate human theta phase precession by using a theory-experiment combined analysis. Human memory recall of object-place associations was analyzed by an individual hippocampal network simulated by theta phase precession dynamics of human eye movement and EEG data during memory encoding. It was found that the computational recall of the resultant network is significantly correlated with human memory recall performance, while other computational predictors without theta phase precession are not significantly correlated with subsequent memory recall. Moreover, the correlation is larger than the correlation between human recall and traditional experimental predictors. These results indicate that theta phase precession dynamics are necessary for the better prediction of human recall performance with eye movement and EEG data. In this analysis, theta phase precession dynamics appear useful for the extraction of memory-dependent components from the spatio-temporal pattern of eye movement and EEG data as an associative network. Theta phase precession may be a common neural dynamic between rodents and humans for the formation of environmental memories. |
Sébastien Miellet; Patrick J. O'Donnell; Sara C. Sereno Parafoveal magnification: Visual acuity does not modulate the perceptual span in reading Journal Article In: Psychological Science, vol. 20, no. 6, pp. 721–728, 2009. @article{Miellet2009, Models of eye guidance in reading rely on the concept of the perceptual span—the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm—parafoveal magnification (PM)—that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attention-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word. |
Bob McMurray; Michael K. Tanenhaus; Richard N. Aslin Within-category VOT affects recovery from "lexical" garden-paths: Evidence against phoneme-level inhibition Journal Article In: Journal of Memory and Language, vol. 60, no. 1, pp. 65–91, 2009. @article{McMurray2009, Spoken word recognition shows gradient sensitivity to within-category voice onset time (VOT), as predicted by several current models of spoken word recognition, including TRACE (McClelland, J., & Elman, J. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1-86). It remains unclear, however, whether this sensitivity is short-lived or whether it persists over multiple syllables. VOT continua were synthesized for pairs of words like barricade and parakeet, which differ in the voicing of their initial phoneme, but otherwise overlap for at least four phonemes, creating an opportunity for "lexical garden-paths" when listeners encounter the phonemic information consistent with only one member of the pair. Simulations established that phoneme-level inhibition in TRACE eliminates sensitivity to VOT too rapidly to influence recovery. However, in two Visual World experiments, look-contingent and response-contingent analyses demonstrated effects of word initial VOT on lexical garden-path recovery. These results are inconsistent with inhibition at the phoneme level and support models of spoken word recognition in which sub-phonetic detail is preserved throughout the processing system. |
Eugene McSorley; Alice G. Cruickshank; Laura A. Inman The development of the spatial extent of oculomotor inhibition Journal Article In: Brain Research, vol. 1298, pp. 92–98, 2009. @article{McSorley2009b, Inhibition is intimately involved in the ability to select a target for a goal-directed movement. The effect of distracters on the deviation of oculomotor trajectories and landing positions provides evidence of such inhibition. Individual saccade trajectories and landing positions may deviate initially either towards, or away from, a competing distracter—the direction and extent of this deviation depends upon saccade latency and the target to distracter separation. However, the underlying commonality of the sources of oculomotor inhibition has not been investigated. Here we report the relationship between distracter-related deviation of saccade trajectory, landing position and saccade latency. Observers saccaded to a target which could be accompanied by a distracter shown at various distances from very close (10 angular degrees) to far away (120 angular degrees). A fixation-gap paradigm was used to manipulate latency independently of the influence of competing distracters. When distracters were close to the target, saccade trajectory and landing position deviated toward the distracter position, while at greater separations landing position was always accurate but trajectories deviated away from the distracters. Different spatial patterns of deviations across latency were found. This pattern of results is consistent with the metrics of the saccade reflecting coarse pooling of the ongoing activity at the distracter location: saccade trajectory reflects activity at saccade initiation while landing position reveals activity at saccade end. |
Eugene McSorley; Patrick Haggard; Robin Walker The spatial and temporal shape of oculomotor inhibition Journal Article In: Vision Research, vol. 49, no. 6, pp. 608–614, 2009. @article{McSorley2009, Selecting a stimulus as the target for a goal-directed movement involves inhibiting other competing possible responses. Inhibition has generally proved hard to study behaviorally, because it results in no measurable output. The effect of distractors on the shape of oculomotor and manual trajectories provides evidence of such inhibition. Individual saccades may deviate initially either towards, or away from, a competing distractor—the direction and extent of this deviation depends upon saccade latency, target predictability and the target to distractor separation. The experiment reported here used these effects to show how inhibition of distractor locations develops over time. Distractors could be presented at various distances from unpredictable and predictable targets in two separate experiments. The deviation of saccade trajectories was compared between trials with and without distractors. Inhibition was measured by saccade trajectory deviation. Inhibition was found to increase as the distractor distance from target decreased but was found to increase with saccade latency at all distractor distances (albeit to different peaks). Surprisingly, no differences were found between unpredictable and predictable targets, perhaps because our saccade latencies were generally long (∼260–280 ms). We conclude that oculomotor inhibition of saccades to possible target objects involves the same mechanisms for all distractor distances and target types. |
Eugene McSorley; Rachel McCloy Saccadic eye movements as an index of perceptual decision-making Journal Article In: Experimental Brain Research, vol. 198, no. 4, pp. 513–520, 2009. @article{McSorley2009a, One of the most common decisions we make is the one about where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence which supported saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task was found to become easier as the evidence supporting target choice increased. This was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation was found to increase with amount of sensory evidence supporting target choice. This shows that decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade. This would seem to extend the notion of the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options. |
Tanja C. W. Nijboer; Stefan Van der Stigchel Is attention essential for inducing synesthetic colors? Evidence from oculomotor distractors Journal Article In: Journal of Vision, vol. 9, no. 6, pp. 1–9, 2009. @article{Nijboer2009, In studies investigating visual attention in synesthesia, the targets usually induce a synesthetic color. To measure to what extent attention is necessary to induce synesthetic color experiences, one needs a task in which the synesthetic color is induced by a task-irrelevant distractor. In the current study, an oculomotor distractor task was used in which an eye movement was to be made to a physically colored target while ignoring a single physically colored or synesthetic distractor. Whereas many erroneous eye movements were made to distractors with an identical hue as the target (i.e., capture), much less interference was found with synesthetic distractors. The interference of synesthetic distractors was comparable with achromatic non-digit distractors. These results suggest that attention and hence overt recognition of the inducing stimulus are essential for the synesthetic color experience to occur. |
Satoshi Nishida; Tomohiro Shibata; Kazushi Ikeda Prediction of human eye movements in facial discrimination tasks Journal Article In: Artificial Life and Robotics, vol. 14, no. 3, pp. 348–351, 2009. @article{Nishida2009, Under natural viewing conditions, human observers selectively allocate their attention to subsets of the visual input. Since overt allocation of attention appears as eye movements, the mechanism of selective attention can be uncovered through computational studies of eye-movement predictions. Since top-down attentional control in a task is expected to modulate eye movements significantly, the models that take a bottom-up approach based on low-level local properties are not expected to suffice for prediction. In this study, we introduce two representative models, apply them to a facial discrimination task with morphed face images, and evaluate their performance by comparing them with the human eye-movement data. The result shows that they are not good at predicting eye movements in this task. |
Atsushi Noritake; Bob Uttl; Masahiko Terao; Masayoshi Nagai; Junji Watanabe; Akihiro Yagi Saccadic compression of rectangle and Kanizsa figures: Now you see it, now you don't Journal Article In: PLoS ONE, vol. 4, no. 7, pp. e6383, 2009. @article{Noritake2009, BACKGROUND: Observers misperceive the location of points within a scene as compressed towards the goal of a saccade. However, recent studies suggest that saccadic compression does not occur for discrete elements such as dots when they are perceived as unified objects like a rectangle. METHODOLOGY/PRINCIPAL FINDINGS: We investigated the magnitude of horizontal vs. vertical compression for Kanizsa figures (collections of discrete elements unified into single perceptual objects by illusory contours) and control rectangle figures. Participants were presented with Kanizsa and control figures and had to decide whether the horizontal or vertical length of the stimulus was longer using the two-alternative forced-choice method. Our findings show that large but not small Kanizsa figures are perceived as compressed, that such compression is large in the horizontal dimension and small or nil in the vertical dimension. In contrast to recent findings, we found no saccadic compression for control rectangles. CONCLUSIONS: Our data suggest that compression of the Kanizsa figure has been overestimated in previous research due to methodological artifacts, and highlight the importance of studying perceptual phenomena by multiple methods. |
Ulrich Nuding; Roger Kalla; Neil G. Muggleton; Ulrich Büttner; Vincent Walsh; Stefan Glasauer TMS evidence for smooth pursuit gain control by the frontal eye fields Journal Article In: Cerebral Cortex, vol. 19, no. 5, pp. 1144–1150, 2009. @article{Nuding2009, Smooth pursuit eye movements are used to continuously track slowly moving visual objects. A peculiar property of the smooth pursuit system is the nonlinear increase in sensitivity to changes in target motion with increasing pursuit velocities. We investigated the role of the frontal eye fields (FEFs) in this dynamic gain control mechanism by application of transcranial magnetic stimulation. Subjects were required to pursue a slowly moving visual target whose motion consisted of 2 components: a constant velocity component at 4 different velocities (0, 8, 16, and 24 deg/s) and a superimposed high-frequency sinusoidal oscillation (4 Hz, +/-8 deg/s). Magnetic stimulation of the FEFs reduced not only the overall gain of the system, but also the efficacy of the dynamic gain control. We thus provide the first direct evidence that the FEF population is significantly involved in the nonlinear computation necessary for continuously adjusting the feedforward gain of the pursuit system. We discuss this with relation to current models of smooth pursuit. |
Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo Emotional scene content drives the saccade generation system reflexively Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 2, pp. 305–323, 2009. @article{Nummenmaa2009, The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster when the cue pointed toward the emotional picture rather than toward the neutral picture. Experiment 2 replicated these findings with a reflexive saccade task, in which abrupt luminosity changes were used as exogenous saccade cues. In Experiment 3, participants performed vertical reflexive saccades that were orthogonal to the emotional-neutral picture locations. Saccade endpoints and trajectories deviated away from the visual field in which the emotional scenes were presented. Experiment 4 showed that computationally modeled visual saliency does not vary as a function of scene content and that inversion abolishes the rapid orienting toward the emotional scenes. Visual confounds cannot thus explain the results. The authors conclude that early saccade target selection and execution processes are automatically influenced by emotional picture content. This reveals processing of meaningful scene content prior to overt attention to the stimulus. |
Lauri Nummenmaa; Jukka Hyönä; Jari K. Hietanen I'll walk this way: Eyes reveal the direction of locomotion and make passersby look and go the other way Journal Article In: Psychological Science, vol. 20, no. 12, pp. 1454–1458, 2009. @article{Nummenmaa2009a, This study shows that humans (a) infer other people's movement trajectories from their gaze direction and (b) use this information to guide their own visual scanning of the environment and plan their own movement. In two eye-tracking experiments, participants viewed an animated character walking directly toward them on a street. The character looked constantly to the left or to the right (Experiment 1) or suddenly shifted his gaze from direct to the left or to the right (Experiment 2). Participants had to decide on which side they would skirt the character. They shifted their gaze toward the direction in which the character was not gazing, that is, away from his gaze, and chose to skirt him on that side. Gaze following is not always an obligatory social reflex; social-cognitive evaluations of gaze direction can lead to reversed gaze-following behavior. |
Antje Nuthmann; Ralf Engbert Mindless reading revisited: An analysis based on the SWIFT model of eye-movement control Journal Article In: Vision Research, vol. 49, no. 3, pp. 322–336, 2009. @article{Nuthmann2009, In this article, we revisit the mindless reading paradigm from the perspective of computational modeling. In the standard version of the paradigm, participants read sentences in both their normal version as well as the transformed (or mindless) version where each letter is replaced with a z. z-String scanning shares the oculomotor requirements with reading but none of the higher-level lexical and semantic processes. Here we use the z-string scanning task to validate the SWIFT model of saccade generation [Engbert, R., Nuthmann, A., Richter, E., & Kliegl, R. (2005). SWIFT: A dynamical model of saccade generation during reading. Psychological Review, 112(4), 777-813] as an example for an advanced theory of eye-movement control in reading. We test the central assumption of spatially distributed processing across an attentional gradient proposed by the SWIFT model. Key experimental results like prolonged average fixation durations in z-string scanning compared to normal reading and the existence of a string-length effect on fixation durations and probabilities were reproduced by the model, which lends support to the model's assumptions on visual processing. Moreover, simulation results for patterns of regressive saccades in z-string scanning confirm SWIFT's concept of activation field dynamics for the selection of saccade targets. |
Antje Nuthmann; Reinhold Kliegl An examination of binocular reading fixations based on sentence corpus data Journal Article In: Journal of Vision, vol. 9, no. 5, pp. 31–31, 2009. @article{Nuthmann2009a, Binocular eye movements of normal adult readers were examined as they read single sentences. Analyses of horizontal and vertical fixation disparities indicated that the most prevalent type of disparate fixation is crossed (i.e., the left eye is located further to the right than the right eye) while the left eye frequently fixates somewhat above the right eye. The Gaussian distribution of the binocular fixation point peaked 2.6 cm in front of the plane of text, reflecting the prevalence of horizontally crossed fixations. Fixation disparity accumulates during the course of successive saccades and fixations within a line of text, but only to an extent that does not compromise single binocular vision. In reading, the version and vergence system interact in a way that is qualitatively similar to what has been observed in simple nonreading tasks. Finally, results presented here render it unlikely that vergence movements in reading aim at realigning the eyes at a given saccade target word. |
Weimin Mou; Xianyun Liu; Timothy P. McNamara Layout geometry in encoding and retrieval of spatial memory Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 1, pp. 83–93, 2009. @article{Mou2009, Two experiments investigated whether the spatial reference directions that are used to specify objects' locations in memory can be solely determined by layout geometry. Participants studied a layout of objects from a single viewpoint while their eye movements were recorded. Subsequently, participants used memory to make judgments of relative direction (e.g., "Imagine you are standing at X, facing Y, please point to Z"). When the layout had a symmetric axis that was different from participants' viewing direction, the sequence of eye fixations on objects during learning and the preferred directions in pointing judgments were both determined by the direction of the symmetric axis. These results provide further evidence that interobject spatial relations are represented in memory with intrinsic frames of reference. |
Manon Mulckhuyse; Stefan Van der Stigchel; Jan Theeuwes Early and late modulation of saccade deviations by target distractor similarity Journal Article In: Journal of Neurophysiology, vol. 102, no. 3, pp. 1451–1458, 2009. @article{Mulckhuyse2009, In this study, we investigated the time course of oculomotor competition between bottom-up and top-down selection processes using saccade trajectory deviations as a dependent measure. We used a paradigm in which we manipulated saccade latency by offsetting the fixation point at different time points relative to target onset. In experiment 1, observers made a saccade to a filled colored circle while another irrelevant distractor circle was presented. The distractor was either similar (i.e., identical) or dissimilar to the target. Results showed that the strength of saccade deviation was modulated by target distractor similarity for short saccade latencies. To rule out the possibility that the similar distractor affected the saccade trajectory merely because it was identical to the target, the distractor in experiment 2 was a square shape of which only the color was similar or dissimilar to the target. The results showed that deviations for both short and long latencies were modulated by target distractor similarity. When saccade latencies were short, we found less saccade deviation away from a similar than from a dissimilar distractor. When saccade latencies were long, the opposite pattern was found: more saccade deviation away from a similar than from a dissimilar distractor. In contrast to previous findings, our study shows that task-relevant information can already influence the early processes of oculomotor control. We conclude that competition between saccadic goals is subject to two different processes with different time courses: one fast activating process signaling the saliency and task relevance of a location and one slower inhibitory process suppressing that location. |
Jérôme Munuera; Pierre Morel; Jean-Rene Duhamel; Sophie Deneve Optimal sensorimotor control in eye movement sequences Journal Article In: Journal of Neuroscience, vol. 29, no. 10, pp. 3026–3035, 2009. @article{Munuera2009, Fast and accurate motor behavior requires combining noisy and delayed sensory information with knowledge of self-generated body motion; much evidence indicates that humans do this in a near-optimal manner during arm movements. However, it is unclear whether this principle applies to eye movements. We measured the relative contributions of visual sensory feedback and the motor efference copy (and/or proprioceptive feedback) when humans perform two saccades in rapid succession, the first saccade to a visual target and the second to a memorized target. Unbeknownst to the subject, we introduced an artificial motor error by randomly "jumping" the visual target during the first saccade. The correction of the memory-guided saccade allowed us to measure the relative contributions of visual feedback and efferent copy (and/or proprioceptive feedback) to motor-plan updating. In a control experiment, we extinguished the target during the saccade rather than changing its location to measure the relative contribution of motor noise and target localization error to saccade variability without any visual feedback. The motor noise contribution increased with saccade amplitude, but remained <30% of the total variability. Subjects adjusted the gain of their visual feedback for different saccade amplitudes as a function of its reliability. Even during trials where subjects performed a corrective saccade to compensate for the target-jump, the correction by the visual feedback, while stronger, remained far below 100%. In all conditions, an optimal controller predicted the visual feedback gain well, suggesting that humans combine optimally their efferent copy and sensory feedback when performing eye movements. |
René M. Müri; D. Cazzoli; Thomas Nyffeler; Tobias Pflugshaupt Visual exploration pattern in hemineglect Journal Article In: Psychological Research, vol. 73, no. 2, pp. 147–157, 2009. @article{Mueri2009, The analysis of eye movement parameters in visual neglect such as cumulative fixation duration, saccade amplitude, or the number of saccades has been used to probe attention deficits in neglect patients, since the pattern of exploratory eye movements has been taken as a strong index of attention distribution. The current overview of the literature on visual neglect has its emphasis on studies dealing with eye movement and exploration analysis. We present our own results in 15 neglect patients. Free exploration behavior was analyzed by presenting these patients with 32 naturalistic color photographs of everyday scenes. Cumulative fixation duration, the spatial distribution of fixations in the horizontal and vertical plane, and the number and amplitude of exploratory saccades were analyzed and compared with the results of an age-matched control group. A main result of our study was that in neglect patients, the fixation distribution during free exploration of natural scenes is influenced not only by the left-right bias in the horizontal direction but also by a bias in the vertical direction. |
Holger Mitterer; James M. McQueen Processing reduced word-forms in speech perception using probabilistic knowledge about speech production Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 1, pp. 244–263, 2009. @article{Mitterer2009, Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes. |
Korbinian Moeller; Martin H. Fischer; Hans-Christoph Nuerk; Klaus Willmes Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 2, pp. 323–334, 2009. @article{Moeller2009a, While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed. |
Korbinian Moeller; S. Neuburger; L. Kaufmann; K. Landerl; Hans-Christoph Nuerk Basic number processing deficits in developmental dyscalculia: Evidence from eye tracking Journal Article In: Cognitive Development, vol. 24, no. 4, pp. 371–386, 2009. @article{Moeller2009, Recent research suggests that developmental dyscalculia is associated with a subitizing deficit (i.e., the inability to quickly enumerate small sets of up to 3 objects). However, the nature of this deficit has not previously been investigated. In the present study the eye-tracking methodology was employed to clarify whether (a) the subitizing deficit of two boys with dyscalculia resulted from a general slowing in the access to magnitude representation, or (b) children with dyscalculia resort to a back-up counting strategy even for small object sets. In a dot-counting task, a standard problem size effect for the number of fixations required to encode the presented numerosity within the subitizing range was observed. Together with the finding that problem size had no impact on the average fixation duration, this result suggested that children with dyscalculia may indeed have to count, while typically developing controls are able to enumerate the number of dots in parallel, i.e., subitize. Implications for the understanding of developmental dyscalculia are considered. |
Anna Montagnini; Leonardo Chelazzi In: Vision Research, vol. 49, no. 10, pp. 1316–1328, 2009. @article{Montagnini2009, We investigated human oculomotor behaviour in a Go-NoGo saccadic task in which the saccadic response to a peripheral visual target was to be inhibited in a minority of trials (NoGo trials). Different from classical experimental paradigms on the inhibitory control of intended actions, in our task the inhibitory cue was identical to the saccadic target (used in Go trials) in timing, location and shape-the only difference being its colour. By analysing the latency and the metrics of saccades erroneously executed after a NoGo instruction (NoGo-escapes), we observed a characteristic pattern of performance: first, we observed a decrease in the amplitude of NoGo-escapes with increasing latency; second, we revealed a consistent population of long-latency small saccades opposite in direction to the NoGo cue; finally, we found a strong side-specific inhibitory effect in terms of saccadic reaction times, on trials immediately following a NoGo trial. In addition, we manipulated the readiness to initiate a saccade towards the visual target, by introducing a probability bias in the random sequence of target locations. We found that the capacity to inhibit the impending saccade was improved for the most likely target location, i.e. the condition corresponding to the increased readiness for movement execution. Overall, our results challenge the notion of a central inhibitory mechanism independent from movement preparation. More precisely, they indicate that the two mechanisms (action preparation and action inhibition) interact dynamically, possibly sharing spatially-specific mechanisms, and are similarly affected by particular contextual manipulations. |
Celia J. A. Morgan; Vyv Huddy; Michelle Lipton; H. Valerie Curran; Eileen M. Joyce Is persistent ketamine use a valid model of the cognitive and oculomotor deficits in schizophrenia? Journal Article In: Biological Psychiatry, vol. 65, no. 12, pp. 1099–1102, 2009. @article{Morgan2009, Background: Acute ketamine has been shown to model features of schizophrenia such as psychotic symptoms, cognitive deficits and smooth pursuit eye movement dysfunction. There have been suggestions that chronic ketamine may also produce an analogue of the disorder. In this study, we investigated the effect of persistent recreational ketamine use on tests of episodic and working memory and on oculomotor tasks of smooth pursuit and pro- and antisaccades. Methods: Twenty ketamine users were compared with 1) 20 first-episode schizophrenia patients, 2) 17 polydrug control subjects who did not use ketamine but were matched to the ketamine users for other drug use, and 3) 20 non-drug-using control subjects. All groups were matched for estimated premorbid IQ. Results: Ketamine users made more antisaccade errors than both control groups but did not differ from patients. Ketamine users performed better than schizophrenia patients on smooth pursuit, antisaccade metrics, and both memory tasks but did not differ from control groups. Conclusions: Problems inhibiting reflexive eye movements may be a consequence of repeated ketamine self-administration. The absence of any other oculomotor or cognitive deficit present in schizophrenia suggests that chronic self-administration of ketamine may not be a good model of these aspects of the disorder. |
Camille Morvan; Mark Wexler The nonlinear structure of motion perception during smooth eye movements Journal Article In: Journal of Vision, vol. 9, no. 7, pp. 1–13, 2009. @article{Morvan2009, To perceive object motion when the eyes themselves undergo smooth movement, we can either perceive motion directly—by extracting motion relative to a background presumed to be fixed—or through compensation, by correcting retinal motion by information about eye movement. To isolate compensation, we created stimuli in which, while the eye undergoes smooth movement due to inertia, only one object is visible—and the motion of this stimulus is decoupled from that of the eye. Using a wide variety of stimulus speeds and directions, we rule out a linear model of compensation, in which stimulus velocity is estimated as a linear combination of retinal and eye velocities multiplied by a constant gain. In fact, we find that when the stimulus moves in the same direction as the eyes, there is little compensation, but when movement is in the opposite direction, compensation grows in a nonlinear way with speed. We conclude that eye movement is estimated from a combination of extraretinal and retinal signals, the latter based on an assumption of stimulus stationarity. Two simple models, in which the direction of eye movement is computed from the extraretinal signal and the speed from the retinal signal, account well for our results. |
Michi Matsukura; James R. Brockmole; John M. Henderson Overt attentional prioritization of new objects and feature changes during real-world scene viewing Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 835–855, 2009. @article{Matsukura2009, The authors investigated the extent to which a change to an object's colour is overtly prioritized for fixation relative to the appearance of a new object during real-world scene viewing. Both types of scene change captured gaze (and attention) when introduced during a fixation, although colour changes captured attention less often than new objects. Neither of these scene changes captured attention when they occurred during a saccade, but slower and less reliable memory-based mechanisms were nevertheless able to prioritize new objects and colour changes relative to the other stable objects in the scene. These results indicate that online memory for object identity and at least some object features are functional in detecting changes to real-world scenes. Additionally, visual factors such as the salience of onsets and colour changes did not affect prioritization of these events. We discuss these results in terms of current theories of attention allocation within, and online memory representations of, real-world scenes. |
Jason S. McCarley Effects of speed–accuracy instructions on oculomotor scanning and target recognition in a simulated baggage X-ray screening task Journal Article In: Ergonomics, vol. 52, no. 3, pp. 325–333, 2009. @article{McCarley2009, Visual search tasks are often carried out under high levels of time stress. Transportation security screeners, for example, face demands to achieve high levels of accuracy while maintaining rapid passenger throughput. An experiment examined the strategies by which operators regulate visual search performance under such conditions. Observers performed a simulated baggage-screening task under instructions to emphasise either response speed or accuracy. Behavioural measures and eye movements were recorded. Observers made fewer and briefer fixations under emphasise-speed than under emphasise-accuracy instructions. Losses in accuracy were produced by more frequent failures to fixate on targets and a decrease in the detection rate of non-fixated targets. The likelihood with which observers detected a fixated target was similar across speed-accuracy instructions. Results will inform efforts to model visual search in naturalistic tasks, allowing more accurate prediction of response times and error rates, and may aid the design of training programmes and other interventions to improve search performance under stress. |
Ayelet McKyton; Yoni Pertzov; Ehud Zohary Pattern matching is assessed in retinotopic coordinates Journal Article In: Journal of Vision, vol. 9, no. 13, pp. 1–10, 2009. @article{McKyton2009, We typically examine scenes by performing multiple saccades to different objects of interest within the image. Therefore, an extra-retinotopic representation, invariant to the changes in the retinal image caused by eye movements, might be useful for high-level visual processing. We investigate here, using a matching task, whether the representation of complex natural images is retinotopic or screen-based. Subjects observed two simultaneously presented images, made a saccadic eye movement to a new fixation point, and viewed a third image. Their task was to judge whether the third image was identical to one of the two earlier images or different. Identical images could appear either in the same retinotopic position, in the same screen position, or in totally different locations. Performance was best when the identical images appeared in the same retinotopic position and worst when they appeared in the opposite hemifield. Counter to commonplace intuition, no advantage was conferred from presenting the identical images in the same screen position. This, together with performance sensitivity for image translation of a few degrees, suggests that image matching, which can often be judged without overall recognition of the scene, is mostly determined by neuronal activity in earlier brain areas containing a strictly retinotopic representation and small receptive fields. |
Patricia A. McMullen; Lesley E. MacSween; Charles A. Collin Behavioral effects of visual field location on processing motion- and luminance-defined form Journal Article In: Journal of Vision, vol. 9, no. 6, pp. 1–11, 2009. @article{McMullen2009, Traditional theories posit a ventral cortical visual pathway subserving object recognition regardless of the information defining the contour. However, functional magnetic resonance imaging (fMRI) studies have shown dorsal cortical activity during visual processing of static luminance-defined (SL) and motion-defined form (MDF). It is unknown if this activity is supported behaviorally, or if it depends on central or peripheral vision. The present study compared behavioral performance with two types of MDF [one without translational motion (MDF) and another with (TM)] and SL shapes in a shape matching task where shape pairs appeared in the upper or lower visual fields or along the horizontal meridian of central or peripheral vision. MDF matching was superior to the other contour types regardless of location in central vision. Both MDF and TM matching was superior to SL matching for presentations in peripheral vision. Importantly, there was an advantage for MDF and TM matching in the lower peripheral visual field that was not present for SL forms. These results are consistent with previous behavioral findings that show no field advantage for static form processing and a lower field advantage for motion processing. They are also suggestive of more dorsal cortical involvement in the processing of shapes defined by motion than luminance. |
2008 |
S. M. Emrich; J. D. N. Ruppel; N. Al-Aidroos; J. Pratt; S. Ferber Out with the old: Inhibition of old items in a preview search is limited Journal Article In: Perception and Psychophysics, vol. 70, no. 8, pp. 1552–1557, 2008. @article{Emrich2008, If some of the distractors in a visual search task are previewed prior to the presentation of the remaining distractors and the target, search time is reduced relative to when all of the items are displayed simultaneously. Here, we tested whether the ability to preferentially search new items during such a preview search is limited. We confirmed previous studies: The proportion of fixations on old items was significantly less than chance. However, the probability of fixating old locations was negatively affected by increasing the number of previewed distractors, suggesting that inhibition is limited to a small number of old items. Furthermore, the ability to inhibit old locations was limited to the first four fixations, indicating that by the fifth fixation, the resources required to sustain inhibition had been depleted. Together, these findings suggest that inhibition of old items in a preview search is a top-down mediated process dependent on capacity-limited cognitive resources. |
Yasuhiro Seya; Hidetoshi Nakayasu; Patrick Patterson Visual search of trained and untrained drivers in a driving simulator Journal Article In: Japanese Psychological Research, vol. 50, no. 4, pp. 242–252, 2008. @article{Seya2008, To investigate the effects of driving experience on visual search during driving, we measured eye movements during driving tasks using a driving simulator. We evaluated trained and untrained drivers for selected driving road section types (for example, intersections and straight roads). Participants in the trained group had received driving training by the simulator before the experiment, while the others had no driving training by it. In the experiment, the participants were instructed to drive safely in the simulator. The results of scan paths showed that eye positions were less variable in the trained group than in the untrained group. Total eye-movement distances were shorter, and fixation durations were longer in the trained group than in the untrained group. These results suggest that trained drivers may perceive relevant information efficiently with few eye movements by using their anticipation skills and useful field of view, which may have been developed through their driving training in the simulator. |
R. Godijn; A. F. Kramer The effect of attentional demands on the antisaccade cost Journal Article In: Perception and Psychophysics, vol. 70, no. 5, pp. 795–806, 2008. @article{GODIJN2008, In the present study, we examined the effect of attentional demands on the antisaccade cost (the latency difference between antisaccades and prosaccades). Participants performed a visual search for a target digit and were required to execute a saccade toward (prosaccade) or away from (antisaccade) the target. The results of Experiment 1 revealed that the antisaccade cost was greater when the target was premasked (i.e., presented through the removal of line segments) than when it appeared as an onset. Furthermore, in premasked target conditions, the antisaccade cost was increased by the presentation of onset distractors. The results of Experiment 2 revealed that the antisaccade cost was greater in a difficult search task (a numeral 2 among 5s) than in an easy one (a 2 among 7s). The findings provide evidence that attentional demands increase the antisaccade cost. We propose that the attentional demands of the search task interfere with the attentional control required to select the antisaccade goal. |
Jay Pratt; Bas Neggers Inhibition of return in single and dual tasks: Examining saccadic, keypress, and pointing responses Journal Article In: Perception and Psychophysics, vol. 70, no. 2, pp. 257–265, 2008. @article{Pratt2008, Two experiments are reported in which inhibition of return (IOR) was examined with single-response tasks (either manual responses alone or saccadic responses alone) and dual-response tasks (simultaneous manual and saccadic responses). The first experiment—using guided limb movements that require considerable spatial information—showed more IOR for saccades than for pointing responses. In addition, saccadic IOR was reduced with concurrent pointing movements, but manual IOR was not affected by concurrent saccades. Importantly, at the time of saccade initiation, the arm movements had not yet started, indicating that the influence on saccade IOR is due to arm-movement preparation. In the second experiment, using localization keypress responses that required only minimal spatial information, greater IOR was again found for saccadic than for manual responses, but no effect of concurrent movements was found. These findings add further support for a dissociation between oculomotor and skeletal-motor IOR. Moreover, the results show that the preparation of manual responses tends to mediate saccadic behavior—but only when the manual responses require high levels of spatial accuracy—and that the superior colliculus is the likely neural substrate integrating IOR for eye and arm movements. |
Archana Pradeep; Shery Thomas; Eryl O. Roberts; Frank A. Proudlock; Irene Gottlob Reduction of congenital nystagmus in a patient after smoking cannabis Journal Article In: Strabismus, vol. 16, no. 1, pp. 29–32, 2008. @article{Pradeep2008, INTRODUCTION: Smoking cannabis has been described to reduce acquired pendular nystagmus in MS, but its effect on congenital nystagmus is not known. PURPOSE: To report the effect of smoking cannabis in a case of congenital nystagmus. METHODS: A 19-year-old male with congenital horizontal nystagmus presented to the clinic after smoking 10 mg of cannabis. He claimed that the main reason for smoking cannabis was to improve his vision. At the next clinic appointment, he had not smoked cannabis for 3 weeks. Full ophthalmologic examination and eye movement recordings were performed at each visit. RESULTS: Visual acuity improved by 3 logMAR lines in the left eye and by 2 logMAR lines in the right eye after smoking cannabis. The nystagmus intensities were reduced by 30% in primary position and 44%, 11%, 10% and 40% at 20-degree eccentricity to the right, left, elevation and depression, respectively, after smoking cannabis. CONCLUSION: Cannabis may be beneficial in the treatment of congenital idiopathic nystagmus (CIN). Further research to clarify the safety and efficacy of cannabis in patients with CIN, administered for example by capsules or spray, would be important. |
Heinz-Werner Priess; Sabine Born; Ulrich Ansorge Inhibition of return after color singletons Journal Article In: Journal of Eye Movement Research, vol. 5, no. 5, pp. 1–12, 2008. @article{Priess2008, Inhibition of return (IOR) is the faster selection of hitherto unattended than previously attended positions. Some previous studies failed to find evidence for IOR after attention capture by color singletons. Others, however, did report IOR effects after color singletons. The current study examines the role of cue relevance for obtaining IOR effects. By using a potentially more sensitive method—saccadic IOR—we tested and found IOR after relevant color singleton cues that required an attention shift (Experiment 1). In contrast, irrelevant color singletons failed to produce reliable IOR effects in Experiment 2. Also, Experiment 2 rules out an alternative explanation of our IOR findings in terms of masking. We discuss our results in light of pertinent theories of IOR. |
Xiaochuan Pan; Kosuke Sawa; Ichiro Tsuda; Minoru Tsukada; Masamichi Sakagami Reward prediction based on stimulus categorization in primate lateral prefrontal cortex Journal Article In: Nature Neuroscience, vol. 11, no. 6, pp. 703–712, 2008. @article{Pan2008, To adapt to changeable or unfamiliar environments, it is important that animals develop strategies for goal-directed behaviors that meet the new challenges. We used a sequential paired-association task with asymmetric reward schedule to investigate how prefrontal neurons integrate multiple already-acquired associations to predict reward. Two types of reward-related neurons were observed in the lateral prefrontal cortex: one type predicted reward independent of physical properties of visual stimuli and the other encoded the reward value specific to a category of stimuli defined by the task requirements. Neurons of the latter type were able to predict reward on the basis of stimuli that had not yet been associated with reward, provided that another stimulus from the same category was paired with reward. The results suggest that prefrontal neurons can represent reward information on the basis of category and propagate this information to category members that have not been linked directly with any experience of reward. |
Sebastian Pannasch; Jens R. Helmert; Katharina Roth; Ann-Katrin Herbold; Henrik Walter Visual fixation durations and saccade amplitudes: Shifting relationship in a variety of conditions Journal Article In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–19, 2008. @article{Pannasch2008, Is there any relationship between visual fixation durations and saccade amplitudes in free exploration of pictures and scenes? In four experiments with naturalistic stimuli, we compared eye movements during early and late phases of scene perception. Influences of repeated presentation of similar stimuli (Experiment 1), object density (Experiment 2), emotional stimuli (Experiment 3) and mood induction (Experiment 4) were examined. The results demonstrate a systematic increase in the durations of fixations and a decrease for saccadic amplitudes over the time course of scene perception. This relationship was very stable across the variety of studied conditions. It can be interpreted in terms of a shifting balance of the two modes of visual information processing. |
Alicia Peltsch; Aaron B. Hoffman; I. T. Armstrong; Giovanna Pari; D. P. Munoz Saccadic impairments in Huntington's disease Journal Article In: Experimental Brain Research, vol. 186, no. 3, pp. 457–469, 2008. @article{Peltsch2008, Huntington's disease (HD), a progressive neurological disorder involving degeneration in basal ganglia structures, leads to abnormal control of saccadic eye movements. We investigated whether saccadic impairments in HD (N = 9) correlated with clinical disease severity to determine the relationship between saccadic control and basal ganglia pathology. HD patients and age/sex-matched controls performed various eye movement tasks that required the execution or suppression of automatic or voluntary saccades. In the "immediate" saccade tasks, subjects were instructed to look either toward (pro-saccade) or away from (anti-saccade) a peripheral stimulus. In the "delayed" saccade tasks (pro-/anti-saccades; delayed memory-guided sequential saccades), subjects were instructed to wait for a central fixation point to disappear before initiating saccades towards or away from a peripheral stimulus that had appeared previously. In all tasks, mean saccadic reaction time was longer and more variable amongst the HD patients. On immediate anti-saccade trials, the occurrence of direction errors (pro-saccades initiated toward stimulus) was higher in the HD patients. In the delayed tasks, timing errors (eye movements made prior to the go signal) were also greater in the HD patients. The increased variability in saccadic reaction times and occurrence of errors (both timing and direction errors) were highly correlated with disease severity, as assessed with the Unified Huntington's Disease Rating Scale, suggesting that saccadic impairments worsen as the disease progresses. Thus, performance on voluntary saccade paradigms provides a sensitive indicator of disease progression in HD. |
Angélica Pérez Fornos; Jörg Sommerhalder; Alexandre Pittard; Avinoam B. Safran; Marco Pelizzone Simulation of artificial vision: IV. Visual information required to achieve simple pointing and manipulation tasks Journal Article In: Vision Research, vol. 48, no. 16, pp. 1705–1718, 2008. @article{PerezFornos2008, Retinal prostheses attempt to restore some amount of vision to totally blind patients. Vision evoked this way will be however severely constrained because of several factors (e.g., size of the implanted device, number of stimulating contacts, etc.). We used simulations of artificial vision to study how such restrictions of the amount of visual information provided would affect performance on simple pointing and manipulation tasks. Five normal subjects participated in the study. Two tasks were used: pointing on random targets (LEDs task) and arranging wooden chips according to a given model (CHIPs task). Both tasks had to be completed while the amount of visual information was limited by reducing the resolution (number of pixels) and modifying the size of the effective field of view. All images were projected on a 10° × 7° viewing area, stabilised at a given position on the retina. In central vision, the time required to accomplish the tasks remained systematically slower than with normal vision. Accuracy was close to normal at high image resolutions and decreased at 500 pixels or below, depending on the field of view used. Subjects adapted quite rapidly (in less than 15 sessions) to performing both tasks in eccentric vision (15° in the lower visual field), achieving after adaptation performances close to those observed in central vision. These results demonstrate that, if vision is restricted to a small visual area stabilised on the retina (as would be the case in a retinal prosthesis), the perception of several hundreds of retinotopically arranged phosphenes is still needed to restore accurate but slow performance on pointing and manipulation tasks. Considering that present prototypes afford less than 100 stimulation contacts and that our simulations represent the most favourable visual input conditions that the user might experience, further development is required to achieve optimal rehabilitation prospects. |
Matthew S. Peterson; Melissa R. Beck; Jason H. Wong Were you paying attention to where you looked? The role of executive working memory in visual search Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 2, pp. 372–377, 2008. @article{Peterson2008, Recent evidence has indicated that performing a working memory task that loads executive working memory leads to less efficient visual search (Han & Kim, 2004). We explored the role that executive functioning plays in visual search by examining the pattern of eye movements while participants performed a search task with or without a secondary executive working memory task. Results indicate that executive functioning plays two roles in visual search: the identification of objects and the control of the disengagement of attention. |
Tobias Pflugshaupt; Thomas Nyffeler; Roman Von Wartburg; Christian W. Hess; René M. Müri Loss of exploratory vertical saccades after unilateral frontal eye field damage Journal Article In: Journal of Neurology, Neurosurgery and Psychiatry, vol. 79, no. 4, pp. 474–477, 2008. @article{Pflugshaupt2008, Despite their relevance for locomotion and social interaction in everyday situations, little is known about the cortical control of vertical saccades in humans. Results from microstimulation studies indicate that both frontal eye fields (FEFs) contribute to these eye movements. Here, we present a patient with a damaged right FEF, who hardly made vertical saccades during visual exploration. This finding suggests that, for the cortical control of exploratory vertical saccades, integrity of both FEFs is indeed important. |
Matthew H. Phillips; Jay A. Edelman The dependence of visual scanning performance on search direction and difficulty Journal Article In: Vision Research, vol. 48, no. 21, pp. 2184–2192, 2008. @article{Phillips2008, Phillips and Edelman [Phillips, M. H., & Edelman, J. A. (2008). The dependence of visual scanning performance on saccade, fixation, and perceptual metrics. Vision Research, 48(7), 926-936] presented evidence that performance variability in a visual scanning task depends on oculomotor variables related to saccade amplitude rather than fixation duration, and that saccade-related metrics reflect perceptual span. Here, we extend these results by showing that even for extremely difficult searches, trial-to-trial performance variability still depends on saccade-related metrics and not fixation duration. We also show that scanning speed is faster for horizontal than for vertical searches, and that these differences derive again from differences in saccade-based metrics and not from differences in fixation duration. We find perceptual span to be larger for horizontal than vertical searches, and approximately symmetric about the line of gaze. |
Hans P. Op De Beeck; Jennifer A. Deutsch; Wim Vanduffel; Nancy Kanwisher; James J. DiCarlo A stable topography of selectivity for unfamiliar shape classes in monkey inferior temporal cortex Journal Article In: Cerebral Cortex, vol. 18, no. 7, pp. 1676–1694, 2008. @article{OpDeBeeck2008, The inferior temporal (IT) cortex in monkeys plays a central role in visual object recognition and learning. Previous studies have observed patches in IT cortex with strong selectivity for highly familiar object classes (e.g., faces), but the principles behind this functional organization are largely unknown due to the many properties that distinguish different object classes. To unconfound shape from meaning and memory, we scanned monkeys with functional magnetic resonance imaging while they viewed classes of initially novel objects. Our data revealed a topography of selectivity for these novel object classes across IT cortex. We found that this selectivity topography was highly reproducible and remarkably stable across a 3-month interval during which monkeys were extensively trained to discriminate among exemplars within one of the object classes. Furthermore, this selectivity topography was largely unaffected by changes in behavioral task and object retinal position, both of which preserve shape. In contrast, it was strongly influenced by changes in object shape. The topography was partially related to, but not explained by, the previously described pattern of face selectivity. Together, these results suggest that IT cortex contains a large-scale map of shape that is largely independent of meaning, familiarity, and behavioral task. |
Jorge Otero-Millan; Xoana G. Troncoso; Stephen L. Macknik; Ignacio Serrano-Pedraza; Susana Martinez-Conde Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–18, 2008. @article{OteroMillan2008, Microsaccades are known to occur during prolonged visual fixation, but it is a matter of controversy whether they also happen during free-viewing. Here we set out to determine: 1) whether microsaccades occur during free visual exploration and visual search, 2) whether microsaccade dynamics vary as a function of visual stimulation and viewing task, and 3) whether saccades and microsaccades share characteristics that might argue in favor of a common saccade-microsaccade oculomotor generator. Human subjects viewed naturalistic stimuli while performing various viewing tasks, including visual exploration, visual search, and prolonged visual fixation. Their eye movements were simultaneously recorded with high precision. Our results show that microsaccades are produced during the fixation periods that occur during visual exploration and visual search. Microsaccade dynamics during free-viewing moreover varied as a function of visual stimulation and viewing task, with increasingly demanding tasks resulting in increased microsaccade production. Moreover, saccades and microsaccades had comparable spatiotemporal characteristics, including the presence of equivalent refractory periods between all pair-wise combinations of saccades and microsaccades. Thus our results indicate a microsaccade-saccade continuum and support the hypothesis of a common oculomotor generator for saccades and microsaccades. |
Elmar H. Pinkhardt; Reinhart Jürgens; Wolfgang Becker; Federica Valdarno; Albert C. Ludolph; Jan Kassubek Differential diagnostic value of eye movement recording in PSP-parkinsonism, Richardson's syndrome, and idiopathic Parkinson's disease Journal Article In: Journal of Neurology, vol. 255, no. 12, pp. 1916–1925, 2008. @article{Pinkhardt2008, Vertical gaze palsy is a highly relevant clinical sign in parkinsonian syndromes. As the eponymous sign of progressive supranuclear palsy (PSP), it is one of the core features in the diagnosis of this disease. Recent studies have suggested a further differentiation of PSP in Richardson's syndrome (RS) and PSP-parkinsonism (PSPP). The aim of this study was to search for oculomotor abnormalities in the PSP-P subset of a sample of PSP patients and to compare these findings with those of (i) RS patients, (ii) patients with idiopathic Parkinson's disease (IPD), and (iii) a control group. Twelve cases of RS, 5 cases of PSP-P, and 27 cases of IPD were examined by use of video-oculography (VOG) and compared to 23 healthy normal controls. Both groups of PSP patients (RS, PSP-P) had significantly slower saccades than either IPD patients or controls, whereas no differences in saccadic eye peak velocity were found between the two PSP groups or in the comparison of IPD with controls. RS and PSP-P were also similar to each other with regard to smooth pursuit eye movements (SPEM), with both groups having significantly lower gain than controls (except for downward pursuit); however, SPEM gain exhibited no consistent difference between PSP and IPD. A correlation between eye movement data and clinical data (Hoehn & Yahr scale or disease duration) could not be observed. As PSP-P patients were still in an early stage of the disease when a differentiation from IPD is difficult on clinical grounds, the clear-cut separation between PSP-P and IPD obtained by measuring saccade velocity suggests that VOG could contribute to the early differentiation between these patient groups. |
Alexander Pollatsek; Timothy J. Slattery; Barbara J. Juhasz The processing of novel and lexicalised prefixed words in reading Journal Article In: Language and Cognitive Processes, vol. 23, no. 7-8, pp. 1133–1158, 2008. @article{Pollatsek2008, Two experiments compared how relatively long novel prefixed words (e.g., overfarm) and existing prefixed words were processed in reading. The use of novel prefixed words allows one to examine the roles of whole-word access and decompositional processing in the processing of non-novel prefixed words. The two experiments found that, although there was a large cost to novelty (e.g., gaze durations were about 100 ms longer for novel prefixed words), the effect of the frequency of the root morpheme on fixation measures was about the same for novel and non-novel prefixed words for most measures. This finding rules out a ('horse-race') dual-route model of processing for existing prefixed words in which the whole-word and decompositional routes are parallel and independent, as such a model would predict a substantially larger root frequency effect for novel words (where whole-word processes do not exist). The most likely model to explain the processing of prefixed words is a parallel interactive one. |
Cliodhna Quigley; Selim Onat; Sue Harding; Martin Cooke; Peter König Audio-visual integration during overt visual attention Journal Article In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 4, 2008. @article{Quigley2008, How do different sources of information arising from different modalities interact to control where we look? To answer this question with respect to real-world operational conditions we presented natural images and spatially localized sounds in (V)isual, Audiovisual (AV) and (A)uditory conditions and measured subjects' eye-movements. Our results demonstrate that eye-movements in AV conditions are spatially biased towards the part of the image corresponding to the sound source. Interestingly, this spatial bias is dependent on the probability of a given image region to be fixated (saliency) in the V condition. This indicates that fixation behaviour during the AV conditions is the result of an integration process. Regression analysis shows that this integration is best accounted for by a linear combination of unimodal saliencies. |
Ralph Radach; Lynn Huestegge; Ronan G. Reilly The role of global top-down factors in local eye-movement control in reading Journal Article In: Psychological Research, vol. 72, no. 6, pp. 675–688, 2008. @article{Radach2008, Although the development of the field of reading has been impressive, there are a number of issues that still require much more attention. One of these concerns the variability of skilled reading within the individual. This paper explores the topic in three ways: (1) it quantifies the extent to which two factors, the specific reading task (comprehension vs. word verification) and the format of reading material (sentence vs. passage), influence the temporal aspects of reading as expressed in word-viewing durations; (2) it examines whether they also affect visuomotor aspects of eye-movement control; and (3) it determines whether they can modulate local lexical processing. The results reveal reading as a dynamic, interactive process involving semi-autonomous modules, with top-down influences clearly evident in the eye-movement record. |
Christoph Rasche; Karl R. Gegenfurtner Orienting during gaze guidance in a letter-identification task Journal Article In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–10, 2008. @article{Rasche2008, The idea of gaze guidance is to lead a viewer's gaze through a visual display in order to facilitate the viewer's search for specific information in a least-obtrusive manner. This study investigates saccadic orienting when a viewer is guided in a fast-paced, low-contrast letter identification task. Despite the task's difficulty and although guiding cues were adjusted to gaze eccentricity, observers preferred attentional over saccadic shifts to obtain a letter identification judgment; and if a saccade was carried out, its saccadic constant error was 50%. From those results we derive a number of design recommendations for the process of gaze guidance. |
Thomas Geyer; Hermann J. Müller; Joseph Krummenacher Expectancies modulate attentional capture by salient color singletons Journal Article In: Vision Research, vol. 48, no. 11, pp. 1315–1326, 2008. @article{Geyer2008, In singleton feature search for a form-defined target, the presentation of a task-irrelevant, but salient singleton color distractor is known to interfere with target detection [Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception & Psychophysics, 50, 184-193; Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51, 599-606]. The present study was designed to re-examine this effect, by presenting observers with a singleton form target (on each trial) that could be accompanied by a (salient) singleton color distractor, with the proportion of distractor to no-distractor trials systematically varying across blocks of trials. In addition to RTs, eye movements were recorded in order to examine the mechanisms underlying the distractor interference effect. The results showed that singleton distractors did interfere with target detection only when they were presented on a relatively small (but not on a large) proportion of trials. Overall, the findings suggest that cross-dimensional interference is a covert attention effect, arising from the competition of the target with the distractor for attentional selection [Kumada, T., & Humphreys, G. W. (2002). Cross-dimensional interference and cross-trial inhibition. Perception & Psychophysics, 64, 493-503], with the strength of the competition being modulated by observers' (top-down) incentive to suppress the distractor dimension. |
Richard Godijn; Arthur F. Kramer Oculomotor capture by surprising onsets Journal Article In: Visual Cognition, vol. 16, no. 2-3, pp. 279–289, 2008. @article{Godijn2008b, The present study examined the effect of surprising onsets on oculomotor behaviour. Participants were required to execute a saccadic eye movement to a colour singleton target. After a series of trials an unexpected onset distractor was abruptly presented on the surprise trial. The presentation of the onset was repeated on subsequent trials. The results showed that the onset captured the eyes for 28% of the participants on the surprise trial, but this percentage decreased after repeated exposure to the onset. Furthermore, saccade latencies to the target were increased when a surprising onset was presented. After repeated exposure to the onset, latencies to the target decreased to the preonset level. The results suggest that when the onset is not part of participants' task set it has a strong effect on oculomotor behaviour. Once the task set has been updated and the onset no longer comes as a surprise its effect on oculomotor behaviour is dramatically reduced. |
Ali Ezzati; Ashkan Golzar; Arash S. R. Afraz Topography of the motion aftereffect with and without eye movements Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008. @article{Ezzati2008, Although a lot is known about various properties of the motion aftereffect (MAE), there is no systematic study of the topographic organization of the MAE. In the current study, first we provided a topographic map of the MAE to investigate its spatial properties in detail. To provide a fine topographic map, we measured MAE with small test stimuli presented at different loci after adaptation to motion in a large region within the visual field. We found that the strength of the MAE is highest on the internal edge of the adapted area. Our results show a sharper aftereffect boundary for the shearing motion compared to compression and expansion boundaries. In the second experiment, using a similar paradigm, we investigated topographic deformation of the MAE area after a single saccadic eye movement. Surprisingly, we found that the topographic map of the MAE splits into two separate regions after the saccade: one corresponds to the retinal location of the adapted stimulus and the other matches the spatial location of the adapted region on the display screen. The effect was stronger at the retinotopic location. The third experiment is basically a replication of the second experiment in a smaller zone that confirms the results of previous experiments in individual subjects. The eccentricity of the spatiotopic area is different from that of the retinotopic area in the second experiment; Experiment 3 controls for the effect of eccentricity and confirms the major results of the second experiment. |
Tom Foulsham; Alan Kingstone; Geoffrey Underwood Turning the world around: Patterns in saccade direction vary with picture orientation Journal Article In: Vision Research, vol. 48, pp. 1777–1790, 2008. @article{Foulsham2008a, The eye movements made by viewers of natural images often feature a predominance of horizontal saccades. Can this behaviour be explained by the distribution of saliency around the horizon, low-level oculomotor factors, top-down control or laboratory artefacts? Two experiments explored this bias by recording saccades whilst subjects viewed photographs rotated to varying extents, but within a constant square frame. The findings show that the dominant saccade direction follows the orientation of the scene, though this pattern varies in interiors and during recognition of previously seen pictures. This demonstrates that a horizon bias is robust and affected by both the distribution of features and more global representations of the scene layout. |
Tom Foulsham; Geoffrey Underwood What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition Journal Article In: Journal of Vision, vol. 8, no. 2, pp. 1–17, 2008. @article{Foulsham2008, Saliency map models account for a small but significant amount of the variance in where people fixate, but evaluating these models with natural stimuli has led to mixed results. In the present study, the eye movements of participants were recorded while they viewed color photographs of natural scenes in preparation for a memory test (encoding) and when recognizing them later. These eye movements were then compared to the predictions of a well defined saliency map model (L. Itti & C. Koch, 2000), in terms of both individual fixation locations and fixation sequences (scanpaths). The saliency model is a significantly better predictor of fixation location than random models that take into account bias toward central fixations, and this is the case at both encoding and recognition. However, similarity between scanpaths made at multiple viewings of the same stimulus suggests that repetitive scanpaths also contribute to where people look. Top-down recapitulation of scanpaths is a key prediction of scanpath theory (D. Noton & L. Stark, 1971), but it might also be explained by bottom-up guidance. The present data suggest that saliency cannot account for scanpaths and that incorporating these sequences could improve model predictions. |
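The evaluation logic used in studies of this kind can be illustrated with a small sketch: compare the saliency values sampled at human fixation locations with values sampled at center-biased random locations. The code below is illustrative only and assumes the saliency map is a 2-D array in image coordinates and fixations are (x, y) pixel pairs; the function names and the Gaussian width of the center bias are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def saliency_at_fixations(saliency_map, fixations):
    """Mean saliency value sampled at (x, y) fixation coordinates."""
    h, w = saliency_map.shape
    vals = []
    for x, y in fixations:
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        vals.append(saliency_map[yi, xi])
    return float(np.mean(vals))

def center_biased_baseline(saliency_map, n_points, sigma_frac=0.2, seed=None):
    """Mean saliency at random points drawn from a central Gaussian,
    approximating a center-biased control distribution."""
    rng = np.random.default_rng(seed)
    h, w = saliency_map.shape
    xs = np.clip(rng.normal(w / 2, sigma_frac * w, n_points), 0, w - 1)
    ys = np.clip(rng.normal(h / 2, sigma_frac * h, n_points), 0, h - 1)
    return saliency_at_fixations(saliency_map, zip(xs, ys))
```

A model is then judged by how much higher the fixation-sampled saliency is than the center-biased baseline, which is the comparison the abstract describes.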
Hans Peter Frey; Christian Honey; Peter König What's color got to do with it? The influence of color on visual attention in different categories Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008. @article{Frey2008, Certain locations attract human gaze in natural visual scenes. Are there measurable features which distinguish these locations from others? While there has been extensive research on luminance-defined features, only a few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features for quantifying the influence of color on attention. |
Steven Frisson; Brian McElree Complement coercion is not modulated by competition: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 1, pp. 1–11, 2008. @article{Frisson2008a, An eye-movement study examined the processing of expressions requiring complement coercion (J. Pustejovsky, 1995), in which a noun phrase that does not denote an event (e.g., the book) appears as the complement of an event-selecting verb (e.g., began the book). Previous studies demonstrated that these expressions are more costly to process than are control expressions that can be processed with basic compositional operations (L. Pylkkänen & B. McElree, 2006). Complement coercion is thought to be costly because comprehenders need to construct an event sense of the complement to satisfy the semantic restrictions of the verb (e.g., began writing the book). The reported experiment tests the alternative hypotheses that the cost arises from the need to select 1 interpretation from several or from competition between alternative interpretations. Expressions with weakly constrained interpretations (no dominant interpretation and several alternative interpretations) were not more costly to process than expressions with a strongly constrained interpretation (1 dominant interpretation and few alternative interpretations). These results are consistent with the hypothesis that the cost reflects the on-line construction of an event sense for the complement. |
Steven Frisson; Elizabeth Niswander-Klement; Alexander Pollatsek The role of semantic transparency in the processing of English compound words Journal Article In: British Journal of Psychology, vol. 99, no. 1, pp. 87–107, 2008. @article{Frisson2008, Experiment 1 examined whether the semantic transparency of an English unspaced compound word affected how long it took to process it in reading. Three types of opaque words were each compared with a matched set of transparent words (i.e. matched on the length and frequency of the constituents and the frequency of the word as a whole). Two sets of the opaque words were partially opaque: either the first constituent was not related to the meaning of the compound (opaque-transparent) or the second constituent was not related to the meaning of the compound (transparent-opaque). In the third set (opaque-opaque), neither constituent was related to the meaning of the compound. For all three sets, there was no significant difference between the opaque and the transparent words on any eye-movement measure. This replicates an earlier finding with Finnish compound words (Pollatsek & Hyönä, 2005) and indicates that, although there is now abundant evidence that the component constituents play a role in the encoding of compound words, the meaning of the compound word is not constructed from the parts, at least for compound words for which a lexical entry exists. Experiment 2 used the same compounds but with a space between the constituents. This presentation resulted in a transparency effect, indicating that when an assembly route is 'forced', transparency does play a role. |
Steffen Gais; Sabine Köster; Andreas Sprenger; Judith Bethke; Wolfgang Heide; Hubert Kimmig Sleep is required for improving reaction times after training on a procedural visuo-motor task Journal Article In: Neurobiology of Learning and Memory, vol. 90, no. 4, pp. 610–615, 2008. @article{Gais2008, Sleep has been found to enhance consolidation of many different forms of memory. However, in most procedural tasks, a sleep-independent, fast learning component interacts with slow, sleep-dependent improvements. Here, we show that in humans a visuo-motor saccade learning task shows no improvements during training, but only during delayed recall testing after a period of sleep. Subjects were trained in a prosaccade task (saccade to a visual target). Performance was tested in the prosaccade and the antisaccade task (saccade in the opposite direction of the target) before training, after a night of sleep or sleep deprivation, after a night of recovery sleep, and finally in a follow-up test 4 weeks later. We found no immediate improvement in saccadic reaction time (SRT) during training, but a delayed reduction in SRT, indicating a slow-learning process. This reduction occurred only after a period of sleep, i.e. after the first night in the sleep group and after recovery sleep in the sleep deprivation group. This improvement was stable during the 4-week follow-up. Saccadic training can thus induce covert changes in the saccade generation pathway. During the following sleep period, these changes in turn bring about overt performance improvements, presuming a learning effect based on synaptic tagging. |
Tyler W. Garaas; Tyson Nieuwenhuis; Marc Pomplun A gaze-contingent paradigm for studying continuous saccadic adaptation Journal Article In: Journal of Neuroscience Methods, vol. 168, no. 2, pp. 334–340, 2008. @article{Garaas2008a, Saccadic eye movements are used to quickly and accurately orient our fovea within our visual field to obtain detailed information from various locations. The accuracy of these eye movements is maintained throughout life despite constant pressure on oculomotor muscles and neuronal structures by growth and aging; this maintenance appears to be a product of an adaptive mechanism that continuously accounts for consistent post-saccadic visual error, and is referred to as saccadic adaptation. In this paper, we present a new paradigm to test saccadic adaptation under circumstances that more closely resemble natural visual error in everyday vision, whereas previous saccadic adaptation paradigms study adaptation in a largely restricted form. The paradigm achieves this by positioning a stimulus panel atop an identically colored background relative to the gaze position of the participant. We demonstrate the paradigm by successfully decreasing participants' saccadic amplitudes during a common visual search task by shifting the stimulus panel in the opposite direction of the saccade by 50% of the saccadic amplitude. Participants' adaptation reached approximately 60% of the 50% back-shift during the adaptation phase, and was uniformly distributed across saccadic direction. The adaptation time-course found using the new paradigm is consistent with that achieved using previous paradigms. Task-performance results and the manner in which eye movements changed during adaptation were also analyzed. |
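The core manipulation described in this paradigm, shifting the stimulus panel against the saccade by half the saccadic amplitude while the eyes are in flight, can be summarized in a short sketch. This is only an illustration of the shift rule: the function name, the vector formulation, and the assumption that saccade start and landing positions are already available from online saccade detection in the gaze-contingent display loop are ours, not the authors' implementation.

```python
import numpy as np

GAIN = 0.5  # fraction of the saccade amplitude by which the panel is back-shifted

def backstep_panel(panel_pos, saccade_start, saccade_end, gain=GAIN):
    """Return the new panel position after an intra-saccadic back-shift.

    The panel is displaced against the saccade vector by `gain` times the
    saccade amplitude, so the eye lands short of the intended location and
    the resulting post-saccadic error drives amplitude adaptation.
    """
    start = np.asarray(saccade_start, dtype=float)
    end = np.asarray(saccade_end, dtype=float)
    saccade_vec = end - start
    return np.asarray(panel_pos, dtype=float) - gain * saccade_vec
```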
Tyler W. Garaas; Marc Pomplun Inspection time and visual-perceptual processing Journal Article In: Vision Research, vol. 48, no. 4, pp. 523–537, 2008. @article{Garaas2008, Inspection time (IT) is the most popular simple psychometric measure that is used to account for a large part of the variance in human mental ability, with the estimated corrected correlation between IT and IQ being -0.50. In this study, we investigate the relationship between IT and the performance and oculomotor variables measured during three simple visual tasks. Participants' ITs were first measured using a slight variation of the standard IT task, which was followed by the three simple visual tasks that were designed to test participants' visual-attentional control and visual working memory under varying degrees of difficulty; they included a visual search task, a comparative visual search task, and a visual memorization task. Significant correlations were found between IT and performance variables for each of the visual tasks. The implications of the correlation between IT and performance-related variables are discussed. Oculomotor variables on the other hand only correlated significantly with IT during the retrieval phase of the visual memorization task, which is likely a product of differences in participants' ability to memorize objects during the loading phase of the experiment. This leads us to the conclusion that the oculomotor variables we measured do not correlate with IT in general, but may in the case where a systematic benefit would be realized. |
Valérie Gaveau; Denis Pélisson; Annabelle Blangero; Christian Urquizar; Claude Prablanc; Alain Vighetto; Laure Pisella Saccade control and eye-hand coordination in optic ataxia Journal Article In: Neuropsychologia, vol. 46, no. 2, pp. 475–486, 2008. @article{Gaveau2008, The aim of this work was to investigate ocular control in patients with optic ataxia (OA). Following a lesion in the posterior parietal cortex (PPC), these patients exhibit a deficit for fast visuo-motor control of reach-to-grasp movements. Here, we assessed the fast visuo-motor control of saccades as well as spontaneous eye-hand coordination in two bilateral OA patients and five neurologically intact controls in an ecological "look and point" paradigm. To test fast saccadic control, trials with unexpected target-jumps synchronised with saccade onset were randomly intermixed with stationary target trials. Results confirmed that control subjects achieved visual capture (foveation) of the displaced targets with the same timing as stationary targets (fast saccadic control) and began their hand movement systematically at the end of the primary saccade. In contrast, the two bilateral OA patients exhibited a delayed visual capture, especially of displaced targets, resulting from an impairment of fast saccadic control. They also exhibited a peculiar eye-hand coordination pattern, spontaneously delaying their hand movement onset until the execution of a final corrective saccade, which allowed target foveation. To test whether this pathological behaviour results from a delay in updating visual target location, we ran a second experiment in the same control subjects, in which the target-jump was synchronised with saccade offset. With less time for target location updating, the control subjects exhibited the same lack of fast saccadic control as the OA patients. We propose that OA corresponds to an impairment of fast updating of target location, therefore affecting both eye and hand movements. |
Katharina Georg; Fred H. Hamker; Markus Lappe Influence of adaptation state and stimulus luminance on peri-saccadic localization Journal Article In: Journal of Vision, vol. 8, no. 1, pp. 1–11, 2008. @article{Georg2008, Spatial localization of flashed stimuli across saccades shows transient distortions of perceived position: Stimuli appear shifted in saccade direction and compressed towards the saccade target. The strength and spatial pattern of this mislocalization is influenced by contrast, duration, and spatial and temporal arrangement of stimuli and background. Because mislocalization of stimuli on a background depends on contrast, we asked whether mislocalization of stimuli in darkness depends on luminance. Since dark adaptation changes luminance thresholds, we compared mislocalization in dark-adapted and light-adapted states. Peri-saccadic mislocalization was measured with near-threshold stimuli and above-threshold stimuli in dark-adapted and light-adapted subjects. In both adaptation states, near-threshold stimuli gave much larger mislocalization than above-threshold stimuli. Furthermore, when the stimulus was presented near-threshold, the perceived positions of the stimuli clustered closer together. Stimulus luminance that produced strong mislocalization in the light-adapted state produced very little mislocalization in the dark-adapted state because it was now well above threshold. We conclude that the strength of peri-saccadic mislocalization depends on the strength of the stimulus: stimuli with near-threshold luminance, and hence low visibility, are more mis-localized than clearly visible stimuli with high luminance. |
Robert D. Gordon; Sarah D. Vollmer; Megan L. Frankl Object continuity and the transsaccadic representation of form Journal Article In: Perception and Psychophysics, vol. 70, no. 4, pp. 667–679, 2008. @article{Gordon2008, Transsaccadic object file representations were investigated in three experiments. Subjects moved their eyes from a central fixation cross to a location between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials on which the target identity matched one of the preview objects, its orientation either matched or did not match the previewed orientation. The results of Experiments 1 and 2 revealed that orientation changes disrupt perceptual continuity for objects located near fixation, but not for objects located further from fixation. The results of Experiment 3 confirmed that orientation changes do not disrupt continuity for distant objects, while showing that subjects nevertheless maintain an object-specific representation of the orientation of such objects. Together, the results suggest that object files represent orientation but that whether or not orientation plays a role in the processes that determine continuity depends on the quality of the perceptual representation. |
Melissa J. Green; Jennifer H. Waldron; Ian Simpson; Max Coltheart Visual processing of social context during mental state perception in schizophrenia Journal Article In: Journal of Psychiatry and Neuroscience, vol. 33, no. 1, pp. 34–42, 2008. @article{Green2008, OBJECTIVE: To examine schizophrenia patients' visual attention to social contextual information during a novel mental state perception task. METHOD: Groups of healthy participants (n = 26) and schizophrenia patients (n = 24) viewed 7 image pairs depicting target characters presented context-free and context-embedded (i.e., within an emotion-congruent social context). Gaze position was recorded with the EyeLink I Gaze Tracker while participants performed a mental state inference task. Mean eye movement variables were calculated for each image series (context-embedded v. context-free) to examine group differences in social context processing. RESULTS: The schizophrenia patients demonstrated significantly fewer saccadic eye movements when viewing context-free images and significantly longer eye-fixation durations when viewing context-embedded images. Healthy individuals significantly shortened eye-fixation durations when viewing context-embedded images, compared with context-free images, to enable rapid scanning and uptake of social contextual information; however, this pattern of visual attention was not pronounced in schizophrenia patients. In association with limited scanning and reduced visual attention to contextual information, schizophrenia patients' assessment of the mental state of characters embedded in social contexts was less accurate. CONCLUSION: In people with schizophrenia, inefficient integration of social contextual information in real-world situations may negatively affect the ability to infer mental and emotional states from facial expressions. |
Harold H. Greene Distance-from-target dynamics during visual search Journal Article In: Vision Research, vol. 48, no. 23-24, pp. 2476–2484, 2008. @article{Greene2008, Tseng, Y. C., & Li, C. S. (2004). Oculomotor correlates of context-guided learning in visual search. Perception & Psychophysics, 66, 1368-1378 noted that visual search with eye movements may be characterized by a search phase in which fixations do not move towards the target, followed by a phase in which fixations move steadily towards the target. They speculated that the phases are related to memory and recognition processes. Human visual search and Monte Carlo simulations are described towards an explanation. Distance-from-target dynamics were demonstrated to be sensitive to geometric constraints and therefore do not provide a solution to the question of memory in visual search. Finally, it is concluded that the specific distance-from-target dynamics noted by Tseng, Y. C., & Li, C. S. (2004). Oculomotor correlates of context-guided learning in visual search. Perception & Psychophysics, 66, 1368-1378 are parsimoniously explained by random walks that were initialized at the centre of their stimulus displays. |
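The Monte Carlo control described in this abstract, memoryless random-walk scanpaths initialized at the display centre with distance to the target tracked over fixations, can be sketched as below. The display size, fixed step length, and uniform target placement are illustrative assumptions for the sketch rather than parameters reported in the paper.

```python
import numpy as np

def simulate_walk(n_fixations=20, display=(1024, 768), step=120, seed=None):
    """Random-walk scanpath started at the display centre; returns the
    distance to a randomly placed target after each simulated fixation."""
    rng = np.random.default_rng(seed)
    pos = np.array(display, dtype=float) / 2.0      # walk starts at the centre
    target = rng.uniform([0.0, 0.0], display)       # target anywhere on the display
    dists = []
    for _ in range(n_fixations):
        angle = rng.uniform(0.0, 2.0 * np.pi)       # memoryless direction choice
        pos = np.clip(pos + step * np.array([np.cos(angle), np.sin(angle)]),
                      [0.0, 0.0], display)
        dists.append(np.linalg.norm(pos - target))
    return np.array(dists)

# Averaging many walks shows mean distance-from-target shrinking over fixations
# purely from display geometry, with no memory or guidance component involved.
mean_curve = np.mean([simulate_walk(seed=i) for i in range(2000)], axis=0)
```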
N. Alahyane; V. Fonteille; C. Urquizar; Roméo Salemme; Norbert Nighoghossian; Denis Pelisson; C. Tilikete Separate neural substrates in the human cerebellum for sensory-motor adaptation of reactive and of scanning voluntary saccades Journal Article In: Cerebellum, vol. 7, no. 4, pp. 595–601, 2008. @article{Alahyane2008, Sensory-motor adaptation processes are critically involved in maintaining accurate motor behavior throughout life. Yet their underlying neural substrates and task-dependency bases are still poorly understood. We address these issues here by studying adaptation of saccadic eye movements, a well-established model of sensory-motor plasticity. The cerebellum plays a major role in saccadic adaptation but it has not yet been investigated whether this role can account for the known specificity of adaptation to the saccade type (e.g., reactive versus voluntary). Two patients with focal lesions in different parts of the cerebellum were tested using the double-step target paradigm. Each patient was submitted to two separate sessions: one for reactive saccades (RS) triggered by the sudden appearance of a visual target and the second for scanning voluntary saccades (SVS) performed when exploring a more complex scene. We found that a medial cerebellar lesion impaired adaptation of reactive, but not of voluntary, saccades, whereas a lateral lesion affected adaptation of scanning voluntary saccades, but not of reactive saccades. These findings provide the first evidence of an involvement of the lateral cerebellum in saccadic adaptation, and extend the demonstrated role of the cerebellum in RS adaptation to adaptation of SVS. The double dissociation of adaptive abilities is also consistent with our previous hypothesis of the involvement in saccadic adaptation of partially separated cerebellar areas specific to the reactive or voluntary task (Alahyane et al. Brain Res 1135:107-121 (2007)). |
Nadia Alahyane; Anne-Dominique Devauchelle; Roméo Salemme; Denis Pélisson Spatial transfer of adaptation of scanning voluntary saccades in humans Journal Article In: Neuroreport, vol. 19, no. 1, pp. 37–41, 2008. @article{Alahyane2008a, The properties and neural substrates of the adaptive mechanisms that maintain over time the accuracy of voluntary, internally triggered saccades are still poorly understood. Here, we used transfer tests to evaluate the spatial properties of adaptation of scanning voluntary saccades. We found that an adaptive reduction of the size of a horizontal rightward 7 degrees saccade transferred to other saccades of a wide range of amplitudes and directions. This transfer decreased as tested saccades increasingly differed in amplitude or direction from the trained saccade, being null for vertical and leftward saccades. Voluntary saccade adaptation thus presents bounded, but large adaptation fields, suggesting that at least part of the underlying neural substrate encodes saccades as vectors. |
Naseem Al-aidroos; Jos J. Adam; Martin H. Fischer; Jay Pratt Structured perceptual arrays and the modulation of Fitts's Law: Examining saccadic eye movements Journal Article In: Journal of Motor Behavior, vol. 40, no. 2, pp. 155–164, 2008. @article{Alaidroos2008, On the basis of recent observations of a modulation of Fitts's law for manual pointing movements in structured visual arrays (J. J. Adam, R. Mol, J. Pratt, & M. H. Fischer, 2006; J. Pratt, J. J. Adam, & M. H. Fischer, 2007), the authors examined whether a similar modulation occurs for saccadic eye movements. Healthy participants (N = 19) made horizontal saccades to targets that appeared randomly in 1 of 4 positions, either on an empty background or within 1 of 4 placeholder boxes. Whereas in previous studies, placeholders caused a decrease in movement time (MT) without the normal decrease in movement accuracy predicted by Fitts's law, placeholders in the present experiment increased saccadic accuracy (decreased endpoint variability) without an increase in MT. The present results extend the findings of J. J. Adam et al. of a modulation of Fitts's law from the temporal domain to the spatial domain and from manual movements to eye movements. |
Britt Anderson; Ryan E. B. Mruczek; Keisuke Kawasaki; David L. Sheinberg Effects of familiarity on neural activity in monkey inferior temporal lobe Journal Article In: Cerebral Cortex, vol. 18, no. 11, pp. 2540–2552, 2008. @article{Anderson2008a, Long-term familiarity facilitates recognition of visual stimuli. To better understand the neural basis for this effect, we measured the local field potential (LFP) and multiunit spiking activity (MUA) from the inferior temporal (IT) lobe of behaving monkeys in response to novel and familiar images. In general, familiar images evoked larger amplitude LFPs whereas MUA responses were greater for novel images. Familiarity effects were attenuated by image rotations in the picture plane of 45 degrees. Decreasing image contrast led to more pronounced decreases in LFP response magnitude for novel, compared with familiar images, and resulted in more selective MUA response profiles for familiar images. The shape of individual LFP traces could be used for stimulus classification, and classification performance was better for the familiar image category. Recording the visual and auditory evoked LFP at multiple depths showed significant alterations in LFP morphology with distance changes of 2 mm. In summary, IT cortex shows local processing differences for familiar and novel images at a time scale and in a manner consistent with the observed behavioral advantage for classifying familiar images and rapidly detecting novel stimuli. |
Britt Anderson; David L. Sheinberg Effects of temporal context and temporal expectancy on neural activity in inferior temporal cortex Journal Article In: Neuropsychologia, vol. 46, no. 4, pp. 947–957, 2008. @article{Anderson2008, Timing is critical. The same event can mean different things at different times and some events are more likely to occur at one time than another. We used a cued visual classification task to evaluate how changes in temporal context affect neural responses in inferior temporal cortex, an extrastriate visual area known to be involved in object processing. On each trial a first image cued a temporal delay before a second target image appeared. The animal's task was to classify the second image by pressing one of two buttons previously associated with that target. All images were used as both cues and targets. Whether an image cued a delay time or signaled a button press depended entirely upon whether it was the first or second picture in a trial. This paradigm allowed us to compare inferior temporal cortex neural activity to the same image subdivided by temporal context and expectation. Neuronal spiking was more robust and visually evoked local field potentials (LFP's) larger for target presentations than for cue presentations. On invalidly cued trials, when targets appeared unexpectedly early, the magnitude of the evoked LFP was reduced and delayed and neuronal spiking was attenuated. Spike field coherence increased in the beta-gamma frequency range for expected targets. In conclusion, different neural responses in higher order ventral visual cortex may occur for the same visual image based on manipulations of temporal attention. |
Eva Belke; Glyn W. Humphreys; Derrick G. Watson; Antje S. Meyer; Anna L. Telling Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load Journal Article In: Perception and Psychophysics, vol. 70, no. 8, pp. 1444–1458, 2008. @article{Belke2008, Moores, Laiti, and Chelazzi (2003) found semantic interference from associate competitors during visual object search, demonstrating the existence of top-down semantic influences on the deployment of attention to objects. We examined whether effects of semantically related competitors (same-category members or associates) interacted with the effects of perceptual or cognitive load. We failed to find any interaction between competitor effects and perceptual load. However, the competitor effects increased significantly when participants were asked to retain one or five digits in memory throughout the search task. Analyses of eye movements and viewing times showed that a cognitive load did not affect the initial allocation of attention but rather the time it took participants to accept or reject an object as the target. We discuss the implications of our findings for theories of conceptual short-term memory and visual attention. |
Hillel Aviezer; Ran R. Hassin; Jennifer D. Ryan; Cheryl L. Grady; Josh Susskind; Adam Anderson; Morris Moscovitch; Shlomo Bentin Angry, disgusted, or afraid? Studies on the malleability of emotion perception Journal Article In: Psychological Science, vol. 19, no. 7, pp. 724–732, 2008. @article{Aviezer2008, Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly "read out" from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels. |
Jeremy B. Badler; Philippe Lefèvre; Marcus Missal Anticipatory pursuit is influenced by a concurrent timing task Journal Article In: Journal of Vision, vol. 8, no. 16, pp. 1–9, 2008. @article{Badler2008, The ability to predict upcoming events is important to compensate for relatively long sensory-motor delays. When stimuli are temporally regular, their prediction depends on a representation of elapsed time. However, it is well known that the allocation of attention to the timing of an upcoming event alters this representation. The role of attention on the temporal processing component of prediction was investigated in a visual smooth pursuit task that was performed either in isolation or concurrently with a manual response task. Subjects used smooth pursuit eye movements to accurately track a moving target after a constant-duration delay interval. In the manual response task, subjects had to estimate the instant of target motion onset by pressing a button. The onset of anticipatory pursuit eye movements was used to quantify the subject's estimate of elapsed time. We found that onset times were delayed significantly in the presence of the concurrent manual task relative to the pursuit task in isolation. There was also a correlation between the oculomotor and manual response latencies. In the framework of Scalar Timing Theory, the results are consistent with a centralized attentional gating mechanism that allocates clock resources between smooth pursuit preparation and the parallel timing task. |
Xuejun Bai; Guoli Yan; Simon P. Liversedge; Chuanli Zang; Keith Rayner Reading spaced and unspaced Chinese text: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 5, pp. 1277–1287, 2008. @article{Bai2008, Native Chinese readers' eye movements were monitored as they read text that did or did not demark word boundary information. In Experiment 1, sentences had 4 types of spacing: normal unspaced text, text with spaces between words, text with spaces between characters that yielded nonwords, and finally text with spaces between every character. The authors investigated whether the introduction of spaces into unspaced Chinese text facilitates reading and whether the word or, alternatively, the character is a unit of information that is of primary importance in Chinese reading. Global and local measures indicated that sentences with unfamiliar word spaced format were as easy to read as visually familiar unspaced text. Nonword spacing and a space between every character produced longer reading times. In Experiment 2, highlighting was used to create analogous conditions: normal Chinese text, highlighting that marked words, highlighting that yielded nonwords, and highlighting that marked each character. The data from both experiments clearly indicated that words, and not individual characters, are the unit of primary importance in Chinese reading. |
Brian P. Bailey; Shamsi T. Iqbal Understanding changes in mental workload during execution of goal-directed tasks and its application for interruption management Journal Article In: ACM Transactions on Computer-Human Interaction, vol. 14, no. 4, pp. 1–28, 2008. @article{Bailey2008, Notifications can have reduced interruption cost if delivered at moments of lower mental workload during task execution. Cognitive theorists have speculated that these moments occur at subtask boundaries. In this article, we empirically test this speculation by examining how workload changes during execution of goal-directed tasks, focusing on regions between adjacent chunks within the tasks, that is, the subtask boundaries. In a controlled experiment, users performed several interactive tasks while their pupil dilation, a reliable measure of workload, was continuously measured using an eye tracking system. The workload data was extracted from the pupil data, precisely aligned to the corresponding task models, and analyzed. Our principal findings include (i) workload changes throughout the execution of goal-directed tasks; (ii) workload exhibits transient decreases at subtask boundaries relative to the preceding subtasks; (iii) the amount of decrease tends to be greater at boundaries corresponding to the completion of larger chunks of the task; and (iv) different types of subtasks induce different amounts of workload. We situate these findings within resource theories of attention and discuss important implications for interruption management systems. |
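The boundary analysis described in this abstract can be outlined roughly as follows: given pupil-based workload samples aligned to the task timeline and the start/end times of each subtask, compare mean workload during a subtask with a short window beginning at its boundary. This is a schematic sketch under our own assumptions (the arrays are already aligned and baseline-corrected, and the boundary window width is arbitrary); it is not the authors' analysis code.

```python
import numpy as np

def mean_workload(pupil, times, start, end):
    """Mean pupil-based workload within the time window [start, end)."""
    mask = (times >= start) & (times < end)
    return float(np.mean(pupil[mask]))

def boundary_decrease(pupil, times, subtasks, boundary_width=1.0):
    """For each boundary between consecutive subtasks, compare mean workload
    during the preceding subtask with a short window starting at the boundary.
    `subtasks` is an ordered list of (start, end) times; positive values in
    the returned list indicate a transient drop in workload at the boundary."""
    drops = []
    for start, end in subtasks[:-1]:
        during = mean_workload(pupil, times, start, end)
        at_boundary = mean_workload(pupil, times, end, end + boundary_width)
        drops.append(during - at_boundary)
    return drops
```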
Daniel Baldauf; Heiner Deubel Visual attention during the preparation of bimanual movements Journal Article In: Vision Research, vol. 48, no. 4, pp. 549–563, 2008. @article{Baldauf2008, We investigated the deployment of visual attention during the preparation of bimanually coordinated actions. In a dual-task paradigm participants had to execute bimanual pointing movements to different peripheral locations, and to identify target letters that had been briefly presented at various peripheral locations during the latency period before movement initialisation. The discrimination targets appeared either at the movement goal of the left or the right hand, or at other locations that were not movement-relevant in the particular trial. Performance in the letter discrimination task served as a measure for the distribution of visual attention during the action preparation. The results showed that the goal positions of both hands are selected before movement onset, revealing a superior discrimination performance at the action-relevant locations (Experiment 1). Selection-for-action in the preparation of bimanual movements involved attention being spread to both goal locations in parallel, independently of whether the targets had been cued by colour or semantically (Experiment 2). A comparison with perceptual performance in unimanual reaching suggested that the total amount of attentional resources that are distributed over the visual field depended on the demands of the primary motor task, with more attentional resources being deployed for the selection of multiple goal positions than for the selection of a single goal (Experiment 3). |
M. S. Baptista; C. Bohn; Reinhold Kliegl; Ralf Engbert; Jürgen Kurths Reconstruction of eye movements during blinks Journal Article In: Chaos, vol. 18, no. 1, pp. 1–15, 2008. @article{Baptista2008, In eye movement research in reading, the amount of data plays a crucial role for the validation of results. A methodological problem for the analysis of eye movements in reading is blinks, when readers close their eyes. Blinking rate increases with increasing reading time, resulting in high data losses, especially for older adults or reading-impaired subjects. We present a method, based on the symbolic sequence dynamics of the eye movements, that reconstructs the horizontal position of the eyes while the reader blinks. The method makes use of the observed fact that the movements of the eyes before closing or after opening contain information about the eye movements during blinks. Test results indicate that our reconstruction method is superior to methods that use simpler interpolation approaches. In addition, analyses of the reconstructed data show no significant deviation from the usual behavior observed in readers. |
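The reconstruction method itself is based on symbolic sequence dynamics and is beyond a short sketch; for orientation, the simpler interpolation baseline that the paper reports outperforming can look roughly like this (the array names and boolean validity mask are assumptions for the sketch):

```python
import numpy as np

def interpolate_blinks(x, valid):
    """Fill gaps in the horizontal eye position `x` (samples flagged invalid
    during blinks) by linear interpolation between the surrounding valid
    samples. This is the simple baseline approach, not the authors' method."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    return np.interp(t, t[valid], x[valid])
```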
Dale J. Barr Pragmatic expectations and linguistic evidence: Listeners anticipate but do not integrate common ground Journal Article In: Cognition, vol. 109, no. 1, pp. 18–40, 2008. @article{Barr2008, When listeners search for the referent of a speaker's expression, they experience interference from privileged knowledge, knowledge outside of their 'common ground' with the speaker. Evidence is presented that this interference reflects limitations in lexical processing. In three experiments, listeners' eye movements were monitored as they searched for the target of a speaker's referring expression in a display that also contained a phonological competitor (e.g., bucket/buckle). Listeners anticipated that the speaker would refer to something in common ground, but they did not experience less interference from a competitor in privileged ground than from a matched competitor in common ground. In contrast, interference from the competitor was eliminated when it was ruled out by a semantic constraint. These findings support a view of comprehension as relying on multiple systems with distinct access to information and present a challenge for constraint-based views of common ground. |
Luke Barrington; Tim K. Marks; Janet Hui-wen Hsiao; Garrison W. Cottrell NIMBLE: A kernel density model of saccade-based visual memory Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 17–17, 2008. @article{Barrington2008, We present a Bayesian version of J. Lacroix, J. Murre, and E. Postma's (2006) Natural Input Memory (NIM) model of saccadic visual memory. Our model, which we call NIMBLE (NIM with Bayesian Likelihood Estimation), uses a cognitively plausible image sampling technique that provides a foveated representation of image patches. We conceive of these memorized image fragments as samples from image class distributions and model the memory of these fragments using kernel density estimation. Using these models, we derive class-conditional probabilities of new image fragments and combine individual fragment probabilities to classify images. Our Bayesian formulation of the model extends easily to handle multi-class problems. We validate our model by demonstrating human levels of performance on a face recognition memory task and high accuracy on multi-category face and object identification. We also use NIMBLE to examine the change in beliefs as more fixations are taken from an image. Using fixation data collected from human subjects, we directly compare the performance of NIMBLE's memory component to human performance, demonstrating that using human fixation locations allows NIMBLE to recognize familiar faces with only a single fixation. |
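The core of the memory component described here, kernel density estimates over stored image fragments with per-fixation likelihoods combined per class, can be sketched in a few lines. This is a schematic Gaussian-kernel version under our own assumptions (fixed bandwidth, fragments treated as independent, equal class priors); the actual NIMBLE model uses foveated fragment representations and the formulation given in the paper.

```python
import numpy as np

def log_kde(fragment, stored_fragments, bandwidth=1.0):
    """Log-likelihood of a new fragment under a Gaussian kernel density
    estimate built from the fragments stored for one image class."""
    fragment = np.asarray(fragment, dtype=float)
    stored = np.asarray(stored_fragments, dtype=float)   # (n_stored, n_dims)
    d = fragment.shape[0]
    sq = np.sum((stored - fragment) ** 2, axis=1) / (2.0 * bandwidth ** 2)
    log_k = -sq - 0.5 * d * np.log(2.0 * np.pi * bandwidth ** 2)
    return np.logaddexp.reduce(log_k) - np.log(len(stored))

def classify(fixation_fragments, memory, bandwidth=1.0):
    """Sum per-fixation fragment log-likelihoods (assumed independent) for each
    class in `memory` (label -> stored fragments) and return the best label."""
    scores = {label: sum(log_kde(f, stored, bandwidth) for f in fixation_fragments)
              for label, stored in memory.items()}
    return max(scores, key=scores.get)
```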
Sarah Bate; Catherine Haslam; Jeremy J. Tree; Timothy L. Hodgson Evidence of an eye movement-based memory effect in congenital prosopagnosia Journal Article In: Cortex, vol. 44, no. 7, pp. 806–819, 2008. @article{Bate2008, While extensive work has examined the role of covert recognition in acquired prosopagnosia, little attention has been directed to this process in the congenital form of the disorder. Indeed, evidence of covert recognition has only been demonstrated in one congenital case in which autonomic measures provided evidence of recognition (Jones and Tranel, 2001), whereas two investigations using behavioural indicators failed to demonstrate the effect (de Haan and Campbell, 1991; Bentin et al., 1999). In this paper, we use a behavioural indicator, an "eye movement-based memory effect" (Althoff and Cohen, 1999), to provide evidence of covert recognition in congenital prosopagnosia. In an initial experiment, we examined viewing strategies elicited to famous and novel faces in control participants, and found fewer fixations and reduced regional sampling for famous compared to novel faces. In a second experiment, we examined the same processes in a patient with congenital prosopagnosia (AA), and found some evidence of an eye movement-based memory effect regardless of his recognition accuracy. Finally, we examined whether a difference in scanning strategy was evident for those famous faces AA failed to explicitly recognise, and again found evidence of reduced sampling for famous faces. We use these findings to (a) provide evidence of intact structural representations in a case of congenital prosopagnosia, and (b) to suggest that covert recognition can be demonstrated using behavioural indicators in this disorder. |
Ensar Becic; Walter R. Boot; Arthur F. Kramer Training older adults to search more effectively: Scanning strategy and visual search in dynamic displays Journal Article In: Psychology and Aging, vol. 23, no. 2, pp. 461–466, 2008. @article{Becic2008, The authors examined the ability of older adults to modify their search strategies to detect changes in dynamic displays. Older adults who made few eye movements during search (i.e., covert searchers) were faster and more accurate compared with individuals who made many eye movements (i.e., overt searchers). When overt searchers were instructed to adopt a covert search strategy, target detection performance increased to the level of natural covert searchers. Similarly, covert searchers instructed to search overtly exhibited a decrease in target detection performance. These data suggest that with instructions and minimal practice, older adults can ameliorate the cost of a poor search strategy. |
Mark W. Becker; Ian P. Rasmussen Guidance of attention to objects and locations by long-term memory of natural scenes Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 6, pp. 1325–1338, 2008. @article{Becker2008, Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to insure that limited attentional capacity is allocated efficiently rather than being squandered. |
Larry Allen Abel; Zhong I. Wang; Louis F. Dell'Osso Wavelet analysis in infantile nystagmus syndrome: Limitations and abilities Journal Article In: Investigative Ophthalmology & Visual Science, vol. 49, no. 8, pp. 3413–3423, 2008. @article{Abel2008, PURPOSE: To investigate the proper usage of wavelet analysis in infantile nystagmus syndrome (INS) and determine its limitations and abilities. METHODS: Data were analyzed from accurate eye-movement recordings of INS patients. Wavelet analysis was performed to examine the foveation characteristics, morphologic characteristics and time variation in different INS waveforms. Also compared were the wavelet analysis and the expanded nystagmus acuity function (NAFX) analysis on sections of pre- and post-tenotomy data. RESULTS: Wavelet spectra showed some sensitivity to different features of INS waveforms and reflected their variations across time. However, wavelet analysis was not effective in detecting foveation periods, especially in a complicated INS waveform. NAFX, on the other hand, was a much more direct way of evaluating waveform changes after nystagmus treatments. CONCLUSIONS: Wavelet analysis is a tool that performs, with difficulty, some things that can be done faster and better by directly operating on the nystagmus waveform itself. It appears, however, to be insensitive to the subtle but visually important improvements brought about by INS therapies. Wavelet analysis may have a role in developing automated waveform classifiers where its time-dependent characterization of the waveform can be used. The limitations of wavelet analysis outweighed its abilities in INS waveform-characteristic examination. |
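As a point of reference for the wavelet analysis discussed by Abel and colleagues, the snippet below computes a continuous wavelet scalogram of a synthetic nystagmus-like waveform. It is a generic sketch assuming the PyWavelets library and a Morlet wavelet; it does not reproduce the paper's foveation detection or the NAFX comparison.

```python
import numpy as np
import pywt  # PyWavelets

fs = 500.0                                  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Toy jerk-like waveform: a 3 Hz sawtooth plus a little noise.
position = (t * 3.0) % 1.0 - 0.5 + 0.02 * np.random.randn(t.size)

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(position, scales, "morl", sampling_period=1.0 / fs)

# coefs has shape (len(scales), len(t)); its magnitude is the scalogram,
# showing how oscillatory energy at each frequency varies across time.
print(coefs.shape, freqs[:5])
```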
Joana Acha; Manuel Perea The effect of neighborhood frequency in reading: Evidence with transposed-letter neighbors Journal Article In: Cognition, vol. 108, pp. 290–300, 2008. @article{Acha2008, Transposed-letter effects (e.g., jugde activates judge) pose serious problems for models of visual-word recognition that use position-specific coding schemes. However, even though the evidence of transposed-letter effects with nonword stimuli is strong, the evidence for word stimuli is scarce and inconclusive. The present experiment examined the effect of neighborhood frequency during normal silent reading using transposed-letter neighbors (e.g., silver, sliver). Two sets of low-frequency words were created (equated in the number of substitution neighbors, word frequency, and number of letters), which were embedded in sentences. In one set, the target word had a higher frequency transposed-letter neighbor, and in the other set, the target word had no transposed-letter neighbors. An inhibitory effect of neighborhood frequency was observed in measures that reflect late processing in words (number of regressions back to the target word, and total time). We examine the implications of these findings for models of visual-word recognition and reading. |
Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard A role for spatial and nonspatial working memory processes in visual search Journal Article In: Experimental Psychology, vol. 55, no. 5, pp. 301–312, 2008. @article{Anderson2008b, Searching a cluttered visual scene for a specific item of interest can take several seconds to perform if the target item is difficult to discriminate from surrounding items. Whether working memory processes are utilized to guide the path of attentional selection during such searches remains under debate. Previous studies have found evidence to support a role for spatial working memory in inefficient search, but the role of nonspatial working memory remains unclear. Here, we directly compared the role of spatial and nonspatial working memory for both an efficient and inefficient search task. In Experiment 1, we used a dual-task paradigm to investigate the effect of performing visual search within the retention interval of a spatial working memory task. Importantly, by incorporating two working memory loads (low and high) we were able to make comparisons between dual-task conditions, rather than between dual-task and single-task conditions. This design allows any interference effects observed to be attributed to changes in memory load, rather than to nonspecific effects related to "dual-task" performance. We found that the efficiency of the inefficient search task declined as spatial memory load increased, but that the efficient search task remained efficient. These results suggest that spatial memory plays an important role in inefficient but not efficient search. In Experiment 2, participants performed the same visual search tasks within the retention interval of visually matched spatial and verbal working memory tasks. Critically, we found comparable dual-task interference between inefficient search and both the spatial and nonspatial working memory tasks, indicating that inefficient search recruits working memory processes common to both domains. |
Bernhard Angele; Timothy J. Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner Parafoveal processing in reading: Manipulating n+1 and n+2 previews simultaneously Journal Article In: Visual Cognition, vol. 16, no. 6, pp. 697–707, 2008. @article{Angele2008, The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading. |
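The boundary paradigm that Angele et al. build on is a gaze-contingent display-change technique: a preview string is shown at an upcoming word's location until the eyes cross an invisible boundary, at which point it is replaced by the target word. The sketch below illustrates only that display-change logic; the function and callback names are hypothetical and it is not tied to any particular experiment software.

```python
def update_display(gaze_x, boundary_x, preview_shown, swap_preview_to_target):
    """Gaze-contingent display-change logic of the boundary paradigm (sketch).

    While gaze is left of the invisible boundary, the preview stays on
    screen; as soon as a sample places gaze past the boundary, the preview
    is swapped for the target word (ideally during the saccade, so the
    change itself is not perceived). `swap_preview_to_target` is a
    hypothetical callback that redraws the sentence.
    """
    if preview_shown and gaze_x >= boundary_x:
        swap_preview_to_target()
        return False          # preview no longer shown
    return preview_shown

# Example with a stub callback: the swap fires once, at the 310 px sample.
shown = True
for sample_x in (120, 180, 240, 310, 410):   # gaze x-positions in pixels
    shown = update_display(sample_x, boundary_x=300, preview_shown=shown,
                           swap_preview_to_target=lambda: print("display changed"))
```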
Jennifer E. Arnold THE BACON not the bacon: How children and adults understand accented and unaccented noun phrases Journal Article In: Cognition, vol. 108, no. 1, pp. 69–99, 2008. @article{Arnold2008, Two eye-tracking experiments examine whether adults and 4- and 5-year-old children use the presence or absence of accenting to guide their interpretation of noun phrases (e.g., the bacon) with respect to the discourse context. Unaccented nouns tend to refer to contextually accessible referents, while accented variants tend to be used for less accessible entities. Experiment 1 confirms that accenting is informative for adults, who show a bias toward previously-mentioned objects beginning 300 ms after the onset of unaccented nouns and pronouns. But contrary to findings in the literature, accented words produced no observable bias. In Experiment 2, 4 and 5 year olds were also biased toward previously-mentioned objects with unaccented nouns and pronouns. This builds on findings of limits on children's on-line reference comprehension [Arnold, J. E., Brown-Schmidt, S., & Trueswell, J. C. (2007). Children's use of gender and order-of-mention during pronoun comprehension. Language and Cognitive Processes], showing that children's interpretation of unaccented nouns and pronouns is constrained in contexts with one single highly accessible object. |
Jennifer E. Arnold; Shin-Yi C. Lao Put in last position something previously unmentioned: Word order effects on referential expectancy and reference comprehension Journal Article In: Language and Cognitive Processes, vol. 23, no. 2, pp. 282–295, 2008. @article{Arnold2008a, Research has shown that the comprehension of definite referring expressions (e.g., "the triangle") tends to be faster for "given" (previously mentioned) referents, compared with new referents. This has been attributed to the presence of given information in the consciousness of discourse participants (e.g., Chafe, 1994) suggesting that given is always more accessible. By contrast, we find a bias toward new referents during the on-line comprehension of the direct object in heavy-NP-shifted word orders, e.g., "Put on the star the...." This order tends to be used for new direct objects; canonical unshifted orders are more common with given direct objects. Thus, word order provides probabilistic information about the givenness or newness of the direct object. Results from eyetracking and gating experiments show that the traditional given bias only occurs with unshifted orders; with heavy-NP-shifted orders, comprehenders expect the object to be new, and comprehension for new referents is facilitated. |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Social attention and real-world scenes: The roles of action, competition and social content Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 7, pp. 986–998, 2008. @article{Birmingham2008, The present study examined how social attention is influenced by social content and the presence of items that are available for attention. We monitored observers' eye movements while they freely viewed real-world social scenes containing either 1 or 3 people situated among a variety of objects. Building from the work of Yarbus (1965/1967) we hypothesized that observers would demonstrate a preferential bias to fixate the eyes of the people in the scene, although other items would also receive attention. In addition, we hypothesized that fixations to the eyes would increase as the social content (i.e., number of people) increased. Both hypotheses were supported by the data, and we also found that the level of activity in the scene influenced attention to eyes when social content was high. The present results provide support for the notion that the eyes of others are selected in order to extract social information. Our study also suggests a simple and surreptitious methodology for studying social attention to real-world stimuli in a range of populations, such as those with autism spectrum disorders. |
Elina Birmingham; Walter Bischof; Alan Kingstone Gaze selection in complex social scenes Journal Article In: Visual Cognition, vol. 16, no. 2-3, pp. 341–355, 2008. @article{Birmingham2008a, A great deal of recent research has sought to understand the factors and neural systems that mediate the orienting of spatial attention to a gazed-at location. What have rarely been examined, however, are the factors that are critical to the initial selection of gaze information from complex visual scenes. For instance, is gaze prioritized relative to other possible body parts and objects within a scene? The present study springboards from the seminal work of Yarbus (1965/1967), who had originally examined participants' scan paths while they viewed visual scenes containing one or more people. His work suggested to us that the selection of gaze information may depend on the task that is assigned to participants, the social content of the scene, and/or the activity level depicted within the scene. Our results show clearly that all of these factors can significantly modulate the selection of gaze information. Specifically, the selection of gaze was enhanced when the task was to describe the social attention within a scene, and when the social content and activity level in a scene were high. Nevertheless, it is also the case that participants always selected gaze information more than any other stimulus. Our study has broad implications for future investigations of social attention as well as resolving a number of longstanding issues that had undermined the classic original work of Yarbus. |
Caroline Blais; Rachael E. Jack; Christoph Scheepers; Daniel Fiset; Roberto Caldara Culture shapes how we look at faces Journal Article In: PLoS ONE, vol. 3, no. 8, pp. e3022, 2008. @article{Blais2008, Background: Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically-determined information extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures. |
Lizzy Bleumers; Peter De Graef; Karl Verfaillie; Johan Wagemans Eccentric grouping by proximity in multistable dot lattices Journal Article In: Vision Research, vol. 48, no. 2, pp. 179–192, 2008. @article{Bleumers2008, The Pure Distance Law predicts grouping by proximity in dot lattices that can be organised in four ways by grouping dots along parallel lines. It specifies a quantitative relationship between the relative probability of perceiving an organisation and the relative distance between the grouped dots. The current study was set up to investigate whether this principle holds both for centrally and for eccentrically displayed dot lattices. To this end, dot lattices were displayed either in central vision, or to the right of fixation with their closest border at 3° or 15°. We found that the Pure Distance Law adequately predicted grouping of centrally displayed dot lattices but did not capture the eccentric data well, even when the eccentric dot lattices were scaled. Specifically, a better fit was obtained when we included the possibility in the model that in some trials participants could not report an organisation and consequently responded randomly. A plausible interpretation for the occurrence of random responses in the eccentric conditions is that under these circumstances an attention shift is required from the locus of fixation towards the dot lattice, which occasionally fails to take place. When grouping could be reported, scale and eccentricity appeared to interact. The effect of the relative interdot distances on the perceptual organisation of the dot lattices was estimated to be stronger in peripheral vision than in central vision at the two largest scales, but this difference disappeared when the smallest scale was applied. |
Stan Van Pelt; W. Pieter Medendorp Updating target distance across eye movements in depth Journal Article In: Journal of Neurophysiology, vol. 99, no. 5, pp. 2281–2290, 2008. @article{VanPelt2008, We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain was to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain was to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching. |
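The contrast between the two coding schemes tested by Van Pelt and Medendorp can be made concrete with a little arithmetic. The snippet below is a toy numerical sketch under assumed distances (not the authors' model code): it shows how a stored relative-to-fixation depth must be updated after a vergence shift, whereas a stored egocentric distance would be unaffected.

```python
# Distances along the line of sight, in cm (illustrative values only).
target_depth = 40.0          # egocentric distance of the remembered target
fix_before   = 50.0          # fixation distance when the target was seen
fix_after    = 30.0          # fixation distance after the vergence eye movement

# Retinal (relative) representation: target depth w.r.t. the fixation plane.
relative_depth = target_depth - fix_before          # -10 cm (nearer than fixation)

# Nonretinal model: egocentric distance stored once, stable across the shift.
reach_nonretinal = target_depth                     # 40 cm

# Retinal model WITHOUT updating: stale relative depth added to new fixation.
reach_retinal_stale = fix_after + relative_depth    # 20 cm -> large error

# Retinal model WITH updating (gain ~1): relative depth is recomputed after
# the gaze shift, so the reach again lands near 40 cm, but residual errors
# now depend on the new eye position -- the pattern the study reports.
reach_retinal_updated = fix_after + (target_depth - fix_after)

print(reach_nonretinal, reach_retinal_stale, reach_retinal_updated)
```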
Wieske Zoest; Mieke Donk Goal-driven modulation as a function of time in saccadic target selection Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 10, pp. 1553–1572, 2008. @article{Zoest2008, Four experiments were performed to investigate the contribution of goal-driven modulation in saccadic target selection as a function of time. Observers were required to make an eye movement to a prespecified target that was concurrently presented with multiple nontargets and possibly one distractor. Target and distractor were defined in different dimensions (orientation dimension and colour dimension in Experiment 1), or were both defined in the same dimension (i.e., both defined in the orientation dimension in Experiment 2, or both defined in the colour dimension in Experiments 3 and 4). The identities of target and distractor were switched over conditions. Speed-accuracy functions were computed to examine the full time course of selection in each condition. There were three major results. First, the ability to exert goal-driven control increased as a function of response latency. Second, this ability depended on the specific target-distractor combination, yet was not a function of whether target and distractor were defined within or across dimensions. Third, goal-driven control was available earlier when target and distractor were dissimilar than when they were similar. It was concluded that the influence of goal-driven control in visual selection is not all or none, but is of a continuous nature. |
Wieske Zoest; Stefan Van der Stigchel; Jason J. S. Barton Distractor effects on saccade trajectories: A comparison of prosaccades, antisaccades, and memory-guided saccades Journal Article In: Experimental Brain Research, vol. 186, no. 3, pp. 431–442, 2008. @article{Zoest2008a, The present study investigated the contribution of the presence of a visual signal at the saccade goal on saccade trajectory deviations and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task where a visual target was present at the saccade goal was compared to performance in an anti- and memory-guided saccade task. In the latter two tasks no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2-4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display. |
André Vandierendonck; Maud Deschuyteneer; Ann Depoorter; Denis Drieghe Input monitoring and response selection as components of executive control in pro-saccades and anti-saccades Journal Article In: Psychological Research, vol. 72, no. 1, pp. 1–11, 2008. @article{Vandierendonck2008, Several studies have shown that anti-saccades, more than pro-saccades, are executed under executive control. It is argued that executive control subsumes a variety of controlled processes. The present study tested whether some of these underlying processes are involved in the execution of anti-saccades. An experiment is reported in which two such processes were parametrically varied, namely input monitoring and response selection. This resulted in four selective interference conditions obtained by factorially combining the degree of input monitoring and the presence of response selection in the interference task. The four tasks were combined with a primary task which required the participants to perform either pro-saccades or anti-saccades. By comparison of performance in these dual-task conditions and performance in single-task conditions, it was shown that anti-saccades, but not pro-saccades, were delayed when the secondary task required input monitoring or response selection. The results are discussed with respect to theoretical attempts to deconstruct the concept of executive control. |
Seppo Vainio; Jukka Hyönä; Anneli Pajunen Processing modifier-head agreement in reading: Evidence for a delayed effect of agreement Journal Article In: Memory & Cognition, vol. 36, no. 2, pp. 329–340, 2008. @article{Vainio2008, The present study examined whether type of inflectional case (semantic or grammatical) and phonological and morphological transparency affect the processing of Finnish modifier-head agreement in reading. Readers' eye movement patterns were registered. In Experiment 1, an agreeing modifier condition (agreement was transparent) was compared with a no-modifier condition, and in Experiment 2, similar constructions with opaque agreement were used. In both experiments, agreement was found to affect the processing of the target noun with some delay. In Experiment 3, unmarked and case-marked modifiers were used. The results again demonstrated a delayed agreement effect, ruling out the possibility that the agreement effects observed in Experiments 1 and 2 reflect a mere modifier-presence effect. We concluded that agreement exerts its effect at the level of syntactic integration but not at the level of lexical access. |
Matteo Valsecchi; Sven Saage; Brian J. White; Karl R. Gegenfurtner Advantage in reading lexical bundles is reduced in non-native speakers Journal Article In: Journal of Eye Movement Research, vol. 6, no. 5:2, pp. 1–15, 2008. @article{Valsecchi2008, Formulaic sequences such as idioms, collocations, and lexical bundles, which may be processed as holistic units, make up a large proportion of natural language. For language learners, however, formulaic patterns are a major barrier to achieving native-like competence. The present study investigated the processing of lexical bundles by native speakers and less advanced non-native English speakers using corpus analysis for the identification of lexical bundles and eye-tracking to measure the reading times. The participants read sentences containing 4-grams and control phrases which were matched for sub-string frequency. The results for native speakers demonstrate a processing advantage for formulaic sequences over the matched control units. We do not find any processing advantage for non-native speakers, which suggests that native-like processing of lexical bundles comes only late in the acquisition process. |
Ronald Berg; Frans W. Cornelissen; Jos B. T. M. Roerdink Perceptual dependencies in information visualization assessed by complex visual search Journal Article In: ACM Transactions on Applied Perception, vol. 4, no. 4, pp. 1–21, 2008. @article{Berg2008, A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of the end-user tasks performed on these images involve a form of visual search. Often, it is simply assumed that features can be judged independently of each other in such tasks. However, there is evidence for perceptual dependencies when simultaneously presenting multiple features. Such dependencies could potentially affect information visualizations that contain combinations of features for encoding information and, thereby, bias subjects into unequally weighting the relevance of different data dimensions. We experimentally assess (1) the presence of judgment dependencies in a visualization task (searching for a target node in a node-link diagram) and (2) how feature contrast relates to salience. From a visualization point of view, our most relevant findings are that (a) to equalize saliency (and thus bottom-up weighting) of size and color, color contrasts have to become very low. Moreover, orientation is less suitable for representing information that consists of a large range of data values, because it does not show a clear relationship between contrast and salience; (b) color and size are features that can be used independently to represent information, at least as far as the range of colors that were used in our study are concerned; (c) the concept of (static) feature salience hierarchies is wrong; how salient a feature is compared to another is not fixed, but a function of feature contrasts; (d) final decisions appear to be as good an indicator of perceptual performance as indicators based on measures obtained from individual fixations. Eye tracking, therefore, does not necessarily present a benefit for user studies that aim at evaluating performance in search tasks. |
Menno Van Der Schoot; Alain L. Vasbinder; Tako M. Horsley; Ernest C. D. M. Van Lieshout The role of two reading strategies in text comprehension: An eye fixation study in primary school children Journal Article In: Journal of Research in Reading, vol. 31, no. 2, pp. 203–223, 2008. @article{VanDerSchoot2008, This study examined whether 10- to 12-year-old children use two reading strategies to aid their text comprehension: (1) distinguishing between important and unimportant words; and (2) resolving anaphoric references. Of interest was the question to what extent use of these reading strategies was predictive of reading comprehension skill over and above decoding skill and vocabulary. Reading strategy use was examined by the recording of eye fixations on specific target words. In contrast to less successful comprehenders, more successful comprehenders invested more processing time in important than in unimportant words. On the other hand, they needed less time to determine the antecedent of an anaphor. The results suggest that more successful comprehenders build a more effective mental model of the text than less successful comprehenders in at least two ways. First, they allocate more attention to the incorporation of goal-relevant than goal-irrelevant information into the model. Second, they ascertain that the text model is coherent and richly connected. |